

Publication number: WO1989011257 A1
Publication type: Application
Application number: PCT/US1989/002218
Publication date: 30 Nov 1989
Filing date: 22 May 1989
Priority date: 23 May 1988
Also published as: EP0368999A1, EP0368999A4
Inventors: Lynn L. Augspurger, John D. Schlansker, James Mattiello, Bryan P. Shumaker
Applicant: Augspurger Lynn L
Method and system for making prosthetic device
WO 1989011257 A1
A prosthetic device (28) or a three-dimensional object (O) having surface characteristics derived from data obtained from a patient (22) and from data (MODEL) created to modify the surface characteristics of the object is obtained by sensing the object with a transducer (21, T, 170, 302), and a solid modeling system (29, 201) with memory (24), a processor, and process control elements (BF, BD, MODEL, CAD format) for constructing three dimensional data files (BF) based upon mathematical creation of a solid model (28) with cuberille data. Various transducers are illustrated, including free wand transducers (402) of plane image location and xyz coordinate location of subject elements, and a system for sampling data from ultrasound (170), CT scan, Magnetic Resonance Imaging (MRI) scan and other techniques is described, along with surgical methods of treatment and diagnosis. A Numerically Controlled (NC) machine tool (27) is an output tool for transfer of files to create a three dimensional duplicate of a solid object (28), which may have been broken before the unbroken duplicate is created.
Claims
What We Claim Is:
1. A prosthetic device, comprising a three dimensional object having surface characteristics derived from data obtained from a subject and from data created to modify the surface characteristics of the object.
2. A method of making a prosthetic device including the steps of creating an object based on data derived from a subject and with data created to modify the surface characteristics of the object.
3. A system with data reconstruction of object data comprising input means, memory means, processor means, output means, and means for reconstruction of data obtained from said input means to produce data representative of a three dimensional object.
4. A solid modeling system comprising memory means, a processor and means for constructing three dimensional data files based upon mathematical creation of solid model data lay up of primitive bit or byte maps.
5. A system according to claim 3 further including means for combining images from real three dimensional data with solid model data.
6. A system according to claim 3 wherein there is provided image reconstruction means for reconstruction of data in three dimensions from two dimensional data obtained from a transducer.
7. A system according to claim 4 including means for sensing the location in space of data derived from scanning a subject.
8. A system according to claim 4 including transducer means for sensing real three dimensional data.
9. A system according to claim 3 including means for causing xyz data to control a tool to provide for creation of a three dimensional object.
10. A system according to claim 4 including transducer means for obtaining three dimensional solid image data from a subject.
11. A system according to claim 3 having solid model means for creating solid images from CAD images and for transferring solid images to CAD images.
12. A system according to claim 4 having means for reconstructing objects by combining data parts.
13. A system according to claim 3 having means for reconstructing objects which duplicate parts of a broken object and combine the parts in a reconstructed object.
14. A system according to claim 3 including means for creating shapes derived from CAD data in three dimensional primitive form.
15. A system according to claim 14 wherein said primitive form is cuberille data.
16. A system according to claim 3 including means for creating CAD data from three dimensional primitive form.
17. A system according to claim 16 wherein said primitive form is in the form of cuberille data.
18. A system according to claim 3 including means for creating shapes derived from drawn or video two dimensional data, including means for creating drawings or video two dimensional data.
20. A system according to claim 3 including means for obtaining optical slices of data and creating three dimensional reconstruction data based upon said optical slices.
21. A system according to claim 4 including transducer means for sensing the two and/or three dimensional data from a subject.
22. A system according to claim 3 including transducer means for sensing two dimensional data from a subject and means for converting said two dimensional data to three dimensional data.
23. A system according to claim 3 including means for clipping, thresholding and interpolation of data in a preprocessing step.
24. A system according to claim 3 including device control means for causing an operation to be performed on a subject.
25. A system according to claim 3 including masking means for creating masked data derived from uninterpolated data.
26. A system according to claim 4 including means for combining primitives from primitives derived from solid model data and from primitives derived from data derived from a real subject.
27. A system according to claim 4 having means for constructing shapes from elements of the shape by combination of data representing said elements by AND, OR or erosion.
28. A system according to claim 3 having means for detecting the spatial location of one or more origin points in a plane related to a movable object for deriving the spatial location of other points in said plane.
30. A system according to claim 3 having a signal sender and a signal receptor and means for determining the time of flight or intensity of a signal moving between the sender and receptor to derive a spatial value related to the distance between the sender and receptor.
31. A system according to claim 3 having means for creating extensions and additions to data representative of a subject and for making deletions therefrom to create a data format representative of a desired object.
32. A system according to claim 3 having means for controlling a tool to effect a desired result.
33. A method of data reconstruction of object data comprising altering characteristic data representative of views of said subject and creating object information adapted to create and/or display a replicate of said subject or part thereof.
34. A method according to claim 33 including the step of using said data to control a tool.
35. A method according to claim 33 including the step of using means to provide for reproduction of a three dimensional object derived from said characteristic data.
36. A method according to claim 33 including the step of transfer of two dimensional images into three dimensional coordinates.
37. A method according to claim 33 including creating three dimensional shapes from two dimensional input of data with solid modeling.
38. A method according to claim 33 including a step of creating images and shapes from video input.
39. A method according to claim 33 including a step of creating images and shapes from line drawings.
40. A method of creating shapes and images from data including a step of converting CAD images into solid model data.
41. A method of constructing images and shapes including a step of creating a primitive form of an object by cuberille modeling.
42. A method according to claim 41 including the step of creating the primitive form of an object from data derived from sensing a real subject.
43. A method according to claim 41 including construction of primitives by solid modeling.
44. An optical transducer of slice images including a microscope and image capture means, means for stepping the focal point of the microscope, and data capture means to capture successive spaced images.
45. A method of subregioning images in a system of three dimensional imaging from captured data including a step of masking images before interpolation of captured data.
46. A method of processing images including a step of general preprocessing including selective interpolation of captured data in which the interpolation is performed on density data which does not fall entirely inside of or entirely outside of a predetermined range, and subregioning and thresholding.
47. A method of converting cuberille images into output XYZ format of an object including the step of placing the xyz format of a modified surface of a captured image after cuberille processing in a display buffer and transferring the format into a format recognizable by an output device.
48. A system for imaging and/or shaping objects comprising a memory, means for receiving captured data and for transposing data received, preprocessing said information, and creating by boundary detection means surface coordinates of said objects for display and/or shaping said objects.
49. A system according to claim 48 including means for creating extensions and/or additions to said objects.
50. An image plane map point detector comprising emitter means for emitting waveforms from a point having a fixed distance with respect to the plane, receptor means for sensing the distance to said emitter, and means for calculating the xyz coordinates of the emitter based on information supplied by said receptor.
51. A method of making an object including a step of solid modeling.
52. A method of making an object including a step of using information created by solid modeling.
53. A method of making an object including a step of using information created by sensing a subject and creating control data for making said object from data derived from sensing said subject.
54. A system for controlling a tool, including interface means, preprocessing means for processing data derived from said interface means, formatter means for formatting three dimensional data of a surface to be formed from data derived from a subject with additions thereto or subtractions therefrom, and driver means responsive to said formatter means.
55. An ultrasound system, including means for formatting information for three dimensional reconstruction and display of data directly captured from a plurality of ultrasound image views.
56. A microscope three dimensional system, including microscope means, and means for formatting information for three dimensional reconstruction and display of slices of data captured by transmission of waveforms through a subject of microscopic examination.
57. A transducer of xyz points comprising a wave emitter, a wave receptor, means for sensing the xyz point of the wave emitter, and means for calculating the distance of said wave emitter from a point in or on a sensed object.
58. An operation performed by a controlled tool responsive to instructions obtained from a subject including the steps of providing at least one xyz point upon which the tool is to operate and control means for said tool to destroy or treat matter at the xyz point, or move tool means to, adjacent to, towards or from said xyz point.
59. An ultrasound system, including means for capturing frames from said system, and assembly means for assembling them into a three dimensional form, means for thresholding images captured and for display of surface characteristics of said three dimensional images.
60. A method of examination of a patient including the step of capturing data from a transducer movable in xyz directions about said patient and displaying images derived from said data in three dimensions.
61. A product made by a method including the step of the method of claim 2.
62. A product made by a method including the steps of the method of claim 47.
63. A product made by a method including the step of the method of claim 52.
64. A product made by a method including the step of the method of claim 53.
65. A product made by the system according to claim 3.
66. A product made by the system according to claim 4.
67. A copy system comprising means for copying real three dimensional objects and enabling modification of the surface of three dimensional objects to vary from a direct copy of the real three dimensional object.
68. An xyz coordinate detector comprising, sensing means moveable in xyz directions about the area of an object for detecting the xyz coordinates of the object, and feedback means for sensing feedback of said sensing means, and calculation means for computing xyz coordinates of the surface and/or internal structure of the object.
69. A medical procedure including the step of using data derived from a three dimensional scan of a subject and operating a tool with control instructions derived at least in part from said scan.
70. A diagnostic procedure including a step of analysis of ultrasound data derived from three dimensional reconstruction of ultrasound data.
71. A system including means for planning a surgical operation, including means for detecting structures within a patient, and means for displaying the structure prior to and during the progression of the course of the operation.
72. A system including means for displaying the surface of an object found within other matter and for displaying a view of the object viewed in combination with views of part of the other matter.
73. Apparatus for transferring images over a carrier for display and/or printing or other purpose comprising, a first store for storing first image attribute signals, a plurality of additional stores for storing additional image attribute signals, a first parallel/serial converter means coupled to said first store with an I/O channel, a plurality of additional parallel/serial converter means each coupled to a respective one of said additional stores each having a separate I/O channel, a first modem coupled to said first I/O channel, and a plurality of additional modems each coupled to a respective one of said other I/O channels, synchronization recognition means for providing and recognizing a predetermined succession of signals representing successive pixels of an image and for distributing them to or from at least one of said frame stores and one or more of said I/O channels.
74. An apparatus for transferring images according to claim 73, wherein said synchronization recognition means are provided for each frame store, and there are included means for resetting counters to a predetermined address upon receipt of a predetermined number of successive pixel signals.
75. An apparatus according to claim 73 operable as a receiver/transmitter of image attribute signals representing attributes of color and grey scale.
76. An apparatus according to claim 74 further including a switching network for receiving an image attribute signal from each modem and transmitting it from a modem of a transmitting apparatus to a modem of a receiving apparatus.
77. An apparatus according to claim 75 wherein each image attribute signal is conveyed to and from the network over a single carrier connected to a sending modem of a transmitter apparatus and a receiving modem of a receiving apparatus.
78. An apparatus according to claim 73 including an image generating apparatus which creates a plurality of image attribute signals each kind of which is conveyed to a respective universal synchronous/asynchronous receiver/transmitter in digital form, and including means for converting analog image attribute signals to said digital form in the event the originating image attribute signals require conversion to digital form.
79. An apparatus according to claim 78 wherein said image generating apparatus is a camera which sends three different attribute signals representative of an image to be sent, and there is provided analog/digital converter means for each of said different attribute signals for converting said signals to 8 (or greater) bit digital data representative of a color segment and grey scale of the image attribute signal, the associated frame store for said image attribute storing the said digital data representative of said image, the associated one of said universal synchronous/asynchronous receiver/transmitters converting said digital data from parallel data to serial data on transmission to a coupled modem, and there are included means for sending a send complete signal from said modem.
80. An apparatus according to claim 73 wherein each of said modems receives an image attribute signal of its carrier channel, means are provided for indicating an image complete signal for said image attribute signal, and means are provided for completing the presentation of a composite image including a plurality of image attribute signals when all necessary image attributes for forming said image are complete, regardless of the carrier line over which they are conveyed, and wherein at least some of the attribute signals represent shades of grey and color, and three different attribute signals are combined to form a composite color image, and wherein the modems are coupled to a recorder store for display on a TV monitor, and wherein said composite image is conveyed to a recorder coupled to a tuner of a channel of a TV, and wherein there is provided an A/D converter separator recombiner, for transmitting image attribute signals to and from said frame stores.
81. An image capture system comprising wave transmission and receiving means adapted to capture image data from a wave passing through a subject, rotation scan means adapted to cause captured image data from said wave to be representative of each of a plurality of slices along a plane passing through said wave and said subject as a rotational scan of said subject about an axis of rotation.
82. A system according to claim 81 including means to store said captured image data representative of a rotational scan of a subject as data points associated with said scan.
83. A system according to claim 82 including means to assemble all data points of a scan as a three dimensional logically ORed array of said data points representative of said subject.
84. A system according to claim 81 including means to display a two dimensional view of said image data, and wherein said rotation scan means includes driver means to cause said wave transmission and receiving means to rotate about an axis of rotation, and signal transfer means to transfer a signal from said wave transmission and receiving means to a non-rotating transmission element, and wherein said signal transfer means and said non-rotating transmission element are coils, and wherein said wave is an ultrasound wave, and wherein there is included display means to display a captured image, and wherein there are image display means to display segments of data from said image in two dimensions.
85. A system according to claim 84 wherein said image display means includes thresholding means to segment data from said captured data and to display by image rotation a three dimensional view of a disarticulated part of the scanned subject.
86. A system, comprising processing means, means for performing defined functions including a step of processing data within said processing means.
87. A system according to claim 86 for reconstructing a set of subject data points further comprising means for representing said subject data points with threshold parameters and means for creating a mask of data points corresponding to data points within a subset of said set of subject data points containing points having said threshold parameters, and means for thresholding the said subset to derive a representative array of data points of said set of subject data points.
88. A system according to claim 87 including means for interactive segmentation, said means for interactive segmentation including interactive scalpel means for delineating the specific data within subject data points for thresholding, further, wherein the means provided for representing said subject data points includes clustering means, further, including erase means for erasing masking points to show underlying data points, further including color means for representing with color a subset of subject data according to prescribed criteria, further including color differentiation means for coloring different objects within said subject data points with a color corresponding to their actual color and/or a mock color and for recording a plurality of said objects for display, printing or further processing as a combined set of colored objects, further for reconstructing a set of data there are provided means to locate a boundary of points of an object contained within a set of slices of representative data and for retaining scale parameters of edge points of said object and/or of edge points a distance from the edge points of said object, further including means for representing said object with said scale data as representative of the surface of said object, further including means for reconstructing objects based upon a surface view of edge points of said object, further including means for representing the surface of an object with a scaled data set of a 3-D surface of the object, further for operative use there are provided means for representing the inside and surface of an obscured object, further including means to represent the object with adjacent material obscuring the object and for removing selected overlying material successively during a procedure.
89. A system according to claim 86 for surgery further comprising means for driving a numerically controlled device with a data set including a three dimensional representation of an object, further, means for adjusting a color map to correspond to a target display device different from a monitor display of a central part of a processing system, further a disc file and controller, a tape drive and controller, a central display monitor, an analog display monitor, means to provide R, G, B and NTSC/PAL video signals, film printer means, a printer for making record prints, and a frame grabber, and display buffer with means for connecting a video camera and for providing video signals out of the system to video based systems, and means for driving digitally controlled devices for providing solid articles and/or removing or burning away materials represented by data within the system, further including means for converting data originating with different 2-D coordinate location data to rectangular coordinate data, and means for determining and reconstructing data to real dimensions.
90. A system according to claim 86 for representing a data set further comprising means for obtaining one or more free tilt intersections across a three dimensional data set.
91. A system according to claim 90 wherein said free tilt intersection is a planar section of said three dimensional subject data set obtained after rotation of a three dimensional subset of said data set.
92. A system according to claim 90 wherein said free tilt intersection is obtained by trilinear interpolation of a detected three dimensional object data set.
93. A system according to claim 90 where said free tilt intersection is obtained by sync function interpolation of a detected three dimensional object data set.
94. A system according to claim 90 wherein said free tilt intersection is obtained by rotation of said subject data set with trilinear and/or sync function interpolation and subsequently a reconstruction of an object within said data set is obtained.
95. A system according to claim 86 for surgery further comprising means for driving a numerically controlled biopsy and/or laser unit with data obtained after three dimensional reconstruction of a suspect object and after a view of a surgeon is obtained and the said unit is driven to follow coordinates within a three dimensional data set determined by the view of the surgeon and/or data within planes of the object normal to the view of the surgeon obtained by free tilt reconstruction of a thresholded subset of scan data.
96. A system comprising means for displaying a 2-D view of transducer data, a 2-D reformatted view of said transducer data, and rotatable 3-D views of said data, said display means including a monitor.
97. A system comprising display means, means for displaying the surface of an object related to a grey scale or true color of the surface of an object and including one or more of the following means, inclusion means including means for displaying a pixel value adjacent an edge and/or a defined distance from an edge, means for rendering one or more inside surfaces and an external surface of an object, means for taking progressive views of an obscured object as if certain elements obscuring a desired ultimate view were successively removed either wholly or by a layered removal process, means for unwarping images, means for combining multimodality images as a composite image for display and/or subsequent processing, means for displaying during rotation views of internal objects within an obscuring element including means for making the view transparent, conversion means for converting a grey scale pixel into a matrix of ones and zeros for display of the grey scale pixel on a monochrome monitor or other output element displaying dots, such as facsimile.
98. A system for surgical planning and/or operation comprising, means for providing a plan for and/or execution by an element to be used in a surgical operation, such as image guided stereotactic elements.
99. A system, comprising means for displaying desired elements in combination, in color or grey scales from one or more modalities in three dimensions so that the display represents the three dimensional spacing of various elements of the composite display.
100. A stereotactic diagnosis and planning system, comprising means to determine a trajectory to a point in three dimensions.
101. A system, comprising means to locate an object in position for preplanning, such as an element used in connection with surgery.
102. A method in accordance with one or more of the described inventions.
103. An apparatus in accordance with one or more of the described inventions.
104. A system in accordance with one or more of the described inventions.
Description


Background of the Invention

The inventions described herein relate to prosthetic devices and to the method and apparatus for manufacturing devices, such as the prosthetic devices for which the inventions were developed. Various inventions described herein are of a broader scope and in a more general manner relate to methods and apparatus for detecting, displaying, copying and making other objects and for controlling operations of tools for surgical, diagnostic and treatment methods, including destruction of undesirable tissue. In addition, the implementation of the inventions herein relates as well to automated diagnostic and sensing apparatus and methods employed for surgery and diagnosis.

This invention was developed to assist in pre-surgical planning and in making prosthetic devices for implantation in the body of a patient. A typical prosthetic device which can be manufactured by utilizing the inventions of the present application is a bone implant. In order to manufacture this device we have created a new system capable of duplicating real parts and modifying the parts before manufacture. Files can be kept of these parts for combination with other imaginary parts at a later time.

While we have chosen to illustrate a femur extension, the object could be another bone or segment or duplication of particular tissue structure, including a skull cap or other facial implant used in facial reconstruction. Additionally, the prosthetic device may be a tissue implant, or support for tissue, as may be used by plastic surgeons. The methods may be employed in the treatment of diseases, such as the eradication of cancer, with or without surgery, as well as in the diagnosis of such diseases and treatment by precise placement of tissue or instruments. The method used in manufacture employs various elements, some of which are currently in existence and manufactured by others, which have been made to interface with and become in an overall combination part of the inventions of the present application. The applicants have developed an improved three dimensional imaging system for reconstructing three dimensional views of a human body part from slice data obtained from a CT transducer system. Computer systems which have been created earlier by others can display three dimensional images. Some commercial CAD systems, perhaps 80% of mainframe packages, have something called "true" 3D generation which works in 3D by specifying z coordinates as well as the normal x and y coordinates. Some of these systems allow shading of the two dimensional surfaces displayed or allow for generation of wire meshes used for finite-element analysis. These systems normally can deliver coordinates for numerical control systems.

On the other hand there are image creators for video display of data captured by CT transducers, and these systems are available from General Electric and others in the U.S.A. These systems are also known as 3D systems, and are generally based upon the system known as Display 82 which is described herein. These systems utilize voxel representation, instead of the line systems used in CAD. We have combined our CT reconstruction system with other system elements and devices, data and methods to obtain and reconstruct images from (Nuclear) Magnetic Resonance Imaging transducers, PET systems, etc. and developed new transducers to accomplish our purpose. We have also created a way to reconstruct images as well from ultrasound devices and other transducers which do not capture regular slice data. Data has been captured from a subject, usually the patient. It also has been captured from another subject, such as a ceramic tile or even a working drawing. Two dimensional images and two dimensional copies have been created in the past by the well known systems originally developed by the Xerox Corporation. Implementation of the objects of the systems described herein permits three dimensional copies to be made of real subjects from data in three or two dimensions for viewing as a two dimensional representation and creating the replicate of the subject modified by design as a three dimensional object. Transducers such as CT, MRI, ultrasound and video cameras are offered in commerce. Likewise offered in commerce are so called CAD and CAM systems, computer aided design and computer aided manufacturing systems, as well as numerically controlled machine tools as has been mentioned. Some of these commercial elements may be part of and form a part of the overall system as part of combination inventions described herein. However, the present state of the art has not implemented the three dimensional capabilities which are required and are a result of our endeavors.
Summary of the Inventions

Overall the system of the present invention enables the user to create or reconstruct or copy a three dimensional object. The system is implemented in cuberille format and enables data captured from real subjects in three dimensions, with real three dimensional surface data and solid three dimensional data, or from two dimensional representations, to be viewed in two dimensions and created in three dimensions with modifications made as needed or desired. The system can copy a real subject or part thereof, create a solid model, and combine real and imaginary solid parts to create a new object. We use data bit or byte map primitives which are created from files of real or mathematical model data. Building blocks allow solid modeling of desired features of the object. The object, which may be a bone or body part, may be viewed through a transducer in slices or created by utilizing the solid modeling elements of the system; the user may, if desired, modify it, create a new element image in three dimensions, and then make the part, by directly driving an output device, or indirectly through a CAD three dimensional driver for a CNC machine, with the capability of modifying the part during the manufacturing process to add elements to the part to be manufactured, to subtract parts, to assemble various parts and to take them apart.
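The cuberille-format combination of real and imaginary solid parts described above can be sketched in miniature. The sketch below is illustrative only and is not the patent's implementation: it assumes a solid is represented as a set of occupied integer (x, y, z) voxel coordinates, with Boolean set operations standing in for the AND/OR/erosion combinations recited in the claims; all function and variable names are hypothetical.

```python
# Hypothetical sketch of cuberille-style solid modeling: a solid is a set
# of occupied unit-cube (voxel) coordinates, and parts are combined with
# Boolean set operations (OR = union, AND = intersection, subtraction).

def box(x0, x1, y0, y1, z0, z1):
    """Create a rectangular solid primitive as a set of voxel coordinates."""
    return {(x, y, z)
            for x in range(x0, x1)
            for y in range(y0, y1)
            for z in range(z0, z1)}

# A "scanned" part and a "modeled" extension created mathematically.
scanned_part = box(0, 4, 0, 4, 0, 4)        # 4x4x4 = 64 voxels
modeled_extension = box(0, 4, 0, 4, 4, 8)   # stacked on top, 64 voxels

# OR (union) fuses the real and imaginary parts into one object.
combined = scanned_part | modeled_extension

# Subtraction carves a cavity out of the combined object.
cavity = box(1, 3, 1, 3, 2, 6)              # 2x2x4 = 16 voxels, fully inside
final_object = combined - cavity

print(len(scanned_part), len(combined), len(final_object))  # 64 128 112
```

The surface of such a set (voxels with at least one empty neighbor) is what would be handed to a display buffer or NC output stage.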

Objects can be created from newly acquired primitives, from a library of files, from parts or combinations of files, or from mathematical models in our system, in accordance with the various methods which are described and illustrated. The device which is manufactured may be created by way of machining of a substance, such as plastic or metal, or by creation of a mold or mold precursor from which many duplicate parts may be manufactured. The system allows one to scale parts in whole or in part to different dimensions from the original, or to faithfully duplicate the original, by machining, molding, stamping or otherwise forming the object to make the desired object. The system can be implemented for basic performance in microcomputer or workstation systems, as well as on mainframes and parallel processors for faster performance and greater combination capability.

As a simplistic example of the capability of the system described herein, a clay cup may be thrown down on the ground, broken, and an unbroken duplicate of the broken original may be produced in metal, plastic, glass or clay. A more practical use of the present invention is the creation of bone inserts to replace shattered bones, as the cup was created, or as an insert for an extension of a shortened femur. Hip replacement parts can be manufactured.

A more generalized manufacturing application of the system described herein could assemble machine parts from a library of stored data, or from solid models, assemble them together, determine whether they properly interact, and then make all parts from the assembled items to specified tolerance. A shattered link from an old broken device can be recreated by the system, or a new object created from the mind, from copying real objects from nature, or by a combination of both. The system can be used for computer aided diagnosis and computer guided surgical procedures, some of which are described herein.

In addition to the description of this invention, there is a requirement for what we call "interactive segmentation" of elements requiring separation from artifacts, enabling the creation of true to subject colored representation of unseen objects; creation of elements for numerical control and laser resection, by free tilt of data sets and particularly of 3-D sub-sets of an entire data set; a movable view for the surgeon in three dimensions; creation of real time views of surgery; and enabling the direct drive of surgical lasers and safer and more efficient preplanning of treatment and surgical procedures. There are many more improvements which we have had to develop, which are described in detail, for working with images and for creating displays in various formats, in color and in grey scale, and for creating objects and for controlling devices for making objects and burning away tissue, and for comparing and merging different modalities in the process. All of these improvements make it possible to detect and portray tumors and other soft tissue in correct anatomical free space for surgical pre-planning, and enable better use of stereotactic frames for the surgical process. With the use of these developments we have been able to locate tumor growths by direct thresholding and interactive segmentation and to display small structures within the brain, such as arteries which are connected to a tumor, which we were not able to detect with the original modality. An inclusion function allows display of composite or combination multimodality images, in 3D and in color (hue), allowing better planning of stereotactic operations and diagnosis. In addition, in order to make our inventions available throughout the world, the invention relates to image transfer apparatus, and more particularly to apparatus for conveying television and computer images by voice grade telephone line by dividing the signal into separately transmittable digital attributes.
It is well recognized that voice grade telephone lines are readily available throughout the world. However, the transmission of images to various stations or receiver telephone installations is slow and time consuming. The bandwidth of voice grade lines varies substantially, but it is low compared with various digital alternatives or cable lines which have a much higher bandwidth. The general solution is to increase the number of higher bandwidth lines, but that is expensive, and it is unlikely that there will ever be a universal standard. There exists certain teleconferencing apparatus, such as the equipment known as Visitel, a small telephone receiver device currently manufactured by the Mitsubishi Visual Telecom Division, in Santa Clara, CA, U.S.A., which with specialized equipment can transmit an analog 95 x 95 pixel grey scale image at 1.7 hertz through ordinary telephone lines. Other more complicated specialized equipment exists, such as the Kodak still video equipment.

However, for medical applications, which are of special interest, and for other applications, this equipment is not suitable, as it is incapable of handling the transmission of appropriate images. Other photophone equipment is either unsuitably slow, does not provide full color, or is unduly expensive to purchase and operate.

Accordingly, the purpose of the apparatus described is to enable the large installed base of personal computers to act as visual phones, and television equipment to be utilized to transfer images originated from a TV camera and/or from computer generated graphics to locations about the land over voice grade lines. In addition, the purpose of the apparatus described is to provide a transmitter and receiver which can be installed at a location and be able to transmit through voice grade and other wider bandwidth lines and through switching circuits.

This overview of the inventions' use will become more apparent from a review of the claims appended hereto, which are understood to be part of the description characterizing the novel and useful elements of the inventions herein, and from the drawings.

Figures 1A, 1B and 1C are views of a revised hip made in accordance with our inventions, as seen by one viewing a display of the reconstructed hip. The three views show three different views of the same hip and femur connection.

Figure 1D is a view of a reconstruction of veins and arteries of a heart, enhanced by arteriogram dyes.

Figure 2 is a plan view of a reconstructed femur replacement part.

Figure 3 is a schematic view of a femur display during a process of making the part of Figure 2 and Figure 4.

Figure 4 is a schematic perspective view of the replacement part of Fig. 2.

Figure 5 is a schematic illustration of a configuration in accordance with the present invention.

Figure 6 is an illustration of the format supplied to a CAD-CAM system, illustrating how cuberille data is supplied to CAD format.

Figure 7 is an illustration of a data descriptor used by the boundary detection portion of our system.

Figure 8 is a flow chart of the preferred embodiment of the system of our inventions.

Figure 9 is a portion of the code used to create a duplicate of the femur part shown in Figure 1A.

Figure 10A represents a broken tile, which with the system and methods of the present invention can be replicated in original form without breakage, as shown in Figure 10B.

Figure 11 is an illustrative example of how small portions of a cuberille file can be used as building blocks, and combined with other data to form the basis for reconstruction and construction of a desired three dimensional object.

Figure 12 illustrates how CAD line data having three dimensional values can be converted into a cuberille model.

Figure 13A shows six line views of an object which is to be converted into a solid model.

Figure 13B illustrates steps involved in creating the three dimensional model from the line drawings of Figure 13A.

Figures 14A, 14B, 14C illustrate schematically sensor elements for locating the xyz coordinates of images of a free floating transducer, such as for ultrasound.

Figure 15 represents schematically an ultrasound system.

Figure 16 represents schematically an optical transducer system.

Figure 17 represents a free wand coordinate measuring transducer system.

Figure 18 is a schematic representation of a free tiltable green line and data set.

Figure 19 illustrates schematically a view of the monitor of a laser resection unit having a numerically controlled machine controller for positioning and firing a laser for laser resection.

Figure 20 illustrates very schematically a biopsy, and alternatively a laser resection.

Figure 21 illustrates a method of locating the pixels within the plane of a three dimensional reconstruction of a lesion in order to permit free tilt views of any cross section of the lesion to be viewed and used for robotic controls.

Figure 22 illustrates schematically an astrocytoma located in the brain, in which the brain dura is removed and the tumor illustrated in anatomical free space in relation to the skull, the falx, and interconnected arteries.

Figures 23A and 23B illustrate the usefulness of retaining the grey scale data adjacent the object of interest in order to qualify the adjacent material and utilize it to reconstruct adjacent material of interest.

Figure 24 illustrates the application of sliced data sets showing the inside section of an object for presurgery.

Figure 25 illustrates interactive segmentation and thresholding.

Figure 25A is a step by step illustration of the structured elements to reach a three dimensional reconstruction with boundary detection using the MC and ET files after threshold parameter review and interactive segmentation of those slices which require it.

Figure 26B illustrates the coupling of major system hardware elements.

Figures 27A-D illustrate an apparatus for obtaining a full data set which can be created at the time of surgery, or before, and in which:

Figure 27A shows schematically an ultrasound transducer, and

Figure 27B shows schematically the transmission of image data from a primary to a secondary coil, and

Figure 27C illustrates an exploded view of the coils of Figure 27B, and

Figure 27D illustrates the transducer of Figure 27 in connection with preplanning and surgical equipment.

Figure 28 represents schematically the transmission end of a transmission system.

Figure 29 illustrates the frame store of the transmission system.

Figure 30 shows the receiver (the transmitter in reverse) of the transmission system.

Figure 31 shows exemplary TV signals.

Figure 32 illustrates receiver and transmitter synchronization.

The unpublished registered (7-27-87) U.S. copyright work TXu302 471 of Lynn L. Augspurger may be referenced for detailed coding which may be used to implement some aspects of the inventions described fully below and in the drawings. There is represented new code which performs BX, BN, CN, BD, DS (from DISPLAY 82 with extensive modifications described herein) as well as BF, a CAD formatter which may also be employed to drive any numerically controlled machine having such input, and MODEL; and the appendix herein below illustrates C code for SG, an interactive segmentation routine fully described herein. These programs may be implemented in hardware or software or a combination thereof to implement the system and method aspects of the inventions. Referring now to the drawings in greater detail, in order to assist in the understanding of the present inventions, the manufacture of a bone-segment replacement will be described.

A transducer, in the present first example, senses the presence of an object to be copied, or an object to which additions are to be made during the manufacturing process. In the first example, the object to be copied is a bone from the hip of a subject, a patient, shown in Figure 1 (A, B, C), and the object to be manufactured is a prosthetic device, a bone segment extension shown in Figures 2 and 4, which includes as part thereof a portion of the femur of Figure 1 and an extension to which an addition is to be made for use in connecting the bone segment replacement to the bone in the patient and extending the spacing between the existing remaining bone segments.

The resultant prosthetic device is a bone segment replacement which is to be attached to the remaining segment in the patient from which the original bone segment copied is removed by a surgical procedure.

In the first example the patient illustrated is assumed to have had a foreshortened femur, which foreshortening arose during growth. The bone did not grow as long as would the normal bone, so the gait of the patient was affected. An extension of the femur is desired. The procedure will be to identify where the segment is to be inserted, make a matching extension segment insert prosthetic, perform surgery, remove the original segment and insert the extension segment into the remaining portion(s) of the femur in the patient.

A CT scan of the patient's femur is made. A three dimensional reconstruction of the CT scan is made, which may be displayed after reconstruction as shown in Figure 1. A determination is made where the femur will be cut (sliced) by the physician. The place where the femur is to be cut is identified in the process of three dimensional reconstruction as shown in Figure 3. A slice through the reconstructed two dimensional view of the image of femur 14 is made, as the slice will be cut during prospective surgery planning, by subregioning, clipping or contour tracing frames 11, 12, 13, 16. The portion of the bone which is "cut away" from the patient is placed in computer memory, and that part is used to construct the model for the prosthetic device. The copy of the original bone part 15, framed by frame 16, which has been cut away is displayed at face 150. After decisions are made as to what is to be added, a reconstruction of the new part, shown by Fig. 4, may be displayed. In the present first example, an extension 17 of 3 centimeters will be added to the "cut away" part 15, making the prosthetic insert 3 centimeters longer than the original. In addition, a mating extension or addition (in the form of a tubular insert 18) to mate with the bone to which the extension is matched (which will remain in position in the patient) is added to the extension 17.

The surgical procedure will bore a matching hole in the remaining bone to mate with the extension. The tubular insert will be inserted within this bone and bonded in place. The extension will match the slice so that a matching bearing surface between the original bone part and the extension will be formed, and the parts bonded into place. As a result the new femur will have part original bone, and part an extended prosthetic portion matching the part of the original femur which has been removed, except that it is essentially 3 centimeters longer, with a support reinforcement tube 18. When the surgery is completed the patient will have a more even gait.

For the purposes of illustration in the first example we have illustrated the addition of the extension to the femur. The extension was manufactured as an insert between two parts. The extension could be only 3 centimeters long, but with faces matching on each end which mate with the original bone, and with the tubular insert formed on each end and with an aperture which, as shown in Fig. 4, could extend through the insert. Alternatively, half of the femur could be removed and a ball connection made at one end to another bone, and an extension achieved by a longer section with the tubular insert. The tubular insert is illustrated to allow for fluid passage through the insert, and to permit cellular growth throughout the passage. The insert is preferably made of a strong material acceptable in the body without rejection activity, such as stainless steel, for rigidity. Without live bone in the femur extension the bones are not expected to "knit". Plastic "bone" or "tissue" inserts for facial reconstruction are within the scope of the inventions, and they may be chosen from substances which allow adherence of living tissue to the insert. The insert of Figure 2 could be a nose cartilage replacement or a skull cap reformatted to conform to the new surroundings.

As will be discussed hereinafter, the method of manufacture of the prosthetic device has other aspects which are useful, and further examples will be illustrated. Initially however, the reader will be interested as to how the prosthetic device and other devices are manufactured.

For the sake of illustration of systems, reference may be had to Figure 5. A CT scan may be made by many devices marketed throughout the world today. It is within the scope of data capture that the transducer may be a computed tomography or magnetic resonance imaging device, an ultrasound scanner, positron emission tomography or another form of tomography, or another data capture transducer 21 that can create machine readable images of the properties of real objects or subjects 22, such as human organs and bones. In addition, some transducers can be employed to capture data from inanimate objects. For the purposes of our reconstruction of bones, we prefer that a commercial CT scan be made of a patient with close spacing between slices. A range of slice spacings is available with current CT transducers. For certain purposes the CT is normally set for diagnostics with a range between 1 and 10 millimeter spacings. The CT computer system 23 captures data, and the slices captured may be displayed in 2D form on monitor terminal 25. This same terminal may display the images we create. Reproductions of the images can be printed by the graphics printer 26. The larger the spacing, the lesser the quality of the 3D reconstruction. We prefer to reduce the intensity of radiation, and work in the lower range of spacings, preferably less than 3 millimeter spacings, most desirably below 1.5 millimeter spacings, even down to a range between .5 and 1.5 millimeter spacings. As the average pixel size is 1 to 1.5, we prefer to match the pixel size to the millimeter spacing between slices, in the example 1 millimeter spacings, which is the typical pixel size. The output of a CT scan is a pixel by pixel definition in raster format of each slice through the body of a patient, expressed as integers. Today, commercially, these slices are stored via computer 23, having a processor and memory, on disc or magnetic tape memory 24, and can be displayed on
a graphic display terminal 25, a slice at a time. Pseudo X-ray films (photographic type films similar to X-ray films) can be created by printer 26. Certain skilled radiologists can read several slices and imagine what a reconstructed image would be. Our system, with programs resident in the system 29 (later referred to as 201 in other drawings), also contemplates manufacturing capability via the numerically controlled tool 27 as an output device. Input may be from reading magnetic tape having images captured by a transducer 21, which may be the CT, MRI or other transducer, or directly from a wand transducer or other transducer system which is described later. We prefer to implement the central computer controls on a workstation 29 (coupled for interaction with the keyboard and monitor 25), such as an IBM AT with an Imaging Technology, Inc. Series 100 image card, with a tape input, and a high resolution monitor such as an NEC Multisync, illustrated collectively as 25, 29, 24, and with connection to an NC machine tool 27 (representing all forming devices) for completing the manufacture of the object 28 copied.

There also exist commercially, we understand, other computer programs which will fill a memory of a computer system with slice data and reconstruct a three dimensional image for display on a commercial graphics display computer, such as Display 82. We understand that the commercial three dimensional reconstruction programs are able to display images for review by radiologists and surgeons for surgical planning. At least some such systems are based on the program called Display 82 developed by Dr. Gideon Frieder, Dr. Gabor Herman and others. A description of Display 82 may be found in a manual written by Jayaram D. Udupa called Display 82 - A System Of Programs For The Display Of Three Dimensional Information In CT Data, published by the Department of Radiology, University of Pennsylvania, in April, 1983 by the Medical Image Processing Group as Technical Report No. MIPG67. That document, while not necessary for the understanding of the present invention, is incorporated herein fully by reference for the purpose of illustration of a prior art program which may be employed in the present system with modification to obtain certain of the important results, such as creation of three dimensional models and copies. However, the program BF accomplishes the results illustrated and differs from and incorporates important improvements with respect to the prior Display 82 system. We have rewritten such a program in part along the same lines, but with higher performance and improved characteristics as a part of our development. We have developed our own three dimensional reconstruction program illustrated by the programs, examples and description herein, of which the important features are illustrated by BF, the CAD Formatter and MODEL. We also employ BD and the interface routines, equivalents of those of Display 82, used to interface images captured on a CT device and saved as slice data. The images may be straight grey-scale CT data, or created images such as voxel based data. A voxel is a rectangular parallelepiped, usually with a square cross section, representing the size between sample points of the sensed object, often a simple cube with eight vertices representing data locations.

For the purpose of commonality of description, we illustrate as a preferred embodiment a voxel based system in which the typical CT slices have an x by y frame of pixel values which are the recorded intensity. We know the coordinates because that is where the intensity values are stored. We accord x and y values which are equal directly from the CT data. By program manipulation we assign a z value equal to x and y, or alternatively a z value based on slice separation, or, dependent upon the particular transducer, a z value dependent upon slice spacing. In typical CT data, each voxel is expressed with certain quantities. We include the three xyz values indicating the spatial position obtained from where the data point is stored. From the original data we obtain an integer called the density from the CT data, which we use only in the thresholding step.

We prefer to place our program in a terminal or workstation 29 which can directly communicate with the transducer 21 (in CT via the CT system 23, which alternatively is an MRI scanner, but via RS-170 through the Imaging Technology Frame Grabber Series 100 in a typical ultrasound system). Our system can read tapes from these machines. From the file header of the tape, the pixel size, as a dimensional number (real number) included in our data descriptor and expressed as a floating point number, is obtained. The resolution is also obtained from the header and expressed as an integer, as is the size of the slices (also expressed as an integer). Utilizing the density integer, which may be some scalar property, it is possible to assign binary data to voxels having a "density" within a range, a 1 for a density within the range, and a 0 for densities outside the range. In this manner a bone may be segmented from soft tissue in a thresholding process.
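By way of illustration only, the thresholding step just described might be sketched in C as follows; the names threshold_slice, lo and hi are illustrative and are not taken from the actual BF code.

```c
#include <stddef.h>

/* Sketch of the thresholding step: voxels whose CT density falls inside
 * [lo, hi] are marked 1 (e.g. bone), all others 0. Illustrative only. */
static void threshold_slice(const int *density, unsigned char *binary,
                            size_t n, int lo, int hi)
{
    for (size_t i = 0; i < n; i++)
        binary[i] = (density[i] >= lo && density[i] <= hi) ? 1 : 0;
}
```

In this sketch the density range plays the role of the "density level and half width" entry discussed below in connection with Display 82.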

By using the x, y, z coordinate spatial position integer values, a buffer memory may be filled with representations of the slices, slice by slice, in a three dimensional array. Interpolation can fill the spaces between data slices, and a full three dimensional image created. We would prefer to define a voxel with grey scale data; however, in view of the size of present computer capability, we define a voxel as 1 when it is within the density range and 0 when it is not; thus in our less powerful minicomputers, a single bit may be assigned to each voxel. For sake of ease of illustration, the code illustrating such a version utilizes a single bit assigned to each voxel. It should be understood that with larger memories than normally available for use with today's minicomputers, we contemplate and illustrate the assignment of bytes to a voxel. With such an assignment we can use the words to retain the original grey scale data, and make possible color assignments to the voxels. Byte description is also of assistance in constructing data into 3D objects from two dimensional transducers. In the current and preferred embodiment illustrated, which is the best mode for implementation in smaller computers, the byte of the more powerful version is eliminated in favor of a bit, and during preprocessing the grey scale of the original data is discarded. It is retained in the byte information for larger scale memory implementation. With the larger scale memory, pseudo color may also be assigned to any individual data byte such that the display of particular pixels can be colored. Thus for a heart, we could code the veins and arteries blue and red respectively, and the color is retained at the assigned location for display on rotation. In the current illustrated embodiment, pseudo color is assigned on the basis of intensity, which enables the viewer to better distinguish three dimensional depths on two dimensional image displays of the three dimensional images created by the system.
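The single-bit-per-voxel storage described above might, purely as a sketch, be handled with helper routines such as the following; set_voxel and get_voxel are hypothetical names and are not part of the copyrighted code.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of single-bit-per-voxel storage: eight voxels share one byte of
 * the buffer, so a binary scene needs one eighth the memory of a byte map. */
static void set_voxel(uint8_t *buf, size_t index, int value)
{
    if (value)
        buf[index / 8] |= (uint8_t)(1u << (index % 8));   /* set the bit */
    else
        buf[index / 8] &= (uint8_t)~(1u << (index % 8));  /* clear the bit */
}

static int get_voxel(const uint8_t *buf, size_t index)
{
    return (buf[index / 8] >> (index % 8)) & 1;
}
```

A byte-map variant would simply store one byte per voxel, retaining the grey scale or a pseudo color code as the text explains.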
In byte systems, the primitives are byte maps corresponding to the bit map primitives of the illustrated bit system.

For the sake of definition, the slices are stacked with the x and y definitions assumed to be perpendicular to the z-axis. As with the Display 82 system, the images can be rotated about predetermined axes. While some systems allow rotation only about a few predefined axes, our system allows for rotation about arbitrary axes. Some systems allow only for rotation about a longitudinal axis, about a vertical axis, and/or about a third arbitrary axis.

Preferably there are at least 20 slices in each grey-scale image file scene. Each slice is a block of data. The exact format of each slice is usually considered proprietary to the specific CT manufacturer.

In the known three dimensional program Display 82, the tasks accomplished by the program included converting reconstructed data from GE, EMI, Philips and the Dynamic Spatial Reconstructor of the Mayo Clinic into the procedure of the program, in a manner similar to our own procedure. For the purpose of this description, information captured by the "image transducer" (such as the CT scanner) is stored as "original image data". Such original image data is then transposed by a "transposition conversion preprocessing" step into data which may be accepted by the "reconstruction" system. In our view, the commonality of the "reconstruction" system with a standard initial data format requires only the preparation of a "transposition conversion preprocessing" file adapted for the transducer which "captures" the original image data. This is an interface program, of the kind illustrated in the microfiche, which is representative of the prior art interfaces which may be used to interface original data into our system. In Display 82 it created a file known as DT, which required further preprocessing.

In the reconstruction system, a preprocessing step known as subregioning may be included in order to reduce the time for further processing. In Display 82 the slice images were preprocessed in a step known as subregioning, where parts of the object not lying in a user specified setting were eliminated. Here we distinguish subregioning from clipping. In order to improve performance, the subregion must be a square. If subregioning is used, it does not eliminate preprocessing.

We use clipping preprocessing, also known as isolation. In isolation, a rectangular subregion 16 in each of a sequence of slices is created in a process known as clipping. Our system utilizes a clipping system as the first step of the reconstruction process.

The next step of the reconstruction processing in Display 82 was a process known as interpolation. Interpolation in the Display 82 system created new slices between clipped ones so that the resulting slice thickness equals the pixel size. Our system improves the interpolation step and others of Display 82, reducing the processing time, as will be described, and in addition adds other functions. However, in order to accomplish the end result, the same kind of system used in Display 82 may be employed. In this respect the creation of at least some of our own ultimate results will be obtainable; however, the elapsed time will not be satisfactory with Display 82.

Thresholding by image segmentation is performed in BF in our process. In addition, in perhaps 5% of cases this is performed after a mask (a three dimensional binary image) is created. In the Display 82 system, entry was required of a density level and a half width for thresholding. For masking, Display 82 required a trackball and function buttons of a graphics system, such as the Data General Eclipse S/200 with COMTAL image processor.

Thresholding may be accomplished in our system by discarding data on the basis of density. Binary data is assigned to voxels, a 1 to those within a given density range, and a zero to those outside of the range. Display 82 uses a similar process. The thresholding acts as a method of segmentation, as it separates bone from soft tissue. Thus the clipped sequence of slices is interpolated and then segmented in Display 82. As in Display 82, the mask of our system may be optionally used to restrict segmentation to any arbitrary subregion for a cutting and incision effect useful in surgical planning, and we use it to separate regions of similar or identical properties or densities. Our system differs from Display 82 in that the clipping (isolation), thresholding and interpolation processes are accomplished in the BF routine, which allows masks to be created from original interfaced data, that is, data created by transposition conversion preprocessing. For an understanding of the data flow through the system, reference may be had to Figure 8, where the names of the programs are illustrated in a flow sheet. In that drawing "T" represents the transducer system, CAD an intermediate optional CAD system, and CNC a forming tool. "O" represents the object formed from the subject whose data is determined by the T step.

We combine the functions accomplished by three steps (DT, BX, BN, (CN)) in the BF step of our process, known as general preprocessing, which optimizes the procedures by eliminating many unnecessary calculations, including elimination of unnecessary interpolation and of masking after interpolation. However, as shown in Figure 8, the Display 82 system may be used as a part of the system described, with modification as there shown. By way of example, our selective interpolation used in BF does not interpolate on data density values outside of the selected range (when it can be determined by an advance decision sequence that the data is inside the range, or outside the range, in both of which instances interpolation is eliminated); we have eliminated the Display 82 BX creation and the reading of BX into BN by directly reading data one time and during that process creating a BF file from the interfaced (not original) data file. We read only data within the clipped region within the frame (from disk, by computing word and block offsets), directly in the clipped region, which is then used for computing the 3D binary scene.
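The advance decision sequence described above might be sketched as follows; classify_pair is a hypothetical helper which, given two bracketing slice densities, returns the binary value directly when interpolation between them can be skipped (assuming, for illustration, linear interpolation between slices, so that two samples on the same side of the range cannot produce an in-range value between them).

```c
/* Sketch of the selective-interpolation decision. For two slice samples a
 * and b bracketing a new position: if both lie inside [lo, hi] the binary
 * result is 1; if both lie outside on the same side it is 0; in either case
 * no interpolation is needed. Only a mixed pair must be interpolated. */
static int classify_pair(int a, int b, int lo, int hi, int *needs_interp)
{
    int a_in = (a >= lo && a <= hi);
    int b_in = (b >= lo && b <= hi);
    *needs_interp = 0;
    if (a_in && b_in)
        return 1;                              /* both inside: emit 1 */
    if ((a < lo && b < lo) || (a > hi && b > hi))
        return 0;                              /* both outside, same side: emit 0 */
    *needs_interp = 1;                         /* otherwise interpolate */
    return -1;                                 /* value undecided */
}
```

Skipping the undecidable cases only when both samples are on the same side of the range is what makes the shortcut safe under linear interpolation.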

To illustrate one aspect of the power of the BF system by the second example above: in Display 82, 20 frames after interpolation in BX creation produce 60 frames in the BX file, and these 60 frames have to be reread in the BN process. In the BF process, 20 frames produce a BF file directly, which is equivalent to the BN file. In addition, in our system, masks are created from the original data, while their system requires that masks be created from the interpolated data, requiring substantial additional processing.

Accordingly, the output of a direct thresholding of original data (as opposed to the interpolated data) is a truer and more direct representation of the spatial pixel point orientation (even though not necessarily apparent upon visual observation), so that a physician's confidence of point following in robotic or computer aided surgery (as in stereotaxis) is enhanced.

As an example, in the preparation of the prosthetic device, an arbitrary subregion on the image of the frame 16 is retained by clipping or isolating in the 2D view shown in Fig. 3. Other frames 11, 12 could isolate other portions of the subject. Then the bone image, if it has a density within the defined bone range, will be isolated; otherwise it is discarded in our thresholding process, which is still a 2D operation at this stage of reconstruction.

With this 2D preprocessing, a selected set of data is created for 3D reconstruction. After all slices have been processed, a 3D scene is created. Utilizing a boundary detection algorithm, the outline of the retained "bone" is outlined as the organ of interest as a representation of xyz coordinates of each voxel (optionally along with shading information used in the display), and the enclosed volume computed as an optional output. For the purpose of illustration we illustrate Display 82's boundary detection algorithm. For model making we have chosen to include the shading process, and instruct it to be ignored in the model creation.

Boundary detection (BD) is the last phase of the process known as preprocessing.

Now arbitrary views can be displayed by using a display routine, DS, which computes shading information (if necessary for display) and deletes hidden surfaces. Movie files can be created by saving individual views and replaying them rapidly at a later time. In the preferred system for direct replication, DS is modified and the xyz surface is directly passed to CAD or CNC without shading. By way of example, the modification of DS shown in the CAD formatter can be implemented in a DS routine, which is sometimes referred to as DSCT.

In Display 82, as with other known systems, the entire sequence of processing is for the purpose of video recording and demonstration, and accordingly there are routines for displaying the surface of an organ, with transparency if desired, as a movie of selected slices taken about a user-specified axis of rotation. With our own system, a similar demonstration is possible.

For the sake of simplicity of illustration of how to make a replica of a bone or other object, we will describe the original data as being from a typical CT unit.

At this time, in order to enhance processing transformation of original data values to final object output values, we employ the steps described for a 3D display (with the elimination of parts not necessary for duplication of a part without display, and elimination of other options) through the output of BD as the "3D reconstruction" process which creates xyz values for all of the faces on the surface.

This process can be accomplished by Display 82 or our improved 3D reconstruction process. A similar point may be reached by a process known as a back-to-front processing step. Other 3D reconstruction image display systems exist or at least have been proposed. One system which has been described is that described by Dr. Gideon Frieder, Dan Gordon and R. Anthony Reynolds, believed working with Dr. Gabor Herman, in a paper of the Department of Computer and Information Science, Moore School of Electrical Engineering, University of Pennsylvania, Philadelphia, PA 19104, entitled Back-To-Front Display of Voxel-Based Objects. That paper is incorporated herein by reference, as it reflects the efforts of the primary innovators of extraordinary ability in this field. At that time they discarded the developments of Display 82 which they had worked to create.

All such "3D reconstruction" processes of Display 82 create an xyz value for all of the faces on the surface to be displayed (sometimes with hidden surface removal) in order to display them on a graphics video system. The GE systems which are believed to have incorporated Display 82 in their main processor normally used original data captured from a transducer for the 3D reconstruction process after passing through the interface, DT, BX, BN, etc., in order to obtain a display of the 3D reconstruction.

In order to accomplish the manufacture of an object, we have appreciated that we can utilize the base xyz values of the faces on the surface of an object to be duplicated, and we can also utilize the saved BF files, which are truly filled 3D images representing not only the shaded surfaces but all internal xyz information captured from a subject or created by MODEL, or a combination of the same.

In the BD-DS system we reroute xyz data out of DS for use in image construction, using information in the Z buffer of the computer system, which information is rerouted to the display buffer. If a system were implemented with back-to-front preprocessing, we would place xyz data in the display buffer, and if display is desired reroute it to compute face intensity levels. In either case xyz data resides in the display buffer. From the display buffer the xyz data is moved to a file (which may be like a movie file) for each of the views (6 or more for a complete model). The xyz information for these views is passed to an interface output CAD formatter, which can be employed in the DS routine, reformatted to interface with a format acceptable to the CAD system, and placed in an output file which is a format free output file, and converted into an ASCII format which may be placed on magnetic tape or other transfer device in the general format illustrated in Figure 6, and specifically in Figure 9 for a portion of the data format for the construction of a bone like the femur illustrated in Figure 1.

In this process we convert image integer data into ASCII xyz data or another format of the tool, such as a milling machine. Our construction system works with data related to the surface of the object to be created. In order to create such a surface, it will also be possible (in the alternative, or in combination with construction processing) to create a graphic image for display on a CRT terminal. So that a comparison can be made with features present in Display 82, the program description used to illustrate the present invention has utilized common identifiers present in Display 82, such as BX, BD, CN and other overall files. A careful review of the system will show similarities of resultant function; however, processing speed is increased.

Display 82 had output files which were called the same as the input files unless otherwise named at the start of execution of a callable routine. For ease of illustration, we have provided our program in a higher level language, Fortran, which may be executed by various systems which have suitable Fortran compilers, such as the DEC 850 from which the code was printed. The file names are those which were used in Display 82 with respect to functions present in Display 82, even though some of the functions have been improved, as will be appreciated upon reading this detailed disclosure.

In Display 82 there are several main callable routines with included subroutines for performing certain functions.

A DT routine will perform transposition conversion preprocessing of data captured from the transducer and create a DT file. A DT routine could include known routines such as GLTOBX and others described in Display 82. For further processing the input data is converted into a DT (or BX) file. These files are organized randomly; a pixel value is stored in integer words. Image data is stored in memory row by row and in slice by slice order.

A "scene" is those values of data captured from a three dimensional object by the transducer, in a CT by capture of spaced slices. Each slice has x and y data. For each slice a column number (x-coordinate) and a row number (y-coordinate) are assigned. In many transducers the slice images of an input file to the reconstruction system start with their pixels distributed in a circular region. A squared up image from each slice is output to the DT file. The user specifies the starting row and column. The output of this preprocessing by transposition conversion is a DT file.

The first step of a reconstruction is an input file, designated as a BX file. The BX input may be a DT file or an already prepared BX file which is reinput to the BX routine. Alternatively, and in our preferred embodiment, we go directly from transposition conversion preprocessing to BF. The BX routine performs BX input file preparation and functions to clip slices to contain just a subregion of data corresponding to an item of interest. A rectangular frame such as those described as frames 11, 12 in Fig. 3 is spaced around a cross section and only that part of the data within the frame is saved. Since the input data to BX has a density integer, a range of densities can be selected for display. The BX routine also does interpolation to create voxels which are cubical. The number of slices output by a BX routine may be greatly increased over the original input slices because of the linear interpolation which is performed by BX. For example, a linear interpolation of a 20 slice scene with a slice interval (spacing between slices) of 3 mm and a pixel size of 1.5 mm upon interpolation has (((20-1)*3)/1.5)+1 or 39 output slices. As the spacing increases and pixel size decreases, the number of slices of BX output increases. To overcome access processing time problems with the I/O routines of such a process was one of the reasons that the BF routine was formulated.
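The slice count arithmetic of the example above can be sketched in C; a minimal illustration (our own function name) in which truncation to an integer follows the formula in the text.

```c
/* Number of output slices produced by linear interpolation to cubic
   voxels: ((n_in - 1) * slice_spacing / pixel_size) + 1, as in the
   example of 20 slices at 3 mm spacing with 1.5 mm pixels. */
static int interpolated_slices(int n_in, double spacing_mm, double pixel_mm)
{
    return (int)(((n_in - 1) * spacing_mm) / pixel_mm) + 1;
}
```

Doubling the spacing at the same pixel size roughly doubles the output slice count, which is the I/O burden the BF routine was formulated to avoid.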

In the BX or BF routine, placing a frame about 1/2 of an object (along the transaxial for CT, or sagittal or coronal views with other systems) enables the right half or the left half of the object to be saved. By doing this twice, both halves can be saved as BX or BF output files for further use. A transaxial frame 13 may be used for cutting a bone 14 or segment 15 in half, and such a section will enable the surface of the inside as well as of the outside half to be obtained. For similar purposes a combination of a solid model image mask routine using a call, such as cylinder, and the image data will also provide the inside view. The inside and outside halves can be later combined. The original input file may be renamed (renumbered) for subsequent file retrieval based upon an operator input.

As a result of the BX routine, slices from the input file are filled into an image matrix, one at a time, the rectangular frame overlaid, and the data within the rectangle saved as a BX output file. Thus the BX output is now a clipped (subregioned) and interpolated file of 3D matrices (with x, y, z coordinates, density and size data) in the form of the format shown in Figure 7. The x and y coordinates of the frame are contained in the descriptor format, as well as the pixel size, the voxel size and the slice separation in real numbers, the density (as an integer) and the size of the slice in x and y directions (an integer). These can be stored in the BX file row by row with one integer word (16 bits) per voxel. Like Display 82, a movie file can be created by BX as an organized continuous set of frames (slices) which can be displayed consecutively to make a moving actual video display on a graphics display terminal. Since the function of such a system will be understood by those skilled in the art, the details are not further elaborated here.

Optional processing during the course of reconstruction can be accomplished by call of the CN routine, which performs masking on the interpolated data. In BF, masking is performed on data introduced into the system directly from the interface without prior interpolation. The CN routine includes mask preparation, interactive region of interest definition and tracing useful in surgical reconstruction. One or more of these operations can be performed prior to the preparation of the construction surface during later reconstruction processing. For this purpose, CN type routines have been incorporated into a BF routine in the preferred illustrated Fortran embodiment, where the function is performed prior to any interpolation, yielding enhanced "truth" to the three dimensional location of data.

Basically a CN routine has as its input the scene described by a BX file. The image is "segmented" or thresholded into a BN file; for example, a 1 is given to voxels with densities within a specified range and a zero (0) assigned to the rest, and the zeros discarded, creating a 3-D binary scene. Optionally this can also be smoothed in BN or BF and resmoothed in BF, which provides smoothing functions.

Optionally the CN routine provides the capability to create a mask - a 3-D binary scene - in the form of a CN output file, to represent a user specified subregion. Isolation of an organ of interest by the BN file restricts thresholding of a given scene to a user specified subregion. The specification in BN (which also performs contour subregioning) of the subregion is done by tracing contours in each slice of the scene, as an option. Input is a BX file and output is a CT or CN file. The output CT file has images defined by traced contours of the user trace, which can be later modified. A CN file is created by filling the contours traced by the CT file, and the result is a filled binary 3-D scene in the CN file which is written in an integer number of blocks in memory. During the operation the BX file is displayed in user specified sequence, the contours of the item traced and overlaid on to the slice display. The contours may be traced either to modify a CT file or create a new one, using a trackball or other cursor mover to trace the image. Previous and following slices can be displayed and then a CN file can be created by filling between contours. Such a capability exists in the CN file of Display 82. Such capabilities result in an optional input to BN which is a segmentation routine (one performed by threshold segmentation and/or contour subregioning) as well as a smoothing routine. The primary input to BN is the BX file (a clipped and interpolated subregion) containing the 3-D input scene. A saved BF file also is a clipped and interpolated subregion of higher image integrity. It outputs a 3-D binary scene in the form of a BN file, or a smoothed version of the image of the BX input. The BN output format is identical to the BX input format. These files now contain 3-D binary scenes with xyz data, etc. This program allows for automatic creation of an image based upon image density integers.
A mask is created, and a region defined by a set of 1's in the mask slice is then taken as the region to which segmentation is to be restricted. If the displayed BX image is a joined part scene, then CN is employed to create the masking file to separate the two regions. The output BN file is a binary scene (filled with the object of interest) which is the output of the segmentation, smoothing and interpolation routines which occur within BN. In the BN file, 1's represent points of the item of interest and the zeros (0's) those of the background.

Finally the surface of the item (or organ or bone of interest) may be detected by a boundary detection program routine performed by BD. This occurs in the BD routine. While the BD routine can in the illustrated embodiment only detect boundaries of 256 x 256 pixel items, and the basic routine requires improved thresholding as provided in the following discussion to be able to detect images from poor original input data (as from ultrasound), the basic boundary detection is a known routine. While our illustrated version of BD is a 256 x 256 version, such a restriction is not necessary for implementation of the inventions herein; larger versions of BD can be written; however, with current CT input data such larger versions are not necessary. The images handled by the current version can accommodate files from 64 x 64 pixel systems, and an entire BF file can be added to another BF file, to enable larger parts to be assembled, as described with respect to Figure 11, which illustrates very small building blocks of pixels in voxel format being made into larger and larger objects. Larger versions such as 512 x 512, written in parallel in conformance with MODEL (suitable for parallel and serial processing) which interfaces to BD, will allow processing of data in apparent real time with parallel processors. In the binary array of voxels in the BN file, 1's represent the item of interest and 0's those of its background. For the item of interest, when 1's are connected they are in the item of interest. In connection with boundary detection, for those interested, reference may be had to an article by Dr. Frieder and others in Comp. Graph. Image Processing, vol. 15, pp. 1-24, 1981. A typical image data structure of BD and BF is shown in Fig. 7.
As shown in the BD descriptor, the file header constitutes 256 words, including 150 words for other header sections, the BD file name, three words, one for each of the x-, y-, z- coordinates of the starting point for boundary detection expressed as integers, two words expressing the volume enclosed by the surface as a real number, five words expressing the number of faces in sectors 1, 2 and 3 of the descriptor as real numbers, three words for the minimum x-, y- and z- coordinates of the faces facing the x, y and z directions, respectively (integers), three words expressing the maximum x-, y- and z- coordinates of the faces facing the x, y, z directions respectively (integers) and 82 words for housekeeping functions (day of use) and other header sections. Sectors 1 through 6 represent image faces: Sectors 1-3 for -x, -y, -z faces respectively, and Sectors 4-6 for +x, +y, +z faces respectively. The number of blocks L occupied by each sector is the same and is equal to ((integer part of (M/1024))+1)*8, where M is the maximum of the number of faces in Sectors 1, 2, 3. Each sector is stored in words of 32 bits (or two 16 bit words) as r' (2), 1' (2), r (2), 1 (2), x-coordinate (8), y-coordinate (8) and z-coordinate (8) as a boundary face descriptor sector. The three coordinates are computed as in Display 82.
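The block count formula may be sketched as follows, where C integer division supplies the "integer part" of M/1024 (the function name is ours for illustration):

```c
/* Blocks occupied by each face sector of the BD file:
   L = ((integer part of (M/1024)) + 1) * 8,
   where M is the maximum number of faces over Sectors 1, 2 and 3. */
static int sector_blocks(int max_faces)
{
    return (max_faces / 1024 + 1) * 8;
}
```

Thus each sector always occupies a whole multiple of 8 blocks, and all six sectors are the same size, so the file layout is fixed once M is known.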

BD is executed once for each surface detected. The user specifies the three coordinates of a voxel in the vicinity of the boundary to be detected. The program iteratively adds 1 to the x-coordinate of the specified voxel and searches for an initial boundary. Thereafter all boundary faces connected to the initial face are detected by binary tree search. The four neighboring faces are determined and encoded into the descriptor of the face for shading based upon neighborhood context. BD also can compute automatically the enclosed volume of the boundary surface, of optional use. The output of BD is a surface of the object of interest in the form of a BD file. It does not connect to the display device directly.

For a further detailed analysis of the various programs described, reference should be had to the appended code, which sets forth in detail the representative programs which may be used to implement our inventions in Fortran and C. The actual programs are illustrative of the described system and not necessary for an overall understanding of the function and elements of the claimed system. For instance, with a different interface the BF process of the system will operate with MRI machines, and others which capture data from a subject, and the disclosure is not restricted to the transducer used. We have described our process in terms of Interface to DT, BX, BN to BD. Alternatively, in our preferred embodiment the process is interface, BF, to BD, with display DS incorporating the CAD formatter, and with MODEL formed as part of the modular system allowing base primitive files to be moved throughout the system. For further understanding of the interrelationship, reference may be had to the flowcharts of Figure 8.

Referring now to Figure 1D, there is illustrated a heart having veins and arteries after reconstruction. By the use of a byte system, a tagged identifier adds a pseudo color at each pixel 7 to the view, assigning blue to veins and red to arteries. During rotation the colors remain constant, so that the rotation tracks the point by reference color. The original heart was injected with dye to cause the blood to contrast with surrounding tissue to create the image shown.

In the view of Figure 1A is a hip portion with femur, which can be seen created by the instructions shown in Figure 9. We have chosen to illustrate a femur because of ease of illustration. We further illustrate and comment on a femur insert. A femur insert will not knit unless made of bone, requiring reinforcement. Other bones can be made solely of synthetic material, such as certain facial reparations and skull inserts. Certain plastics can be molded to be suitable implants, such as those used for the Jarvik heart. However, for the purpose of illustration the image can be in the form of an original data file, or in the form of a BX file like Display 82 of a femur. The image can be cut by interactively placing a rectangular frame about the bone. The part to be removed can be framed by BX or BF, and retained for further processing. This can be done by employing the CN function of framing and/or the CT function of tracing a contour. Further processing is only of images retained in the frame.

The rectangular frame 16 can intersect at the "cut point" so as to excise the part 15 to be replaced, as illustrated in Fig. 3. If the inside is desired, another rectangular cut is made of the longitudinal section of the femur.

Alternatively, we can use a "CN" type file or cursor at each cross section to trace the boundary of what is desired to be saved. The same is done for the second part, and the two can be later combined with an and/or function or erosion from SOLID. This ability to combine files which are set bits or not set bits in our cuberille system makes possible rapid combination not possible with other systems like CAD. In addition, because MODEL as illustrated is a parallel system, the use of parallel processing power is extremely rapid. With a parallel processor, indeed "real time" boundary detection and display is possible. The construction using the alternative illustrated parallel processor enables almost instantaneous construction by combination function. If we just want to duplicate what is sliced out, we utilize the 3D reconstruction processing steps described above. The combination after BD is delivered to DS to obtain the new xyz surface coordinates of each of six or more views with three dimensional coordinates. They are placed xyz to file, and transferred in ASCII format (or like format) to the CAD program for further modification if necessary, or from there to the driver of a milling machine. An origin point is needed and an xyz common for the format of the PRIME system's CAD program using the format of Fig. 6, as illustrated by the data represented in Figure 9. With the utility CAD formatter shown in the modular program on microfiche TXu 302 471 referenced above, we generate a format with a 0,0 origin for each of the views, and output format data from that reference point as illustrated in detail in Figure 9.

This file, which can be called BONE, will have all surface coordinates of the insert, with the exception of the reinforcement spacer and any extension desired.

In order to extend the bone, we replicate the pixels of the surface of the end face 150, shown in Fig. 4, reiteratively until the desired extension is reached. Thus the 3 centimeter extension 17 is created, which can form the basis of the finished product. Knowing the direction of the extension, the surface face which is normal to the direction of the desired extension is extended in that direction for the distance determined by a cursor set desired endpoint placed at the point of furthest movement of the face. The coordinates of the face are changed, by interpolation, to have coordinates matching the cursor set endpoint. Then a rectangle between the moved face and the face where it originally sat is filled with 1's. Simply put, we grow the surface of the closest face to match the desired face. In lieu of a cursor point, a specified face can be directly applied to the image and the image made to grow to reach this face. This is done with BN or BF file output. For simplicity, we employ this as a routine in MODEL, allowing the same treatment of any primitive. The primitive can also be added to grow to a cursor, or other line from drawn input. At this point we already have a primitive copy of the original bone segment. To this primitive has been added a computer generated extension. The result is a new primitive three dimensional binary scene (equivalent to a BF or BN file).
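The reiterative replication of the end face may be sketched as follows, a minimal C illustration assuming a small binary scene stored as consecutive slices of 4 x 4 voxels; the dimensions, array layout and function name are our assumptions.

```c
#include <string.h>

#define NY 4
#define NX 4

/* Grow the object along the slice axis by copying the binary end-face
   slice into the next n_extra slices, filling the extension with the
   same 1's pattern as the original end face. */
static void extend_end_face(int scene[][NY][NX], int end_slice, int n_extra)
{
    for (int k = 1; k <= n_extra; ++k)
        memcpy(scene[end_slice + k], scene[end_slice],
               sizeof scene[end_slice]);
}
```

The caller must of course have allocated enough slices beyond the end face to hold the extension (e.g. a 3 cm extension at a 1 cm slice interval needs three extra slices).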

Now we want to or need to add to that primitive another primitive, using a solid model with MODEL from FILE (a primitive BF file having the combination of 15 and 17), which may be another object primitive three dimensional binary scene stored in FILE or directly generated by MODEL. For purposes of illustration the primitive is desired to be used as a spacer reinforcement and shaped as a tubular body 18 as shown in Figure 4.

This is done by using the MODEL routine. A three dimensional binary scene of a tube is created with the solid modeling program. As MODEL is one of the important aspects of our invention, a detailed description will enable better understanding of the program.

MODEL, of which an illustrative program has been described, which may be implemented in connection with the unpublished copyrighted work above incorporated, but which can be programmed in other languages and with other methods in accordance with this description, has been developed to create 3D solid models that can be displayed using the Display 82 image processing system or other image processing systems such as those illustrated herein, including those implemented for parallel processing. It is one of the features of MODEL that it can be implemented for serial processors or by parallel processors. The models are generated by logically combining various kinds of solid objects. The illustrated objects are a sphere, cone, cylinder and parallelopiped, as well as any other primitive created by the system. After a review of the illustrations, other objects will become apparent for implementation by those skilled in the art, both now and in the future, as is true of all aspects of the present inventions. The capability of MODEL and the modular system described which interacts with it, BF and BD, enables one to create full solid primitives which are fully filled with xyz data.

Commercial developments available to a designer today use line or stick figures for CAD and CADD for CAM with CNC equipment. While 3D is a term used, the resulting images, which in the most advanced systems allow shading of images for viewing, nevertheless relate to surfaces, rather than three dimensional solid filled scenes, as does our cuberille system, from which cubic copies can be created, not only of the surface but of inner structures of an object. The MODEL system allows combination of real (derived from the real world) and imaginary (derived from mathematical calculations, or from the mind) image scenes or surfaces. These image scenes can be placed in file, and both real and imaginary image scenes combined for display or for the creation of real objects.


Models are built by creating various kinds of primitives (understood here to be bit maps, or byte maps compatible with the other elements of the overall system, but MODEL can also be separately implemented) which can be combined logically to form a multitude of complex objects. The composite slices are a composite of smaller slices that are systematically (here horizontally, but they could be vertical or at other angles) cut from one or more of the available objects. To construct a composite slice, the user must decide what objects are to be included, and for each object, the origin, the rotation angles (spherical coordinates), the dimensions, and a logical operation. The smaller slices are combined in a buffer where the logical operation determines whether the slices should be combined with an "and", an "or", or an erosion function. When the attributes of the composite slice are entered by a user, the user then sets the limits and increment of the horizontal cutting planes.
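The logical combination of slices can be sketched with bitwise operations on slice bit maps. Treating erosion as removal of one object's voxels from the other is our assumption for illustration; the exact operator used by MODEL is given in the appended code.

```c
/* Combine an object's slice bit map into the composite-slice buffer
   using the logical operation chosen by the user.  Each bit is one
   voxel of the horizontal cutting plane; names are illustrative. */
enum op { OP_AND, OP_OR, OP_ERODE };

static unsigned combine(unsigned buffer, unsigned obj_slice, enum op o)
{
    switch (o) {
    case OP_AND:   return buffer & obj_slice;
    case OP_OR:    return buffer | obj_slice;   /* union of objects  */
    case OP_ERODE: return buffer & ~obj_slice;  /* remove obj voxels */
    }
    return buffer;
}
```

Because the cuberille scenes are set bits or not set bits, each composite plane is produced by one pass of such word-wide operations, which is what makes combination of real and MODEL-generated primitives so rapid, particularly on a parallel processor.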

For a fuller understanding of this aspect of the described inventions, we will discuss the task of filling an individual slice from one of the objects. The first step is to derive an algebraic expression for each of the rotated objects. Next we need to find the ymin and ymax of each type of slice. Finally we will look at the methods used for horizontal scanning and line filling. The slice is scanned from the left most x value to the right most x value and vertically from the ymin to the ymax.

In the preferred embodiment, the output files when created are in a BN slice format that can be processed directly by BD.


The first step in developing the algebraic expressions for the rotated object boundaries is to derive rotation equations. The equation below is the general form for conic sections.

A*X**2 + B*X*Y + C*X + D*Y**2 + E*Y + F = 0. By the use of rotation equations, we have derived this general formula for our slices by substituting the rotation equations into the non rotated quadratic equations. In this case we are interested in rotating about the z and y axes (spherical coordinates). To derive the rotation equations we must first look at the individual transformation and rotation matrices.

Translation to the origin:

"The" is the angle of rotation about the z axis, and "Phi" is the angle of rotation about the y axis.

Multiplying these matrices together: (X', Y', Z') = (X, Y, Z) * (T * Rz(The) * Ry(Phi)), we have

X' = cos(The)*cos(Phi)*X +sin(The)*cos(Phi)*Y - sin(Phi)*Z - Xo*cos(The)*cos(Phi) + Zo*sin(Phi) - Yo*sin(The)*cos(Phi),

Y' = -sin(The)*X + cos(The)*Y + Xo*sin(The) - Yo*cos(The) ,

Z' = cos(The)*sin(Phi)*X + sin(The)*sin(Phi)*Y + cos(Phi)*Z - Xo*cos(The)*sin(Phi) - Zo*cos(Phi) - Yo*sin(The)*sin(Phi).

By substituting (X', Y', Z') for (X, Y, Z) into the non-rotated quadratic equations, we can derive the desired general formula.

One can easily see that a direct substitution would lead to an enormously complicated equation. This is alleviated by making further substitutions. MODEL uses these expressions for that purpose:

A1 = cos(The),

A2 = cos(Phi),

B1 = sin(The),

B2 = sin(Phi),

Ex1 = A1*A2,

Ex2 = B1*A2,

Ex3 = -B2,

Ex4 = -Xo*A1*A2 + Zo*B2 - Yo*B1*A2,

Ex5 = -B1, Ex6 = A1,

Ex7 = Xo*B1 - Yo*A1,

Ex8 = A1*B2,

Ex9 = B1* B2,

Ex10 = A2,

Ex11 = -Xo*A1*B2 - Zo*A2 - Yo*B1*B2. Now we get

X' = Ex1*X + Ex2*Y + Ex3*Z + Ex4,

Y' = Ex5*X + Ex6*Y + Ex7,

Z' = Ex8*X + Ex9*Y + Ex10*Z + Ex11.

These equations are directly substituted into all of the quadratic equations, which will be discussed later, to yield the coefficients of the general conic equation.


The cone is defined by the equation X**2 + Y**2 - R**2*(Ht-Z)**2/Ht**2 = 0, which if actually plotted would look like two cones touching at their apexes and extending infinitely in opposite directions. All of the slices will yield a circle, ellipse, or hyperbola until the base is considered. The rotated equation is obtained by substituting (X', Y', Z') for (X, Y, Z), which yields

A = -R**2*Ex8**2+Ex1**2*Ht**2+Ht**2*Ex5**2,

B = 2*(-R**2*Ex9*Ex8+Ex2*Ex1*Ht**2+Ht**2*Ex6*Ex5),

C = 2*(-R**2*Ex10*Ex8+Ex3*Ex1*Ht**2)*Z + 2*(R**2*Ht*Ex8-R**2*Ex11*Ex8+Ex4*Ex1*Ht**2+Ht**2*Ex7*Ex5),

D = -R**2*Ex9**2+Ex2**2*Ht**2+Ht**2*Ex6**2,

E = 2*(-R**2*Ex10*Ex9+Ex3*Ex2*Ht**2)*Z + 2*(R**2*Ht*Ex9-R**2*Ex11*Ex9+Ex4*Ex2*Ht**2+Ht**2*Ex7*Ex6),

F = (-R**2*Ex10**2 + Ex3**2*Ht**2)*Z**2 + 2*(R**2*Ht*Ex10 - R**2*Ex11*Ex10 + Ex4*Ex3*Ht**2)*Z - R**2*Ht**2 + 2*R**2*Ht*Ex11 - R**2*Ex11**2 + Ex4**2*Ht**2 + Ht**2*Ex7**2, as the coefficients for the general conic equation.
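The cone coefficients can be verified mechanically: for any (x, y) in the slice plane Z = constant, A*x**2 + B*x*y + C*x + D*y**2 + E*y + F should equal Ht**2*(X'**2 + Y'**2) - R**2*(Ht - Z')**2, i.e. the cone equation cleared of its denominator. A Python sketch of that check (our own verification, not MODEL code; it assumes B1 = sin(The) and B2 = sin(Phi)):

```python
import math

def cone_conic(the, phi, xo, yo, zo, R, Ht, Z):
    """Coefficients A..F of the rotated cone sliced at height Z,
    built from the Ex shorthand terms exactly as in the text."""
    a1, a2, b1, b2 = math.cos(the), math.cos(phi), math.sin(the), math.sin(phi)
    e1, e2, e3 = a1*a2, b1*a2, -b2
    e4 = -xo*a1*a2 + zo*b2 - yo*b1*a2
    e5, e6, e7 = -b1, a1, xo*b1 - yo*a1
    e8, e9, e10 = a1*b2, b1*b2, a2
    e11 = -xo*a1*b2 - zo*a2 - yo*b1*b2
    H2 = Ht*Ht
    A = -R**2*e8**2 + e1**2*H2 + H2*e5**2
    B = 2*(-R**2*e9*e8 + e2*e1*H2 + H2*e6*e5)
    C = (2*(-R**2*e10*e8 + e3*e1*H2)*Z
         + 2*(R**2*Ht*e8 - R**2*e11*e8 + e4*e1*H2 + H2*e7*e5))
    D = -R**2*e9**2 + e2**2*H2 + H2*e6**2
    E = (2*(-R**2*e10*e9 + e3*e2*H2)*Z
         + 2*(R**2*Ht*e9 - R**2*e11*e9 + e4*e2*H2 + H2*e7*e6))
    F = ((-R**2*e10**2 + e3**2*H2)*Z**2
         + 2*(R**2*Ht*e10 - R**2*e11*e10 + e4*e3*H2)*Z
         - R**2*H2 + 2*R**2*Ht*e11 - R**2*e11**2 + e4**2*H2 + H2*e7**2)
    return (A, B, C, D, E, F), (e1, e2, e3, e4, e5, e6, e7, e8, e9, e10, e11)
```

Evaluating both sides at arbitrary (x, y) shows the conic is term-by-term the substituted cone equation scaled by Ht**2.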

Next we need to define the base of the cone. Before the cone is rotated, the base is simply the plane Z = Zo. To rotate the base, Z' is substituted for Z, which yields Ex8*X + Ex9*Y + Ex10*Z + Ex11 - Zo = 0. After slicing, Z becomes a constant, so this equation represents a line. With the substitutions

G = Ex8,

H = Ex9,

I = Ex10*Z + Ex11 - Zo, the line now becomes Gx + Hy + I = 0.

Cylinder: The cylinder is defined by the equation X**2 + Y**2 - R**2 = 0, which extends infinitely from the top and bottom. All of the slices will yield a circle or an ellipse until the base and top are considered. The cylinder is anchored at the center of its base. Substituting (X',Y',Z') for (X,Y,Z) yields A = Ex1**2 + Ex5**2,

B = 2*(Ex2*Ex1 + Ex6*Ex5), C = 2*Ex3*Ex1*Z + 2*(Ex4*Ex1 + Ex7*Ex5), D = Ex2**2 + Ex6**2,

E = 2*Ex3*Ex2*Z + 2*(Ex4*Ex2 + Ex7*Ex6), F = Ex3**2*Z**2 + 2*Ex4*Ex3*Z - R**2 + Ex4**2 + Ex7**2, as the coefficients for the general conic equation.
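The cylinder coefficients admit the same mechanical check: for any (x, y), A*x**2 + B*x*y + C*x + D*y**2 + E*y + F should equal X'**2 + Y'**2 - R**2. A sketch under the same assumptions (B1 = sin(The), B2 = sin(Phi); our check, not MODEL code):

```python
import math

def cylinder_conic(the, phi, xo, yo, zo, R, Z):
    """Coefficients A..F of the rotated cylinder sliced at height Z."""
    a1, a2, b1, b2 = math.cos(the), math.cos(phi), math.sin(the), math.sin(phi)
    e1, e2, e3 = a1*a2, b1*a2, -b2
    e4 = -xo*a1*a2 + zo*b2 - yo*b1*a2
    e5, e6, e7 = -b1, a1, xo*b1 - yo*a1
    A = e1*e1 + e5*e5
    B = 2*(e2*e1 + e6*e5)
    C = 2*e3*e1*Z + 2*(e4*e1 + e7*e5)
    D = e2*e2 + e6*e6
    E = 2*e3*e2*Z + 2*(e4*e2 + e7*e6)
    F = e3*e3*Z*Z + 2*e4*e3*Z - R*R + e4*e4 + e7*e7
    return (A, B, C, D, E, F), (e1, e2, e3, e4, e5, e6, e7)
```

Since only X' and Y' enter the cylinder equation, no Ht**2 scaling is needed here.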

The base of the cylinder before rotation is Z = Zo, and the top is Z = Zo + Ht. Z' is substituted for Z, which yields

Ex8*X + Ex9*Y + Ex10*Z + Ex11 - Zo = 0, for the base, and

Ex8*X + Ex9*Y + Ex10*Z + Ex11 - Zo - Ht = 0, for the top. With the substitutions G = Ex8, H = Ex9,

I = Ex10*Z + Ex11 - Zo, J = I - Ht, the base and the top become the lines Gx + Hy + I = 0 and Gx + Hy + J = 0, respectively. Parallelopiped:

The parallelopiped is comprised of six planes: one for each side, and one each for the top and base. The parallelopiped is also anchored at the center of its base. The six planes before rotation are

Z = Zo, (base)

Z = Zo + Ht, (top)

Y = W/2, (side facing north)

Y = -W/2, (side facing south) X = L/2, (side facing east) X = -L/2, (side facing west) where W is the width, and L is the length. Substituting Z' (the first form) for Z yields

A = B2*A1,

B = B1*B2,

C = Z*A2 - A2*Zo - B1*Yo*B2 - B2*A1*Xo,

D = C - Ht,

E = -B1 ,

F = A1,

G = -W/2 + B1*Xo - A1*Yo,

H = W/2 + B1*Xo - A1*Yo,

I = A1*A2,

J = B1*A2,

K = -(-L/2 + (Z - Zo)*B2 + A1*A2*Xo + A2*B1*Yo),

M = -(L/2 + (Z - Zo)*B2 + A1*A2*Xo + A2*B1*Yo), as the coefficients for the lines in standard line form. The final equations are:

Ax + By + C = 0, (base)

Ax + By + D = 0, (top)

Ex + Fy + G = 0, (side facing north)

Ex + Fy + H = 0, (side facing south)

Ix + Jy + K = 0, (side facing east)

Ix + Jy + M = 0, (side facing west).

EXTREMITIES Before one can find the y min and y max of the slice, we need to know the extremities of the equation below. When computing the extremities, the base and top of the surface are not considered. Many times the extremities go beyond the scope of the slice when the top and base are considered.

A*X**2 + B*X*Y + C*X + D*Y**2 + E*Y + F = 0. The above equation is differentiated with respect to x, set equal to zero, and then solved for x. Below are the substitutions used to find the x value at the y extremities.

AA = 2*B, BB = B**2 - 4*D*A,

CC = 2*B*E - 4*D*C,

DD = E**2 - 4*D*F,

FF = AA**2*BB - 4*BB**2,

GG = AA**2*CC - 4*BB*CC,

HH = AA**2*DD - CC**2. The expression x = (-GG (+,-) sqrt(GG**2 - 4*FF*HH)) / (2*FF) is substituted for x into the slice equation, and then solved for y to get the y extremities. (MODEL uses the functions Y1(x) and Y2(x) to do this.)
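The exact FF/GG/HH bookkeeping is hard to recover from the listing above, but the same y extremities can be obtained directly: treat the conic as a quadratic in x and find the y values where its discriminant vanishes. A Python sketch of that equivalent computation (our derivation, not a transcription of MODEL):

```python
import math

def y_extremes(A, B, C, D, E, F):
    """Y extremities of A*x**2 + B*x*y + C*x + D*y**2 + E*y + F = 0.
    Viewed as A*x**2 + (B*y+C)*x + (D*y**2+E*y+F) = 0, real x exists only
    while the discriminant (B*y+C)**2 - 4*A*(D*y**2+E*y+F) >= 0; the
    extremal y values are its roots, where the quadratic has a double root."""
    aa = B*B - 4*A*D
    bb = 2*B*C - 4*A*E
    cc = C*C - 4*A*F
    disc = bb*bb - 4*aa*cc
    if aa == 0 or disc < 0:
        return None        # degenerate or no real extremity (e.g. open branch)
    r = math.sqrt(disc)
    ys = sorted(((-bb - r) / (2*aa), (-bb + r) / (2*aa)))
    # x at each extremity is the double root of the quadratic in x
    return [(-(B*y + C) / (2*A), y) for y in ys]
```

For the unit circle (A = D = 1, F = -1, B = C = E = 0) this returns the points (0, -1) and (0, 1), as expected.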

The extremities of the parallelopiped are defined as the points of intersection between the cutting plane and the four planes that make up the sides.

Yext1 = Xval1*sin(The) - W/2*cos(The) + Yo,

Yext2 = Xval1*sin(The) + W/2*cos(The) + Yo,

Yext3 = Xval2*sin(The) - W/2*cos(The) + Yo,

Yext4 = Xval2*sin(The) + W/2*cos(The) + Yo, where, if the rotation is above the horizon, then

Xval1 = (-L/2 + (Z-Zo)*sin(Phi))/cos(Phi),

Xval2 = ( L/2 + (Z-Zo)*sin(Phi))/cos(Phi), else,

Xval2 = (-L/2 + (Z-Zo)*sin(Phi))/cos(Phi),

Xval1 = ( L/2 + (Z-Zo)*sin(Phi))/cos(Phi). YMIN AND YMAX OF THE SLICE The Ymax is defined as the largest value of y on the slice. This is not necessarily one of the extremities on the curve, but could be one of the points of intersection between the curve, the base or top, and the cutting plane. The Ymin is the smallest such y value.

To help in understanding how the ymin and ymax are established, the variables that MODEL uses for this purpose will be introduced, as follows: Yext1, Yext2,

Yext3, Yext4: the y values of where the four planes that make up the sides of the parallelopiped intersect the z cutting plane. Ypt1, Ypt2: y values of the two points where the cutting plane and the surface intersect at the base. Ypt3, Ypt4: y values of the two points where the cutting plane and the surface intersect at the top. Xpt1, Xpt2: x values of the two points where the cutting plane and the surface intersect at the base. Xpt3, Xpt4: x values of the two points where the cutting plane and the surface intersect at the top. Dymax: the largest y extremity (maxima of curve). Dymin: the smallest y extremity (minima of curve). Xdymax: the x value at Dymax.

Xdymin: the x value at Dymin.

Quad: the quadrant that the top of the solid points to (starting at (x+,y+) and moving counterclockwise, the quadrants are numbered 1,2,3,4). The solution for finding the ymin and ymax of each type of slice is divided into six cases, as follows: case 1: (full ellipse - cone and cylinder)

This occurs when the surface is cut above the base and below any part of the top.

Ymin = Dymin Ymax = Dymax case 2: (ellipse cut from base of the surface - cone and cylinder) The process below is for the first quadrant only. In Model, the process is generalized for all of the quadrants by reversing the direction of the inequalities for Quad equal to 2 or 3, and interchanging max (Xpt1, Xpt2) with min (Xpt1, Xpt2) for

Quad equal to 2 or 4.

if Quad = 1 then if (max (Xpt1, Xpt2) < Xdymin) then

Ymin = Dymin else

Ymin = min (Ypt1, Ypt2) endif

if (min (Xpt1, Xpt2) < Xdymax) then

Ymax = Dymax else

Ymax = max (Ypt1, Ypt2) endif endif. case 3: (ellipse cut from the top of the surface - cylinder only) The process below is for the first quadrant only. In Model, the process is generalized for all of the quadrants by reversing the direction of the inequalities for Quad equal to 2 or 3, and interchanging max (Xpt3, Xpt4) with min (Xpt3, Xpt4) for Quad equal to 2 or 4.

if (Quad = 1), then if (max (Xpt3, Xpt4) > Xdymin), then

Ymin = Dymin else Ymin = min (Ypt3, Ypt4) endif

if (min (Xpt3, Xpt4) > Xdymax), then

Ymax = Dymax else

Ymax = max (Ypt3, Ypt4) endif endif. case 4: (ellipse cut from the base and through the top of the surface - cylinder only)

If Quad is equal to 1 or 2, then the ymin is determined by case 2 and the ymax is determined by case 3. If Quad is equal to 3 or 4, then the ymin is determined by case 3 and the ymax is determined by case 2. case 5: (hyperbola - cone only)

This occurs when the cone is sliced through its base and the resulting slice is a hyperbola.

The expression B**2 - 4*D*A > 0, indicates the slice is a hyperbola.

if (Quad = 1 or Quad = 2), then

Ymin = min (Ypt1, Ypt2)

Ymax = max (Dymax, Ypt1, Ypt2) else Ymin = min (Dymin, Ypt1, Ypt2) Ymax = max (Ypt1, Ypt2) endif. case 6: (rectangle - parallelopiped only)

if (Quad = 1 .or. Quad = 2), then

Ymin = max (min (Ypt1, Ypt2), min (Yext1, Yext2)) Ymax = min (max (Ypt3, Ypt4), max (Yext3, Yext4)) else

Ymin = max (min (Ypt3, Ypt4), min (Yext3, Yext4)) Ymax = min (max (Ypt1, Ypt2), max (Yext1, Yext2)) endif.

HORIZONTAL SCANNING Horizontal scanning is done from the left most x value to the right most x value. These values shall be denoted as Xleft and Xright. Model uses three functions to find the values of Xleft and Xright. X1(y) and X2(y) are the values of x at y on the curve created by slicing a surface. Ln(A,B,C,y,Dir) is a function that computes the x values along a line, where the line is defined as

Ax + By + C = 0. Dir is used to determine whether the line should be equal to +infinity (Dir = 1) or -infinity

(Dir = -1) when A = 0. The top and base of the quadratic surfaces, and the faces of the parallelopiped, are all represented by Ln(A,B,C,y,Dir) after the surface has been sliced. The cone is the most difficult object to scan, so we will discuss it first. The cylinder and the parallelopiped are fairly straightforward. Cone:
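The Ln function is easy to state in code. A minimal Python sketch (the name and infinity handling are ours):

```python
INF = float('inf')

def ln(a, b, c, y, direction):
    """x on the line a*x + b*y + c = 0 at height y.
    When a == 0 the line is horizontal, so there is no finite crossing;
    Dir picks which infinity to return (+inf for Dir=1, -inf for Dir=-1),
    which keeps the surrounding max/min logic well defined."""
    if a == 0:
        return INF if direction == 1 else -INF
    return -(b*y + c) / a
```

Returning a signed infinity lets a horizontal edge drop out of the max() or min() that computes Xleft and Xright.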

When scanning the cone, three cases occur. The first case occurs when the sliced cone is an ellipse. This case is the default. The second case occurs when the sliced cone is a hyperbola and the slope of the upper directrix of the side closest to the origin is less than zero. This is denoted by Test3 = true. The third case occurs when the sliced cone is a hyperbola and the slope of the upper directrix of the side closest to the origin is greater than or equal to zero. This is denoted by Test3 = false. Model makes this test by evaluating the x values of the hyperbola at y equal to the y value of the hyperbola's origin (Ht*sin(Phi)*sin(The)+Yo). If the x values are undefined, then we are dealing with case 2. If the x values are legitimate, then we are dealing with case 3. Below is the routine for scanning the cone. if (Quad = 1 or Quad = 4), then if (Test3 or not Test2), then (case 1 or 2)

Xleft = max (min (X1(y), X2(y)), Ln(G,H,I,y,-1))

Xright = max (X1(y), X2(y)) else

Xleft = Ln(G,H,I,y,1) Xright = min (X1(y), X2(y)) endif else if (Test3 or not Test2), then (case 3)

Xleft = min (X1(y), X2(y)), Xright = min (max (X1(y), X2(y)),

Ln(G,H,I,y,1)) else

Xleft = max (X1(y), X2(y)) Xright = Ln(G,H,I,y,1) endif endif. Cylinder: Xleft = max(min(X1(y), X2(y)), min(Ln(G,H,I,y,-1), Ln(G,H,J,y,1))) Xright = min(max(X1(y), X2(y)), max(Ln(G,H,I,y,-1), Ln(G,H,J,y,1))) Parallelopiped: if (cos(Phi) >= 0), then (top of object points above the horizon) if (Quad = 1) then

Xleft = max(Ln(A,B,C,y,-1), Ln(E,F,G,y,-1), Ln(I,J,K,y,-1)) Xright = min(Ln(A,B,D,y,1), Ln(I,J,M,y,1), Ln(E,F,H,y,1)) else if (Quad = 2) then

Xleft = max(Ln(A,B,D,y,-1), Ln(E,F,G,y,-1), Ln(I,J,M,y,-1)) Xright = min(Ln(A,B,C,y,1), Ln(I,J,K,y,1), Ln(E,F,H,y,1)) else if (Quad = 3) then

Xleft = max(Ln(A,B,D,y,-1), Ln(E,F,H,y,-1), Ln(I,J,M,y,-1))

Xright = min(Ln(A,B,C,y,1),Ln(I,J,K,y,1),Ln(E,F,G,y,1)) else

Xleft = max(Ln(A,B,C,y,-1), Ln(I,J,K,y,-1), Ln(E,F,H,y,-1)) Xright = min(Ln(A,B,D,y,1), Ln(E,F,G,y,1), Ln(I,J,M,y,1)) endif else (top of object points below the horizon) if (Quad = 1) then

Xleft = max(Ln(A,B,C,y,-1), Ln(E,F,G,y,-1), Ln(I,J,M,y,-1)) Xright = min(Ln(A,B,D,y,1), Ln(I,J,K,y,1), Ln(E,F,H,y,1)) else if (Quad = 2) then Xleft = max(Ln(A,B,D,y,-1), Ln(E,F,G,y,-1), Ln(I,J,K,y,-1))

Xright = min(Ln(A,B,C,y,1), Ln(I,J,M,y,1), Ln(E,F,H,y,1)) else if (Quad = 3) then Xleft = max(Ln(A,B,D,y,-1), Ln(E,F,H,y,-1), Ln(I,J,K,y,-1)) Xright = min(Ln(A,B,C,y,1), Ln(I,J,M,y,1), Ln(E,F,G,y,1)) else

Xleft = max(Ln(A,B,C,y,-1), Ln(I,J,M,y,-1), Ln(E,F,H,y,-1))

Xright = min(Ln(A,B,D,y,1), Ln(E,F,G,y,1), Ln(I,J,K,y,1)) endif endif.

FILLING SLICES Slices are filled by first projecting them onto a bit map and setting all of the bits that lie in the bounded area. MODEL does this very quickly by setting entire words rather than individual bits.

When filling, four cases occur. In the first case, only the first part of the word is inside the bounded area. In the second case, both sides of the word protrude from the slice while part of the interior is inside (top and bottom only). In the third case, the entire word is enclosed. The fourth case occurs when only the left part of the word is inside the bounded area.

To fill the words that are not completely enclosed, we need a left mask (Lmask) and a right mask (Rmask). The masks are 32 bit integer arrays where: Lmask(0) = 00000001, Rmask(0) = FFFFFFFF, Lmask(1) = 00000003, Rmask(1) = FFFFFFFE, Lmask(2) = 00000007, Rmask(2) = FFFFFFFC,

Lmask(31) = FFFFFFFF, Rmask(31) = 80000000,

rules for filling: case 1) Word = Lmask(Xmin - Xleft), case 2) Word = and (Lmask(Xmin - Xleft), Rmask(Xmin - Xright)), case 3) Word = -1, case 4) Word = Rmask(Xmax - Xright), where

Xleft is the left most x value, Xright is the right most x value, if Xleft < 0

Xmin = int (Xleft/32) * 32 else

Xmin = int((Xleft+31)/32) * 32, if Xright < 0

Xmax = int(Xright/32) * 32 else

Xmax = int((Xright+31)/32) * 32. Vertical scanning is done from the ymin to the ymax. Horizontal scanning starts at the left most x value and proceeds to the right most. Either case 1 or case 2 is considered first, next the interior is filled using case 3, and finally scanning ends with case 4.
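The word-at-a-time fill can be sketched as follows. This is a Python reconstruction of the idea, not MODEL code; the bit ordering and the helper names are our assumptions, with the four cases tagged in the comments:

```python
WORD = 32

def fill_span(bitmap, xleft, xright):
    """Set bits xleft..xright (inclusive) of one scanline stored as 32-bit
    words: whole words for the fully enclosed interior, masked words at
    the partially covered ends."""
    wl, wr = xleft // WORD, xright // WORD
    lo_bits = lambda n: (1 << n) - 1                  # bits 0..n-1 set
    left_mask = 0xFFFFFFFF ^ lo_bits(xleft % WORD)    # bits (xleft%32)..31
    right_mask = lo_bits(xright % WORD + 1)           # bits 0..(xright%32)
    if wl == wr:
        bitmap[wl] |= left_mask & right_mask          # case 2: span inside one word
    else:
        bitmap[wl] |= left_mask                       # case 1: end of first word
        for w in range(wl + 1, wr):
            bitmap[w] |= 0xFFFFFFFF                   # case 3: whole word enclosed
        bitmap[wr] |= right_mask                      # case 4: start of last word
```

Setting 32 pixels per store is what makes the slice fill fast, exactly as the text describes.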


To implement Model on other systems, the subroutines Itall, Sttom, Display, and Screen Clear, illustrated by way of example, will have to be replaced. These are all utility routines used to display the slices, and their modification will be apparent to those skilled in the art after a full understanding of the system. Itall initializes the GPR primitives, Sttom converts bitstrings to integers, and Display (DS with CAD formatter) writes the display memory.

Then, using MODEL, we combine the primitive of the extended spacer BONE stored in FILE with the primitive of the TUBE stored in file, by various logical operations slice by slice. In this case, for simplicity, we combine the extended spacer BONE with a solid cylinder TUBE with a logical OR. Then, in order to bore the hole, we combine a slightly smaller cylinder BORE with the previous OR combination, by way of the erosion function. Another way to accomplish the result is to use an erosion to drill the hole in the insert first, then OR two true tubes at either end. We prefer the first in this instance.

The output primitive is now returned to become a BN or BF file as output. Then BD is used, and then DS with the construction routine to create the final xyz output file used by the milling machine.

We have explained a simple duplication of the image created by SOLID (the insert). The process will be seen to allow for the creation of numerous BF or BN primitive files, or primitives from either real data (CT or other transducer) or mathematical data (from SOLID or other CAD-like image filled with SOLID), whose surface image data's xyz coordinates are retained in a file for use on demand or call, for the purpose of finally producing the object to be created from the 3D image.

From a transducer (a CT) we can obtain original xyz code. We can also start with code created by a solid imaging routine. The system can modify the initial image and create the surface xyz coordinates of the solid device to be duplicated or reproduced. This code is delivered to a program to reconstruct a 3D image. The reconstruction is performed by BN or BF, BD, DSCT (the version of DS for construction), and the output is a new xyz code. This is delivered to a CAD system, in the xyz format of the femur instruction data shown in Figure 9, where the format of xyz coordinates can be further modified by commercial CAD software, or it can be passed back to our system for display and storage in a file, or on to a numerical control device accepting the xyz format, so that numerically controlled milling machines can be driven to make parts, either directly creating the object or a master for casting, as by milling the duplicated object with an output device tool, or by making a mold or die from which the part can be made; and engineering drawings can be created if desired for future reference by xyz plotting under CADD. To handle the coarse images produced for drawings or display because of the cuberille representation, an anti-aliasing filter discussed below is incorporated.

For a mold, water soluble wax can be used to create the inner surface, and wax to create the object, as for making by a lost wax process, or other known steps employed. Frangible material, such as the expanded plastic material used commercially to make parts for single part casting, can be directly milled to create the fill chamber core after the plastic is vaporized, or the mold die can be made by the milling operation by inversion of the z data of the xyz surface, to create the negative image of the object, which will be reproduced when the finished mold is filled with casting material or injected, depending upon the form of desired reproduction of the part. For a mold, you can cut the image in two parts along a desired parting line.

If the bone is broken, instead of direct reconstruction of the whole, each part is reconstructed with BN and BF, BD and DSCT, to obtain new xyz code. This is fed to CAD and the parts reconstructed and put together by hand; then these parts are fed back to the transducer or surface to the xyz, and with a fill, reconnected or BD conversion, fed to BN or BF, BD, DSCT, to xyz, to CAD. CAD systems employ scaling functions for any size (which can also be implemented in the DS routine), and that is fed to a tool, such as a milling machine. As an additional simplified example shown in Figures 10A and 10B, a broken tile or other such solid product can be put back together by computer processing, and an image created can be used to make a duplicate unbroken copy of the broken tile or other such originally broken 3D object. In this simplified example, each half of the tile is put into a data capturing device and separate primitives are created. The primitives when captured will have a definite z value at the surface of the fracture line 32F. Searching from the corner 31 of one of the primitives to a point 32 where the z values drop to zero will give the length of that primitive on that edge. A like search on the other primitive from the edge point 33 in the opposite direction to where the z values drop off at point 34 will give the length from the break to the edge opposite the break of the first primitive, i.e. along the coordinate y in the drawings. By taking the difference of these two lengths, one knows exactly how much the two primitives have to move toward each other for a perfect match. A new primitive is created from a combination of both halves after one half has been moved to its original place before the break, using the logical function OR in MODEL. This new primitive is then used to create a duplicate of the original unbroken part 35.
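The corner-to-break search can be illustrated with a one-dimensional sketch. This is our simplification of the patent's procedure, not its code: each piece's edge is reduced to a single row of z values, and `full_len` (the intact length of the edge) is an assumed input.

```python
def edge_length(zrow):
    """Scan along an edge from its corner until the z values drop to zero;
    the first zero marks the fracture, so its index is the piece's length."""
    for i, z in enumerate(zrow):
        if z == 0:
            return i
    return len(zrow)

def match_offset(row_a, row_b, full_len):
    """How far piece B must move toward piece A so the fracture faces meet,
    when both pieces are digitized along an edge of total length full_len."""
    la = edge_length(row_a)                   # corner 31 to break point 32
    lb = edge_length(list(reversed(row_b)))   # edge point 33 back to point 34
    return full_len - (la + lb)               # remaining gap to close
```

Once the offset is known, the translated halves are ORed into a single new primitive, as the text describes.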

If we are concerned about the inside, as well as the outside of a bone or other object, the reconstruction of a whole bone is accomplished by processing two files, one of each appropriate half. These are reconstructed and milled or used to make molds.

As stated, for molds, instead of direct xyz output, we recompute z by using Zmax - Z. We route the xyz information to DSCT for this inversion, or this could be done by manipulation on the CAD system employed. Dies can be made in a similar manner, incorporating either direct or inverted xyz output as required for the application. Any of several commercial CAD systems can be employed, such as VERSACAD, AUTOCAD, or the Prime Computer CAD system, and Applicon or Intergraph can also be manipulated in a suitable fashion. The format supplied for transfer and manipulation back and forth between CAD systems can be based upon the Initial Graphics Exchange Specification (IGES) so that drawings can be moved from one system to the other. The CNC machines of Cincinnati Milacron, Allen-Bradley and others receive instructions from these systems today and enable creation of molds, dies, and masters with these tools.
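The z inversion for molds is a one-line transform over the surface points; a minimal sketch (Python, illustrative names):

```python
def invert_for_mold(points):
    """Negative (mold) image of a surface given as (x, y, z) triples:
    replace each z with Zmax - z, as in the DSCT inversion step."""
    zmax = max(z for _, _, z in points)
    return [(x, y, zmax - z) for x, y, z in points]
```

High spots on the object become low spots in the die, so milling the inverted surface yields the cavity that casts the original.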

We have used the above CT version to illustrate data capture by a transducer. Other transducers can provide the xyz coordinates. We have mentioned MRI, Ultrasound, and other such medical transducers. Other transducers which are suitable for data capture are laser coordinate measuring systems utilized to measure the surface dimensions of an object. These devices often have an xy table mounted laser which sequences the position of the laser about the xy table, and the feedback of the laser's reflection off the surface of the object is measured. The travel time between pulse start and reflection detection provides the basis for the z coordinate, which corresponds to one half of the time it takes light to travel to the object and return to the point where the clocked reflectance is observed. Other transducers utilizing the same xy arrangement with a reflectance beam can employ sonar for mapping ocean surfaces (such as an ocean velocimeter), radar for mapping surface elements, and low-frequency (2 kc) magnetic field transmitter-receivers can be used in a similar fashion to map caves. All of these transducers provide xyz coordinate information as to the surface of an object (as opposed to the earlier discussed transducers, which provide xyz information not only of the surface, but of internal and hidden parts of the object).
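The time-of-flight arithmetic in the laser example is simply half the round trip at the speed of light; a sketch (Python):

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def z_from_round_trip(t_seconds):
    """Range from a timed reflection: the pulse travels out and back,
    so the z coordinate is half the round-trip time times the wave speed."""
    return C_LIGHT * t_seconds / 2.0
```

The same formula serves the sonar and radar variants with the wave speed of the medium substituted for C_LIGHT.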

After we get the xyz coordinates from these transducers, and thus obtain the xyz coordinates of the surface of an object to be copied, we use a fill algorithm like CN or a fill routine which fills the coordinates. In other words, the xyz coordinates are used to set the appropriate bits in the 3D binary bit map (which is like the CT file); then these are filled in to create a BF file, and now we have a primitive for further logical manipulation and duplication. For the duplication we use DSCT (a DS routine having the CAD formatter capability illustrated in the drawings), which may be done after selected steps illustrated above. Such a program, (c) Lynn L. Augspurger (unpublished), might include the following code.

      integer*2 ibuf(512**2), max, min
      integer numfrm, numvew, ist, iblk, filnum, x, y, z
      integer hdr(128), nblk, frmsiz, depth
      integer fview, lview, nview
      character iname*80, oname*80, ans*1
      logical eol
    5 print *, 'Input DS file name'
      call getwrd (5, eol, iname, 80)
      open (1, file=iname, access='direct', status='old',
     +      recl=128, iostat=ist)
      if (ist .ne. 0) then
         print *, 'File does not exist'
         go to 5
      endif
   75 write (*,80)
   80 format (x, 'Cast, Mold or Exit (c,m,e)? ')
      call getwrd (5, eol, ans, 1)
      if (ans .ne. 'C' .and. ans .ne. 'M' .and. ans .ne. 'E') goto 75
      if (ans .eq. 'E') goto 5000
      call rdblk (1, 0, hdr, 1, ist)          ! get DS header
      numvew = hdr(125)                       ! number of views
      frmsiz = hdr(105)
      write (*,125) numvew
  125 format (x, 'There are ',i3,' views: enter first and last',
     +        ' to process?')
      call getint (5, eol, fview)
      call getint (5, eol, lview)
      if (fview .le. 0 .or. lview .gt. numvew .or.
     +    fview .gt. lview) then
         print *, 'Illegal file specification'
         goto 75
      endif
      nview = lview - fview + 1
      nblk = frmsiz**2/512
      iblk = (fview-1)*nblk + 1
      oname = iname
      do 500 l = 1, nview
         oname(11:11) = 'M'
         oname(12:12) = char(l+48)
         open (2, file=oname, status='new')
         call rdblk (1, iblk, ibuf, nblk, ist) ! get image
         do 300 i = 1, frmsiz**2/2
            max = max0(max, iand(ibuf(i),255), ishft(ibuf(i),-8))
            min = min0(min, iand(ibuf(i),255), ishft(ibuf(i),-8))
  300    continue
         if (ans .eq. 'C') then
            depth = max
         else
            depth = min
         endif
         iblk = iblk + nblk
         do 700 y = 1, frmsiz
            do 800 x = 1, frmsiz/2
               z = iand(ibuf((y-1)*frmsiz/2 + x), 255)
               if (z .ne. 0) write (2,1000) 2*x-1, y, abs(depth-z)
               z = ishft(ibuf((y-1)*frmsiz/2 + x), -8)
               if (z .ne. 0) write (2,1000) 2*x, y, abs(depth-z)
 1000          format (x,i3,2x,i3,2x,i3)
  800       continue
  700    continue
         close (2)
  500 continue
      goto 75
 5000 close (1)
      end

It will now be appreciated that from the transducer the system user can obtain xyz code and fill it to create 3D bit maps (a BF file), or obtain already created bit maps from a BF or BN input file, which can be combined with other 3D bit maps from the library which are also BF or BN files, if desired. From this step of having an input bit map the process proceeds; the output of this program is fed to solid modeling if "synthetic" manipulation is desired, with instruction to create new 3D bit maps with the desired added features, which are output into a BF file. Thus the solid transfer step is used to combine bit maps with logical operations, AND/OR functions and erosion functions, and output a new BF file. This BF output is then fed to BD; alternatively, the 3D bit maps as BF files which have been created from the transducer are fed directly to BD. This BD operation has an output to an altered DS routine (DSCT).
DS would be used for rotation and display and hidden surface removal. DSCT would be used to create new xyz coordinates for the surface of the object to be manufactured, as illustrated hereinabove. This new xyz code is obtained through the z buffer action illustrated, and the output is placed in a numerically controlled machine tool, such as a milling machine, which is then programmed to create (mill) a new object, or a mold or dies for a new object.

The DSCT (DS with the CAD formatter) creates xyz of the surface after boundary detection through hidden surface removal. From the program, in addition to the shading information, the output of the xyz coordinates is produced. The xyz code goes to the machine tool. We create six frames as a minimum for each object to be milled for the outside surface. In addition, for objects which have an interior surface, additional frames are used to create the interior portion of the object by a separate reconstruction. Two half sections could be put together in a casting operation to make a master. These techniques are well known in the molding and casting arts.

For a more detailed description of the solid modeling, reference can be had to the MODEL detailed program for possible code. This solid MODEL is, as set forth, a generalized "cuberille" solid modeling system in which surfaces are filled, if desired, with data within the surface boundaries of the object. The cuberille system starts with a cubic replacement for a point. Other solid modeling systems have been created using filled line drawings. The cuberille model which we have created differs from line drawing systems. However, as illustrated above, we can interface to line drawing systems by providing xyz code of the surface of the created model. Surface xyz codes can be filled by scanning the slices, searching for enclosed regions, and filling the sliced regions one slice at a time.

In the MODEL cuberille system, the ability to logically combine objects is unrelated to the complexity of those objects. Unlike CAD systems, the MODEL system which we have developed makes combinations based on the presence or absence of set bits or NOT set bits, or set bytes or NOT set bytes. The cuberille system creates objects as will be described herein. Reference may be had additionally to illustrative code provided in the microfiche registration referred to hereto. It should be understood that MODEL outputs a primitive which is identical to a BF file (or BN), so that the output can be used directly as part of the intercoordination of the complete system. It includes display and construction, and permits interface and combination of other primitives derived from transducers from the real world and from mathematically constructed imaginary objects, and combinations of either or both. MODEL uses two rotation angles to permit a model to be created with its axis along any three dimensional vector, so that the resultant primitive can be formed to be combined with another image with the z axis pointing in any vector direction. The system allows also for rotation about the x axis. This combination is done a slice at a time and loaded into an array; each array contains a slice of a primitive. Then the user chooses the A (AND), O (OR), or E (erosion) function. Erosion takes the complement of one function and ANDs it to the other. Thus, after creating more than one image, two or more "scenes" can be combined. Remember that a scene can be either an image created by the solid model or an image captured by a transducer, and created or captured as a three dimensional image.
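The A, O and E choices reduce to bitwise operations on each pair of slice words; a minimal sketch (Python; the function name and word layout are ours):

```python
def combine(slice_a, slice_b, op):
    """Combine two cuberille slices, each a list of 32-bit words, with
    the A (AND), O (OR) or E (erosion) choice; erosion ANDs one slice
    with the complement of the other, carving B out of A."""
    if op == 'A':
        return [a & b for a, b in zip(slice_a, slice_b)]
    if op == 'O':
        return [a | b for a, b in zip(slice_a, slice_b)]
    if op == 'E':
        return [a & (~b & 0xFFFFFFFF) for a, b in zip(slice_a, slice_b)]
    raise ValueError(op)
```

Because each operation touches 32 voxels per word, combining two primitives costs the same regardless of how complex the objects are, which is the point made above.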
You can start with an image which is captured with a CT, MRI, Ultrasound or other transducer which produces hidden image surfaces; or you can start with images created from transducers which provide original x,y,z data of a surface (such as the bottom of an ocean, or a cave's surface, or the surface of a sculpture, or the moon), which has its xyz coordinates determined by a reflectance or beam or code sender scan; or with images drawn on a drawing with six views which are combined and entered into a computer system by a video camera or other optical scanner. Lesser views can be filled by CAD or light pen correction. As an example, a BD or BF file can be an input into the solid model section of the system. We prefer to combine the image surfaces before taking the primitives up.

In the combination process, as a second step, any stored image (as a BF file) primitives may be recalled. Two or more primitives that already have been created are entered. Then, slice by slice, a combined image is created by combining two or more slices starting from the bottom reference up.

A combination of "A" and "B" can create a new primative "C". The user can then start a new sequence of conbinations with C. In this manner, starting with small cubes or arbitrary shapes a complex object can be laid up and created as illustrated by Figure 11. In Fig. 11 a basic cube 41 nay be replicated and combined with other cubes. Facing curved volumes 42 may be added for smoothing, and otiiar shapes, mathematical or taken from the real world 43 can be added until a completed shape is created. This could be as complex as the shape of a skill 1 with a successive build up of BF file parts.

We prefer to store and retain in a library a copy of the original image, which has the original primitive data image stored as a BF file. This library is keyed for inspection of the directory or abstracts of a file, so that searching for desired work which has already been done and placed in a file can be accomplished by well known key-word in context inspection of the files, or by searching for file names. We also prefer to transfer to the user, for milling or machine tool operation, a derived image, such as the xyz coordinate image. We could also store a BD image, or an xyz coordinate image, and by going back through a filling loop, create a new primitive equivalent to the original BF file, for further processing as has been described.

After the desired image is created and one has its primitive filled, the system can loop, which is treated like any other loop. The next step would be to go to xyz via DSCT. The user can eliminate the shading computation if he doesn't want to display. Shading can be done by table look up.

With line drawings, solid images can be made, starting, as an example, with six drawn views of the solid image to be made. The line drawings can be input from an optical scanner, or they can be input from CAD. The process transforms standard CAD (incorporating in this term CAD, CADD, CAE) drawings into cuberille drawings, or cubix drawings as we prefer to call them.

Start with a CAD drawing which is, for example, the line drawing of the egg shown in Figure 12, and with a 3D cubic BF file, which is empty. We find the xyz points 51,52,53,54 on the CAD surface, plot those, and set the corresponding bits in the BF file. Then the regions between the corresponding bit points are filled using the same routines used by CAD for region filling. For an example, we have a plate and use the four points 51,52,53,54 as the boundary of the plane. Calculating the equation of the plane that lies within the set of points that make the two dimensional building block of the CAD image, the plane is filled within the bounds of the points describing the building block (usually small rectangles). Now we have a BF file of the surface of the egg. This is filled in the same manner as the other xyz object. Now, whatever the complexity of the CAD elements, the combination of complex images can be combined with the simplicity of MODEL.

Additions can be added to the egg's surface as described with respect to Figure 11. In addition, by use of curve smoothing routines, a line can be made to intersect the points of the CAD image captured. Because the plates are the bottom of the image to be smoothed by addition, the curves 56, 57 which intersect the xyz points can be smoothed by construction of a BF file, through MODEL, which is then added at the desired point to the model. Or, from the library, smoothing surface inserts can be added to the surface as described with respect to Figure 11.

Two dimensional objects can be captured by either a video or drawn input which is digitized, as by an RS170 input to a frame grabber. In the case of a video input, the two dimensional outline of the object to be drawn is traced using CN. As an example, six two dimensional views of the object are created, in a simplified example. In the example of the object shown in Figure 13, only one end view 131 will show a curve 132 as a parting line. The other end view 133 will show a rectangle. The top view 134 will show a parting line. Of the two side views 137, 138, one (137) will show vertical parting line 138, and the other will show the offset 140. The bottom view 141 would show a rectangle.

At present all views have xy coordinates. The xy coordinates 141, 142, 143, 144 of the square end 133 at the corners are given a z coordinate. A line is drawn to interconnect these items, and a rectangle created in three dimensions using an extension as was illustrated for extension of the bone. The result is a cubical rectangle or brick 146 shown in Fig. 13A. Now the rectangle is rotated, and the side view of the brick is interposed over the brick, and using the frame slicing routine there are two halves 147 and 148, of which one, 148, is saved. This is the same subregioning process as used for a femur separated in two halves. One half 148 is retained as a BF file. The other half would have to have modification. This modification could be done by placing a frame corresponding to the curve in the end over the end view, and that part retained corresponding to the curved part of the object. However, in this instance, we prefer to use MODEL. As shown in Figure 13, a cylinder is made by MODEL with the rectangle 133 having a corner 144 placed in the center of the circle defined by curve 132, and with MODEL these two functions are ANDed to create a part shown by part 147R in Fig. 13A. This part and part 148 are ORed with MODEL to create the finished part 150 from the two BF files. If the contour method were employed, the file part 147 would be stripped of bits falling outside the subregion of 132, and the remaining part of 147 then joined with 148 using the OR function of MODEL. More complex parts can be made by successive use of the screen overlay or light pen or cursor to create a frame boundary used for slicing away the dimensions of a mass. The base mass is first segmented, and successive parts created in the illustrated manner, and these are combined logically using MODEL.
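The logical AND/OR combination of BF files by MODEL described above can be sketched as follows. This is a minimal Python illustration of the idea only, not the actual MODEL code: a cuberille BF file is modeled here as a set of occupied (x, y, z) voxel coordinates, so the logical combinations become set operations.

```python
# Hypothetical sketch: a BF file as a set of occupied voxel coordinates.

def and_model(bf_a, bf_b):
    """Keep only voxels present in both files (the ANDed part, e.g. 147R)."""
    return bf_a & bf_b

def or_model(bf_a, bf_b):
    """Merge two BF files into one finished part (e.g. 147R OR 148 -> 150)."""
    return bf_a | bf_b

# Toy example: a 4x4x4 brick ANDed with a coarse quarter-cylinder mask.
brick = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
cylinder = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)
            if x * x + y * y <= 9}   # quarter circle of radius 3 in cross-section

rounded = and_model(brick, cylinder)   # the rounded part, like part 147R
finished = or_model(rounded, brick)    # ORing it back with a retained half
```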

Our preferred example for capturing data from objects uses transducers with reflection detection capability. However, as two dimensional information can now be used to create three dimensional images, the video camera can be used for creation of objects which duplicate on a larger scale microscopic objects. In addition the system can be used simply for display of two dimensional images useful in making histograms, for karyotyping and microsurgery.

A DAGE/MTI video camera uses a vidicon sensor tube, shown in Figure 16 as camera 302 and also used for cameras 181, 182, 183 in Figure 14C, and provides a linear response of output voltage versus light intensity. Thus a microscope 301 provided with a photoport to a video camera can provide digitized input of light intensity achieved through a microphotograph which is captured by a frame grabber (in the system 29 and also incorporated in the computer system shown as 201), such as the various frame grabbers sold by Imaging Technology, Inc. as a Series 100 or 150 and others. A karyotype stain or dye such as Feulgen can enhance certain characteristics of a cell on a slide 304. For instance karyotyping is used to identify chromosomes. Since cancer cells contain an abnormal amount of DNA, the amount of DNA in a cell can be computed by volumetric and other determinations made by the BD system, just as if it were a bone or other organ whose volume needs to be known. Feulgen is absorbed by the DNA. It makes the nucleus of a cell darker. The microscopic image of the cell will have a cell periphery which may be thresholded against a lighter background. In addition the nucleus may be thresholded against the lighter background of the cytoplasm. The intensity of light from source 305 as it passes through cytoplasm is assumed to decay exponentially, in this instance. Decay is assumed to be proportional to the concentration of the absorbing material. Thus if the optical density OD at any point in an image is defined as OD = -log(T/I) [where T is transmitted light intensity and I is incident light intensity] then the summation of the optical density over some part of an image is proportional to the total amount of absorbing substance in the image region. Thus measurement of the summation of optical density gives an indirect quantification, in a 2D sense, of the DNA material.
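The integrated optical density computation just described can be sketched as follows. This is a minimal Python illustration under the stated assumptions; the pixel values and incident intensity are hypothetical.

```python
import math

def integrated_optical_density(transmitted, incident):
    """Sum OD = -log10(T/I) over every pixel in a 2-D region."""
    return sum(-math.log10(t / incident)
               for row in transmitted for t in row)

# Toy 2x2 nucleus image: darker pixels transmit less light.
incident = 100.0
nucleus = [[10.0, 10.0],
           [1.0, 100.0]]
iod = integrated_optical_density(nucleus, incident)
# pixel contributions: 1 + 1 + 2 + 0 = 4 density units
```
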
Others at Harborview Medical Center in Seattle, Washington have used Feulgen dye to indirectly quantify the DNA material in a cancer cell. This can be measured for each two dimensional slice. However, in addition, the actual volumetric measurement of the DNA can be deduced by successive recreation of a 3D model (for display if desired). The focus of any specific slice of a cell can be stepped millimeter by millimeter or micron by micron by a geared drive 306, 307 and stepper motor 308 under control of computer 201 as shown in Figures 15 and 16. Each stepping which is grabbed by the frame grabber will create a two dimensional image with a nucleus which can be measured, whose outer cell boundaries can be determined by thresholding to automatically delineate the boundary of the darker image (based upon its intensity), and the layers of two dimensional slices passed to MODEL to create a three dimensional bit map of the nucleus with these boundaries. Alternatively two positions can be assigned to the position of the xy 2D dimensional map of pixels, also coordinated with stepping (as does MODEL) and this placed in an array, as we do with ultrasound. The surface calculations used in the three dimensional boundary detection routine BD automatically provide a volumetric measurement calculation. Turning now to specific examples which may be employed using the systems described herein, we turn to robotic controlled operations, such as surgery by automatically controlled tools, such as laser removal of lesions or tissue, or the insertion of graft tissue for causing increased production of hormones or for creating chimeras, or tissue removal by biopsy.

With respect to laser removal of lesions, the basic function is to remove by cauterization or like process a lesion which is embedded or on the surface of tissue. This lesion will have three dimensional characteristics.

As an example, in ultrasound and MRI images, lesions usually appear darker than the surrounding tissue. We can isolate that region using the CT and CN routines previously described. The cross sections of the lesions can be manually traced. These traced regions are filled, creating a BF file. The filling process is the same as previously described. Now, we can do a standard reconstruction to display the lesion. Once it has been determined that the lesion should be removed, we will use another process that we will describe now.

At the time of original data capture the patient has three fiducial points 207 which are placed so that they will be captured in the BF file and in the image 204 which can be seen by the attendant (as in the ultrasound configuration shown in Figure 15). These points preferably have a pin sized low radioactive element or non-radioactive identifier (e.g. one that fluoresces) located so that they will appear brightly on the slice.

Since these images will appear on the BF file, the offset with respect to any other xyz point in the BF file is known by derivation. The coordinate system of the laser is made to align to the BF file through calibration to these three fiducial points at the time of surgical operation by laser.

Prior to surgery the cross sections of the lesions are traced, and the BF file created. The lesion which should be removed is input to a volume detection program. This program finds a starting burn point at an edge point which is visible on the surface of the lesion. This point can be determined by the reconstruction and identified by the surgeon during the review and surgical planning process. He will choose this point so that it would be on the surface of the lesion if the lesion were exposed at this time. The lesion can be identified as the darker image seen in Figure 15. The drawing of Figure 8 illustrates this operation as one from the transducer to a tool (CNC) which in this case is the laser configuration, and the operation is illustrated in Figure 8 by "O".

Starting with that point, all of the voxel elements in the volume are ordered in such a way that the laser can remove them in an ordered sequential front to back sequence. The laser control simply reads this file of voxels, as from a file equivalent to that shown in Figure 6, causing laser firing to burn the tissue located at the voxel position one at a time until it has read the entire file, at which time the surgery is complete. The lasers can be several lasers focused to intersect at the voxel, so that there is no burn except at the explicit voxel which is to be burned.
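The front-to-back ordering of voxels might be sketched as follows. This is a Python illustration of one plausible ordering rule only, assuming depth along the z axis from the starting burn point; the exact ordering routine is not specified in the text.

```python
# Hypothetical sketch: order lesion voxels nearest-first along the depth
# axis from the starting burn point, then by lateral distance within each
# depth layer, producing the sequential burn file the laser control reads.

def burn_order(voxels, start):
    sx, sy, sz = start
    def key(v):
        x, y, z = v
        depth = z - sz                          # front-to-back along z
        lateral = (x - sx) ** 2 + (y - sy) ** 2
        return (depth, lateral)
    return sorted(voxels, key=key)

lesion = [(1, 1, 2), (0, 0, 0), (2, 0, 1), (0, 1, 1)]
sequence = burn_order(lesion, start=(0, 0, 0))
# the laser controller would then read this sequence voxel by voxel
```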

In a similar manner tissue can be removed by mechanical means, equivalent to the machine tool process. Under robotic control a quantity of tissue can be excised by boring, or a biopsy made at a specific point by inserting a biopsy needle directly into the tissue to the point where the desired xyz coordinates are reached. In the same manner tissue for creating a chimera or graft can be inserted at a specific point, as can observation elements (scopes) or light sources (fiber optic pipes). For instance, graft tissue as from a kidney can be placed at the surface of a small hormone producing gland in the brain, or inserted in it, to cause the increased production of hormones used in bodily functions, such as used for reduction of the effects of Parkinson's disease. The location of the gland and the place where tissue is to be removed for transfer can be accomplished by MRI scanning, and withdrawal and insertion can be accomplished by the same coordinate controls used for laser burn. Similarly, as certain drugs which combine with abnormal cells may be altered by exposure to light, insertion of a light probe to the desired xyz point will allow destruction of cancerous cells which absorb such drugs. Delivery of similar drugs or proteins, such as those which would carry a fluorescent label, to undesirable body growths, such as plaque, would allow identification and view by the three dimensional reconstruction process and removal by light delivered adjacent the material by passing through tissue or passing through the arteries and veins.

With the ultrasound system whose input is shown in Figure 15, the volume of a tumor (in ultrasound showing as blacker than surrounding tissue) can be determined by BD. In addition the robotic control illustrated by Figure 8 allows placement of radioactive seeds about the cancer. The presence or location of the seeds about a tumor can be determined by scanning in three dimensions.

We have developed means for the use of transducers whose wave origin points can translate in any x, y, or z direction, or move the origin points freely in space as free floating wands. With free floating wands, such as those from ultrasound, frames of the image are captured with the axes of the planes having vectors not organized in the parallel manner of the CT, MRI, and contour measuring machines. In those cases the images are directly filled into the data capture file, row by row, column by column. In other words, one frame per file, with the frames being parallel and in perfect alignment along the nominal z axis.

The frames from free wands are not in such perfect alignment. They are scattered in different directions, and indeed, in data capture it is possible to capture frames which intersect where the desired data to be captured is located and which have desired data in the frame at points in addition to those points located at the line of frame intersection. The frame is captured as the switch, which may be located on the hand grip of the wand, is closed to identify the specific frame to be grabbed.

After the frames have been captured, their orientation is known by the transducer plane determinator which will be described. With the known orientation, the frames or their equivalent have to be transposed to be placed in a predetermined alignment.

This is accomplished by first rotating about the z axis, and then translating vertically and horizontally. Now all frames line up in the BF box, i.e. now they have an origin point on the z axis.

The "crooked original" frames are the source of data from which new parallel frames are created by interpolation. The frames are reordered with points into coronal (frontal) slices. The coronal slices are interpolated. The interpolation is a weighted linear interpolation: the new point is a linear combination of the two closest old points in both directions, with the weights of the old points being inversely proportional to the distance the old points are from the new point. These frames are now ready for thresholding.
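The weighted linear interpolation described above can be sketched along one direction as follows. This is a minimal Python illustration; the actual routine applies the same rule in both directions.

```python
# Sketch of inverse-distance weighting of the two closest old points.

def interpolate(old_points, x_new):
    """old_points: list of (position, value) pairs."""
    pts = sorted(old_points, key=lambda p: abs(p[0] - x_new))[:2]
    (x0, v0), (x1, v1) = pts
    d0, d1 = abs(x0 - x_new), abs(x1 - x_new)
    if d0 == 0:
        return v0                     # exactly on an old sample
    w0, w1 = 1.0 / d0, 1.0 / d1       # weights inversely proportional to distance
    return (w0 * v0 + w1 * v1) / (w0 + w1)

# A new point midway between samples at 0 and 1 gets their average.
value = interpolate([(0.0, 10.0), (1.0, 30.0)], 0.5)
```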

For ultrasound thresholding we first apply a filter to the images, and then a difference operator which enhances noisy regions while degrading the non-noisy regions of the image. Other enhancement processes can be combined with the thresholding function for segmentation, or substituted to perform the thresholding function, depending upon the characteristics of the image.

As we have said, one of the problems with free floating transducers, such as ultrasound wands, is that while the image is created by existing technology machines, some of which have digital output, some of which have video output, or both, the location of the image in space is left to the mind of the technician or doctor viewing the images as they are created. These images are not ordered with preset slice spacing in a regular pattern as is characteristic of CT and MRI image capture. In capturing these images a plane determinator has had to be created.

There are several solutions which can be made for plane determination. With gravity as a reference, three "Beckman pots", or level detecting potentiometers 161, 162, 163, can be used to determine tilt, and a collar contact switch 165 can be used to transmit the amount of rotation from an origin point. Translation of the wand in space can be made known by three light or laser senders 171, 172, 173 on the wand shining on photodetectors 175, 176, 177 on the wall of the room, with time of flight from a triggered pulse used to determine the position of the laser senders and thus track translation, as seen in Figure 14B. Alternatively three video cameras 181, 182, 183 could be used to track a light rod 184. A top and two side camera views will create lines on each of three images matching the light rod. If the light rod is connected to the transducer, the rod can either be in the same plane as the captured frame image or in a plane lying at a known reference orientation thereto. From these lines, the xyz coordinates of the plane can be determined and thus the plane of a captured frame can be determined. Our preferred frame plane determinator is much simpler than the aforesaid alternatives. It substantially reduces the amount of computation and utilizes inexpensive components.

The frame plane determinator shown in Figure 15 utilizes light wave emitters in the form of infrared senders 191, 192, 193 of position to silicon photoreceptor(s). This is utilized in the ultrasound system there illustrated. The ultrasound unit 209 has a port providing image data, shown in 2D as frame 204A, as digital or RS170 output to the system 201 having incorporated therein the frame grabber. While it is within the scope of the broader aspects of the invention to use as a preferred embodiment a frame plane determinator which utilizes time of flight of the light to the photoreceptor to determine distance (x=ct), the increased cost of the necessary electronics may be reduced by utilizing the principle that light intensity is inversely proportional to the square of the distance from the source, so that a direct voltage change at a moment is related to the distance from the source in accordance with 1/(L*L)=V, which is our most preferred embodiment. This same principle is used by us to locate surface coordinates of an object sampled, which can be accomplished with a floating wand. In ultrasound, we use this principle to determine the plane of the image. We have discovered that by placing a stage or support 200 within two meters of the wand 170, the voltage of the photoreceptors 195, 196, 197 is sufficient to determine with an inexpensive digital microvoltmeter the voltage from the infrared emitter sources 191, 192, 193 so that the distance can be measured within less than a micrometer (and even less), and even smaller distances can be determined by more accurate measurement of the voltage of relatively simple photodetectors. As two millimeters is more than sufficient to locate the plane of an ultrasound wave, this output can be used to calculate an exact x, y, z dimension by triangulation of each sending light source. We use three sources to obtain a plane of the emitters, and three receptors to obtain greater precision.
The three receptors and their related circuitry are like those used for wand scanning shown in Figure 17. The voltages at the receptors are converted at the clocked time of reception to digital information which is fed to the computer 201, which not only does the dimension calculations, but which can be the same system which performs reconstruction of the ultrasound images.

Measuring the intensity for the photodetectors we used at a distance of less than 500 millimeters, we obtained the plot shown in Figure 15A with the circuitry shown in Figure 15B. The photodetector used is a 400-1000 nm detector 511 with operational amplifier 512 supplied to a millivoltmeter output 513. A red gel filter 514 is supplied to filter the photodetector to eliminate unwanted light. The filter passes light at about the 600 nm cutoff. The collimated IR output LED 515 is an emitter of about 880 nm with 20 degree 1/2 power rating. The voltage falls off as a function of the distance from the source whether the emitter is measured on-axis or off-axis. The high intensity LED used in the example is highly collimated (20 degrees). On axis movement gives a linear response as shown in Figure 15A which varies as the distance is squared. As the distance between the emitter and detector is increased off axis, an increase in the detected signal is seen as the emitter is rotated as a yaw. Yaw can be detected by the Beckman pots illustrated in Figure 14A. While the 20 degree emitter is adequate for normal use, the alternative calibration to compensate for yaw can be handled by a detector of yaw and a compensating factor added by the computer system, or a less highly collimated emitter can be utilized. In the most flexible system the source will have a wide-angle emission of 180 degrees, and have a high intensity output. Such a source could employ a laser diode coupled by fiber optics to a spherical diffusion bulb at the end of the fiber, as shown in Figure 15 as an emitter 192.
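The inverse-square voltage-to-distance conversion underlying the calibration just described can be sketched as follows. This is a minimal Python illustration; the calibration numbers are hypothetical.

```python
import math

# Sketch of inverse-square calibration: detected voltage varies as
# V = k / L**2, so one measurement at a known distance fixes k, and
# later voltage readings convert directly to distance.

def calibrate(v_ref, l_ref):
    """Return k such that V = k / L**2 matches the reference measurement."""
    return v_ref * l_ref ** 2

def distance_from_voltage(v, k):
    return math.sqrt(k / v)

k = calibrate(v_ref=4.0, l_ref=100.0)   # hypothetical: 4.0 V at 100 mm
d = distance_from_voltage(1.0, k)       # quarter the voltage -> twice the distance
```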

It should be appreciated here that the frame generated by the ultrasound wand picture is a result of sound waves intersecting tissue having a plane thickness of about 1-2 millimeters. Movements need to track 1 millimeter movement in any direction if a free wand is desired, as it should be in most clinical situations. The slices could be obtained by stepper motor controlled movement on a guide, and the broad teachings of reconstruction for ultrasound contemplate such a control. The motor would be the control of the optical stepped motor. However, we prefer our moveable wand, preferably also connected to the RS170 port of the ultrasound system. To accomplish this we have had to design a new kind of interferometer. We place three LED infrared emitters 191, 192, 193 on the wand 170 (preferably in a plane which will include the plane of the frame, although an adjustment calculation may be made by the computer 201). These three LED emitters spread light in many directions. With a wand handle grip switch 202 a specific desired frame is identified and the switch fires the LED emitters 191, 192, 193 in clocked succession. Preferably three closely spaced pulses fire the three emitters and cause clocking of the voltage sampling photoreceptor output. The voltage at each firing is determined. Since the voltage for different distances has been calibrated, the exact distance of the emitting light may be determined by the voltage (or other derived signal likewise calibrated). The exact distance of each light is known by the voltage output of each photoreceptor 195, 196, 197. These give xyz points of the three lights, and these xyz points, lying in the plane of the sample frame, provide the exact xyz coordinates of the sample frame by reference to the origin point 203 of the ultrasound beam which also lies in the frame. One of the lights or the origin, or another object or point lying in the frame, may be determined as a reference point.
For convenience, we prefer this object point to be in the image, such as the calculated position of point 204 on the outer surface of the transducer's membrane which is placed adjacent to the skin and which is in the center of the ultrasound sweep, as it lies in the image of the frame displayed by the ultrasound two dimensional imaging system and is thus captured by the frame grabber. Thus, this object point being known, all xyz points of the image itself are known and the aforesaid placement of the images by the computer 201 in the system can proceed as previously described.
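The computation of an emitter's xyz point from its three measured receptor distances can be sketched as follows. This is a Python illustration of standard trilateration under an assumed receptor geometry (receptors at (0,0,0), (d,0,0) and (i,j,0) to keep the algebra short); the patent does not spell out its own triangulation details.

```python
import math

# Hypothetical sketch: given distances r1, r2, r3 from three receptors at
# known positions, solve for the emitter's xyz point (taking the +z root).

def trilaterate(d, i, j, r1, r2, r3):
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z = math.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    return (x, y, z)

# Emitter actually at (1, 2, 3); the r's are what receptors would measure.
truth = (1.0, 2.0, 3.0)
receptors = [(0, 0, 0), (4, 0, 0), (0, 4, 0)]
r1, r2, r3 = (math.dist(truth, p) for p in receptors)
est = trilaterate(4.0, 0.0, 4.0, r1, r2, r3)
```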

Instead of three spaced pulses, three different wavelength emitters, also indicated by 191, 192, 193, can be employed with receptors, also 195, 196, 197, which are filtered to accept the specific wavelength of the emitted light. In this instance three photoreceptors are used for each frequency to provide the xyz distance calculation and point location. This implementation is used when the manual dexterity of the operator causes worry about hand and wand movement within the time between light sequences. In order to remove the effects of ambient light from the calculations, a filter chops light of frequencies not of the frequency of the emitters. The analog to digital devices including the filter are located on a card within the computer 201.

While other transducers have been described above, and are effective for their purpose, and can be assembled or are available as commercial units, we have developed a wand coordinate detector based upon the same principles used for wand image plane detection as illustrated in Figure 15. In Figure 17 the wand 402 is supplied with a wide angle infrared emitter 403 and a focused non-spreading emitter 404 with a receptor 405. The coordinate measuring function of emitter 404 and receptor 405 can alternatively be accomplished by the commercial coordinate measuring devices, in which the subject is mounted on a mount 410 and moved over known xy coordinates while 404 and 405 determine z coordinates. When the freely moveable wand is used it shines on the subject 401 with light (this could be a penetrating waveform instead of light, in which case the receptor would be located on the other side of the subject), and the receptor 405 determines voltages in the same manner as previously described. However, the distance of the voltage measured is twice (in the event of reflectance sampling) the distance measured before, since the voltage is caused by the reflectance of light from sender 404 from the surface back to the receptor 405. As the beam is focused and the distance to and xyz points of the origin of the beam 403 and 404 (and the receptor 405) are known by the three receptors 406, 407, 408 mounted on stand 200, again by triangulation, the xyz points are derived by the system 201 for the surface of the object 401 (on mount 409 and 410), which then can be duplicated by the process described. The beam is highly collimated and fixed to the wand so that the tilt of the wand is known, as by the system of lights 191, 192, 193.

After the object has been sampled, an overlay process can display the image in combination with another scene. Thus while the three dimensional view of the object under analysis is normally illustrated as a free floating three dimensional view, two or more movie frames can be overlayed to portray the scenes in relative three dimensional perspective. The frames of the images to be viewed are first assembled, and then the raster view of the scene of each movie frame is additionally processed. This entails a pick and place of row and column data from each frame, deleting data and leaving a blank where another object will be seen. In an example, from every other pixel position of each row (leaving a blank) in one frame a new "transparent" frame is created, and in a second frame every other pixel position is processed in the same manner. Then by "OR"ing the two frames a composite movie frame is created for viewing, which combines both views. Similarly, two views can be combined for each frame as a title is added to a frame in conventional two dimensional television advertising.
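The interleave-and-OR overlay process described above can be sketched as follows. This is a minimal Python illustration on tiny hypothetical frames, with 0 standing for a blanked pixel.

```python
# Sketch: blank every other pixel of each frame, then OR the two
# "transparent" frames into one composite showing both scenes.

def make_transparent(frame, keep_even):
    return [[p if (i % 2 == 0) == keep_even else 0
             for i, p in enumerate(row)]
            for row in frame]

def or_frames(a, b):
    return [[pa | pb for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

scene1 = [[1, 1, 1, 1]]
scene2 = [[2, 2, 2, 2]]
composite = or_frames(make_transparent(scene1, True),
                      make_transparent(scene2, False))
# the composite row alternates pixels from the two scenes
```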

It is a function of the reconstruction process that the viewpoint of the transducer is not necessarily a viewing angle of an object in display. Viewing can be done along any axis. For instance, with the CT system we have described, we have created views of cancerous growths in a brain. Cancer does not grow along predefined coordinates. In order to determine how to attack a cancer, the configuration of the cancer is determined. The slice data is enhanced by changing the intensity of the data associated with the cancer, and after a determination is made by the physician, based upon experience, that the outline of the surface has been sufficiently determined (background noise not touching the cancer is eliminated in the BF system), a reconstruction of the cancer is determined. The axis of attack of the cancer is determined by a physician. This is normally along the longest axis of the tumor. After a determination of this axis is made, the axis is projected to the skull surface of the patient to locate the point of entry, as the xyz coordinate system allows the location of any point in the scene. The system also allows location of that position for radiation, and the actual amount of radiation therapy can be computed from data derived for the three dimensional reconstruction. The solid modeling system can provide views of penetration of probes and other surgical tools, and these can be added to the images created by reconstruction to plan the surgery. As the data is arrayed in rows and columns in an xyz coordinate system, planar parallel microtome sections can be reconstructed from the original data which lie orthogonal to the planned axis of attack. This is done by picking data from the arrays which would lie in the desired plane and assembling them in a movie frame.
These can then be peeled off frame by frame for presurgical viewing of the operation as it progresses toward the tumor (in the event of extraction or biopsy) and replayed during surgery to allow the physician to check the progress of an operation.

While biopsy can utilize the free tilt routines described hereinbelow, it is not normally used.

In the method of taking biopsies, a sampling needle, controlled by the view, is inserted into the object, and samples taken at different planes in the suspected tumor area. The general strategy in determining a good trajectory in taking biopsies is to find one that has the longest traverse of the biopsy needle through the lesion without affecting sensitive surrounding tissue, vessels, or organs.

As we can display the trajectory as a "green line" (a line of sight linear trajectory) through the 3-D image, one can move the green line of the trajectory about the image until the optimal trajectory is found. This can then be fixed as the trajectory of the biopsy needle. Once fixed, the map of the trajectory can be used to control the movement of a slave micropositioner for the biopsy needle using a stereotactic frame coordinate system. In order to draw the "green line", two points are defined in free space, with the ability to change the value of the xyz coordinates of each of the points. The green line passes through these points and protrudes through the lesion as shown in Fig. 18.

In Fig. 18 the original points defined in free space are X, X, and a line is created between each of these points as shown. The line can be rotated by changing the value of the x, y and z coordinates of each point of the green line 18-2 so that the line will subsequently appear as desired, and shown by the dotted green line 18-3 between X' and X'. The method of accomplishing this may be done with alternative processing steps. Either the line may be moved, or the line retained unchanged and the data set representing the lesion 18-1 moved relative to the green line, or both can be rotated as one data set together. This operation is built into the two dimensional reformatting process, which may include the known multi-planar processes as well.

When the green line has been defined, all planes orthogonal to the green line in the data set (originating with the DT file) can be configured, as the xyz point on the green line is known and all points in the plane will have coordinates which include that xyz point and which lie in the plane orthogonal to the line at the xyz plane origin. Since this is accomplished preferably after free tilt and formatting the data set with interactive segmentation and thresholding, as described herein, each plane slice 18-4 of the lesion 18-1 will have known xyz points which may be fed in order to drive the scan of the numerically controlled machine, as has been previously described.
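The assignment of data points to slices orthogonal to the green line can be sketched as follows. This is a minimal Python illustration only; the slice spacing and the rounding rule are assumptions.

```python
import math

# Sketch: project each data point onto the green line's unit direction;
# the slice index is that projection divided by the slice spacing, so all
# points of one slice lie in a plane orthogonal to the line.

def slice_index(point, line_p0, line_p1, spacing=1.0):
    dx = [b - a for a, b in zip(line_p0, line_p1)]
    length = math.sqrt(sum(c * c for c in dx))
    u = [c / length for c in dx]                 # unit direction of the line
    proj = sum((p - o) * c for p, o, c in zip(point, line_p0, u))
    return round(proj / spacing)

# Green line along +z from the origin: a point at z = 2 falls in slice 2
# regardless of its x and y, which lie within the orthogonal plane.
idx = slice_index((5.0, -3.0, 2.0), (0, 0, 0), (0, 0, 1))
```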

When viewed on the screen of a laser resection and biopsy monitor unit, the green line 18-3 may appear as a dot 19-1 within the lesion slice 19-2, spatially located within the spatially and anatomically correct framework, including references such as a skull section 19-3 and reference grid 19-4 on the monitor 19-5 of a laser controller driven device, having joystick 19-6 override controls, as schematically illustrated by Figure 19. The biopsy data is obtained (as will also be true of laser resection data) when the patient is scanned with a stereotactic helmet in place.

The computed CT coordinate system is the same as that of the stereotactic frame. The stereotactic frame and helmet are placed on the patient during the original scan. 0,0,0 on the stereotactic frame is the same as the 0,0,0 on the CT computer. In this manner a point of the image on the computer enables the same coordinates to be used to access the same data on the stereotactic frame in order to locate a point on the lesion.

In a normal CT and MRI coordinate system 0,0 is found at the upper left of the image, and 512,512 is found at the lower right, 512,0 at the lower left, and 0,512 at upper right. In other words the picture map is equivalent to the fourth quadrant of a cartesian system, with the origin at the upper left. Thus the center of the image is 256,256. The left of the picture on a CT image typically represents the right side of the patient.

As contemplated by the present preferred embodiment, the transducer coordinate system is modified where necessary to conform to the stereotactic frame coordinate system. Since the stereotactic frame coordinates used often, and desirably, conform to the cartesian system, the image of the transducer is shifted so that the center line of the patient as defined by the stereotactic frame is the Y (or X) axis passing through the origin point of the stereotactic rectangular coordinate system. This requires redefinition of the origin point, and 256,256 of the CT image typically becomes the 0,0 point, or origin, of the image used in stereotactic operations. The exact distance between pixels is determined and the image is defined or redefined to have the real distances from the origin point correspond to real distances from the origin, by reformatting the DT files and/or a derivative file to real stereotactic rectangular coordinates.
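The re-origin of the CT image to pixel 256,256 with real pixel spacing can be sketched as follows. This is a minimal Python illustration; the 0.5 mm inter-pixel distance is a hypothetical calibration value.

```python
# Sketch: shift the 512x512 CT origin from the upper-left pixel to the
# center pixel (256, 256), flip y so it increases upward, and scale by the
# measured inter-pixel distance to obtain stereotactic cartesian millimeters.

def ct_to_stereotactic(col, row, pixel_mm=0.5, center=256):
    x = (col - center) * pixel_mm
    y = (center - row) * pixel_mm     # image row increases downward
    return (x, y)

origin = ct_to_stereotactic(256, 256)   # image center maps to the origin
corner = ct_to_stereotactic(0, 0)       # upper-left pixel of the image
```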

By locating the lesion with the computed image, the computed image can be used to drive a micropositioner of the biopsy needle and cause it to follow the points of the green line to the lesion and through the lesion by stepper motor robotic controls, as schematically represented by Figure 20. There the patient 20-1 is sampled with a biopsy needle 20-2 driven and controlled by a micropositioner 20-3 oriented with respect to a stereotactic frame 20-4. (For purposes of illustration much of the associated hardware is not shown in order to ease understanding of the concept.) Conceptually and alternatively, Figure 20 can be understood to illustrate laser resection, where the needle 20-2 represents the laser, and the micropositioner 20-3 the laser drive and controller under supervision of the monitor station 19' (Fig. 19). Because there exist commercial CO2 lasers having xyz coordinate controllers acting like numerically controlled devices, the monitor station 19' is understood to include these controls and the description herein need not be further amplified in this regard.

In order to control a laser for surgery (or other numerically controlled device) and for optimal viewing of the data set 18-1 on a display, as at 19-2, a free rotation of either the entire data set or a plane of the data set is useful. The green line X',X' 18-3 and 19-1 becomes the view of the surgeon in this representation. In order to obtain the view of the surgeon in combination with the "free tilt" viewing described, two views are utilized. The first is the 2-D scan data. The second is a reformatted 2-D image corresponding to a view of the orthogonal plane across the 2-D scan data. In order to define a 3-D "green line" or view of the surgeon, the surgeon identifies two points on one of the 2-D views. (S)he does this by placing a pointer at two locations on the image, and the system creates a line mapping all coordinates between these points in the control memory. Then a link is made to create the view of the surgeon, assigning the intersecting point of the first line to appear in the second tilted 2-D image, which may be orthogonal to the first image. Another point is created by the surgeon in the second 2-D image. This second point creates the necessary Z coordinate position of the view of the surgeon. This is then represented as shown at 18-3 and 19-1 on the display of the three dimensional data set. Preferably the screen displays all three views (both 2-D views and the 3-D view) on the screen at the same time.
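The step of mapping all coordinates between the two picked points into control memory might be sketched as below. The function name and the sampling scheme (one sample per unit step along the longest axis) are illustrative assumptions, not the system's actual routine.

```python
import numpy as np

def green_line(p0, p1):
    # Sample integer pixel coordinates along the segment p0 -> p1,
    # approximating the "green line" of coordinates stored in
    # control memory between the two operator-picked points.
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    n = int(np.abs(p1 - p0).max()) + 1      # one sample per pixel step
    t = np.linspace(0.0, 1.0, n)
    return np.rint(p0 + np.outer(t, p1 - p0)).astype(int)
```

The third point picked in the second view would then supply the Z coordinate, lifting this 2-D line to the 3-D view of the surgeon.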

By the system described herein a full three dimensional data set of all pixels (including, where applicable, interpolated filled data between slices) is available. A free tilt allows a plane to be selected with a completely arbitrary tilted intersection by the plane of the data set obtained from slice data, as opposed to any arbitrary multiplanar or parallel plane, so not only the views of the prior art multi-planar views can be seen, but also additional planar intersection views (and coordinates of all points on those views) can be obtained and utilized.

In order to obtain "free tilt" planar viewing of planes of the data set, rotation of the pixels in 3-D is accomplished. We start out with rotation of pixels by the method described and illustrated by DS (including the mathematics therein described and incorporated herein by reference) for the composite transformation matrix for rotation of the pixels in 3-D. However, the method of DS is modified in accordance with the description herein with respect to free tilt. In DS we don't interpolate, we just round off to the nearest integral 3-D coordinate. Here we perform a trilinear interpolation. The result is a reduction of the staircase effect when rotating.

The new 3-D address is now computed using floating point coordinates. We describe this hereinbelow with reference to Fig. 21, which shows the process for one pixel; it should be understood that the process is performed for all pixels of the particular plane that is desired to be viewed.

In the process, first of all, the new coordinate will intersect eight pixels in the original data. In other words, each cube of the pixel has part of its volume composed of parts of eight pixels from the original data set. If this were 2-D the volume parts would be area parts. In 3-D the pixels are not perfect cubes, but rather elongated boxes. The interpolation is done by interpolating the pixel values of the two points projected above and below from the original planes and then interpolating these points linearly to get the final point.

Using the same basic strategy we could use sinc functions for interpolating, always interpolating the point directly above and below and then interpolating the point in between the two points. This would be an optimally effective interpolation; however it is computationally expensive, and affects performance.

With reference to Fig. 21, there are, for the purpose of illustration, represented eight pixels of the original data, four to a plane, shown as P11, P12, P13, P14 of the first plane, and P21, P22, P23, P24 of the second plane.

Thus with reference to the drawing Figure 21, the four adjacent pixels in a top plane, and four adjacent pixels in a bottom plane, have an imaginary pixel intersecting all four pixels shown as P1' and P2' respectively. With trilinear interpolation the intermediate plane pixel P value is found by distance trilinear interpolation, and its value will fall within a line intersecting P1' and P2', and that pixel P will fall in the free tilt plane. That is, the pixel to be interpolated P is illustrated as located somewhere in between the two planes. However, its value mathematically is found by the strategy of interpolation described, preferably by trilinear interpolation.
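The interpolation strategy just described can be expressed compactly as follows. This is a sketch only; fx, fy, fz denote the fractional position of P within the eight-pixel box, a naming of our own rather than the patent's.

```python
def trilinear(p11, p12, p13, p14, p21, p22, p23, p24, fx, fy, fz):
    # Bilinear interpolation within the top plane gives P1', within the
    # bottom plane gives P2'; linear interpolation between P1' and P2'
    # then gives the free tilt plane pixel value P.
    p1 = p11*(1-fx)*(1-fy) + p12*fx*(1-fy) + p13*(1-fx)*fy + p14*fx*fy
    p2 = p21*(1-fx)*(1-fy) + p22*fx*(1-fy) + p23*(1-fx)*fy + p24*fx*fy
    return p1*(1-fz) + p2*fz
```

At a fractional position of 0.5 in all three axes the result is simply the mean of the two plane values, and at a corner it reproduces the corner pixel exactly, which is the behavior that reduces the staircase effect relative to rounding.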

This describes the process for pixels. The goal is to rotate a plane or to take a cut of the original CT data along any arbitrary intersecting plane to thus obtain a "free tilt" view of the data. This free tilt view will be a combination of the capabilities of tilt plane for oblique sagittal and coronal, and oblique transverse and coronal views, but without the restrictions of those possibilities. The free tilt plane can be arbitrarily determined by pitch, roll and yaw of the plane, and with free tilt the cross section of the data normal or orthogonal to the view of the surgeon can be displayed. The views are obtained in the 2 dimensional reconstruction section of the processing, but they can be displayed in 3-D.

Because of the computational efficiency of the programming for free tilt views, the necessary reconstruction and preplanning can be done efficiently with the use of fast processors and disc file access, as may be accomplished by a Sun-4 260 (Sun Microsystems, Inc., Mountain View, CA) with a SMD disc file controller, but slower machines such as the Sun-3 systems can be used. We have utilized a Sun 3/100C for development, with a disc file and the color graphics card. The use of the Sun Windows and Sun View programs (active virtual machine window displays) and color graphics card allows 256 shades of grey (from a 16,000,000 color palette) to be used. Other 8 bit plane machines may also be used. It will be realized that 12 bit plane machines can be used to accomplish the methods described, as well as array and parallel processors. We have chosen to illustrate 8 bit plane machines as the preferred embodiment in order to reduce the cost of the hardware at the present time.

This free tilt rotation can be done in batch mode for the full data set, or in interactive mode for a single slice.

By changing one coordinate of the view of the surgeon, the line known as the view of the surgeon can be changed to correspond with the new setting. Thus the planning surgeon can determine in an elective and arbitrary manner the position of the view of the surgeon. The free tilt allows the data to be mapped relative to the view of the surgeon.

This can be used for laser resection or used to create a data set for reconstruction of ultrasound images, as an alternative embodiment to that process, but our preferred embodiment of the laser resection may be described as follows.

The view of the surgeon is a line of sight, and we create axial cuts of a lesion as at 18-4 from the 3-dimensional data of the lesion normal to the line of sight 18-3, 19-1. We have found that in addition to tracing, thresholding works well with lesions using the described processes. By the use of 3-D data, the process of creating the cross section for laser resection is greatly enhanced. Free tilt of the entire data set requires tracing (and/or thresholding too) of the contours during surgery with the use of a joy stick or other control. By the use of slices normal to the view of the surgeon obtained from the 3-D data, the laser tracer can automatically follow the 3-D data coordinates of the plane robotically, as can a numerically controlled machine, and the process can be preplanned. If the boundary determined by boundary detection is not sufficient, additional ring parameters are added (see the dotted outline to the plane 18-5, Fig. 18) to the size of the normal slice so that the laser overtraces the boundary of the normal free tilt cross section 19-2, or this can be accomplished by the surgeon using the joystick override 19-6. Another application of the 3-D reconstruction is the location of implantation of iridium-192 or other radiation doses at the contours calculated from the reconstructed data. These can be inserted by the same computer assisted stereotactic location procedures described for biopsy and laser resection. In connection with the laser resection, all points on the plane 19-2 may be accessed by the laser, and the specific path may be controlled as described with respect to the outline of the object made by a numerically controlled device, as illustrated by Fig. 6.

In order to determine the view of the surgeon, which is arbitrary, the three dimensional view of the lesion 18-1 is created. Since lesions are not readily viewable from 2-D, the 3-D view allows the physician to determine if the tumor is shaped like a sphere, like a football or banana, amoeba or otherwise. Normally, the entry is chosen to reduce to a minimum the amount of disruption of surrounding tissue. Blood vessel locations are determined by interactive segmentation (as described herein) and other intervening tissue areas are located, and a trajectory is determined so as to permit a clear line of sight for the view of the surgeon. Then the lesion's data set is oriented so as to allow the line of sight of the surgeon to intersect the lesion, and cross sectional planes are taken through the lesion with the cross section planes oriented normal to the view of the surgeon.

Each cross section is computed, and they are fed in one at a time to a coupled numerically controlled laser unit to transfer to the laser unit the coordinates and the trajectory for the laser unit from the information supplied, instead of from a manual trace using a joystick. With a joystick, the preplanning can be modified if necessary during the surgical procedure. Thus the surgeon oversees the operation which has been preplanned, and may, if desired, burn away additional surrounding tissue to be certain that a tumor resection is complete. This may be especially important if an attempt is made to resect glioblastomas. For this reason, the clustering technique and threshold range technique described herein may be especially useful. It is important to distinguish this technique from the work conducted at the Mayo Clinic. Their reconstruction was based upon an intensity detection program which required the surgeon to digitize the outline of the tumor for each CT and MRI slice. Instead of using the joystick with respect to a two dimensional slice, the system here utilizes the three dimensional data set coordinates to drive the laser directly from these data sets. Furthermore, with the present system, only slices which have intersections to artifacts need to be manually outlined by the scalpel and not traced. Tracing eliminates the connection of tissue which indicates invasive growth and the stems of gemistocytic astrocytomas. Furthermore, since the data is taken as directly as possible from the lesion itself, all coordinates of the lesion are known and by the use of free tilt arbitrary views of the surgeon can be taken based on the free tilt planes, with the view of the surgeon defined as lines orthogonal to the free tilt view. Both the data set and the view of the surgeon are capable of being rearranged by free tilt. Plane after plane, or point after point, can be burned away for laser resection.

In this manner, the lesion can then be burned away at all coordinate points of the free tilt plane, and then stepped forward to the next lower parallel plane, so that the next plane is burned across all coordinates as preplanned by a focused CO2 or Nd-Yag laser, and the process reiterated from the first "upper" plane on down through the lesion to the lowest plane. Thus the lesion may be eradicated. Included within the laser resection process is the use of interference laser resection, by which we mean the intersection of a plurality of laser pulses of insufficient energy to cause damage alone but with sufficient energy where the concurrence of pulses at a point can cause the desired burn or damage. These can be used for occluded tissue. Certain tumors such as metastatic tumors which are located in areas where invasion will cause damage may be treated by this method.

In order to enable the view of the surgeon to be determined, the program creates a "green" line trajectory through the 3-D view of the lesion, which can be moved with pitch, roll, yaw through the 3-D view, and with this interaction by the physician, (s)he can finally determine the trajectory line of sight of the view of the surgeon as opposed to just locating a point through which the view of the surgeon passes.

Thereafter, the program creates a free tilt planar view normal to the trajectory line of sight of the view of the surgeon so as to create all necessary spaced planar views normal to the view of the surgeon. This is done on the 3-D reconstructed data. As an alternative, the cross sections could be created from the grey scale using the free tilt computations from the grey scale data. However, since the 3-D is already created, it is computationally more effective to utilize the existing data set for the 3-D reconstruction to provide the free tilt views for laser resection as cross sectional planes of that data set normal to the view of the surgeon.
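One way to compute such equally spaced planes normal to the trajectory is sketched below. The basis-construction helper, the names, and the generator interface are our own illustrative assumptions, not the program of the disclosure.

```python
import numpy as np

def planes_normal_to_sight(origin, direction, n_planes, spacing):
    # Yield (center, u, v) for equally spaced planes normal to the
    # surgeon's line of sight; u and v are unit vectors spanning each
    # cross-sectional plane, and center steps along the trajectory.
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    # pick any vector not parallel to d to seed the plane basis
    helper = np.array([1.0, 0.0, 0.0])
    if abs(d[0]) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper)
    u = u / np.linalg.norm(u)
    v = np.cross(d, u)
    for k in range(n_planes):
        yield np.asarray(origin, dtype=float) + k * spacing * d, u, v
```

Each yielded plane could then be resampled by the free tilt interpolation to supply one cross section for the resection sequence.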

This same robotic technique could be inverted for machining away material, as expanded polystyrene, in order to make a model. Instead of traversing the cross section, the laser would traverse the outer portion and burn or cut away everything but the desired cross section. A plane by plane area could be removed in this manner. Since the laser could be mounted on a robotic arm, any axis could be used as a line of sight, and a full 3-D copy created in this manner. Additionally, either the original and/or the master can be mounted on stepper motor controlled turning mounts or platforms and they can be stepper motor turned to permit a stationary or vertically movable laser to be utilized. However, for simplicity, the earlier described modeling technique is more effective, since it interfaces readily with most commercial numerically controlled machines, and information can be supplied in the IGES or DXF format, or other simplified form, as for instance the form used by the Roland CAMM modeler.

All of the computations are most suitable for single processor applications, but also, unlike other possibilities, are useable on parallel processors.

For rendering the three dimensional object, since the original subject is a living organ, it has many colors. Vessels typically are thought of as either red or blue; bones are white. Other colors are used. In order to provide a desired rendering the system has an internal display buffer. The pixel values in the display buffer are used as addresses to the color look up table which contains the final display values. We divide the color look up table into various segments, and then we map each separate object into one of the various segments. For instance, bone, which is white or grey scale, is mapped into a white or grey scale segment. Arterial blood, which is red, is mapped into a red area. Venal blood, which may be considered blue by artificial characterization, can be mapped into a blue segment. Tissue can be mapped into a flesh segment.

There is one color look up table containing three principal field areas. Each field area typically has 128 (or 256) entry locations. Each field corresponds to one of the colors Red, Green and Blue (RGB) which are displayed. If the desired primary choices, as in the preferred embodiment, are grey (black to white), red, green, blue, flesh and yellow, then there are 21 entries which can be made to a color segment of the color look up table (21 for each of the three color segments in the color look up table for a 128 entry area) in order to determine the ultimate color and shade of the object. Each of the separate objects which may be seen, fat, heart, bone, tissue, blood, etc., are mapped into these 21 entries.

Below is illustrated, by way of example, where "x" = the intensity at desired entries of the color map and "-" represents no intensity of that color, a color map for grey, red, green, blue, yellow (including brown), and pink:

Since the objects may be combined in a final display in their respective colors, a full display will show the object in a rendering according to the arbitrary values which may be determined by the user. Thus the heart can be shown a shade of red, adjacent arteries may be shown in a brighter shade of red, the veins in a bluish shade, fat in yellow, tissue in pink, and lesions in green or other color and shade, and bones in white.
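The segmented color look up table arrangement might be sketched as follows. The segment layout, the intensity ramp, and the per-color weights are illustrative values of our own choosing, not the entries of the example color maps referred to above.

```python
def build_lut(segments, size=256, levels=21):
    # Build R, G, B look up tables divided into per-object segments.
    # 'segments' maps an object name to (start_index, (r, g, b) weights);
    # each segment holds a ramp of 'levels' shades of that color, so a
    # display buffer value indexes both an object and a shade.
    lut = {"R": [0] * size, "G": [0] * size, "B": [0] * size}
    for name, (start, (wr, wg, wb)) in segments.items():
        for i in range(levels):
            shade = int(255 * (i + 1) / levels)
            lut["R"][start + i] = int(shade * wr)
            lut["G"][start + i] = int(shade * wg)
            lut["B"][start + i] = int(shade * wb)
    return lut

lut = build_lut({
    "bone":   (0,  (1, 1, 1)),   # grey scale segment: equal R, G, B
    "artery": (21, (1, 0, 0)),   # red segment
    "vein":   (42, (0, 0, 1)),   # blue segment
})
```

Each thresholded object is then written into the display buffer using values from its own segment, so the single table renders bone, arterial blood and venous blood in distinct colors at once.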

It should be recognized that the color map is illustrative of an appropriate map for display on a monitor of the computer system, such as a Sun 3/150C. Our preferred system, as illustrated by Fig. 26B, includes the central system 26-1, including a central processor, disc files, tape reader, color graphics monitor controller and color monitor, as may be configured with a Sun 3/150C, and an additional image grab (digitize) and display system element 26-2 including a frame grabber accommodating camera input 26-2a (R,G,B or NTSC/PAL) and output 26-2b and an additional internal display buffer associated with the image device. We prefer to output the image to recording devices via the display buffer (internal and not shown). The preferred input and output is R,G,B because of image quality. An R,G,B analog monitor is coupled to the display memory for high quality display of the image itself, separating it from the controls associated with the image on the computer monitor. While it is possible to video tape the full image from the monitor for subsequent use by commercial recording device 26-3 which directly captures the monitor signal and reformats it to video format, the preferred embodiment utilizes an analog monitor 26-4 to capture the image. A preferred monitor is an R,G,B analog monitor 26-4 which accommodates input and output of R,G,B with sync on green. The output of the monitor, or the green signal, can be used to drive a variety of satisfactory commercial recording instruments, such as a camera for record on film 26-5, a thermal printer accepting NTSC signals or G with sync as for grey scale input for a printed record 26-6, and, by connection via an R,G,B to NTSC/PAL converter 26-7, a record may be made in standard video format, such as with a VHS video tape recorder 26-8. Numerically controlled devices such as a milling machine 26-9a and laser resection controller 26-9b are controlled by the central system's serial I/O ports.

However, because the responses of the monitor devices and the video tape recorders vary, the visual result of color and grey scale will not be identical. We accommodate that variance by reformatting the values of the color map to adjust to the visual quality required of the actual output. Thus the above example 1 of the color map, which is suitable for display on the computer monitor, will be adjusted to increase the red level for R,G,B display of pink. The same adjustment is appropriate for yellow. Example 2 represents an adjusted color map for a target display device.

Thus the view of the surgeon as seen on the control monitor 19-5 may be quite complex. This monitor, or another monitor, can display the image in color, and provide not only the two dimensional view shown in Figure 19, but more complex three dimensional views. Thus, as an example of one reconstruction which we have done, there is shown in Figure 22 a lesion within a skull which is anatomically positioned and shows a skull section 22-3, the falx 22-4, the lesion 22-1, an astrocytoma, and blood vessels 22-2. These blood vessels were not viewable in the CT scan of the patient, but did appear and were able to be detected and segmented by the process of thresholding and interactive segmentation described herein. For the reconstruction illustrated by Fig. 22, each of the various objects was separately dissected by thresholding and interactive segmentation where required, and the composite image shown created. For illustration, the blood vessel is shown red, the falx yellow, the tumor pink, and the skull a white shade. The various objects can of course be made to exactly match the actual appearance of the object so that the physician can watch the colored progress of the operation. Additionally, the objects can be dissected from various modalities, including MRI and digital angiography, using the methods and elements described herein.

In addition to that illustrated, the inclusion function can be used at this point of display.

In connection with the visualization of the object to be viewed, we have an alternative preferred embodiment which combines functions already present with the pixel values obtained during boundary detection.

In this connection, in the preferred embodiment, because of computational speed, we detect the surface boundary of the threshold image, which is basically a monochrome, black or white, present or non-present image. In the alternative preferred embodiment the original grey scale data for the pixel is retained during three dimensional reconstruction.

Eight bits of the original data are retained, to provide byte data, as previously described. In order to take advantage of this retained data, a function which we term an "inclusion" function has been provided.

An inclusion function allows the adjacent pixels of the lesion or object, rather than the object itself, to be displayed via the inclusion function. The inclusion function is operated after thresholding and before boundary detection. The inclusion attaches the adjacent pixels to the object, which adjacent pixels do not necessarily fall within the thresholding level of the object itself. Thus for thresholding a bone, the pixel levels may be in the range of 1400 to 1250, while the pixel values around the surface could be in any range. The inclusion function retains the adjacent pixel values, in addition to the threshold value. The adjacent pixel values are laid over the surface of the image of the object. Therefore the laid over values are the values used in the display. It will be appreciated that the inclusion function thus allows display of elements, even of different modalities such as CT, MR, Ultrasound, digital angiography, PET, Nuclear and X-Ray, as combinations, in color, or as grey scale images, to form a complete three dimensional view of the object. The separate views are segmented, either directly or by interactive segmentation, or clustering, and then combined in the composite image for viewing. Thus a skull can be sliced, showing internal parts positioned correctly and colored with the appropriate hue of those elements which need to be considered. As some modalities show elements of the body better than others, this combination is useful in treatment. Thus for example, a trajectory can be defined to avoid elements which might be undesirable to penetrate. Also, with an overlay, which may be transparent, a two dimensional view can be overlayed over the three dimensional object. Radiation dosimetry calculations can define the proper seed size, and a data object duplicating the seed can be located in correct position, and a plan created for inserting the seed.
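A two dimensional sketch of the inclusion idea, retaining for each boundary pixel of the thresholded object the grey value of an adjacent outside pixel so it can be laid over the displayed surface, might read as follows. The function name, neighbor order, and toy arrays are our own assumptions, not the disclosed implementation.

```python
import numpy as np

def inclusion(grey, mask):
    # For each pixel inside the thresholded object (mask True) that has
    # an outside neighbor, retain that neighbor's grey value; these
    # retained values are what would be laid over the object's surface.
    out = np.zeros_like(grey)
    h, w = grey.shape
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    out[y, x] = grey[ny, nx]  # adjacent value retained
                    break
    return out
```

In the full system the retained values come from the slice data of whatever modality is chosen, which is what allows, for example, an MRI-derived surface texture to be shown on a CT-derived object.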

As an example, this can be useful in observing a scar on the surface of the object, or the embryo outline on the surface of the yolk of an egg, or a lesion on the surface of the uterus. For illustration, we have illustrated in Figure 23 the use of this technique to locate a lesion. The lesion 22-1 appears on the surface of the object 22-3 in Figure 23A. When that has been recognized, the tumor itself can be separately reconstructed, and the view is obtained in which the tumor 22-1' appears as a three dimensional construct in relation to the object 22-3' as shown in Figure 23B.

The point is that once the scar is seen, one may be interested in viewing the scar in the full three dimensions as well as viewing it on the surface of the object. The illustration of a scar is exemplary. This technique can be used in combination with interactive segmentation and erase functions to better qualify the extent of a lesion. Using this technique, adjacent material which may have a similar threshold value may be discriminated, and the range for thresholding adjusted accordingly, as can the actual location of the interactive segmentation scalpel edge.

Now by using the separate thresholding of the object on the surface, at its threshold value, or by segmentation, a combined three dimensional part of the two objects, as an example the full embryo and the yolk together, can be displayed together, both appropriately colored, if desired. Indeed, we have searched for dinosaur eggs which might be suitable for CT analysis with the hope of finding a fossilized embryo within the stone to reconstruct. To date that effort has not produced any results. However, it is possible to reconstruct with our system the original of a mummified body.

For additional rendering of sections of a 3-D object, in an alternate preferred embodiment which requires additional computation, cuts into the 3-D reconstructed object can be made. This method will be useful in previewing the actual surgery, since it enables segments to be cut away and viewed prior to surgery as if the surgery had been performed. It is also useful for anatomical illustration. Instead of displaying a single grey scale of the surface, as is done in the preferred embodiment as previously described, and as effective for the laser resection, cuts are rendered, whether they be simple slices across an object, such as a lesion or bone or damaged skull, or a conical insertion, or V-cut, or pyramidal cut into the object. The purpose is to cut open a skull and see the damage inside, seeing both the outer surface of the skull and the sliced portions of the brain inside. In order to make these cuts it is possible to use MODEL.

It is possible to display a skull cross section by a program known as back-to-front. However, this method only allows display of the interior structure of the bone itself, and as such the program is not generally useful.

However, the free tilt system previously described can be employed in the rendering of cuts and can use and combine different modalities. As an instance, the cut of an object determined effective in thresholding of CT can be combined with the data set from MRI, to render the cross section of the cut with MRI data, and an MRI data set used to create a 3-D object can be rendered with data from a CT data set.

It will be recognized that 3-D data sets from different modalities can be combined also in the same 3-D display. For instance, certain soft tissue may be shown to have better surface characteristics when utilizing MRI. The brain of an MRI data set can be combined with a skull of a CT data set.

In addition, the free tilt system previously employed can be used to render data from the same data set in cut-away views of the object.

So the description of the rendering of the free tilt surface data on an object is applicable to either modality's data sets. For convenience, we utilize CT data, since we have found CT data with our algorithms most susceptible to thresholding of both soft tissue and bony structures in a patient. To start, the data obtained from DT (the file containing the original grey scale from the CT or MRI transducer) is not discarded in the process previously described. The DT file contains the source pixel values for the painting in of the cut open 3-D surfaces for painting the contours of the cut away.

The particular data of the plane used for painting the surface of the cut away is obtained by the free tilt process previously described. This free tilt process is used on all of the needed DT data. Essentially a sub-set of the DT data applicable to the cut-away is employed in the free tilt process, and the other portion of the DT data is not used.

The free tilt process is used to obtain a plane of data with the grey scale values of the coordinates which were originally present. These values are painted onto the cut open contour where the cut away (plane or cone) intersected the 3-D reconstruction. Thus, the cut away view of a tumor can show the cross section of the cut, as illustrated in Figure 24. There the cross section 24-2 of the tumor 24-1 shows the cell configuration within the tumor.

In the preferred embodiment, if the object is cut, a smooth surface is shown. However, in the alternative embodiment, the cut away surface shows the inside of the object, as shown in Fig. 24. If it were a bone, the marrow would be shown while this object rotates.

This process of selective and progressive cutting away can continue, to run the surgeon through the entire operation. First the outer scalp tissue is removed showing the skull surface, then a cut is made through the skull showing the brain surface, with the cross section of the skull retained. As cuts proceed into the brain, the cross section of the brain is displayed. Generally this results in the view of the outer surface of the object together with an internal view rendering the actual surface features of any arbitrary cut-away surface, the topological information of the three dimensional object, and the tissue density information in the cut away portion of the three dimensional object, which may be obtained from the same modality data set or a different modality data set. This enables a complete anticipatory view of the surgery before it is actually accomplished, allowing alternative procedures to be examined to determine expected results.

During the process of reconstruction, and to create data sets representative of the object, we employ a process which we call "interactive segmentation". During the process of reconstruction a slice is displayed. Most tumors and organs are automatically segmented by thresholding; however, thresholding may combine attached elements or objects within the standard threshold range. Indeed, as a step in the reconstruction process we have found that careful thresholding ranges need to be chosen. A range of 0 to 400, as an instance, will generally be used to find the skin or surface of the subject. A CT threshold range between about 1200 and 1045 will be the threshold range (+/-100) where most tumors may be located. As illustrated in Figure 25, thresholding in this range will show blood vessels 25-2 connected to the tumor 25-1 which do not appear in the grey scale images normally obtained by the scanners. Thus a preview threshold may be employed before interactive segmentation. Interactive segmentation is a construct utilized to separate artifacts from objects when both the artifact and object have the same threshold value. This usually needs to be used in only a few slices of a data set, as the artifacts usually do not appear in all slices as connected to the object. Thus for example a connected blood vessel, typically one which has grown from a capillary to supply the growing tumor, having the same grey scale may be connected to a tumor only at a specific point and will not completely surround the tumor. This can be eliminated or included with interactive segmentation to define the object of interest. Other artifacts can be eliminated which connect to the object of interest.
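The threshold ranges mentioned (0 to 400 for the skin or surface; roughly 1045 to 1200 for many tumors) reduce to a simple range test over the slice pixels. The sketch below, including its toy pixel values, is illustrative only.

```python
import numpy as np

def threshold_mask(slice_px, lo, hi):
    # Binary mask of pixels whose CT values fall within [lo, hi],
    # the basic operation behind the preview threshold.
    a = np.asarray(slice_px)
    return (a >= lo) & (a <= hi)

# toy 2x2 "slice" for illustration
slice_px = np.array([[300, 1100], [1500, 1045]])
skin  = threshold_mask(slice_px, 0, 400)      # skin / surface range
tumor = threshold_mask(slice_px, 1045, 1200)  # typical tumor range
```

A preview of the tumor-range mask would reveal any connected vessels sharing the range, flagging slices that need the interactive scalpel.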

In order to define interactive segmentation we have developed it, in the form of the detailed C program in the appendix hereto illustrating unpublished code for achieving the interactive scalpel, using a pixel scalpel drawn by movement of a pixel cursor matching the speed of finger dexterity, which has many purposes, and which as a program is called SG, to describe it and for its use in interactively segmenting objects when thresholding alone does not work. As previously stated, thresholding fails to work when objects have excessive artifacts or very low contrast features. The segmentation program allows the user to define a region interactively with a track ball or mouse by drawing a contour line on the 2-D slice about the region of interest which includes the suspect lesion. A border is left between the region of interest and the object lesion where thresholding provides an adequate edge. There are times when it does not appear that a discernable edge exists (one which cannot be easily determined by the operator, in one of the few slices of a data set previously described); then the operator's judgment is used to intervene and trace or actually draw a separation line between the suspected two parts. This in computation actually becomes the parting edge between two parts. Thus in Figure 25, a green line (not the same as the straight green orientation line above) 25-3 would be drawn as an interactive scalpel. It would be drawn through the intersection of the blood vessel 25-2 and the edge of the tumor 25-1 and then swing wide about the tumor. In the illustration, with thresholding parameters set for the tumor, the arteries and the falx would appear blue as being within the parameter, as would various other dots there illustrated. The contour line of interest 25-3 would be green, and subsequently, as described here, the contour scalpel line of interest 25-3 would be filled for the mask MK with values representing the region of interest of that slice for subsequent processing by BF.
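The filling of the hand-drawn scalpel contour into a mask can be sketched with an even-odd point-in-polygon fill. This is not the SG program of the appendix; the square contour and grid size below are hypothetical stand-ins for a drawn scalpel line.

```python
def point_in_polygon(x, y, poly):
    """Even-odd rule: count crossings of a ray cast in the +x direction."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x coordinate where this edge crosses the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def fill_contour(width, height, contour, fill_value=1):
    """Build a mask grid holding fill_value inside the drawn contour."""
    return [[fill_value if point_in_polygon(x, y, contour) else 0
             for x in range(width)] for y in range(height)]

# Hypothetical square contour standing in for the scalpel line 25-3.
mask = fill_contour(6, 6, [(1, 1), (4, 1), (4, 4), (1, 4)])
```

Every grid point inside the closed contour receives the fill value, producing the per-slice mask MK that the text describes.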

As a method of reaching the point when interactive segmentation may be employed, it is possible to cluster threshold values.

This is accomplished by stepping the threshold levels. It is an alternative embodiment, since interactive segmentation will accomplish the desired results without the need of employing clustering in most cases. Clustering is accomplished by starting with a very narrow threshold range. When a suspect object starts to appear, the next successive threshold range is added. This can be accomplished in two methods. The original data marked by thresholding can be saved to a temporary display buffer, and the new threshold data also saved and combined into that display buffer (ultimately a mask is created), or the threshold area can be expanded to include the next adjacent narrow range. Either of these processes is repeated until an object appears which is capable of thresholding. During this process interactive segmentation can be employed; however, it is normally employed at the end of the process.

This clustering process can be illustrated by reference to Figure 25. Initially only one or two dots will appear in the tumor area, but as the range is increased, the dots will join and fill until the outline shown within the contour scalpel line of interest may be viewed. It is as if a definition by way of dots of increasing density comes from the vapor, as a method of image enhancement. Clustering itself for image enhancement has been used, we understand, in various NASA projects for viewing planets; however, this technique has not been previously employed in the environment we describe, and the process actually differs from that believed to have been employed by NASA.
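The stepwise clustering described above, widening a narrow threshold window until a connected object emerges, can be sketched as follows. The window center, step size, pixel counts and sample values are invented for illustration.

```python
def cluster_by_stepping(slice_rows, center, step, max_half_width, min_pixels):
    """Widen a threshold window about `center` until enough pixels appear."""
    half = step
    while half <= max_half_width:
        lo, hi = center - half, center + half
        # count pixels falling inside the current, still-narrow window
        hits = sum(1 for row in slice_rows for v in row if lo <= v <= hi)
        if hits >= min_pixels:
            return (lo, hi, hits)   # object has "come from the vapor"
        half += step                # add the next successive range
    return None

slice_rows = [[1100, 1150, 1205], [1210, 900, 1195]]
result = cluster_by_stepping(slice_rows, center=1200, step=10,
                             max_half_width=100, min_pixels=4)
```

With these invented values, the window must grow to (1150, 1250) before four pixels appear, mimicking the spaced dots joining into an outline.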

Ultimately, either through clustering or correctly initially picking a thresholding range, a general outline of the suspect object is obtained.

Thus by picking the range, initially you set the threshold parameters, say at 1200 to 1180, which shows the general outline in blue of those parts on the original image. Then with interactive segmentation you can draw a tentative outline about the contour of the suspected lesion. This gives the line outline. This is then filled with all pixels. The format can then be erased by a wipe (a rub-out of the values in the display buffer which are used for automatic thresholding to define the edge of the object of interest within the threshold parameter range).

For ease of operator control, the image is displayed as a background in grey scale on the screen. The next overlay of the background (not the same data but with a like map) is determined by setting the threshold parameters. For ease of determination, we have used a threshold parameter range to create in the display buffer a blue coloring of all pixels which occur within the threshold range. With clustering the blue coloring causes spaced points to appear, and as the range is enlarged, these spaced points will fill with blue until the object of interest appears. Because of the drawing requirement of black and white, this blue coloring is not illustrated in Figure 25.

Now at that point, unless during the clustering process interactive segmentation is used to discriminate, there may be artifacts connected to the object of interest. This same feature will occur in some slices when threshold ranges are picked. It is interactive segmentation which allows the artifacts to be segmented from the object of interest. Interactive segmentation allows the operator to intervene and draw in a display buffer a scalpel line of interest 25-3 about the object of interest (25-1) so the underlying data set can be segmented with the scalpel aid so that it can be automatically thresholded within the region of interest. This is not the same as the box around the region of interest previously described, but the result is a free form contoured container drawn by the operator for the object of interest, which like the box can thereafter be processed in the same manner as a box for all values within the filled region constituting the contoured container.

The interactive segmentation starts with drawing a scalpel line 25-3 about the object of interest. It swings wide about the object of interest 25-1 in the area surrounding it in the 2-D view where there is no interconnected tissue having the same threshold values, and then is drawn at the edge of the object of interest to define the edge between the connected artifact 25-2 and the object of interest where both are in the threshold range being employed. This takes a certain skill, as sometimes the operator may inadvertently include part of the artifact. However, this inadvertent inclusion can be determined by a wipe or erase function. By the use of the inclusion function previously described, this line can be moved pixel by pixel until no adjacent artifact will be included in the contour line of interest 25-3. Once the object has a line outlining the container contour defined, a pointer will locate the object of interest area, which is then filled so that all points within the contour border are filled with the same value. We prefer to draw the interactive line in green, and fill it with green. All this is done in the display buffer containing the blue thresholding colors, which overlays the underlying representation of the data set to be employed for thresholding. Since the area of the container is now colored in green, the erase function utilizes a pointer or cursor to reach the edge of this area, and then where it overpasses the green it will wipe that color out, showing the underlying grey scale.

Thus with this technique, in combination with the threshold parameters, the operator can view the underlying data and stop at the true edge of the object. The true edge can then be drawn again with a wide sweep about the area where the threshold parameters show that there is no tissue that has a threshold within the current range, and the container redefined, and filled. It is now the filled (with green) container which is used as the data set for thresholding. Thresholding of the underlying data set determines the actual scope of the data in the plane and determines the actual edge of the object of interest from the data in the plane which falls within the interactive segmentation container.

Thus the operator looks at the suspect lesion in various views. Interactive segmentation is used for the 2-D views where it seems that direct thresholding may be incorrect because of a connection to other artifacts. The operator then starts the contour outline intersecting the interconnection to the artifact with the line scalpel to cut it, and continues to trace the scalpel footprint in the open space around the suspect object of interest. This is then filled to determine the container of the object, which is subsequently processed as if it were a region of interest. In the process a mask is created. The mask can be compared with every point on the DT file, which is retained unchanged, and when interactive segmentation is used in thresholding, the mask file MK is compared with the DT file, and if the point in the MK file is not green, that data in the DT file is ignored. The line written in memory is similar to a logical overlay, and all points within the curved line are then filled with a value. During this process an MK file of the mask is created in the register. Subsequently, both this MK and the original DT file are inputs to BF during the process of thresholding. This input is made after all slices requiring interactive segmentation, and slices where threshold parameters may be used without interactive segmentation, are completed. Thus after the first slice is reviewed, and decisions made whether to proceed with interactive segmentation or not, and if yes, the interactive segmentation is completed. Then the next slice is reviewed and the process repeated for only the ones where it is needed. For each slice the container of interest is entered. For those slices for which the separate mask which has been called a container of interest is not created, the entire area containing the threshold parameter indications is considered a region of interest.
Then you enter the DT (the original grey scale data) file into BF and the MK masks (the results of interactive segmentation for each slice, and where no container definition is required, the filled threshold parameter areas) into BF, and the process described by BF is executed. BF extracts the data from the DT files that corresponds to the mask with a logical AND of the mask region and grey scale data. The mask is all ones; everything outside is zero, so everything outside of the region of interest is eliminated. Thus the elements shown in Figure 26A have been described, where DT is the data file, which is fed to the TP threshold parameter review, which may be directly fed to an MK file, but which also can be reviewed with IS, interactive segmentation (SG), which also can create an MK file for that slice. Both DT and MK files are inputs to BF, which is then an input to BD and DS, illustrated by BD. At each point a frame may be created by the movie file to show the steps of the process.
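The BF extraction step just described, a logical AND of the MK mask with the DT grey scale data, can be sketched as follows; the sample values are illustrative.

```python
def apply_mask(dt_rows, mk_rows):
    """Keep DT grey values where the MK mask is set; zero elsewhere.

    The mask is treated as all ones inside the region of interest and
    zero outside, so the AND eliminates everything outside the region.
    """
    return [[v if m else 0 for v, m in zip(drow, mrow)]
            for drow, mrow in zip(dt_rows, mk_rows)]

dt = [[120, 200, 50], [80, 170, 30]]   # original grey scale slice (DT)
mk = [[1, 1, 0], [0, 1, 0]]            # filled container-of-interest (MK)
roi = apply_mask(dt, mk)
```

Only the grey values under the filled container survive for the subsequent BD and DS stages.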

With modifications to the BF file controls, we can fill an image by crossing the edge, add optional antialiasing routines, and perform other things which can be used to edit the object data set so as to eliminate what might be considered garbage. We prefer to do this in the BF file, as editor subroutines. Morphological operators to clean up cluster scatter are appropriate for the BF file and are employed. However, these are only done as a secondary operation when tumors are involved, since the extent of growths from the tumor defined by thresholding can serve to show the amount of infiltration into surrounding parenchyma, and the resection coordinates which come from the act of thresholding can be altered to allow the surgeon, either computationally or with the aid of an override (a trackball or joystick on the laser controller), to include all area which may be within the coordinates of the indicated infiltration. The presurgical planning study can be done with and without contrast agents and in multi-modalities. MRI data can be overlayed with CT data, and the composite data can be used for the creation of the three dimensional composite construct which is utilized to drive the laser.

After the process defined by BF is completed, the process defined by BD is entered and executed. A typical lesion will have about 250,000 faces, but they can run higher. Thereafter, DS is used to display the image. Sometimes, a slice may be missed and the threshold window on display will show it interconnected to another object.

At that point the process is repeated to separate the various objects which result, if that is desired. Often the representation is sufficient for initial planning, but for surgical planning where a numerically controlled device is employed, if the object has connected artifacts, the process is repeated until the object is cleanly defined. A scene may include various objects of interest. Each of these objects is separately defined. The objects can then be combined into a DS file to display a combined image. The combined image may have all objects displayed in grey scale, or each of the separate objects in a composite scene can be colored individually by the color rendering previously described.

In addition, since the coordinate values are retained, the composite scene may display the results in accurately spaced relationship corresponding to their position in vivo. Various parts of the composite scene may be made transparent so that it is possible to view a hidden tumor beneath a skull in 3-D rotation of the scene using the movie frame display. However, visually, the occlusion of the hidden tumor when a section of the skull is present often gives a more satisfactory representation for surgical viewing.

Now while we have described the construct elements and processes primarily with respect to CT data, the same may be used for MRI data, and data from other transducers. It is possible to overlay two modalities with DS. One image can be one modality, and the other the other modality. The MRI can be unwarped to overlay. Basically, unwarping constitutes flattening a warped image to fit within the coordinates of the planned 2-D section, shown by a modality which doesn't warp, such as CT. This unwarping can be done on the original DT file, and a subset DT file created of the unwarped image for further processing.

Now during the surgical procedure, as well as for preplanning, it is useful to have a full data set to utilize for processing so that an ongoing view of surgery in progress, or preplanning of such surgery or other therapy or diagnosis, can be accomplished. This is desirably obtained with ultrasound, and we will describe a method which may be used to obtain this real time view during surgery.

With reference to Figures 27A-D, therein is illustrated a means for obtaining a full data set of a three dimensional scan by a transducer which is simply placed in position. The preferred embodiment is an ultrasound transducer which may be placed adjacent the skull or in a burr hole and left in place without any movement to monitor the progress of an ongoing operation. Illustrated is a transducer in the form of an ultrasound transducer 910 which is positioned on the object with ultrasound emitter 911 spaced above the subject 912, but in a fluid filled chamber 913 with a rubber membrane 914 enclosing the area. The membrane contacts the subject, and a lubricant coating 915 is used between the membrane and subject to assure good conduction of sound.

The transducer is coupled to an ultrasound computer which digitizes and displays two dimensional images, as is a standard procedure. Commercial ultrasound devices, such as those sold by Diasonics, are illustrated by this kind of device 917. These devices can have both RS170 or digital output of the representations of the subject scan.

However, unlike the normal transducer which takes a two dimensional slice upon each scan, the transducer emitter/receiver 911 is caused to rotate about an axis, the vertical axis as illustrated, in order to obtain a complete three dimensional data set. The transducer support 923 is caused to rotate by a stepper motor and controller 916. The rotational position of the transducer support 920 is marked with reflective marks 921. A shaft position encoder and sensor illustrated at 922 (including circuitry which is known, as illustrated by those at page 481 of Sourcebook of Electronic Circuits, McGraw-Hill Book Company, 1968, which is incorporated herein by reference) locates the angular position of the shaft and provides information to the stepper motor and controller 916. Accordingly, the stepper motor can cause the scan (which normally results in a triangular image) to have the image displayed be representative of a rotating scan, with an image taken at each degree or part of a degree. The actual degree of rotation about the axis is determined such that each pixel at the most distant point of the triangle is sensed. These images are placed in a buffer for recombination. They can be combined quickly using array processing procedures, but in our preferred embodiment are passed into a frame buffer and stored as identified frames. With an array (or parallel array, depending upon amount of data processed) processor, construction of the complete data set with all attendant grey scales is accomplished by logical OR positioning of the grey scale data in a three dimensional array. Thus if the location is already filled with data it is replaced with the most recent data. If it is empty it is filled. But if there is no data to fill the location and data is already present at that point, the data remains in position.
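The rule for recombining rotated frames into one volume, as stated above, can be sketched with a sparse dictionary standing in for the three dimensional array; the coordinates and grey values are illustrative, and `None` is our stand-in for "no data".

```python
def merge_sample(volume, key, value, empty=None):
    """Place a grey-scale sample into the 3-D set.

    New data fills an empty location or replaces older data; but when
    the incoming sample is empty, data already present remains in place.
    """
    if value is not empty:
        volume[key] = value   # fill, or replace with the most recent data
    # an empty sample never erases existing data
    return volume

vol = {}
merge_sample(vol, (0, 0, 0), 120)   # empty location: filled
merge_sample(vol, (0, 0, 0), 130)   # occupied: replaced by newest data
merge_sample(vol, (0, 0, 0), None)  # no new data: existing value retained
```

A dense three dimensional array indexed by (x, y, z) would follow the same logic; the sparse form merely keeps the sketch short.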

Figure 27B illustrates the principle of transmitting the image data from a primary to a secondary coil. The transducer emitter/receiver 911 signals are transferred between the rotating secondary coil 933 and the static system analysis primary coil 934.

Figure 27C illustrates schematically an exploded view of the coils of Figure 27B. "Pot" cores 933 and 934 (manufactured by Ferrme and others and known as FERRME pots) are mounted on a common shaft with a spacer washer 936 between them. The primary is wound on the static core 934, functioning as the pick up system analysis coil. The secondary is wound on the secondary rotating core 933. The direction of winding (a bobbin can be used) is shown on static core 934 by the arrow.

While Figure 27A is schematically illustrated, it will be understood that it can be of different sizes and shapes. In our preferred embodiment, the transducer is of burr hole size at its end (capable of being inserted within the skull), or is coupled to the skin of a subject's head with a lubricant. The transducer is held in place by an attachment element 938 to a stereotactic frame 939.

As illustrated schematically in Figure 27D, as the transducer 910 is held in place (by stereotactic frame 939 for the subject 912), each image 941 captured by the rotating wave may be displayed on a monitor 942. The same monitor can likewise display the corresponding rotation image 943 of preplanning accomplished by other modalities, such as the CT and MRI reconstructed views, under control of the system computer 944. By this method, not only can the current progress of an operation be seen, it can be compared with the preplanning views, and if necessary accommodations to the current or preplanned procedures can be accomplished. By the use of a non-invasive ultrasound image the patient can be scanned continuously during the course of the operation, without risk of the injury which would occur with X-ray wave scans.

In addition to the features previously described, it is useful to employ an image transfer apparatus, and so we have described herein the preferred apparatus and method of transferring images, which is particularly useful in our medical application, but which may also prove to be generally useful for other image and voice transmissions.

With reference to Figure 28, at the transmission end a source of TV signals, such as a camera, which may be a charge coupled camera or video board, puts out attribute signals 28-12, 28-14, 28-16. These signals correspond to the RGB signals of the camera, but essentially those of a TV, after decoding from either PAL or NTSC. Sync is preferred to be negative and on Green. If the camera or video source is NTSC or PAL, there is included (not shown) decoder circuitry to place the attribute signals into the transmission mode, again preferably corresponding to the digitized RGB signals of the RS 170 standard present in the camera or master frame store 28-10, as the camera 28-10 can alternatively be a master frame store. The master frame store can be located in a computer, such as a PC, or in a video board. In the event the original signals are not the desired 8 bit digital, then attributes 28-12, 28-14, 28-16 are converted into 8 bit (for 256 shades of grey) by respective analog-to-digital converters 28-18, 28-20, 28-22 and transferred to a respective frame store 28-24, 28-26, 28-28; otherwise the bits are transferred as is to the respective frame store 28-24, 28-26, 28-28 without the conversion. This is determined by the setting of switch 28-23, which may be a set or inhibit bit.

While the preferred embodiment is illustrated as an image represented in 8 bits, in an alternative embodiment, and within the contemplation of the present invention, are representations in larger bit formats, 12 or 16 bits, as will be discussed with respect to alternative embodiments.

In the preferred embodiment, as may also be applicable to the alternative embodiments, each frame store operates at two speeds. Addresses are scanned in turn, as will be later described, and image data is either stored after conversion or retrieved for sending; data retrieved from the frame stores 28-24, 28-26, 28-28 is provided as the parallel data input to respective USARTs (Universal Synchronous/Asynchronous Receiver/Transmitters, which are widely available as chip sets) for ultimate transmission down voice grade telephone lines by modems 28-36, 28-38, 28-40. The modems interconnect to the switching network 28-41, representing the telephone switching network, enabling the signals to be transferred to a local number and thereafter through the telephone network, either by high speed cable, satellite, or whatever means, to the receiving end of the transmission. It will be recognized that one aspect of this apparatus is that the devices described can be utilized and be insensitive to transmission delays of any line of the switching network 28-41.

In an alternative embodiment, not necessary to specifically illustrate as the circuitry is essentially identical, 28-22, 28-28, 28-34, 28-40 can be used for more identical items, as can the third line 3106, 3100, 394 in Figure 30, by addition of similar elements and software image segmentation. At any rate, the USARTs function as parallel to serial data converters for transmission of data by the modems. The USARTs comprise a parallel in serial out shift register which is loaded from the frame store. When the USART has shifted all the data out to the modem, it signals it is empty on a demand line 28-42, 28-44, 28-46. Each A/D converter 28-18, 28-20, 28-22 signals when it has completed conversion on a respective "complete" line 28-48, 28-50, 28-52 and provides data on a respective 8 bit wide parallel bus 28-54, 28-56, 28-58.
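The parallel-in serial-out behaviour of the USART shift register described above can be sketched as follows. The 8-bit word value is illustrative, and LSB-first order is our assumption (real USARTs are configurable).

```python
def shift_out(word, bits=8):
    """Shift an 8-bit parallel word out one bit at a time, LSB first.

    When the last bit has been shifted out, a real USART would raise
    its demand (empty) line to request the next parallel word.
    """
    out = []
    for _ in range(bits):
        out.append(word & 1)   # emit the current least significant bit
        word >>= 1             # shift the register toward the output
    return out

serial_bits = shift_out(0b1011_0010)
```
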

Figure 28 shows the frame store 28-24 by way of illustration. The store comprises a random access memory 28-66 with data input from the 8 bit parallel output of the A/D converter 28-18 on output bus 28-54, data output to a memory data output bus 28-60 leading as input to the USART 28-30, and receiving its 12-bit address bus 28-68 from an address multiplexer 28-70 receiving, as a first selectable input, an output bus 28-72 from a load address counter 28-74, receiving, as a second selectable input, an output bus 28-76 from a retrieve address counter 28-78, and receiving, as a controlling input, the "USART COMPLETE" signal 28-42 from the USART 28-30. Whenever the USART 28-30 requires the provision of a new parallel word, the multiplexer 28-70 provides the retrieve address bus 28-76 onto the address bus 28-68, but otherwise the multiplexer 28-70 provides the load address bus 28-72 onto the address bus 28-68.

The retrieve counter 28-78 is incremented by being clocked by the "USART COMPLETE" signal, and the load counter 28-74 is clocked by the "A/D COMPLETE" line 28-48. The "A/D COMPLETE" line 28-48, when signaling, also causes the memory 28-66 to load, while the "USART COMPLETE" line 28-42 causes the memory 28-66 to retrieve data.

Every time the A/D 28-18 has a new result, it stores the result in the next sequential location in the memory 28-66, at the address pointed at by the load address counter 28-74, and every time the USART 28-30 needs a new parallel data word to send, it retrieves it from the address pointed at by the retrieve counter 28-78.

The load counter address bus 28-72 is examined by an upper limit detector 28-80 and is inhibited 28-82 when an upper limit is reached, i.e. the memory 28-66 is filled. The retrieve counter address bus 28-76 is also examined by an upper limit detector 28-84 and is reset 28-86 when the upper limit is reached, i.e. the memory 28-66 has been emptied. The reset 28-86 also resets the load address counter 28-74, which then resumes loading the memory 28-66. Thus the load address counter and the A/D 28-18 cooperate rapidly to fill the memory and then stop until the retrieve counter 28-78 and the USART 28-30 have emptied the memory 28-66. The cycle then recommences.
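The fill-then-empty cycle of the load and retrieve counters described above can be sketched as a small simulation; the memory size and pixel stream below are invented for illustration.

```python
def frame_store_cycle(pixels, mem_size):
    """Fill memory to its upper limit, empty it completely, recommence."""
    sent = []
    mem = []
    i = 0
    while i < len(pixels):
        # load phase: A/D fills memory until the upper limit detector trips
        while len(mem) < mem_size and i < len(pixels):
            mem.append(pixels[i])
            i += 1
        # retrieve phase: USART empties the memory; the reset that follows
        # returns both counters to zero so loading can resume
        sent.extend(mem)
        mem = []
    return sent

out = frame_store_cycle([10, 20, 30, 40, 50], mem_size=2)
```

The alternating phases preserve pixel order end to end, which is the property the counter arrangement provides in hardware.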

The load counter address bus 28-72 can be coupled back 28-88 to control the camera or, if the camera is replaced by a master video store, to control the master video store of a computer. Where such a back coupling 28-88 exists, the "USART COMPLETE" signal 28-42 can also be coupled to inhibit counting of the load address counter 28-74 to thereby halt loading during retrieval and allow proper progression to the next pixel as soon as retrieval is complete.

The transmission mode can be set as line transfer, for a single line of the video screen, or for full frame transfer before a full complete handshake is completed. Preferably a line complete transfer is included along with a frame complete. In this manner error checking by row and column can be undertaken.

Figure 30 shows the receiver, which is essentially the transmitter of Figure 28 working in reverse. Receiving modems 390, 392, 394 feed receiving USARTs 396, 398, 3100, which in turn assemble serially received bits into parallel data words fed to the receiver frame stores 3102, 3104, 3106, operating similarly (but in reverse) to the frame stores 28-24, 28-26, 28-28 of Figure 28. The outputs of the receiver frame stores 3102, 3104, 3106 are coupled as input to the D/A (digital to analog) converter and recombiner 3108, which converts 8-bit retrieved words from the receiver frame stores 3102, 3104, 3106 into analog signals and recombines the attributes for presentation to a video monitor 3110.

Not so far mentioned is the audio channel, which can be a straight voice link, or which can also be encoded (Figure 28) by a sound signal source such as a microphone 3112 driving an amplifier 3114, in turn feeding a sound analog-to-digital converter 3116 storing sound-representative 8-bit words in a sound bit store 3118. The bit store 3118 works much as the frame stores 28-24, 28-26, 28-28, but the input (from 3116) is clocked at a predetermined rate (1000 samples/second for 5000 Hz audio bandwidth). At the receiver (Figure 30) data from the transmit (parallel to serial converting) USART 3120 and transmit modem 3122 is received on a receiving modem 3124 and a receiving sound store 3126 via a serial to parallel converting receiving USART 3128. Sound representative words are retrieved from the sound store 3126 at a fixed clock rate by the D/A converter 3108 for use as a sound signal by the video monitor 3110.

For storage, the monitor 3110 can be replaced by a video recorder.

Synchronization between receiver and transmitter can be achieved by each frame store 3102, 3104, 3106 and 28-24, 28-26, 28-28 recognizing a predetermined succession of signals for more than a predetermined number of successive pixels and resetting to a predetermined address upon receipt. For example, more than two lines' worth of clamp level sync (frame synchronization on a conventional TV) can reset the counters to the all-zeros position, and line sync signals can reset counters in their lesser significant bits to synchronize each line.

Figure 31 shows an exemplary TV signal where image areas 3128 are always equal to or greater than a reference level 3130 and where short line sync pulses 3132 and longer frame sync pulses 3134 are always less than the reference level.

Figure 32 shows how this is used for synchronization both in the receiver and the transmitter. The frame store 28-24, 28-26, 28-28, 3102, 3104, 3106 has a sync monitor 3136 added which seeks, by examining incoming 8-bit parallel code words, to find code words representing voltages below the reference level 3130. The sync monitor 3136 also examines the pixel incrementing signal 28-48 from the A/D converter (in Figure 28) and gives a first output 3138 operative to set all 18 bits of the load address counter 28-74 to zero if the long frame synchronization pulse 3134 is detected, and to set only the least significant 9 bits of the counter 28-74 to zero if the shorter line sync pulse 3132 is detected, output 3140 being indicative thereof. This activity takes place both in the receiver and transmitter.
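The counter resets driven by the sync monitor, clearing the whole counter on a frame pulse but only the least significant 9 bits on a line pulse, can be sketched as follows; the word representation, level values and reference level are illustrative.

```python
LINE_BITS = 9  # least significant bits cleared on a line sync

def apply_sync(counter, word, reference_level):
    """Reset an address counter when a sub-reference sync word is seen."""
    if word["level"] >= reference_level:
        return counter                       # image area: no reset
    if word["long_pulse"]:
        return 0                             # frame sync: clear all bits
    return counter & ~((1 << LINE_BITS) - 1)  # line sync: clear low 9 bits

c = 0b1011_0101_1101  # some mid-frame counter value (illustrative)
c_line  = apply_sync(c, {"level": 5, "long_pulse": False}, 16)
c_frame = apply_sync(c, {"level": 5, "long_pulse": True}, 16)
```

Clearing only the low bits realigns the counter to the start of the current line while leaving the line-count portion of the address intact.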

The attributes can be the three RGB colors, separately sent, or every other pixel, or a top half (or other segment) of a picture sent separately from a bottom half (or other segment) of a picture, and so on.

The luminance signal, which is of high bandwidth, can occupy one channel, and the two NTSC or PAL chrominance signals (of lesser bandwidth) can both share another channel. More than three channels can be used.

However, the luminance signal can be stripped and reinserted by the sending and receiving computing device. By the use of 8 bit plane graphic boards (Imaging Technology, Matrox, ATT Targa), sending of grey scale data along with a color identifier of the images through the system, ultimately to the R, G, B buffers of these devices, will enable transfer and recombination with reinsertion at the other end of the transmission.

The recombiner 3108 looks at the address counters loading the frame stores 3102, 3104, 3106 and, when all are inhibited (i.e. when all frame stores are full), knows a complete image has been received. The recombiner 3108 then presents the signal to the monitor 3110 or video recorder. In receiving, when the frame stores 3102, 3104, 3106 are full, the loading counter 28-74 is inhibited even from being reset until the recombiner 3108 has completed image retrieval.

The A/D (D/A) converter separator/recombiner 3108 can be implemented in the form of a video board of 8 (or better) bit planes, such as a Targa board (ATT) or F100 board of Imaging Technology, Inc. With such a structure, the input/output of the video board A/D (D/A) converter separator/recombiner provides 8 bit plane data to separate buffers for each color segment of the image, removing the requirement to transfer color burst signals, as they can be added during the time of recombination and, if necessary, encoding as NTSC or PAL signals.

It should be recognized here that the primary contemplation of the invention is the sending of color images. However, the frame synchronization driver can be used to convey grey scale data and utilized for compression. In addition, compression techniques can be employed to further compress data by data compression algorithms.

Thus the synchronization driver and controls for the frame stores can separate signals received and allocate them to the various universal synchronous/asynchronous receiver/transmitters, wherein each of said modems receives an image attribute signal from its carrier channel which is applicable to the frame store of the receiver; the frame store includes bit addresses having a value setting for indicating an image complete signal for said image attribute signal. The frame store controls also have a bit store which, when a complete signal is furnished, causes the images to be transferred to a display or print buffer for display of the presentation of a composite image including a plurality of image attribute signals on a monitor or printer when all necessary image attributes for forming said image are complete, regardless of the carrier line over which they are conveyed.

It should be recognized that an RS-232 port control on a personal computer is capable of acting as the USART, and that three serial ports on such systems could function as the ports through which the USART signals are conveyed to the modems. The modems of course could also be internal boards within a personal computer. The multiplexer of the control for the frame and sound stores can be a processor of a personal computer. The frame stores can be allocated portions of memory of a computer. Accordingly, it should be understood that the synchronization and controls shown in Figure 29 and Figure 32 are in an alternative embodiment incorporated by a conforming software driver in a computer having the appropriate number of I/O ports for the desired number of attribute signals. Preferably this number is three, so that three voice channels can be connected for sending R, B, G signals to display in full grey scale (256 shades of grey) a full color palette on display or for printing. In order to compress a larger image, for instance a 1024 x 1024 image or larger, the sending and receiving frame stores can be segmented and the image sent only in grey scale to achieve time compression.

Time compression is possible by using all three (or more) channels for sending color images, but the same three channels can also be used to send a grey scale image three times as fast. Portions of the grey scale image are assembled in each frame store, and then, when full, displayed or printed from the display buffer. It should be recognized that the same hardware can be used in common for receiving and transmitting, since so much of it is common to the two processes. If two-way image transfer is desired, the modems 390, 392, 394, 28-36, 28-38, 28-40 can either be dedicated one way, or can alternatively send and receive a word in an interleaved manner.

In an alternative preferred embodiment which can utilize the same hardware of the preferred embodiment, images can be transferred serially from one system to another over a modem. Images are generated either in a camera or computer and frames created representative of the image. With reference to Figure 33, we have not described in detail the circuitry necessary to capture an image. Image capture element 3601 is representative of means to capture images and represent the images 3602 in a frame buffer. Representative devices capable of capturing an image are the Targa boards of ATT, Indianapolis, IN, Matrox, a Canadian company, and Imaging Technology Incorporated, Woburn, MA. While PAL or NTSC decoders may be front ends to (and outputs from) these devices, as is advantageous to allow inexpensive video cameras to plug into the system, the image is decoded or fed directly from a camera into a multiplexer MUX, as R, B, G signals, typically with sync on green. From the MUX the signals are passed through a luminance correction element LUX and loaded into memory, as red, blue, green. Typically the devices have a single display buffer.
For true color, each color component should be provided with its own display buffer; accordingly, as illustrated with respect to the alternative embodiment, a display buffer is associated with each color, as shown by the dotted line division of the display buffer.
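One way to realize the grey-scale time compression described above is to allocate scan lines of the image to the channels round-robin, so each frame store assembles its share of the lines; this particular allocation scheme is an assumption for illustration, not mandated by the specification:

```c
#include <assert.h>

/* Assumed scheme: scan line n of the grey scale image travels on channel
 * n mod nchannels, so three channels carry the image three times as fast. */
static int channel_for_row(int row, int nchannels)
{
    return row % nchannels;
}

/* Number of rows a given channel carries under that allocation. */
static int rows_on_channel(int nrows, int nchannels, int channel)
{
    return (nrows - channel + nchannels - 1) / nchannels;
}
```

For a 512-line image on three channels, two channels carry 171 lines and one carries 170, and the receiver's frame stores reassemble them by row address.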

In addition, the display buffer preferably should be at least a 16 bit display buffer, preferably a 24 bit display buffer (i.e. 16 or 24 bit planes). However, for the purposes of illustration of the invention, we refer to the 12 bit plane device manufactured by Imaging Technology Incorporated. The device can be modified as illustrated herein to achieve a representation of a color captured image, and display it remotely through voice grade (and also digital) lines in accordance with the description. This 12 bit plane board (4 bits for each color are used in the illustrated embodiment) will enable transmission of color images with 256 shades of grey of a grey scale image, and 16 shades of color. The 16 bit plane device (or 15 bit plane device) will, as will be understood after a complete understanding of the mapping sequence, be able to represent 32 shades of color (5 bits for each color), and the 24 bit plane device will be able to represent 64 shades of color (6 bits for each red, blue or green color).

For our medical applications, all images are structurally represented in grey scale, and color is pseudo color, or a derived representation of the true color as has been described.

The advantage of this structural representation is that each red, blue, green image contains the same identical data as a binary bit mapped image file, with tag registers for control functions. Because the images are identical they can be transmitted in one third of the time, as the identical data need only be sent once rather than three times, regardless of the shades of color employed (i.e. 12, 15, 24 bits). However, when 256 shades of color are used, the images will differ, and all three bit maps will be sent.
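A sender can test for this case directly; a sketch (names assumed) of the check that lets one plane stand for all three:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* If the red, green and blue planes hold identical data (grey scale or
 * pseudo color), only one plane need be transmitted; otherwise all
 * three bit maps are sent. */
static int planes_identical(const unsigned char *r, const unsigned char *g,
                            const unsigned char *b, size_t n)
{
    return memcmp(r, g, n) == 0 && memcmp(g, b, n) == 0;
}
```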

Accordingly, using the transmission device described for the preferred embodiment, additional channels can be used to compress the sending of the image, and at the same time provide the flexibility to send different images. In accordance with the invention, for a single image, the image can be sent with line by line synchronization, area segmentation synchronization, or full image synchronization. Area segmentation can be, for purposes of illustration and preferably, into the middle section of the image, the upper section and the lower section. Essentially, image segmentation synchronization enables the selection of the appropriate number of channels to be employed to achieve the desired transmission rate. More channels allow faster transmission rates. The channel transmission speeds are dependent upon the quality of the line maintaining error free (or substantially error free, as the images can be transmitted even with errors which may not realistically affect the suitability of the received image) transmission at the transmitting baud rate.

According to our calculations, a 512 x 512 true color image could be transmitted at 9600 Baud in approximately 20 seconds. Since standard video is typically a much smaller image expressed in pixel resolution, a true color image of 256 x 256 pixels could, by the technique using only three lines, be transferred in approximately 5 seconds. However, a full (pseudo) color image requiring the sending of a single frame would transmit at 9600 Baud in about 7 seconds, without compression. A 256 x 256 pixel image would transmit in about 3 seconds. All of these calculations are for uncompressed data. There is a small amount of overhead in the transmission time, which will be described, as will the preferred process of reducing it. Included in the process of reduction is the possibility of compressing the image data being transmitted. By using data compression (e.g. Unix compression) with some images we have been able to reduce the data to about 10% of the original data.
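As a rough model of such estimates, raw transmission time is the byte count times the bits per character on the line, divided by the aggregate baud rate; the 10 bits per character (start/stop framing) and the neglect of protocol overhead are simplifying assumptions, and the figures quoted above additionally reflect the bit-plane reductions described:

```c
#include <assert.h>
#include <math.h>

/* Naive transmission-time estimate: asynchronous serial lines carry about
 * one 8-bit character per 10 baud (start + 8 data + stop bits), and
 * parallel channels divide the time. Handshaking and error checking
 * overhead are ignored here. */
static double tx_seconds(long bytes, long baud, int nchannels)
{
    return (double)bytes * 10.0 / ((double)baud * (double)nchannels);
}
```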

However, by employing compression techniques and additional lines, transmission time can be reduced substantially, to transmission times of a few seconds. While for most applications of still data adequate time compression is achieved with three lines, additional lines reduce the time arithmetically by simple division, if one ignores the error checking time; with row-column error checking, which is adequate for this purpose, only an additional overhead of 2% is required.
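Row-column error checking of the kind mentioned can be sketched as simple parity over each row and each column of a block; the block dimensions and the choice of XOR parity are illustrative assumptions:

```c
#include <assert.h>

/* Compute one parity byte per row and per column of a rows x cols block.
 * The receiver recomputes these; a single corrupted byte is located by
 * the intersection of the failing row and column parities. */
static void row_col_parity(const unsigned char *data, int rows, int cols,
                           unsigned char *row_par, unsigned char *col_par)
{
    for (int r = 0; r < rows; r++) row_par[r] = 0;
    for (int c = 0; c < cols; c++) col_par[c] = 0;
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++) {
            unsigned char v = data[r * cols + c];
            row_par[r] ^= v;
            col_par[c] ^= v;
        }
}
```

The overhead is (rows + cols) check bytes per rows x cols data bytes, which for reasonably sized blocks is on the order of a few percent.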

In implementing the embodiments described herein, it may be conceptually worthwhile to understand that the synchronization device incorporates the frames transmitted into a binary file including the bit-mapped frame image, or the segments of the file which are allocated to be transferred via a particular modem.

Accordingly, the interconnection protocols for transmission of these files may incorporate standard communication protocols, such as KERMIT (developed at Columbia University) or other error checking protocols.

When the synchronization device (preferably a personal computer or computer workstation) has made connection to the receiving device via dial connection, handshake, clear to send, and transmit are handled by the communication protocol with a smart modem (e.g. Hayes compatible) and the communication system. By utilizing this kind of modem, having full duplex, half duplex, autodial and interrupt for voice communication, the same modem can handle both data transfer and voice data, simply by picking up the receiver when the receiving terminal and sending terminal indicate that the channel is available for voice communication.

Transmitted is the binary file containing the bit-mapped frame (or frame segment) image, and control codes.

The clear to send signal initiates a circular poll of memory locations, periodically polling the memory of the receiving device to check whether or not the frame (or frame segment) complete register and control character registers are complete and set to a predetermined value.

These registers may be specified registers of the device, located at arbitrary locations in memory. The receiving routine specifies the binary file location, and the binary file contains within the file location not only the frame image data but also specified addresses for the frame (line or segment) complete code and the control codes.
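That circular poll can be sketched as below; the sentinel value and register layout are illustrative assumptions, since the specification leaves the actual locations arbitrary:

```c
#include <assert.h>

#define FRAME_COMPLETE 0xA5  /* assumed predetermined completion value */

/* One pass of the circular poll: returns 1 once every channel's frame
 * (or segment) complete register holds the predetermined value. */
static int poll_complete(const unsigned char *complete_regs, int nchannels)
{
    for (int i = 0; i < nchannels; i++)
        if (complete_regs[i] != FRAME_COMPLETE)
            return 0;  /* keep polling */
    return 1;          /* frame complete on all channels */
}
```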

Initially, certain control codes are transmitted. It is within the contemplation of the invention that at the end of each control code transmission, the sending device waits for confirmation that the receiving device is able to display the planned transmission. This is because it is part of the compression scheme for pseudo color data and grey scale data that, by employing a bit map index table, the sending device can compress the data to the bit plane format of the receiving device. While this is possible to accomplish in the receiving device, transmission of unused data is time consuming, and as the conversion by the described process is faster than the transmission rate, the sending device can accomplish that task efficiently without affecting the transmission. Transmission is only momentarily paused while this process is confirmed and initiated.

The control codes include the code for the kind of originating data and for the planned display. Included in the control descriptor is the following:

The first four bits of the control descriptor designate whether the original bit map is 8 bits (1-0), or 16 bits with 4 bits for red, 4 bits for blue, 4 bits for green, and 4 unused (1-0), or 16 bits with 5 bits for red, 5 bits for blue, 5 bits for green and one bit unused (1-0), or 24 bits with 8 bits for red, 8 bits for blue, and 8 bits for green (1-0).

The next four bits of the control descriptor indicate the kind of possible presentation for the data, i.e. grey scale (1-0), standard color map (1-0), specialized color map (1-0), or true color (1-0), occupying 4 bits of the control word for the control register. Grey scale is self explanatory. True color is color where all 256 shades of each color are displayed. If the register is set to indicate presentation of the standard color map (covering grey including white, red, blue, green, yellow, flesh and black as present in the preferred example), the standard color map is presented, and the map, being resident in the device, need not be transferred. In the event that standard color is not present in the sending data, the transmission file will first send to the receiving device the file for the specialized color map; otherwise it will continue transmission of image data.

The next four bits of the control descriptor are applicable to image format. These bits include line, line length, column length of original image expressed in pixels, segment dimensions, and whether original frame (master frame) or frame update where only DMA changes are transferred.
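The three nibbles just described can be modelled as a packed control word; the exact bit positions and field code values below are assumptions for illustration:

```c
#include <assert.h>

/* Assumed layout: bits 0-3 original bit map format, bits 4-7 presentation
 * kind, bits 8-11 image format flags. Field code values are illustrative. */
static unsigned pack_descriptor(unsigned fmt, unsigned pres, unsigned imgfmt)
{
    return (fmt & 0xFu) | ((pres & 0xFu) << 4) | ((imgfmt & 0xFu) << 8);
}

static unsigned fmt_of(unsigned d)    { return d & 0xFu; }
static unsigned pres_of(unsigned d)   { return (d >> 4) & 0xFu; }
static unsigned imgfmt_of(unsigned d) { return (d >> 8) & 0xFu; }
```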

While not applicable to the usual situation, which involves compression of data of all frame pixel data, for situations where the image is not constantly changing, as it is in real life video capture and three dimensional images, but as is more typical of cartoon animation, it is possible to transfer pixel changes within a frame, by control characters at the start of frame transfer. Thus the initial frame transfer includes control characters which indicate what part of the frame is changed.

In this alternative mode the original frame image is captured in the pre-processing analytical analysis. By an analysis pass, only changed pixels (grey scale and color) are changed in the receiving device. Accordingly, the transfer of data of only changed images reduces the transfer time after receipt of the original image. As most scenes in these animations change only slightly from time to time, a captured frame is segmented to withdraw only the changes to the image, and only these pixels, with their address, grey scale and color, are transferred to the receiving station, which replaces the changed pixels in the next display buffer while the current one is being displayed, or on refresh of the current display buffer if only one display buffer is applicable.
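A minimal sketch of such an analysis pass (structure names assumed): compare the captured frame against the previous one and emit only the changed pixels with their addresses:

```c
#include <assert.h>
#include <stddef.h>

/* A changed pixel: its address within the frame and its new value. */
struct pixel_change {
    size_t addr;
    unsigned char value;
};

/* Compare the previous and next frames and record only changed pixels.
 * Returns the number of changes; 'out' must hold up to n entries. */
static size_t diff_frame(const unsigned char *prev, const unsigned char *next,
                         size_t n, struct pixel_change *out)
{
    size_t nchanged = 0;
    for (size_t i = 0; i < n; i++)
        if (prev[i] != next[i]) {
            out[nchanged].addr = i;
            out[nchanged].value = next[i];
            nchanged++;
        }
    return nchanged;
}
```

Only the resulting change list is transmitted after the master frame; the receiver patches those addresses into its next display buffer.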

In order to accomplish the location of data in the receiving device, the control descriptor has additional control locations enabling the receiving device to orient the memory to receive at the proper address location the file being sent. This is implemented by a process in the receiving device which maps the planned memory locations for the image and, where applicable, the location of the color map pointer registers. These registers establish the color which is selected from the color map of the receiving device, and it is this map that is transmitted for specialized color representations. This control descriptor also indicates the number of frames that are to be transmitted by the current channel, and also the channel number and number of frames to be transmitted by additional channels being used for the transmission, and other control characters as are applicable to the transmission.

For the color map pointer, reference may be had to the Color Map example hereinabove, in which there is shown the resident color map and pointer registers R, G, B for pointer values of the color shades. For standard color representation, the pointer registers are divided into color representation segments, each having a value identified as ranging, as illustrated, between 000 and 255. The example therein illustrated is useful for all colors. Brown can be considered a subset of yellow with different values for red and green.

Once all these protocols are established, the sending device transmits frame by frame (or line by line, or segment by segment) the image data allocated to that channel, and initiates (or continues) the connection of additional modem connections for the other channels. Because all protocols are established by one modem, the other channels may transmit data at their own rate.

The circular ring poll of the receiving station checks each frame for completion, and also checks for end of file. At the end of file, transmission may be terminated or switched to another mode.

At the end of file transmission, both terminals may indicate the end of file transmission. If another channel is not being used at that time for voice transmission, the end of file may cause an optional modem inhibit signal, permitting voice communication to be used on the data channel, between frames as an interrupt message or at the end of file transmission.

The end of file transmission or end of frame transmission, when recognized by the receiving station's polling, may be used to signal a user to initiate a display manually by a "Ready for Display" message.

Alternatively, the end of frame or file initiates automatic image display, which in the event of voice interrupt may be commented upon before a subsequent transmission.

By the use of a plurality of channels, within the capacity of the receiving device to handle receipt of data and display of data concurrently, animation by successive frames captured and displayed can occur.

The video camera can be employed as an alternative to a light pen, capturing artists' renderings and inserting them into the transmission systems as well as transmitting them to a remote device. Using a charge-coupled camera it is possible to capture image frames with as many pixel values as the cameras (and frame buffers of the capture device) can handle. Typically, we work with 512 x 512, but 2048 x 2048 pixel capabilities give adequate representations for almost any purpose. If the receiving computer terminal does not handle the image size, the frame store is displayed in zoom (reduction and enlargement) with vertical and horizontal image scrolling review. The size of the image to be displayed can be determined either at the sending or receiving end. The greater the size of the pixel representation sent, the longer the transmission time if the same number of lines are used; however, with the system described, additional lines can be employed to maintain transmission speed. Thus when a camera is focused on a sheet or blackboard, a writer can write a note to his mother in handwritten form, or write kanji (Chinese characters), Sanskrit and other images, and the image at the other end will reproduce the written or drawn image in conformity with the pixel representation of the image as captured by the camera, without the need of a light pen or scratch pad. At either end the written characters and words can be recognized by character recognition algorithms. Of course, a scratch pad, light pen, or mouse can be used to create the original frame, or parts thereof.

Grey scale values, when the final display is a monochrome device, are converted into dot matrix equivalents by taking each pixel and replicating it as four (or more) pixels, each fixed on or off to match the equivalent grey scale.
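That replication can be sketched as a 2 x 2 halftone cell giving five levels; the dot fill order and the rounding are illustrative assumptions:

```c
#include <assert.h>

/* Convert one grey scale pixel (0..255) to a 2x2 cell of on/off dots.
 * The number of "on" dots (0..4) approximates the grey level; the order
 * in which dots switch on within the cell is an assumed choice. */
static void grey_to_2x2(unsigned char grey, unsigned char cell[4])
{
    static const int order[4] = {0, 3, 1, 2};  /* assumed dither order */
    int ndots = (grey * 4 + 127) / 255;        /* round to 0..4 dots */
    for (int i = 0; i < 4; i++)
        cell[i] = 0;
    for (int i = 0; i < ndots; i++)
        cell[order[i]] = 1;
}
```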

Of course, any image can have overlays of linear data, such as grids, anatomical maps, patient identification, volume, angle, hospital identification, and the like. However, with free tilt we can create a 3-D brain atlas and perform the same functions as on data, with warping as necessary to stretch over reformatted cuts. Initially these may be obtained from several hundred 2-D atlases which are recombined like the 3-D images. In addition, with free tilt, we can display a sagittal and coronal view with respect to an axial cut, along with a free tilt slice orthogonal to the axial cut of the view of the surgeon.

Appendix SG is an interactive segmentation program, for using a moving scalpel interactively with a data set to excise images, cutting for instance the edge of a tumor from an artifact in the image. As will be seen, only the part which is critical to defining the two images needs to be cut, and then a wide circle can be drawn about the remainder of the image, as a routine will save only that object identified in a circled area. It may be used together with thresholding to create a three dimensional data set quickly and precisely, and allows freedom for the surgeon to use his scalpel to cut through tissue and bone at a precise point, to allow a view to be taken along the cut, as well as for other purposes. It will be appreciated that the scalpel is the pixel by pixel moving cursor whose speed of movement is slowed to accommodate the feel of the surgeon necessary to draw the lines precisely.

SG may be declared with the following statements, abstracted from a larger program with which these statements interact. In order to reduce space, certain functions which can be written by one skilled in the art having knowledge of the declared statements and functions have been eliminated. The program also has functions for location of trajectory.

/*********************************************************************/

/* Declarations of screen windows, other variables, pointers, etc. */
/*********************************************************************/

/* e.g. */
int frame_size = SCREEN_SIZE,   /* frame size */
    number_slices = 0,          /* number of slices */
    window_width,               /* window width */
    window_level;               /* window level */
/* etc. */
char file_name[200][80],        /* legal input file names */
     mkname[13],                /* MK file name */
     onsname[80];               /* color map name */
Panel_item text_1_item,         /* message in panel2 */
           /* etc. */
           text_2_item;         /* message in panel3 */
Pixwin *pw;                     /* pixwin for drawing */
struct pixrect *screen,         /* memory pixrect containing centered image without mask */
               *overlay,        /* pixrect containing image with mask - points to OUTPUT_BUF */
               *memory;         /* pixrect for centering images - points to DISP_BUF */
struct pixrect *box_cur;        /* erase box */
int dt_loaded = FALSE,          /* DT file status flag */
    mk_loaded = FALSE,          /* MK file status flag */
    creating = FALSE,           /* MK processing flag */
    mask_exists,                /* mask existence flag */
    numpix,                     /* number of pixels in image */
    offset = 0,                 /* screen offset to compensate for different image sizes */
    first_mask = 0,             /* first mask in MK file */
    last_mask = 0,              /* last mask in MK file */
    slice_pointer = 1,          /* indicates current slice */
    mk_header[128],             /* MK file header */
    st,                         /* i/o status */
    thr1,                       /* lower threshold */
    thr2,                       /* upper threshold */
    fp_mk = -1,                 /* MK file pointer */
    fp_dt = -1;                 /* DT file pointer */
short raw[SCREEN_SIZE*SCREEN_SIZE],    /* 16 bit deep raw image */
      mask[SCREEN_SIZE*SCREEN_SIZE];   /* image's mask */
unsigned char disp_buf[SCREEN_SIZE*SCREEN_SIZE],   /* image without mask */
              output_buf[SCREEN_SIZE*SCREEN_SIZE]; /* image with mask overlayed */
void init(), list(), display_image(), get_image(), put_mask(), readtext(),
     erase(), draw(), call_fill(), out_mess(), error_mess(), getstring();
/*********************************************************************/

/* Main Program */
/*********************************************************************/
main()
{
    create_windows();
    create_panell_primary_items();
    init();
    list();
    window_main_loop(frame1);
}
/*********************************************************************/

/* Creates all the windows */
/*********************************************************************/
create_windows()
frame1 = window_create(0, FRAME, FRAME_LABEL,
    "Interactive Segmentation Copyright (c) Lynn L. Augspurger & Assc. 1988. All rights reserved.",


WIN_Y, SCREEN_POS_Y, textsw = window_create(frame1, TEXTSW,




WIN_HEIGHT, CAN_1_HEIGHT, 0); panel2 = window_create(frame1, PANEL, WIN_X, PAN_2_X,



WIN_HEIGHT, PAN_2_HEIGHT, 0); panel4 = window_create(frame1, PANEL,

WIN_VERTICAL_SCROLLBAR, scrollbar_create(0),




WIN_HEIGHT, PAN_4_HEIGHT, 0); text_1_item = panel_create_item(panel2, PANEL_MESSAGE, 0); panel_set(text_1_item, PANEL_LABEL_X, MESS_X,

PANEL_LABEL_Y, MESS_Y, 0); frame2 = window_create(frame1, FRAME,




WIN_HEIGHT, FRA_2_HEIGHT, 0); panel3 = window_create(frame2, PANEL, 0); text_2_item = panel_create_item(panel3, PANEL_TEXT,


PANEL_NOTIFY_PROC, getstring,

PANEL_VALUE_DISPLAY_LENGTH, COL_LEN, 0); pw = canvas_pixwin(canvas);
/*********************************************************************/

/* Creates panel buttons for main menu */
/*********************************************************************/
create_panell_primary_items()
void disp(), load(), start(), next(), prev(), insert(), delete(), trunc(),
     replace(), append(), done(), remove_factor(), append_factor(),
     display_header(), wind(), thresh(), clear(), paint_can(),
     handle_fill(), erase_canvas(), stop();
panell = window_create(frame1, PANEL,





WIN_HEIGHT, PAN_1_HEIGHT, 0);
panel_create_item(panell, PANEL_BUTTON, PANEL_LABEL_IMAGE,
    panel_button_image(panell, "Start", LARGE_BUT, 0),
    PANEL_NOTIFY_PROC, start, 0);
etc.
panel_create_item(panell, PANEL_BUTTON, PANEL_LABEL_IMAGE,
    panel_button_image(panell, "Trace", LARGE_BUT, 0),

PANEL_NOTIFY_PROC, paint_can, 0); etc.

/* An Exit to the main program or routine menu should be present. */
create_panell_secondary_items() void release(); etc.

/* Display a slice of an image which will be interactively segmented with the paint scalpel. */
void disp()
int sli_num; etc.
readtext("Enter slice number to display: ");
sscanf(buf, "%d", &sli_num);
return;
if (!creating) slice_pointer = sli_num;
get_image(slice_pointer);
display_image();
/*********************************************************************/

/* Reads both image and its mask from disk files */
/*********************************************************************/
void get_image(sli_num)
int sli_num;
st = lseek(fp_dt, (sli_num-1)*numpix*2+512, 0);
check("get_image(lseek DT)", st);
st = read(fp_dt, raw, numpix*2);
check("get_image(reading DT)", st);
if (sli_num >= first_mask && sli_num <= last_mask) {
    st = lseek(fp_mk, (sli_num-first_mask)*numpix*2+512, 0);
    check("get_image(lseek MK)", st);
    st = read(fp_mk, mask, numpix*2);
    check("get_image(reading MK)", st);
    mask_exists = TRUE;
} else
    mask_exists = FALSE;
sprintf(buf, "Displaying slice: %d\n", sli_num);
out_mess(buf);
/*********************************************************************/
/* Displays image after windowing and thresholding with its mask overlayed on top */
/*********************************************************************/

void display_image()
int i;
/* set window and threshold levels */
set_win(raw, disp_buf, window_width, window_level, numpix, MAXGRAY);
if (thr1 != 0 || thr2 != 0)
    set_thre(raw, disp_buf, thr1, thr2, numpix, BLUE);
/* center windowed image into SCREEN pixrect */
pr_rop(screen, offset, offset, frame_size, frame_size, PIX_SRC, memory, 0, 0);
if (mask_exists) {
    /* display image with mask */
    combine(disp_buf, output_buf, mask, numpix);
    pw_write(pw, offset, offset, frame_size, frame_size, PIX_SRC, overlay, 0, 0);
} else
    /* display image with no mask */
    pw_write(pw, 0, 0, SCREEN_SIZE, SCREEN_SIZE, PIX_SRC, screen, 0, 0);

/*********************************************************************/

/* By a screen event or value entered, the load input file(s) routine executes */

/*********************************************************************/
void load(item, value, event)
Panel_item item;
unsigned int value;
Event *event;
etc.

/* create pixrects for displaying images */
memory = mem_point(frame_size, frame_size, 8, disp_buf);
overlay = mem_point(frame_size, frame_size, 8, output_buf);
first_mask = 0;
last_mask = 0;
display_header();
get_image(slice_pointer);
display_image();
/*********************************************************************/

/* Create MK file */
/*********************************************************************/
void start()
int i, sli_num;
out_mess("");
if (!dt_loaded) error_mess("Must load DT file first");
etc.
make_name(iname, mkname, "MK");  /* get a suitable name for the MK file */
fp_mk = open(mkname, O_RDWR, 0);
check("start (opening MK)", fp_mk);
/* set flags and processing variables */
mk_loaded = TRUE;
creating = TRUE;
for (i = 0; i < 128; i++) mk_header[i] = dt_header[i];
first_mask = sli_num;
slice_pointer = sli_num;
mk_header[41] = first_mask;
st = lseek(fp_mk, 512, 0);
check("start (lseek MK)", st);
/* display first slice to create a mask for */
get_image(slice_pointer);
display_image();
/*********************************************************************/

/* Display next image. If creating an MK file, the previous mask is written to the MK file via PUT_MASK. */
/*********************************************************************/
void next()

out_mess("");
if (!dt_loaded) { error_mess("Must load DT file first"); return; }
if (creating) put_mask();
get_image(++slice_pointer);
display_image();
/*********************************************************************/

/* Display previous image */
/*********************************************************************/
void prev()

out_mess("");
if (!dt_loaded) { error_mess("Must load DT file first"); return; }

if (creating) { error_mess("Cannot view previous slice while creating MK file: press Display"); return; }
etc.
get_image(--slice_pointer);
display_image();
/*********************************************************************/
/* Replace old mask with the one on the screen */
/*********************************************************************/
void replace()

out_mess("");
if (!dt_loaded) { error_mess("Must load DT file first"); return; }
if (creating) { error_mess("Cannot replace mask when in MK processing mode"); return; }
if (slice_pointer < first_mask || slice_pointer > last_mask) { error_mess("Mask for that slice does not exist"); return; }
st = lseek(fp_mk, (slice_pointer-first_mask)*numpix*2+512, 0);
put_mask();


/* Close MK file and update its header */
/*********************************************************************/
void done()
out_mess("");
if (!creating) { error_mess("Not creating MK file"); return; }
put_mask();
mk_header[42] = slice_pointer;
mk_header[43] = frame_size;
mk_header[44] = frame_size;
last_mask = slice_pointer;
creating = FALSE;
st = lseek(fp_mk, 0, 0);
check("done(lseek MK)", st);
st = write(fp_mk, mk_header, 512);
check("done(writing MK)", st);
/*********************************************************************/

/* Add new masks at end of MK file */
/*********************************************************************/
void append()

out_mess("");
if (!dt_loaded) { error_mess("Must load DT file first"); return; }
if (creating) { error_mess("Cannot append masks when in MK processing mode: press Done"); return; }
etc.
/*********************************************************************/
/* Make room for new masks at beginning of MK file */
/*********************************************************************/
void insert()

static int i, empty[128];

out_mess("");
if (!dt_loaded) { error_mess("Must load DT file first"); return; }
etc.
if (slice_pointer < 1 || slice_pointer >= first_mask) { error_mess("Mask already exists"); return; }
out_mess("Wait: making room for new masks");
/* shift masks to make room for new ones */
for (i = 0; i < last_mask-first_mask+1; i++) {
    st = lseek(fp_mk, (last_mask-first_mask-i)*numpix*2+512, 0);
    check("insert(lseek MK)", st);
    st = read(fp_mk, mask, numpix*2);
    check("insert(reading MK)", st);
    st = lseek(fp_mk, (last_mask-slice_pointer-i)*numpix*2+512, 0);
    check("insert(lseek MK)", st);
    st = write(fp_mk, mask, numpix*2);
    check("insert(write MK)", st);
}

/* erase portion of the Mk file where the new masks are to go */ st = lseek(fp_mk, 512, 0); check("insert(lseek MK)", st); for (i = 0; i , (first-mask-slice_ pointer)*numpix/256; if+) st = write(fp_mk, empty, 512); check("insert(write MK)", st); /* update file and header varibles */ out_mess(""); first_mask = slice_pointer; slice_pointer - ; mk_header[41] = first_mask; st = lseek(fp_mk, 0, 0); dneck("insert(lseek MK)", st); st = write(fp_mk, mk_header, 512); check("insert(write MK)", st); /************************************************************************

 * Delete masks from beginning of MK file in the reverse fashion
 ************************************************************************/
void delete()
{
    int i, mask_num;
    /* etc. */
    out_mess("Deleting");

    /* move masks down to write over those being deleted */
    for (i = 0; i < last_mask - mask_num; i++) {
        st = lseek(fp_mk, (mask_num - first_mask + 1 + i)*numpix*2 + 512, 0);
        check("delete(lseek MK)", st);
        /* etc. */
    }

    /* truncate duplicated masks at end of file */
    st = ftruncate(fp_mk, (last_mask - first_mask + 1)*numpix*2 + 512);
    check("delete(ftruncate MK)", st);
    close(fp_mk);
    sprintf(buf, "rm %s", mkname);
    system(buf);
    mk_loaded = FALSE;
    return;

    /* update MK file header */
    mk_header[41] = first_mask;
    mk_header[42] = last_mask;
    st = lseek(fp_mk, 0, 0);
    check("delete(lseek MK)", st);
    /* etc. */
}

/************************************************************************

 * Truncate masks at end of file
 ************************************************************************/
void trunc()
{
    int mask_num;
    /* etc.: error checks, each returning on failure */

    /* truncate unwanted masks */
    st = ftruncate(fp_mk, (mask_num - first_mask)*numpix*2 + 512);
    check("trunc(ftruncate MK)", st);
    if (last_mask < first_mask) {    /* delete entire file if true */
        close(fp_mk);
        sprintf(buf, "rm %s", mkname);
        system(buf);
        mk_loaded = FALSE;
        return;
    }

    /* update MK file header */
    mk_header[41] = first_mask;
    mk_header[42] = last_mask;
    st = lseek(fp_mk, 0, 0);
    check("truncate(lseek MK)", st);
    st = write(fp_mk, mk_header, 512);
    /* etc. */
}


/************************************************************************
 * Write mask from screen to MK file
 ************************************************************************/
void put_mask()
{
    int i;

    pw_read(overlay, 0, 0, frame_size, frame_size, PIX_SRC, pw, offset, offset);
    for (i = 0; i < numpix; i++)
        if (output_buf[i] == GREEN) mask[i] = -1;
        else mask[i] = 0;
    st = write(fp_mk, mask, numpix*2);
    check("put_mask(writing MK)", st);
    sprintf(buf, "Writing mask: %d\n", slice_pointer);
    out_mess(buf);
}


/************************************************************************
 * Initialize color map, erase box and pixrects
 ************************************************************************/
void init()
{
    unsigned char red[MAPSIZE], green[MAPSIZE], blue[MAPSIZE];
    int i;

    /* define erase box */
    static short int box_data[CUR_SIZE] = {
        -1, -32767, -32767, -32767, -32767, -32767, -32767, -32767,
        -32767, -32767, -32767, -32767, -32767, -32767, -32767, -1
    };

    /* initialize color map for gray scale */
    for (i = 0; i < MAPSIZE-2; i++) {
        red[i] = i*2;
        green[i] = i*2;
        blue[i] = i*2;
    }

    /* assign colors for various overlays */
    red[RED]   = 255; green[RED]   = 0;   blue[RED]   = 0;
    red[GREEN] = 0;   green[GREEN] = 255; blue[GREEN] = 0;
    red[BLUE]  = 0;   green[BLUE]  = 0;   blue[BLUE]  = 255;

    /* set color map */
    sprintf(cmsname, "sg%d", getpid());
    pw_setcmsname(pw, cmsname);
    pw_putcolormap(pw, 0, MAPSIZE, red, green, blue);

    /* construct a pixrect for the erase box and display */
    box_cur = mem_point(CUR_SIZE, CUR_SIZE, 1, box_data);
    screen  = mem_create(SCREEN_SIZE, SCREEN_SIZE, 8);
    overlay = mem_point(SCREEN_SIZE, SCREEN_SIZE, 8, output_buf);
}


/************************************************************************
 * Display DT file header
 ************************************************************************/
void display_header()
{
    sprintf(buf, "\n");
    textsw_insert(textsw, buf, strlen(buf));
    sprintf(buf, "Descriptions pertaining to the input file:\n");
    textsw_insert(textsw, buf, strlen(buf));
    sprintf(buf, " Patient/phantom name : %-.12s\n", pname);
    textsw_insert(textsw, buf, strlen(buf));
    sprintf(buf, " Run number : %-.5s\n", runnum);
    textsw_insert(textsw, buf, strlen(buf));
    sprintf(buf, " Input file name : %-.12s\n", iname);
    textsw_insert(textsw, buf, strlen(buf));
    sprintf(buf, " Source voxel size : %3.3f X %3.3f X %3.3f\n", pixel_size, pixel_size, slice_thick);
    textsw_insert(textsw, buf, strlen(buf));
    sprintf(buf, " Size of 3D object : %d X %d X %d\n", frame_size, frame_size, number_slices);
    textsw_insert(textsw, buf, strlen(buf));
    sprintf(buf, " Density - Level/Width : %d, %d\n", window_level, window_width);
    textsw_insert(textsw, buf, strlen(buf));
    if (mk_loaded) {
        sprintf(buf, " First and Last mask : %d, %d\n", mk_header[41], mk_header[42]);
        textsw_insert(textsw, buf, strlen(buf));
    }
    sprintf(buf, " Date of file preparation : %d/%d/%d\n", dt_header[14], dt_header[15], dt_header[16]);
    textsw_insert(textsw, buf, strlen(buf));
}

/************************************************************************

 * Get new window level and width and display image
 ************************************************************************/
void wind()
{
    /* get new levels from keyboard and check for errors */
    out_mess("");
    readtext("Enter Level, Width: ");
    sscanf(buf, "%d %d", &window_level, &window_width);
    if (window_level < -32768 || window_level > 32768) {
        error_mess("Level bounds: -32768 < window_level < 32768");
        return;
    } else if (window_width < 1 || window_width > 32768) {
        error_mess("Window bounds: 1 < window_width < 32768");
        return;
    }
    display_image();
}

/************************************************************************
 * Get new threshold levels and display image
 ************************************************************************/
void thresh()
{
    /* get new T2 and T1 from the keyboard and check for errors */
    out_mess("");
    readtext("Enter T2, T1: ");
    sscanf(buf, "%d %d", &thr2, &thr1);
    if (thr1 < -32768 || thr2 > 32768 || thr1 > thr2) {
        error_mess("Usage: T2 < 32768; T1 > -32768; T2 >= T1");
        return;
    }
    display_image();
}


/************************************************************************
 * Loop on frame2 for text input
 ************************************************************************/
void readtext(prompt)
char prompt[];
{
    panel_set(text_2_item, PANEL_LABEL_STRING, prompt,
              PANEL_LABEL_X, LABEL_X, PANEL_LABEL_Y, LABEL_Y, 0);
    window_loop(frame2);
}


/************************************************************************
 * Get text from frame2 panel, put into BUF
 ************************************************************************/
void getstring()
{
    strcpy(buf, (char *) panel_get_value(text_2_item));
    panel_set_value(text_2_item, "");
    window_return();
}


/************************************************************************
 * Create EXIT button and activate ERASE as the window event procedure
 ************************************************************************/
void erase_canvas()
{
    window_set(canvas,
               WIN_CURSOR, cursor_create(CURSOR_IMAGE, box_cur,
                                         CURSOR_OP, PIX_SRC ^ PIX_DST, 0),
               WIN_EVENT_PROC, erase, 0);
    window_destroy(panel1);
    create_panel1_secondary_items();
}


/************************************************************************
 * Return to main menu and deactivate the current window event
 * procedure
 ************************************************************************/
void release()
{
    draw_lines = FALSE;
    window_destroy(panel1);
    create_panel1_primary_items();
    window_set(canvas,
               WIN_CURSOR, cursor_create(CURSOR_CROSSHAIR_COLOR, CROSS_COLOR,
                                         CURSOR_CROSSHAIR_LENGTH, CROSS_LEN,
                                         CURSOR_SHOW_CROSSHAIRS, TRUE, 0),
               0);
}



/************************************************************************
 * Restore pixels in the erase box with corresponding pixels from SCREEN.
 * SCREEN contains the displayed image without the mask.
 ************************************************************************/
void erase(local_canvas, event, arg)
Window local_canvas;
Event *event;
caddr_t arg;
{
    int xc, yc;

    xc = event_x(event);
    yc = event_y(event);
    if (event_id(event) == LOC_DRAG || event_id(event) == MS_LEFT)
        pw_write(pw, xc, yc, CUR_SIZE, CUR_SIZE, PIX_SRC, screen, xc, yc);
}


/************************************************************************
 * Create EXIT button and activate DRAW as the window event procedure
 ************************************************************************/
void paint_can()
{
    window_set(canvas, WIN_EVENT_PROC, draw, 0);
    window_destroy(panel1);
    create_panel1_secondary_items();
}


/************************************************************************
 * Set drawing modes and draw vectors between cursor movements
 * if DRAW_LINES is true
 ************************************************************************/
void draw(canvas_local, event, arg)
Window canvas_local;
Event *event;
caddr_t arg;
{
    static int xcr, ycr, x_new, y_new, x_start, y_start;

    if (event_id(event) == MS_LEFT) {            /* activate drawing */
        if (!draw_lines) {                       /* save value of first point */
            x_start = xcr = event_x(event);
            y_start = ycr = event_y(event);
        }
        draw_lines = TRUE;
    }
    else if (event_id(event) == MS_MIDDLE)       /* deactivate drawing */
        draw_lines = FALSE;
    else if (event_id(event) == MS_RIGHT) {      /* connect last point to first */
        pw_vector(pw, x_start, y_start, xcr, ycr, PIX_SRC, GREEN);
        draw_lines = FALSE;
    }

    /* draw vectors between points if DRAW_LINES is true and no buttons are pressed */
    if (draw_lines && event_id(event) != LOC_DRAG) {
        x_new = event_x(event);
        y_new = event_y(event);
        if (x_new > SCREEN_SIZE-1) x_new = SCREEN_SIZE-1;
        if (x_new < 0) x_new = 0;
        if (y_new > SCREEN_SIZE-1) y_new = SCREEN_SIZE-1;
        if (y_new < 0) y_new = 0;
        pw_vector(pw, x_new, y_new, xcr, ycr, PIX_SRC, GREEN);
        xcr = event_x(event);
        ycr = event_y(event);
    }
}


/************************************************************************
 * Erase mask from screen
 ************************************************************************/
void clear()
{
    pw_write(pw, 0,
}

/************************************************************************
 * Create EXIT button and activate CALL_FILL as the window event
 * procedure
 ************************************************************************/
void handle_fill()
{
    window_set(canvas, WIN_EVENT_PROC, call_fill, 0);
    window_destroy(panel1);
    create_panel1_secondary_items();
}

/************************************************************************
 * Call FILL to fill the enclosed green regions on the screen
 * indicated by the cursor position
 ************************************************************************/
void call_fill(local_canvas, event, arg)
Window local_canvas;
Event *event;
caddr_t arg;
{
    int xcr, ycr;

    if (event_id(event) == MS_LEFT) {   /* && event is drawn complete? */
        xcr = event_x(event) - offset;
        ycr = event_y(event) - offset;
        pw_read(overlay, 0, 0, frame_size, frame_size, PIX_SRC, pw, offset, offset);
        fill(output_buf, frame_size, frame_size, xcr, ycr, GREEN, GREEN);
        pw_write(pw, offset, offset, frame_size, frame_size, PIX_SRC, overlay, 0, 0);
    }
}

/************************************************************************

 * Display message in panel2
 ************************************************************************/
void out_mess(message)
char *message;
{
    panel_set(text_1_item, PANEL_LABEL_STRING, message, 0);
}

/************************************************************************
 * Display message in panel2 and ring bell
 ************************************************************************/
void error_mess(message)
char *message;
{
    panel_set(text_1_item, PANEL_LABEL_STRING, message, 0);
    window_bell(frame1);
}


/************************************************************************
 * List input files
 ************************************************************************/
void list()
{
    int i, number_files;
    char c;
    FILE *fpi, *fopen();

    system("ls *.DT > dt_file_name_list");
    system("ls *.MK >> dt_file_name_list");
    if ((fpi = fopen("dt_file_name_list", "r")) == NULL) {
        perror("");
        exit();
    }
    i = number_files = 0;
    while ((c = getc(fpi)) != EOF)
        if (c == '\n') {
            file_name[number_files++][i] = '\0';
            i = 0;
        } else
            file_name[number_files][i++] = c;
    fclose(fpi);
    system("rm dt_file_name_list");

    for (i = 0; i < number_files; i++)
        panel_create_item(panel4, PANEL_TOGGLE,
                          PANEL_CLIENT_DATA, (caddr_t) i,
                          PANEL_NOTIFY_PROC, load,
                          PANEL_CHOICE_STRINGS, file_name[i], 0,
                          0);
}

/************************************************************************
 * Close files, restore cursor speed and exit program
 ************************************************************************/
void stop()
{
    out_mess("");
    if (creating) { error_mess("Cannot quit while creating MK files: press Done"); return; }
    remove_factor();
    close(fp_dt);
    close(fp_mk);
    exit(0);
}

While we have described and illustrated our preferred and alternative embodiments, it will be recognized that improvements and modifications may be made to the inventions described, and which may be claimed based upon the description hereto, without departing from the permitted scope of the invention stated in the appended and granted claims. Those skilled in the art, both now and in the future, may conceive of further improvements without departing from the claimed scope of our inventions, and rearrangement of the various features described herein, alone or in combination, shall be considered to be within the scope of the inventions claimed, within the spirit of protecting our true inventions.

Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US4638798 * | 10 Sep 1980 | 27 Jan 1987 | Shelden C Hunter | Stereotactic method and apparatus for locating and treating or removing lesions
US4663720 * | 21 Nov 1984 | 5 May 1987 | Francois Duret | Method of and apparatus for making a prosthesis, especially a dental prosthesis
US4665492 * | 2 Jul 1984 | 12 May 1987 | Masters William E | Computer automated manufacturing process and system
US4710876 * | 5 Jun 1985 | 1 Dec 1987 | General Electric Company | System and method for the display of surface structures contained within the interior region of a solid body
US4737921 * | 3 Jun 1985 | 12 Apr 1988 | Dynamic Digital Displays, Inc. | Three dimensional medical image display system
US4821200 * | 2 Apr 1987 | 11 Apr 1989 | Jonkopings Lans Landsting | Method and apparatus for manufacturing a modified, three-dimensional reproduction of a soft, deformable object
Non-Patent Citations
1 *NAKATANI et al, "A Binocular Stereoscopic Display System for Echocardiography", IEEE Transactions on Biomedical Engineering, Vol. BME-26, No. 2, February 1979.
2 *See also references of EP0368999A1
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
WO1992006654A1 * | 22 Oct 1991 | 30 Apr 1992 | Ian Leonard | Prostheses and methods and apparatus for making same
WO2000001308A1 * | 2 Jul 1999 | 13 Jan 2000 | Microvention, Inc. | Expansible implant for vascular embolization and method of making the same
WO2000031689A1 * | 23 Nov 1998 | 2 Jun 2000 | Synapix, Inc. | Free-form video editing system
WO2009076758A1 * | 17 Dec 2008 | 25 Jun 2009 | The Royal Institution For The Advancement Of Learning/McGill University | Orthopaedic implants
WO2017151638A1 * | 28 Feb 2017 | 8 Sep 2017 | General Electric Company | Magnetic resonance apparatus and program
CN104271076A * | 21 Dec 2011 | 7 Jan 2015 | Materialise NV | Systems and methods for designing and generating devices using accuracy maps and stability analysis
EP0791894A2 * | 15 Sep 1989 | 27 Aug 1997 | General Electric Company | System and method for displaying oblique cut planes within the interior region of a solid object
EP0791894A3 * | 15 Sep 1989 | 15 Oct 1997 | General Electric Company | System and method for displaying oblique cut planes within the interior region of a solid object
US5431562 * | 9 Nov 1992 | 11 Jul 1995 | Ormco Corporation | Method and apparatus for designing and forming a custom orthodontic appliance and for the straightening of teeth therewith
US5518397 * | 1 Apr 1994 | 21 May 1996 | Ormco Corporation | Method of forming an orthodontic brace
US5683243 * | 2 Jun 1995 | 4 Nov 1997 | Ormco Corporation | Custom orthodontic appliance forming apparatus
US6015289 * | 30 Oct 1997 | 18 Jan 2000 | Ormco Corporation | Custom orthodontic appliance forming method and apparatus
US6165193 * | 6 Jul 1998 | 26 Dec 2000 | Microvention, Inc. | Vascular embolization with an expansible implant
US6244861 | 1 Nov 1999 | 12 Jun 2001 | Ormco Corporation | Custom orthodontic appliance forming method and apparatus
US6500190 | 5 Dec 2000 | 31 Dec 2002 | Microvention | Vascular embolization with an expansible implant
US6616444 | 11 Jun 2001 | 9 Sep 2003 | Ormco Corporation | Custom orthodontic appliance forming method and apparatus
US7029487 | 4 Dec 2002 | 18 Apr 2006 | Microvention, Inc. | Vascular embolization with an expansible implant
US7113841 | 26 Aug 2002 | 26 Sep 2006 | Pentax Corporation | Implant forming method
US7201762 | 16 Dec 2002 | 10 Apr 2007 | Microvention, Inc. | Vascular embolization with an expansible implant
US7483558 | 10 Apr 2007 | 27 Jan 2009 | Microvention, Inc. | Vascular embolization with an expansible implant
US7799047 | 17 Dec 2008 | 21 Sep 2010 | Microvention, Inc. | Vascular embolization with an expansible implant
US8323288 | 29 Sep 2008 | 4 Dec 2012 | Depuy Products, Inc. | Customized patient-specific bone cutting blocks
US8326037 | 23 Nov 2005 | 4 Dec 2012 | Matrox Electronic Systems, Ltd. | Methods and apparatus for locating an object in an image
US8343159 | 29 Sep 2008 | 1 Jan 2013 | Depuy Products, Inc. | Orthopaedic bone saw and method of use thereof
US8357111 | 30 Sep 2007 | 22 Jan 2013 | Depuy Products, Inc. | Method and system for designing patient-specific orthopaedic surgical instruments
US8357166 | 29 Sep 2008 | 22 Jan 2013 | Depuy Products, Inc. | Customized patient-specific instrumentation and method for performing a bone re-cut
US8361076 | 29 Sep 2008 | 29 Jan 2013 | Depuy Products, Inc. | Patient-customizable device and system for performing an orthopaedic surgical procedure
US8377068 | 29 Sep 2008 | 19 Feb 2013 | DePuy Synthes Products, LLC | Customized patient-specific instrumentation for use in orthopaedic surgical procedures
US8398645 | 29 Sep 2008 | 19 Mar 2013 | DePuy Synthes Products, LLC | Femoral tibial customized patient-specific orthopaedic surgical instrumentation
US8419740 | 29 Sep 2008 | 16 Apr 2013 | DePuy Synthes Products, LLC | Customized patient-specific bone cutting instrumentation
US8425523 | 29 Sep 2008 | 23 Apr 2013 | DePuy Synthes Products, LLC | Customized patient-specific instrumentation for use in orthopaedic surgical procedures
US8425524 | 29 Sep 2008 | 23 Apr 2013 | DePuy Synthes Products, LLC | Customized patient-specific multi-cutting blocks
US8576250 | 24 Oct 2007 | 5 Nov 2013 | Vorum Research Corporation | Method, apparatus, media, and signals for applying a shape transformation to a three dimensional representation
US8594395 | 29 Sep 2008 | 26 Nov 2013 | DePuy Synthes Products, LLC | System and method for fabricating a customized patient-specific surgical instrument
US8641721 | 30 Jun 2011 | 4 Feb 2014 | DePuy Synthes Products, LLC | Customized patient-specific orthopaedic pin guides
US8979855 | 23 Feb 2011 | 17 Mar 2015 | DePuy Synthes Products, Inc. | Customized patient-specific bone cutting blocks
US9023094 | 25 Jun 2008 | 5 May 2015 | Microvention, Inc. | Self-expanding prosthesis
US9024939 | 31 Mar 2009 | 5 May 2015 | Vorum Research Corporation | Method and apparatus for applying a rotational transform to a portion of a three-dimensional representation of an appliance for a living body
US9034005 | 8 Sep 2010 | 19 May 2015 | Microvention, Inc. | Vascular embolization with an expansible implant
US9095355 | 15 Jan 2014 | 4 Aug 2015 | DePuy Synthes Products, Inc. | Customized patient-specific orthopaedic pin guides
US9138239 | 23 Feb 2011 | 22 Sep 2015 | DePuy Synthes Products, Inc. | Customized patient-specific tibial cutting blocks
US9168153 | 13 Jun 2012 | 27 Oct 2015 | Smith & Nephew, Inc. | Surgical alignment using references
US9173662 | 23 Feb 2011 | 3 Nov 2015 | DePuy Synthes Products, Inc. | Customized patient-specific tibial cutting blocks
US9314251 | 17 Mar 2015 | 19 Apr 2016 | DePuy Synthes Products, Inc. | Customized patient-specific bone cutting blocks
US9561039 | 4 Aug 2015 | 7 Feb 2017 | DePuy Synthes Products, Inc. | Customized patient-specific orthopaedic pin guides
US9737417 | 27 Jul 2007 | 22 Aug 2017 | Vorum Research Corporation | Method, apparatus, media and signals for producing a representation of a mold
US9786022 | 23 Feb 2011 | 10 Oct 2017 | DePuy Synthes Products, Inc. | Customized patient-specific bone cutting blocks
US20090087276 * | 29 Sep 2008 | 2 Apr 2009 | Bryan Rose | Apparatus and Method for Fabricating a Customized Patient-Specific Orthopaedic Instrument
International ClassificationG06T1/00, G06F17/50, A61F2/30, A61F2/28, A61F2/02, A61B19/00, A61B5/107
Cooperative ClassificationA61F2002/30952, A61F2002/2825, A61F2/30942, A61F2/2875, A61F2002/30948, A61B34/10
European ClassificationA61F2/30M2
Legal Events
30 Nov 1989  AK   Designated states
             Kind code of ref document: A1
             Designated state(s): AU JP KR
30 Nov 1989  AL   Designated countries for regional patents
             Kind code of ref document: A1
             Designated state(s): AT BE CH DE FR GB IT LU NL SE
27 Dec 1989  WWE  WIPO information: entry into national phase
             Ref document number: 1989907442
             Country of ref document: EP
23 May 1990  WWP  WIPO information: published in national office
             Ref document number: 1989907442
             Country of ref document: EP
14 May 1991  WWW  WIPO information: withdrawn in national office
             Ref document number: 1989907442
             Country of ref document: EP