WO2002003304A2 - Predicting changes in characteristics of an object - Google Patents

Predicting changes in characteristics of an object

Info

Publication number
WO2002003304A2
WO2002003304A2 (PCT/GB2001/002828)
Authority
WO
WIPO (PCT)
Prior art keywords
model
condition
shape
data
operative
Prior art date
Application number
PCT/GB2001/002828
Other languages
French (fr)
Other versions
WO2002003304A3 (en)
Inventor
Guy Richard John Fowler
Jane Haslam
Ivan Daniel Meir
Timothy Parr
Original Assignee
Tct International Plc
Priority date
Filing date
Publication date
Application filed by Tct International Plc filed Critical Tct International Plc
Priority to AU2001266169A priority Critical patent/AU2001266169A1/en
Publication of WO2002003304A2 publication Critical patent/WO2002003304A2/en
Publication of WO2002003304A3 publication Critical patent/WO2002003304A3/en


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders

Definitions

  • This invention relates to predicting changes in characteristics of an object and has particular but not exclusive application to procedures to be performed on living objects, especially the human body, such as maxillo-facial and craniofacial surgery, for example bimaxillary osteotomy which involves breaking, moving and resetting of both the maxilla and mandible to improve facial function and aesthetics.
  • the simulation is performed on a 2D lateral view of the patient rather than in 3D, and hence the surgeon or patient cannot visualise the post-operative appearance from a range of 3D view-points.
  • the simplistic nature of the empirical models leads to inaccurate simulation results.
  • the second main prior approach involves finite element models - slower modelling techniques which allow simulation of non-linear, anisotropic and visco-elastic tissue properties. Examples are given in Hemmy D., Harris G.F., Ganaparthy V., "Finite Element Analysis of Craniofacial Skeleton Using Three Dimensional Imaging as the Substrate", in Caronni E.F. (Ed) Craniofacial Surgery, Proc. of the 2nd International Congress of the Intern. Society of Cranio-Maxillo-Facial Surgery, Florence, Italy, 1991, and Koch.
  • the present invention embodies a new approach based upon statistical rather than physical modelling techniques.
  • the invention addresses the disadvantages of current modelling techniques, and when applied to maxillo-facial and craniofacial surgery, can produce post-operative predictions in near real-time, from conventional pre-operative lateral cephalograms and pre-operative 3D facial surface data acquired using for example the Tricorder DSP Series 3D imaging system manufactured by Tricorder plc, of 6 The Long Room, Coppermill Lock, Summerhouse Lane,
  • the invention can also provide significant advantages when used in other situations as will become evident hereinafter.
  • a generic 2D statistical shape modelling technique has been developed known as a 2D Point Distribution Model (or PDM), based upon objects represented at a set of labelled 2D points.
  • PDM Point Distribution Model
  • the model consists of the mean positions of these points and the main modes of variation, which describe the ways in which the points move about the mean.
  • a PDM is built by performing a statistical analysis of a number of shape training examples.
  • Each example represents an observed instance of the class of shape, and is described by a set of 2D manually labelled so-called landmark points that capture the important features of the object.
  • the 2D PDM is built from the training data as follows: 1. Align the training examples using Procrustes Analysis as described by Cootes et al supra, scaling, rotating and translating the examples so that they correspond as closely as possible to the first training example, and 2. Apply Principal Component Analysis to the aligned examples, computing the mean shape and the eigenvectors P of the covariance matrix of deviations from the mean, which form the modes of variation.
  • P is truncated to use only the t most significant eigenvectors such that some fraction (typically 95%) of the training set variance is expressed.
  • New examples of the class of objects modelled can then be generated by varying the shape parameters b.
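The model-building and shape-generation steps described above can be sketched in a few lines of numpy (an illustrative sketch with toy data, not the implementation of the disclosure; Procrustes alignment is assumed to have already been done):

```python
import numpy as np

def build_pdm(X, var_frac=0.95):
    """Build a Point Distribution Model from aligned training shapes.
    X: (n_examples, 2n) array, each row the flattened landmark coordinates."""
    x_mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]            # largest eigenvalue first
    evals, evecs = evals[order], evecs[:, order]
    # keep the t most significant modes explaining e.g. 95% of the variance
    t = int(np.searchsorted(np.cumsum(evals) / evals.sum(), var_frac)) + 1
    return x_mean, evecs[:, :t], evals[:t]

def generate_shape(x_mean, P, b):
    """New shape instance: x = x_mean + P b (the PDM equation)."""
    return x_mean + P @ b

# Toy training set: 50 noisy copies of a unit square (4 landmarks)
rng = np.random.default_rng(0)
base = np.array([0., 0., 1., 0., 1., 1., 0., 1.])
X = base + rng.normal(scale=0.05, size=(50, 8))
x_mean, P, evals = build_pdm(X)
new_shape = generate_shape(x_mean, P, 2.0 * np.sqrt(evals))  # +2 s.d. per mode
```

Varying each weight in b within a few standard deviations of its mode produces plausible new instances of the modelled class.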
  • g_i is a vector of grey-level profile data, ḡ_i is the mean grey-level profile vector averaged over the training data for the ith landmark point
  • P_gi is a matrix of the most significant eigenvectors of the grey-level training data covariance matrix for the ith landmark point
  • b_gi is a set of weights, one for each eigenvector
  • Training local grey-level models gives a set of specific models of the expected grey-level evidence at each point in the 2D PDM.
  • 2D PDMs plus grey-level models can then be used in image search applications, that is, given a PDM of a particular class of 2D shape, one can locate an instance of that class of shape in a new image.
  • the grey-level models can be used to compare expected and observed grey-level image evidence, producing a measure of grey-level fitness at each model point that is used to drive the image search algorithm.
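The grey-level fitness measure described here is typically a Mahalanobis distance between the observed and expected profiles; a minimal sketch (toy data, illustrative names):

```python
import numpy as np

def grey_fitness(g_obs, g_mean, S_inv):
    """Mahalanobis-style fitness of an observed grey-level profile against the
    trained local model for one landmark (lower = better match)."""
    d = g_obs - g_mean
    return float(d @ S_inv @ d)

# Toy training set: 40 profiles of 5 samples along the boundary normal
rng = np.random.default_rng(1)
G = rng.normal(size=(40, 5))
g_mean = G.mean(axis=0)
S_inv = np.linalg.inv(np.cov(G, rowvar=False))
score_good = grey_fitness(g_mean, g_mean, S_inv)       # perfect match
score_bad = grey_fitness(g_mean + 3.0, g_mean, S_inv)  # displaced profile
```

During search, the candidate position along each profile with the lowest fitness score supplies the suggested displacement for that model point.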
  • Image search is achieved using an algorithm known as an Active Shape Model (or ASM), described in detail in Cootes T.F., Taylor C.J., "Active Shape Models - Smart Snakes", Proc. BMVC, Leeds 1992, Springer Verlag, pp266-275.
  • ASM Active Shape Model
  • 1. An instance of a 2D PDM is initialised at some position in the image, typically using the mean shape parameters. 2. A region of the image around each model point along the perpendicular to the boundary at that point is examined, and the best match between the observed and expected image data in that region is found; this gives a suggested local displacement at each model point. 3. The pose and shape parameters of the model instance are updated to best fit the suggested displacements, with the shape parameters constrained to remain within the limits learned from the training set.
  • Steps 2 and 3 are iterated until the algorithm converges.
  • ASMs can also be implemented in a multi-resolution form that speeds up the algorithm and improves its robustness.
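The shape-update step of the ASM loop, fitting the model to the suggested displacements while keeping the shape plausible, can be sketched as follows (the ±3 standard-deviation clamp is a common convention and is an assumption here):

```python
import numpy as np

def asm_update(x_suggested, x_mean, P, evals, k=3.0):
    """One ASM shape update: project the suggested landmark positions onto the
    model subspace and clamp each weight to +/- k standard deviations so the
    resulting shape stays statistically plausible."""
    b = P.T @ (x_suggested - x_mean)      # least-squares fit (P orthonormal)
    limit = k * np.sqrt(evals)
    b = np.clip(b, -limit, limit)
    return x_mean + P @ b, b

# Toy model: one mode of variation of a 3-landmark 2D shape
x_mean = np.zeros(6)
P = np.eye(6)[:, :1]                      # a single unit mode
evals = np.array([0.01])                  # variance of that mode
x_sug = np.array([5.0, 0., 0., 0., 0., 0.])
x_new, b = asm_update(x_sug, x_mean, P, evals)
```

The wildly displaced suggestion is pulled back to within three standard deviations of the mean shape along the model's single mode.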
  • 2D PDMs and ASMs have been applied to a range of shape-modelling and image analysis applications, including face modelling and location as described in Lanitis A., Taylor, C.J., et al "Automatic Identification of Human Faces Using Flexible Appearance Models", Proc. 5th BMVC, 1994 pp 65-74.
  • Other applications include locating heart ventricles in echocardiograms, segmenting magnetic resonance (MR) images of the abdomen, and locating anatomical landmarks in lateral cephalograms.
  • MR magnetic resonance
  • each object is described as a labelled set of n points; the only difference is that z-coordinates are now included.
  • a large number of landmark points (~500-1000) must be marked by hand for each example.
  • the examples must be aligned before contour extraction so that the contours approximately correspond between different examples.
  • the points marked on the contours are very unlikely to be 'true' 3D landmark points e.g. points of high curvature in 3D.
  • the method has problems dealing with objects of complex topology. Another method is proposed in Heap T., Hogg D., "Towards 3D Hand Tracking Using a Deformable Model", Proc. 2nd International Conf. on Automatic Face and Gesture Recognition 1996, pp 140-145. This involves a semi-automatic method for building 3D hand-models from MRI data in which a physically based Simplex Mesh model is constructed on the first example.
  • a further extension of the statistical modelling techniques is to build a predictive model. This is done by building a combined statistical model which models the correlation between one class of measurements A and another class of measurements B. A particular measurement of A can then be used to predict the corresponding measurement of B.
  • a model is built which links a 3D PDM of an object to a matrix of Scatter Correction Factors associated with the object, and subsequently uses an instance of 3D shape to infer the corresponding Scatter Correction Factors.
  • each combined example contains an example of measurements A (vector x_A of length a) and an example of measurements B (vector x_B of length b).
  • the ith training example so obtained is a vector x_Ci, which concatenates a normalised version of x_Ai and a normalised version of x_Bi:
  • the normalisation factors σ_A and σ_B are given by the total training set variance of the measurement vectors x_A and x_B respectively.
  • the combined vector x_C is normalised such that the sub-measurements x_A and x_B give an equal contribution (in terms of variance) to the combined vector.
  • the model is truncated to use a or fewer eigenvectors in order that it may be used to make predictions.
  • W is a diagonal matrix of weights with diagonal elements set to 1 for the first a elements, and 0 for the final b elements.
  • Equation (7) is solved for the unknown vector of combined model weights b c , using standard linear algebra techniques.
  • x_C can then be calculated using equation (6), and the estimate of x_B is given by the last b elements of vector x_C multiplied by the normalisation factor σ_B.
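The prediction procedure of equations (6) and (7) can be sketched end-to-end on synthetic data (all sizes, names and data here are illustrative, not from the disclosure):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic correlated training data: measurements B depend linearly on A
a, b, n = 4, 3, 200
XA = rng.normal(size=(n, a))
M = rng.normal(size=(a, b))
XB = XA @ M + 0.01 * rng.normal(size=(n, b))

# Normalise each block by its total training-set variance, then concatenate
sA, sB = XA.var(axis=0).sum(), XB.var(axis=0).sum()
XC = np.hstack([XA / sA, XB / sB])

# Combined model by PCA, truncated to at most `a` eigenvectors
xc_mean = XC.mean(axis=0)
Pc = np.linalg.svd(XC - xc_mean, full_matrices=False)[2][:a].T

# Prediction for a new measurement of A (equations (6)-(7)):
# solve W Pc bc = W (xc - xc_mean), where W keeps only the first a elements
xa_new = rng.normal(size=a)
W = np.diag([1.0] * a + [0.0] * b)
rhs = np.concatenate([xa_new / sA - xc_mean[:a], np.zeros(b)])
bc = np.linalg.lstsq(W @ Pc, W @ rhs, rcond=None)[0]
xc_est = xc_mean + Pc @ bc                 # equation (6)
xb_pred = xc_est[a:] * sB                  # last b elements, de-normalised
```

Because the synthetic B is a (noisy) linear function of A, the recovered xb_pred closely tracks xa_new @ M, illustrating how the combined model turns correlation into prediction.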
  • the invention provides an improved predictive technique which involves planning changes for one set of variables for an object and predicting corresponding changes in another set of variables for the object.
  • the invention provides a method of predicting changes for an object with first and second characteristics that are distinct from but statistically correlated with one another, comprising: providing a statistical model configuration of at least one mode of variation of a first set of variables relating to the first characteristic of the object, and at least one mode of variation of a second set of variables relating to the second characteristic of the object, planning a change to the first set of variables for the object, and using the model configuration to predict a corresponding change to the second set of variables for the object from data corresponding to the planned change to the first set.
  • the statistical model configuration may include a first parametric model of the first characteristic of the object, a second parametric model of the second characteristic of the object and a predictive model that characterises a statistical correlation between the models, and the method involves fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning a change to the condition for the object so as to provide parameterised data for the first characteristic of the object in a second different condition, and utilising the parameterised data and the predictive model to provide parameterised data corresponding to a prediction of the change of second characteristic of the object in the second condition.
  • the statistical model configuration may include a first parametric model of the first characteristic of the object in the first object condition, a second parametric model of the second characteristic of the object in the first object condition, a third parametric model of the first characteristic of the object in the second object condition, a fourth parametric model of the second characteristic of the object in the second object condition, and the predictive model characterises a statistical correlation between the models, with the method involving: fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning the second condition for the object using the third model to provide parameterised data for the first characteristic of the object in the second condition, and utilising the parameterised data and the predictive model to provide parameterised data for the fourth model to predict the second characteristics of the object in the second condition.
  • the invention has particular application to predicting the outcome of medical procedures and may be carried out to predict the outcome of a medical operative procedure
  • the object comprises a patient
  • the first shape characteristic corresponds to the shape of underlying hard tissue structure of the patient
  • the second shape characteristic corresponds to the shape of a soft tissue structure that covers the hard tissue structure.
  • Data may be acquired from a pre-operative lateral cephalogram concerning the shape of underlying hard tissue structure of the patient and data from a pre-operative 3D scan of the patient may be acquired for the shape of the soft tissue structure.
  • the invention also includes a computer program to be run on a computer to perform the aforesaid method and data processing apparatus configured to perform the method.
  • the invention provides a medical analysis tool comprising a processor operable to provide a statistical model configuration of at least one mode of variation of a first set of variables relating to shape characteristics of a relatively hard tissue part of a living body, and at least one mode of variation of a second set of variables relating to shape characteristics of a relatively soft tissue part of the body that overlies the relatively hard tissue part, an input operable to plan a change to the first set of variables for a patient, such that the processor utilises the model configuration to predict corresponding changes to the second set of variables for the patient, whereby to predict changes in shape of the soft tissue part that correspond to changes planned for the hard tissue part.
  • the tool may include a model fitting system operable to fit the first and second models to the corresponding pre-operative hard and soft tissue shape characteristics of a patient to provide parameterised shape data for the pre-operative hard and soft tissue shape characteristics, and a planning input system operable to define a post-operative hard tissue configuration for the patient using the third model to provide parameterised shape data for post-operative hard tissue configuration.
  • the processor may be operable to utilise the parameterised shape data and the predictive model to provide parameterised data for the fourth model to predict post-operative soft tissue shape characteristics for the patient corresponding to the planned post-operative hard tissue configuration.
  • the statistical model configuration may include at least one point distribution model.
  • a display device may be configured to provide a visual display of the predicted post-operative soft tissue configuration and at least one of the pre-operative soft and hard tissue configuration and the planned post-operative hard tissue configuration so that the outcome of the planned procedure can be reviewed and shown to the patient if desired.
  • Figure 1 is a schematic illustration of a hardware configuration for carrying out a predictive method according to the invention for predicting the outcome of a bimaxillary osteotomy
  • Figure 2 illustrates the relationship between process components of a model used in predicting the outcome of the surgery
  • Figure 3 is a lateral cephalogram of a patient's head with landmark points shown marked on it
  • Figure 4 illustrates a camera arrangement for capturing 3D data
  • Figure 5 is an example of a 2D rendering of a 3D image captured by the camera arrangement of Fig. 4 with landmark points thereon,
  • Figure 6 is a flow chart of a process for training the models
  • Figure 7 is a flow chart of a process for predicting the outcome of a bimaxillary osteotomy, using the trained models
  • Figure 8a illustrates a display of a 2D lateral cephalogram of the bony tissue of a patient before surgery is carried out
  • Figure 8b illustrates a display of a proposed surgical treatment plan for the patient
  • Figure 9a illustrates a display of a 3D model instance for the soft tissue shape of the head of the patient before surgery is carried out
  • Figure 9b illustrates a display of a 3D predicted model of the soft tissue shape of the head of the patient after surgery is carried out according to the proposed treatment plan shown in Figure 8b.
  • 2D and 3D shape-modelling techniques are used to build a statistical model of the relationship between hard and soft-tissue during maxillo-facial surgery.
  • This model can then be used to predict 3D soft-tissue changes that occur as a result of maxillo-facial surgery.
  • a surgeon may propose to break and move a patient's jawbone to improve facial function and aesthetics and the model provides a prediction of the resulting 3D shape of the head produced by the proposed surgery.
  • the method can be split into two general stages:
  • Model-Building - this involves building a statistical model which expresses the relationship between hard tissue and soft tissue, for both pre- and post-operative maxillo-facial patient data.
  • Soft-Tissue Prediction - given pre-operative data for an individual patient, plus knowledge of the surgeon's treatment plan, the statistical model is used to predict the post-operative soft-tissue appearance for the patient.
  • a number of statistical models are constructed using a hardware configuration shown in Figure 1.
  • a conventional personal computer 1 with a processor unit 2, display screen 3, keyboard 4 and mouse 5 is coupled to a scanner 6.
  • the scanner 6 permits X-ray side-view images of the patient's head, known as lateral cephalograms, to be scanned, digitised and fed to the processor unit 2.
  • the resulting cephalogram data thus provides data concerning the bony or hard tissue configuration in the patient's head. It will be understood that this data can alternatively be obtained directly from digital X-ray equipment and the invention is not restricted to any particular method of hard tissue data capture.
  • the processor unit 2 is also configured to receive data concerning the external or soft tissue appearance of the patient's head. This data may be captured using a 3D scanner 7 shown schematically.
  • 3D scanner 7 is the Tricorder DSP Series 3D device supra.
  • the processor unit 2 includes a central digital processor RAM, ROM and data storage media such as a hard disc and floppy disc connected on a common bus, in a conventional manner.
  • the central processor can execute stored programs stored on the data storage media, so as to build the statistical models and display results obtained from them on the screen 3, and allow manipulation of the displayed data using the keyboard 4 and mouse 5.
  • the programs build statistical models for the aforesaid model building and also execute the soft tissue prediction as will become apparent hereinafter.
  • a statistical model is built that allows a prediction of postoperative soft-tissue appearance to be made from the following data: pre-operative soft-tissue appearance, pre-operative hard-tissue appearance, and knowledge of the surgical treatment plan i.e. knowledge of a proposed post-operative hard tissue appearance.
  • the model building utilises the following components shown in Figure 2:
  • a 2D PDM 10 describing the variability in shape of pre-operative hard-tissue structure, modelled from lateral cephalograms digitised using the scanner 6
  • a 3D PDM 11 describing the variability in shape of pre-operative 3D facial soft-tissue appearance, modelled from 3D surfaces acquired using the 3D scanner 7
  • a 2D PDM 12 describing the variability in shape of post-operative hard-tissue structure, modelled from lateral cephalograms digitised using the scanner 6
  • a 3D PDM 13 describing the variability in shape of post-operative 3D facial soft-tissue, modelled from 3D surfaces acquired using the scanner 7.
  • a predictive model 14 which links the data from the models 10-13 together, and describes the relationship between data from models 10-12 and data from model 13.
  • a training set of pre and post-operative lateral cephalograms is obtained for human patients who have already undergone maxillo-facial surgery.
  • the cephalograms thus constitute historical data for maxillo-facial procedures previously carried out and can be used to train the pre- and post-operative 2D PDMs 10, 12.
  • the cephalograms are individually scanned using the scanner 6 and individually displayed on the screen 3 of the computer 1.
  • Each of the pre- and post-operative models includes a number of standard anatomical landmarks useful to maxillo-facial surgeons (Nasion, Sella, Porion, Orbitale, Gonion, Pogonion, Menton, Gnathion, Upper Incisor Root, Upper Incisor Tip, Lower Incisor Root, Lower Incisor Tip, ANS, PNS, A Point, B Point).
  • Figure 3 shows the structures modelled.
  • an instance of the pre-operative cephalogram model can be written x_CephPre = x̄_CephPre + P_CephPre · b_CephPre (8), where x_CephPre is a vector of pre-op cephalogram 2D landmark data
  • x̄_CephPre is the mean pre-op cephalogram 2D landmark data averaged over the training set
  • P_CephPre is a matrix of the most significant eigenvectors of the pre-op cephalogram training data covariance matrix
  • and b_CephPre is a set of weights, one for each eigenvector.
  • x_CephPost = x̄_CephPost + P_CephPost · b_CephPost (9) (where x_CephPost is a vector of post-op cephalogram 2D landmark data, x̄_CephPost is the mean post-op cephalogram 2D landmark data averaged over the training set, P_CephPost is a matrix of the most significant eigenvectors of the post-op cephalogram training data covariance matrix, and b_CephPost is a set of weights, one for each eigenvector.)
  • identical anatomical landmarks are used in the post-operative cephalogram model to those in the pre-operative cephalogram model.
  • the 3D shapes of pre- and post-operative facial soft-tissue are each modelled using a 3D PDM. This involves capturing a training set of images of pre- and post-operative facial shape using the scanner 7 shown in Figure 1.
  • the basic modelling technique used is standard, as described by Hill et al. supra, but an improved method for marking up 3D training data is used, which addresses two problems with the standard method of Hill et al as will now be explained.
  • a texture-mapped, triangulated 3D facial surface is acquired for each training example using the Tricorder DSP Series 3D capture system. The acquisition is done with each person face-on to the capture system as shown in Figure 4.
  • the system includes an array of digital cameras C1-C4 directed face-on to the patient's face which is illuminated with a spatially textured light from a source (not shown) and the outputs of the cameras are processed to produce data corresponding to a texture-mapped, triangulated 3D facial surface.
  • Each texture-mapped, triangulated 3D facial surface is converted into a 2.5D depth-map, and an image of the corresponding texture. This is done by calculating a virtual pin-hole camera model which is the average of the 4 (pre-calibrated) Tricorder DSP Series camera models shown in Figure 4, and re-projecting the 3D facial surface using this camera model to give a 2.5D depth-map and texture image.
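The re-projection to a 2.5D depth-map can be sketched with a simple pin-hole model and a z-buffer (camera intrinsics and points below are toy values; texture re-sampling and the averaging of the four calibrated cameras are omitted):

```python
import numpy as np

def project_depth_map(points, K, shape):
    """Re-project 3D surface points (camera frame, z along the optical axis)
    through a pin-hole camera with intrinsics K into a 2.5D depth-map:
    pixel (v, u) stores the depth of the nearest projected point (z-buffer)."""
    h, w = shape
    depth = np.full((h, w), np.inf)
    uvw = (K @ points.T).T
    u = np.rint(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.rint(uvw[:, 1] / uvw[:, 2]).astype(int)
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[keep], v[keep], points[keep, 2]):
        depth[vi, ui] = min(depth[vi, ui], zi)  # keep the nearest surface
    return depth

# Toy intrinsics and three surface points (values are illustrative)
K = np.array([[100., 0., 32.], [0., 100., 32.], [0., 0., 1.]])
pts = np.array([[0., 0., 2.], [0.1, 0., 2.], [0., 0., 1.5]])
depth = project_depth_map(pts, K, (64, 64))
```

Two of the toy points project to the same pixel; the z-buffer keeps the closer one, as a real re-projection of a facial surface must.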
  • Each depth-map texture image is then treated as a simple image and a relatively small (~80) set of reproducible 2D points is manually marked on each image.
  • Figure 5 shows an example marked-up texture image.
  • the marked points consist of two types: i) landmark points (shown as filled dots 15) - distinctive facial features or positions which can be reliably marked on each example image, and ii) pseudo-landmark points (shown as unfilled dots 16) - intermediate points which are equally spaced along the shape boundary between the distinctive landmark points.
  • the marked 2D points are used to warp each image and depth-map into a common 'shape-free' frame using 2D thin-plate spline (TPS) interpolation.
  • TPS thin-plate spline
  • any pixel (x,y) in a given training example depth-map is nominally in correspondence with the same pixel in every other example depth-map.
  • a small number of 2D landmark points have been used to produce texture-map and depth-map correspondences over the whole face.
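A minimal 2D thin-plate spline warp of the kind used here can be implemented directly (a sketch with toy landmarks; in practice the whole depth-map and texture grids would be pushed through the fitted spline):

```python
import numpy as np

def tps_warp(src, dst):
    """Fit a 2D thin-plate spline mapping src landmarks (n,2) onto dst
    landmarks (n,2); returns a function that warps arbitrary (m,2) points."""
    def U(r2):
        # TPS kernel r^2 log r, written via r^2; U(0) = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            out = 0.5 * r2 * np.log(r2)
        return np.nan_to_num(out)
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = U(d2)
    Pm = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, Pm, Pm.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    coef = np.linalg.solve(A, rhs)     # spline weights + affine part
    def warp(pts):
        r2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return U(r2) @ coef[:n] + np.hstack([np.ones((len(pts), 1)), pts]) @ coef[n:]
    return warp

src = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.5, 0.5]])
dst = src + np.array([0.1, -0.1])      # a simple shift for illustration
warp = tps_warp(src, dst)
```

The spline interpolates the landmarks exactly and extends the deformation smoothly over the rest of the image, which is what places every example into the common 'shape-free' frame.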
  • as in equation (3), a shape instance in the pre-operative 3D soft-tissue model 11 can be described by the equation: x_3DPre = x̄_3DPre + P_3DPre · b_3DPre (10), where
  • x_3DPre is a vector of pre-op 3D soft-tissue data
  • x̄_3DPre is the mean pre-op 3D soft-tissue data averaged over the training set
  • P_3DPre is a matrix of the most significant eigenvectors of the pre-op 3D soft-tissue training data covariance matrix
  • and b_3DPre is a set of weights, one for each eigenvector.
  • identical 3D landmarks are used in the post-operative 3D soft- tissue model to those in the pre-operative 3D soft-tissue model.
  • in Figure 6 the building of the models 10-13 is shown schematically as steps S1-S4.
  • Each training example for the predictive model 14 consists of a measurement vector x_Predict that is the concatenation of 4 blocks of data: 1) a vector b_CephPre of length nCephPre representing the pre-operative 2D bony structure of the face in parametric form.
  • b_CephPre is calculated from the raw 2D landmark point data x_CephPre by inverting equation (8),
  • 2) a vector b_3DPre of length n3DPre representing the pre-operative 3D soft-tissue structure of the face in parametric form.
  • b_3DPre is calculated from the raw 3D landmark point data x_3DPre by inverting equation (10),
  • 3) a vector b_CephPost of length nCephPost representing the post-operative 2D bony structure of the face in parametric form.
  • b_CephPost is calculated from the raw 2D landmark point data x_CephPost by inverting equation (9), and 4) a vector b_3DPost of length n3DPost representing the post-operative 3D soft-tissue structure of the face in parametric form.
  • b_3DPost is calculated from the raw 3D landmark point data x_3DPost by inverting equation (11).
  • each block of data making up x_Predict is normalised by dividing by its total training set variance, so that each type of data gives a contribution of equal weight to the combined model, i.e. x_Predict concatenates the normalised blocks b_CephPre/σ_CephPre, b_3DPre/σ_3DPre, b_CephPost/σ_CephPost and b_3DPost/σ_3DPost.
  • the combined predictive model is then (in step S6 of Fig. 6) built from the training data by Principal Component Analysis, using the method described previously in relation to prior predictive models.
  • an instance of the predictive model can be described by the equation: x_Predict = x̄_Predict + P_Predict · b_Predict (13)
  • where x_Predict is the predictive model instance, x̄_Predict is the mean predictive model data averaged over the training set, P_Predict is a matrix of the most significant eigenvectors of the predictive model training data covariance matrix, and b_Predict is a set of weights, one for each eigenvector.
  • a useful predictive model can be built from of the order of 100 (or more) training examples, each example containing the data for a single example of a bimaxillary osteotomy procedure. Adding further training data improves the accuracy of the predictive model.
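The build of the combined predictive model, normalising the four parameter blocks and applying Principal Component Analysis, can be sketched as follows (block lengths and the random stand-in data are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parameter blocks for N past osteotomy cases (sizes illustrative)
N = 120
blocks = [rng.normal(size=(N, 10)),   # b_CephPre
          rng.normal(size=(N, 15)),   # b_3DPre
          rng.normal(size=(N, 10)),   # b_CephPost
          rng.normal(size=(N, 15))]   # b_3DPost

# Divide each block by its total training-set variance, then concatenate,
# so every block contributes equal variance to the combined vector
X = np.hstack([blk / blk.var(axis=0).sum() for blk in blocks])

# Step S6: Principal Component Analysis of the combined vectors
x_mean = X.mean(axis=0)
Vt = np.linalg.svd(X - x_mean, full_matrices=False)[2]
n_keep = 35                # no more than the length of the known (pre-op + plan) part
P_predict = Vt[:n_keep].T
```

Truncating to no more eigenvectors than the length of the known part keeps the later weighted prediction system solvable.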
Soft-Tissue Prediction
  • the trained predictive model can be used to predict the outcome of a surgical maxillo-facial procedure.
  • a surgeon may propose a procedure which involves breaking a patient's jaw and moving the jaw- line by resetting the jaw.
  • the resulting change in the 3D physical appearance of the face produced by the procedure depends on the rearrangement of bony material produced by the surgery and has been difficult to predict, explain and demonstrate to the patient.
  • the actual and the perceived success of the procedure depends greatly on the skill, experience and communication skills of the surgeon.
  • the method according to the invention allows the surgeon to input a proposed procedure making reference to a 2D cephalogram of the patient and predict the 3D soft tissue outcome, i.e. the facial appearance after carrying out the surgery.
  • a standard pre-operative lateral cephalogram of the patient is acquired by conventional X-ray techniques, which is scanned by means of the scanner 6 and the resulting data is supplied to the processor 2 shown in Figure 1.
  • the 2D captured data for the pre- operative lateral cephalogram is converted into a parametric form by fitting the 2D pre-operative lateral cephalogram model 10 to the cephalogram of the patient.
  • a pre-operative 3D facial soft-tissue surface image of the patient is acquired using the 3D Tricorder DSP Series device.
  • the corresponding data is sent from scanner 7 to the processor 2.
  • the captured pre-operative 3D facial soft-tissue surface data is converted into a parametric form by fitting the 3D facial soft-tissue model 11 to the 3D facial soft-tissue surface.
  • the surgical treatment plan is set up by manipulating the 2D landmarks on the pre-operative lateral cephalogram. This process is used to define an instance of the post-operative 2D cephalogram model 12.
  • the resulting data are supplied as inputs to the predictive model 14 which, at steps S12 and S13, uses the pre-op lateral cephalogram parameters, pre-op 3D soft-tissue parameters and surgical treatment plan to predict post-op 3D soft-tissue shape and appearance.
  • the 2D pre-operative lateral cephalogram model is fitted to the pre-operative lateral cephalogram using the standard multi-resolution ASM of Cootes et al "Active Shape Models : Evaluation of a Multi-Resolution Method for Improving Image Search", supra.
  • the fitting algorithm determines the pre-operative cephalogram model shape parameters b_CephPre which best fit the given cephalogram, and also the 2D location, orientation and scaling of the model instance in the cephalogram. This permits the cephalogram to be characterised in terms of a small set of shape parameters b_CephPre, from which the aforementioned corresponding anatomical landmark point positions x_CephPre can be calculated.
  • the fitting algorithm is run on the processor unit 2 in Figure 1 and the resulting location of the landmark points relative to the cephalogram of the patient may be displayed on the screen 3 of the computer to provide the user with confirmation that the 2D pre-operative model has been satisfactorily fitted to the bony tissue image of the patient.
  • the 3D pre-operative facial soft-tissue model is fitted to the pre-operative 3D facial soft-tissue surface using an algorithm run on the processor unit 2 which is a variant of the Iterated Closest Point (ICP) algorithm described in "A method for registration of 3-D shapes", Besl, P. J. and McKay, N. D., IEEE PAMI, 14(2), pp 239-256, 1992.
  • the original search algorithm of Hill et al described in "Model-Based Interpretation of 3D Medical Images", supra was developed for deforming 3D models to fit to 3D volumetric image data whereas the modified version of the ICP algorithm deforms an initial 3D PDM in both pose and shape to produce the best local fit to 3D surface data.
  • the algorithm proceeds as follows.
  • a display of the resulting parameterised data may be provided on the screen 3 of the computer.
  • the process allows the pre-operative 3D facial soft-tissue surface to be characterised automatically in terms of a small set of shape parameters b_3DPre, and a pose (scale s, translation t and rotation R) of the model instance relative to the surface data.
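An ICP-style fit of a PDM to surface data, of the general kind described, can be sketched as follows (a simplified sketch, not the patented variant; it is checked here against a synthetically transformed mean shape):

```python
import numpy as np

def fit_pdm_to_surface(surface, x_mean, P, evals, iters=20):
    """ICP-style fit of a 3D PDM to unstructured surface points: alternate
    closest-point matching, a similarity (Procrustes) pose update and a
    clamped shape-parameter update."""
    n = len(x_mean) // 3
    b = np.zeros(P.shape[1])
    s, R, t = 1.0, np.eye(3), np.zeros(3)
    for _ in range(iters):
        model = (x_mean + P @ b).reshape(n, 3)
        posed = s * model @ R.T + t
        # 1. closest surface point for every model point (brute force)
        d2 = ((posed[:, None, :] - surface[None, :, :]) ** 2).sum(-1)
        target = surface[d2.argmin(axis=1)]
        # 2. similarity pose update (Kabsch with uniform scale)
        mu_m, mu_t = model.mean(0), target.mean(0)
        H = (model - mu_m).T @ (target - mu_t)
        U, S, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        s = S.sum() / ((model - mu_m) ** 2).sum()
        t = mu_t - s * mu_m @ R.T
        # 3. shape update: targets back to model frame, project, clamp
        back = ((target - t) @ R) / s
        b = np.clip(P.T @ (back.reshape(-1) - x_mean),
                    -3 * np.sqrt(evals), 3 * np.sqrt(evals))
    return s, R, t, b

# Synthetic check: the "surface" is a similarity-transformed mean shape
corners = np.array([[i, j, k] for i in (0., 1.) for j in (0., 1.) for k in (0., 1.)])
x_mean = corners.reshape(-1)
P = np.linalg.qr(np.random.default_rng(4).normal(size=(24, 1)))[0]
evals = np.array([0.01])
ang = np.deg2rad(5.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.],
                   [np.sin(ang),  np.cos(ang), 0.],
                   [0., 0., 1.]])
surface = 1.05 * corners @ R_true.T + np.array([0.1, -0.05, 0.05])
s, R, t, b = fit_pdm_to_surface(surface, x_mean, P, evals)
fitted = s * (x_mean + P @ b).reshape(-1, 3) @ R.T + t
```

On this synthetic case the fit recovers the applied scale, rotation and translation, leaving the posed model on the surface.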
  • the surgical treatment plan is input using a similar User Interface to that of existing systems such as OTP and QuickCeph supra.
  • the pre-operative lateral cephalogram acquired at step S7 is displayed on the screen 3 with the anatomical landmark point positions x_CephPre marked on it.
  • the surgeon then indicates the proposed changes to make during surgery by manipulating the bony landmark points with the mouse 5 or by means of the keyboard 4 to give a new set of landmark point positions x_CephPost, indicating how the mandible and/or maxilla will move during surgery.
  • Figure 8a is a schematic illustration of the pre-operative lateral cephalogram of the patient and Figure 8b illustrates the planned post-operative configuration to be achieved by surgery.
  • the parameterised form of the pre-operative data (b_CephPre, b_3DPre and the 3D surface model pose s, t, R), and the parameterised form of the treatment plan (b_CephPost), are used to calculate a prediction of post-operative soft-tissue shape and appearance. This is done as follows:
  • the output of this algorithm is a version of the 3D pre-operative facial surface which has been modified to simulate the required maxillo-facial surgery.
  • Figure 9a shows the display of the instance of the pre-operative 3D model 11 for the patient.
  • Figure 9b illustrates the 3D post-operative shape predicted by the predictive model 14 for the surgeon's treatment plan.
  • the surgery planned in 2D as shown in Figures 8a and 8b is predicted to produce changes in 3D as shown in Figures 9a and 9b.
  • the surgeon can then if desired modify the planned surgery in the screen display of Figure 8b and observe the outcome in the display of Figure 9b. This enables the surgical procedure to be optimised to achieve the desired aesthetic outcome.
  • the displays of Figures 8 and 9 may be shown to the patient to explain and seek approval for the proposed procedure.
  • the training of the predictive model 14 may be carried out on an ongoing basis.
  • the model training was carried out as an initial step, but in addition, the data for subsequent surgical procedures may be used to update the training of the models.
  • the invention is not restricted to maxillo-facial and craniofacial surgery and can be used for other procedures where it is useful to predict changes in soft tissue shape resulting from a proposed operation to change a corresponding relatively hard tissue configuration, and is not restricted to human surgery.
  • the invention may also be used for operations on non-animate objects for which a statistical correlation occurs between an inner structure and an outer structure covering the inner structure so as to predict changes in the shape of the outer structure produced by a proposed operation to change the inner structure. Conditions other than the shape of the object may be predicted by means of the invention.
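The fitting and prediction steps listed above can be summarised as a single pipeline. The sketch below is illustrative only: the callable parameters stand in for the multi-resolution ASM fit, the ICP-variant surface fit, the treatment-plan parameterisation and the combined-model prediction, and none of the names appear in the original disclosure.

```python
def predict_post_operative_surface(ceph_image, surface_3d, plan_landmarks,
                                   fit_ceph, fit_surface, project_plan, predict):
    """Sketch of the prediction pipeline; the four callables are assumptions."""
    # 1. Fit the 2D pre-operative model to the cephalogram -> b_CephPre + 2D pose.
    b_ceph_pre, pose_2d = fit_ceph(ceph_image)
    # 2. Fit the 3D pre-operative model to the facial surface -> b_3DPre + 3D pose.
    b_3d_pre, pose_3d = fit_surface(surface_3d)
    # 3. Parameterise the surgeon's edited landmark positions -> b_CephPost.
    b_ceph_post = project_plan(plan_landmarks)
    # 4. Use the predictive model to estimate post-operative soft-tissue shape.
    b_3d_post = predict(b_ceph_pre, b_3d_pre, b_ceph_post)
    return b_3d_post, pose_3d
```

The returned pose allows the predicted shape parameters to be rendered in the same 3D frame as the pre-operative surface for side-by-side display.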

Abstract

A medical analysis tool which can be run using a personal computer, provides a statistical model configuration that allows a prediction to be made of the soft tissue shape of a patient when changes are made to the shape of the underlying hard tissue e.g. by surgery. The model configuration includes a first parametric model (10) of pre-operative hard tissue shape characteristics of the patient, a second parametric model (11) of pre-operative soft tissue shape characteristics of the patient, a third parametric model (12) of post-operative hard tissue shape characteristics of the patient, a fourth parametric model (13) of post-operative soft tissue shape characteristics of the patient, and a predictive model (14) that characterises a statistical correlation between the models. A surgical plan can be inputted by a surgeon to change the hard tissue configuration for the patient and the model configuration predicts corresponding changes in shape of the soft tissue which can be displayed by computer for review.

Description

Predicting changes in characteristics of an object
Field of the invention
This invention relates to predicting changes in characteristics of an object and has particular but not exclusive application to procedures to be performed on living objects, especially the human body, such as maxillo-facial and craniofacial surgery, for example bimaxillary osteotomy which involves breaking, moving and resetting of both the maxilla and mandible to improve facial function and aesthetics.
Background to the invention
In maxillo-facial and craniofacial surgery, the surgeon's goal is not only to improve facial functionality, but also to produce an aesthetically pleasing face. Therefore, the post-operative soft-tissue appearance is an important factor in patient outcome. An accurate simulation of soft-tissue changes during surgery would give a number of benefits, namely:
• a tool which enables the surgeon to improve surgical outcome by simulating different possible surgeries and choosing the one with the best outcome,
• a patient consent management tool which allows the surgeon to sit down with the patient, and set their expectations about possible surgical outcomes,
• a training tool that allows trainees to simulate surgeries with a wide variety of pathologies, and
• a clinical audit tool to allow the results produced by surgery to be directly compared to the original treatment plan.
Traditional methods for maxillo-facial simulation and planning are based upon simple empirical studies of the relationship between bone and tissue movements in 2D lateral cephalograms as described by Athanasiou A.E. (ed.), "Orthodontic Cephalometry", Mosby-Wolfe Verlag, London, 1995. These methods form the basis for a number of 2D maxillo-facial surgery simulation products e.g. QuickCeph produced by QuickCeph Systems of 12925 El Camino Real, Ste. J23 San Diego, CA 92130, USA, and also OTP produced by Orthovision Inc. of 3701 Shoreline Dr, Suite 202B, Wayzata, MN 55391 USA. However, there are two significant disadvantages with such methods. Firstly, the simulation is performed on a 2D lateral view of the patient rather than in 3D, and hence the surgeon or patient cannot visualise the post-operative appearance from a range of 3D view-points. Secondly, and more importantly, the simplistic nature of the empirical models leads to inaccurate simulation results.
More recently, a number of techniques have been developed for performing fully 3D soft-tissue modelling for maxillo-facial surgery simulation. The most promising of these techniques are based upon physical modelling of facial tissues, taking into account individual patient anatomy. Two main approaches can be found. The first involves mass-spring models - fast, simple modelling techniques based upon simulating the linear elastic properties of tissue as a series of masses attached to each other by springs. Examples are described in Keeve, E., Girod, S., Kikinis R., Girod B., "Deformable Modelling of Facial Tissue for Craniofacial Surgery Simulation", Computer Aided Surgery, Vol. 3, No. 5, 1998, and Bro-Neilson M., Cotin S., "Real-time Volumetric Deformable Models for Surgery Simulation using Finite Elements and Condensation", Proc. of Eurographics, Vol 5, pp 57-66, 1996.
The second main prior approach involves finite element models - slower modelling techniques which allow simulation of non-linear, anisotropic and visco-elastic tissue properties. Examples are given in Hemmy D., Harris G.F., Ganaparthy V., "Finite Element Analysis of Craniofacial Skeleton Using Three Dimensional Imaging as the Substrate", in Caronni E.F. (Ed) Craniofacial Surgery, Proc. of the 2nd International Congress of the Intern. Society of Cranio-Maxillo-Facial Surgery, Florence, Italy, 1991, and Koch. R, Gross H.H., Buren D.F., Frankhauser G., Parish Y., Carls F.R., "Simulating Facial Surgery Using Finite Element Models", Proc. of SIGGRAPH '96, New Orleans, Louisiana, ASM Computer Graphics, Vol. 30, 1996.
Although these techniques represent interesting advances in this area, a near real-time, clinically validated technique has not yet emerged. Also, a significant practical disadvantage of such techniques is that they require knowledge of the underlying 3D bony structure to be available from X-ray CT in order to construct a patient specific structural model. In the UK and some other countries, X-ray CT scans are not acquired for the majority of maxillo-facial cases due to the high associated radiation dose. Treatment planning is typically performed using only a lateral cephalogram.
The present invention embodies a new approach based upon statistical rather than physical modelling techniques. The invention addresses the disadvantages of current modelling techniques, and when applied to maxillo-facial and craniofacial surgery, can produce post-operative predictions in near real-time, from conventional pre-operative lateral cephalograms and pre-operative 3D facial surface data acquired using for example the Tricorder DSP Series 3D imaging system manufactured by Tricorder plc, of 6 The Long Room, Coppermill Lock, Summerhouse Lane, Harefield, Middlesex, UB9 6JA, United Kingdom. The invention can also provide significant advantages when used in other situations as will become evident hereinafter.
In order to explain the background to the invention, a review of prior statistical modelling techniques will now be given.
2D Point Distribution Models and Active Shape Models
A generic 2D statistical shape modelling technique has been developed known as a 2D Point Distribution Model (or PDM), based upon objects represented as a set of labelled 2D points. Reference is directed to Cootes T.F., Taylor C.J., Cooper D.H., Graham J., "Training Models of Shape from Sets of Examples", Proc BMVC 1992. The model consists of the mean positions of these points and the main modes of variation, which describe the ways in which the points move about the mean.
A PDM is built by performing a statistical analysis of a number of shape training examples. Each example represents an observed instance of the class of shape, and is described by a set of 2D manually labelled so-called landmark points that capture the important features of the object. Each training example is thus described by a vector x of length 2n of its n landmark point positions: x = (x_1, y_1, x_2, y_2, ..., x_n, y_n). The 2D PDM is built from the training data as follows:
1. Align the training examples using Procrustes Analysis as described by Cootes et al supra, scaling, rotating and translating the examples so that they correspond as closely as possible to the first training example, and
2. perform Principal Component Analysis of the 2n x 2n covariance matrix S of the aligned training data:

S = (1/p) Σ_{i=1}^{p} (x'_i - x̄')(x'_i - x̄')^T    (1)

(where p is the number of training examples, x'_i is the ith aligned training example, and x̄' is the mean of the aligned training examples).
The modes of variation of the 2D PDM are described by the unit eigenvectors p_i (i = 1 to 2n) of S such that:

S p_i = λ_i p_i    (2)

(where λ_i is the ith eigenvalue of S). Any shape in the aligned training set can then be described exactly by the equation:

x' = x̄' + P b    (3)

(where P is the 2n x 2n matrix of eigenvectors (p_1, p_2, ..., p_2n) and b is a vector of shape parameters, one weight for each eigenvector).
Generally, P is truncated to use only the t most significant eigenvectors such that some fraction (typically 95%) of the training set variance is expressed. New examples of the class of objects modelled can then be generated by varying the shape parameters b.
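As a concrete illustration, the model-building steps above (Principal Component Analysis of aligned shapes, truncation to the t most significant modes, and shape generation via equation (3)) can be sketched in Python with NumPy. This is an illustrative sketch under the assumption that the training shapes have already been Procrustes-aligned; it is not the patent's own implementation.

```python
import numpy as np

def build_pdm(aligned_examples, variance_fraction=0.95):
    """Build a Point Distribution Model from Procrustes-aligned shape vectors.

    aligned_examples: (p, 2n) array, each row x'_i = (x1, y1, ..., xn, yn).
    Returns the mean shape, the truncated eigenvector matrix P, and the
    eigenvalues of the retained modes (equations (1)-(3) of the text).
    """
    X = np.asarray(aligned_examples, dtype=float)
    mean = X.mean(axis=0)
    dev = X - mean
    S = dev.T @ dev / X.shape[0]              # covariance matrix, eq. (1)
    eigvals, eigvecs = np.linalg.eigh(S)      # S is symmetric
    order = np.argsort(eigvals)[::-1]         # largest eigenvalue first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    cum = np.cumsum(eigvals) / eigvals.sum()
    t = int(np.searchsorted(cum, variance_fraction)) + 1  # keep 95% of variance
    return mean, eigvecs[:, :t], eigvals[:t]

def generate_shape(mean, P, b):
    """Generate a new shape instance x' = mean + P b, eq. (3)."""
    return mean + P @ b
```

Setting b = 0 reproduces the mean shape; varying individual elements of b within a few standard deviations (sqrt of the corresponding eigenvalue) sweeps through the plausible shapes of the modelled class.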
In addition to the basic shape information, it is also possible to model the local grey-level appearance at each of the labelled model points (see Cootes T.F, Taylor C.J., Lanitis A., Cooper D.H., Graham J., "Building and Using Flexible Models Incorporating Grey-Level Information", Proc. ICCV, Berlin, May 1993, pp 242-246 for further details). Briefly, grey-level training data is extracted along profiles perpendicular to the shape example boundary at each landmark point, and landmark grey-level models are built using techniques analogous to those used to model shape. Each grey-level model consists of a mean grey-level pattern, and a number of modes of variation about the mean. A grey-level model instance g_i for the ith landmark point can be expressed as:

g_i = ḡ_i + P_g_i b_g_i    (4)

(where g_i is a vector of grey level profile data, ḡ_i is the mean grey level profile vector averaged over the training data for the ith landmark point, P_g_i is a matrix of the most significant eigenvectors of the grey-level training data covariance matrix for the ith landmark point, and b_g_i is a set of weights, one for each eigenvector).
Training local grey-level models gives a set of specific models of the expected grey- level evidence at each point in the 2D PDM. 2D PDMs plus grey-level models can then be used in image search applications, that is, given a PDM of a particular class of 2D shape, one can locate an instance of that class of shape in a new image.
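A minimal sketch of the grey-level modelling just described, assuming fixed-length profiles sampled perpendicular to the boundary at each landmark (the function names and nearest-neighbour sampling are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def sample_profile(image, point, normal, k=5):
    """Sample 2k+1 grey values along the unit normal through a landmark.

    image: 2D array; point: (x, y); normal: unit vector (nx, ny).
    Nearest-neighbour sampling keeps the sketch simple.
    """
    xs = point[0] + normal[0] * np.arange(-k, k + 1)
    ys = point[1] + normal[1] * np.arange(-k, k + 1)
    rows = np.clip(np.round(ys).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.round(xs).astype(int), 0, image.shape[1] - 1)
    return image[rows, cols].astype(float)

def grey_model(profiles):
    """Mean profile and principal modes of variation, analogous to eq. (4)."""
    G = np.asarray(profiles, dtype=float)
    g_mean = G.mean(axis=0)
    cov = (G - g_mean).T @ (G - g_mean) / len(G)
    w, V = np.linalg.eigh(cov)
    return g_mean, V[:, ::-1], w[::-1]   # modes ordered largest-variance first
```

One such model is built per landmark point, giving the point-specific grey-level evidence used to drive the image search.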
During image search, the grey-level models can be used to compare expected and observed grey-level image evidence, producing a measure of grey-level fitness at each model point that is used to drive the image search algorithm. Image search is achieved using an algorithm known as an Active Shape Model (or ASM), described in detail in Cootes T.F., Taylor C.J., "Active Shape Models - Smart Snakes", Proc. BMVC, Leeds 1992, Springer Verlag, pp266-275. The general approach used is as follows:
1. An instance of a 2D PDM is initialised at some position in the image, typically using the mean shape parameters.
2. A region of the image around each model point, along the perpendicular to the boundary at that point, is examined, and the best match between the observed and expected image data in that region is found; this gives a suggested local displacement at each model point.
3. From the suggested local displacements, adjustments to the model pose and shape parameters are calculated which best satisfy the suggested displacements. This is achieved using an iterative algorithm, and enforces the constraint that each shape parameter is within 3 standard deviations from the mean model shape.
4. Steps 2 and 3 are iterated until the algorithm converges.
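Step 3 of the search loop, the model update with the 3-standard-deviation constraint on each shape parameter, can be sketched as follows. This is an illustrative fragment under the assumption that pose alignment has been handled separately and that P is the orthonormal eigenvector matrix of the PDM:

```python
import numpy as np

def update_shape_params(mean, P, eigvals, x_target):
    """One ASM model-update step.

    Projects the suggested landmark positions x_target into the model frame
    and clamps each shape parameter to +/- 3 standard deviations
    (sd_i = sqrt(lambda_i)), enforcing the plausibility constraint of step 3.
    """
    b = P.T @ (x_target - mean)           # least-squares fit; P is orthonormal
    limit = 3.0 * np.sqrt(eigvals)
    b = np.clip(b, -limit, limit)         # keep the shape plausible
    return mean + P @ b, b
```

Iterating the local-displacement search and this update until the parameters stop changing gives the converged fit of step 4.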
ASMs can also be implemented in a multi-resolution form that speeds up the algorithm and improves its robustness. Reference is directed to Cootes, T.F., Taylor C.J., Lanitis A., "Active Shape Models : Evaluation of a Multi-Resolution Method for Improving Image Search", Proc. BMVC 1994, pp 327-336.
2D PDMs and ASMs have been applied to a range of shape-modelling and image analysis applications, including face modelling and location as described in Lanitis A., Taylor, C.J., et al "Automatic Identification of Human Faces Using Flexible Appearance Models", Proc. 5th BMVC, 1994 pp 65-74. Other applications include locating heart ventricles in echocardiograms, segmenting magnetic resonance (MR) images of the abdomen, and locating anatomical landmarks in lateral cephalograms.
3D Point Distribution Models and Active Shape Models
PDMs and ASMs have been extended from 2D to 3D. Reference is directed to Hill A., Thornham A. and Taylor C.J., "Model-Based Interpretation of 3D Medical Images", 4th British Machine Vision Conference, Guildford, England, 339-348, Sept. 1993. As in 2D, each object is described as a labelled set of n points; the only difference is that z-ordinates are now included. Thus, each of the p training examples is expressed as a vector x of length 3n, where x = {x_1x, x_1y, x_1z, ..., x_nx, x_ny, x_nz} and {x_ix, x_iy, x_iz} gives the co-ordinates of the ith landmark point in the example. Modelling and image search methods are analogous to those in 2D. The major difference between 2D and 3D modelling techniques is in the method used to mark up the training data. This has been approached in a number of ways:
Hill et al. supra build their models from volumetric MRI data of the head by splitting the 3D space into a number of slices, and marking the landmark points on contours in each slice. This method has a number of disadvantages:
1. A large number of landmark points (~ 500-1000) must be marked by hand for each example.
2. The examples must be aligned before contour extraction so that the contours approximately correspond between different examples.
3. The points marked on the contours are very unlikely to be 'true' 3D landmark points e.g. points of high curvature in 3D.
4. The method has problems dealing with objects of complex topology.
Another method is proposed in Heap T., Hogg D., "Towards 3D Hand Tracking Using a Deformable Model", Proc. 2nd International Conf. on Automatic Face and Gesture Recognition 1996, pp 140-145. This involves a semi-automatic method for building 3D hand models from MRI data in which a physically based Simplex Mesh model is constructed on the first example. Subsequent examples require only a few (~5-10) guiding points to pull the Simplex Mesh to the new example image data. This method appears to be robust and, once the initial simplex mesh has been set up, simple to use. However, it is not clear that this method generalises from the class of objects modelled (3D hands) to general objects where key 3D landmark points are not so easily identifiable.
Another approach is described in Brett A.D., Taylor C.J., "A Method of Automated Landmark Generation for Automated 3D PDM Construction", Proc. BMVC 1998, which provides a fully automatic method for 3D PDM construction given a set of 3D triangulated surfaces. Correspondences are determined between highly decimated versions of the surfaces and used to construct a binary tree of merged shapes, with the mean shape at the root of the tree. Once the binary tree has been constructed, a set of landmark points identified on the mean shape can then be propagated out to the leaf examples. Although this method is fully automatic, it is not robust enough for routine use.
Predictive Models
A further extension of the statistical modelling techniques is to build a predictive model. This is done by building a combined statistical model which models the correlation between one class of measurements A and another class of measurements B. A particular measurement of A can then be used to predict the corresponding measurement of B. In one predictive approach, devised by Haslam J. "Model-based Methods for Medical Image Correction and Interpretation", PhD thesis, August 1996, Manchester University, a model is built which links a 3D PDM of an object to a matrix of Scatter Correction Factors associated with the object, and subsequently uses an instance of 3D shape to infer the corresponding Scatter Correction Factors. Another predictive approach is described by Bowden R., Mitchell T.A, Sarhadi M., "Reconstructing 3D Pose and Motion from a Single Camera View", Proc. BMVC 1998. Bowden et al build a model which links the 2D outline of a human figure to a 3D 'stick-man' representation of the same figure, and subsequently use an instance of the 2D outline to infer the corresponding 3D representation. In both of these predictive approaches, the general methodology is as follows:
1. Assume that one class of measurements A is correlated with another class of measurements B, and that the correlation is strong enough for measurements A to be used to predict measurements B.
2. Build a statistical model by Principal Component Analysis from a set of combined training examples. Each combined example contains an example of measurements A (vector x_A of length a) and an example of measurements B (vector x_B of length b). The ith training example so obtained is a vector x_Ci, which concatenates a normalised version of x_Ai and a normalised version of x_Bi:

x_Ci = {x_Ai1/σ_A, ..., x_Aia/σ_A, x_Bi1/σ_B, ..., x_Bib/σ_B}    (5)

The normalisation factors σ_A and σ_B are given by the total training set variance of measurement vectors x_A and x_B respectively. Thus the combined vector x_C is normalised such that the sub-measurements x_A and x_B give an equal contribution (in terms of variance) to the combined vector.
An instance of the combined model x_C may then be described as:

x_C = x̄_C + P_C b_C    (6)

(where x̄_C is the mean combined model vector, P_C is the matrix of eigenvectors of the combined model training data covariance matrix, and b_C is a vector of combined model weights). The model is truncated to use a or fewer eigenvectors in order that it may be used to make predictions.
3. Given a new set of measurements x_A of A, the combined model can be used to predict the corresponding measurements x_B of B by solving a weighted linear least squares problem of the form:

(P_C^T W)(x_C - x̄_C) = (P_C^T W P_C) b_C    (7)

(where W is a diagonal matrix of weights with diagonal elements set to 1 for the first a elements, and 0 for the final b elements).
Equation (7) is solved for the unknown vector of combined model weights bc, using standard linear algebra techniques.
4. Once b_C has been estimated, x_C can then be calculated using equation (6), and the estimate of x_B is given by the last b elements of vector x_C multiplied by the normalisation factor σ_B.
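Prediction steps 3 and 4 can be sketched as follows. This is an illustrative NumPy translation of equations (5)-(7), under the assumption stated in the text that the combined model has been truncated to a or fewer eigenvectors (so the weighted normal equations are solvable); the function name is not from the original.

```python
import numpy as np

def predict_b_from_a(x_A, mean_C, P_C, a, sigma_A, sigma_B):
    """Predict measurements B from measurements A using a combined model.

    mean_C, P_C: mean vector and (truncated) eigenvector matrix of the
    combined model, eq. (6); a: length of the A-part; sigma_A, sigma_B:
    the normalisation factors of eq. (5).
    """
    b_len = mean_C.size - a
    # Build the combined vector with the known, normalised A-part; the
    # unknown B-part is zeroed and masked out by the weights below.
    x_C = np.concatenate([np.asarray(x_A, float) / sigma_A, np.zeros(b_len)])
    w = np.concatenate([np.ones(a), np.zeros(b_len)])       # diagonal of W
    PW = P_C.T * w                                          # P_C^T W
    b_C = np.linalg.solve(PW @ P_C, PW @ (x_C - mean_C))    # eq. (7)
    x_hat = mean_C + P_C @ b_C                              # eq. (6)
    return x_hat[a:] * sigma_B                              # recovered B-part
```

The weight matrix W simply switches off the (unknown) B-elements, so the least-squares solve fits the model weights to the A-elements alone, and the model's built-in A-B correlation fills in the B-part.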
The approach followed by Bowden et al shows that an instance of the "stick man" can be used to predict a corresponding instance of the 3D configuration of a corresponding human torso, but the representation is not sufficiently accurate for use in practical situations such as medical procedures where high precision and accuracy are required.
Summary of the invention
The invention provides an improved predictive technique which involves planning changes for one set of variables for an object and predicting corresponding changes in another set of variables for the object.
In one aspect the invention provides a method of predicting changes for an object with first and second characteristics that are distinct from but statistically correlated with one another, comprising: providing a statistical model configuration of at least one mode of variation of a first set of variables relating to the first characteristic of the object, and at least one mode of variation of a second set of variables relating to the second characteristic of the object, planning a change to the first set of variables for the object, and using the model configuration to predict a corresponding change to the second set of variables for the object from data corresponding to the planned change to the first set.
The statistical model configuration may include a first parametric model of the first characteristic of the object, a second parametric model of the second characteristic of the object and a predictive model that characterises a statistical correlation between the models, and the method involves fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning a change to the condition for the object so as to provide parameterised data for the first characteristic of the object in a second different condition, and utilising the parameterised data and the predictive model to provide parameterised data corresponding to a prediction of the change of second characteristic of the object in the second condition.
In more detail, the statistical model configuration may include a first parametric model of the first characteristic of the object in the first object condition, a second parametric model of the second characteristic of the object in the first object condition, a third parametric model of the first characteristic of the object in the second object condition, a fourth parametric model of the second characteristic of the object in the second object condition, and the predictive model characterises a statistical correlation between the models, with the method involving: fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning the second condition for the object using the third model to provide parameterised data for the first characteristic of the object in the second condition, and utilising the parameterised data and the predictive model to provide parameterised data for the fourth model to predict the second characteristics of the object in the second condition.
The invention has particular application to predicting the outcome of medical procedures and may be carried out to predict the outcome of a medical operative procedure wherein the object comprises a patient, the first shape characteristic corresponds to the shape of underlying hard tissue structure of the patient and the second shape characteristic corresponds to the shape of a soft tissue structure that covers the hard tissue structure. Data may be acquired from a pre-operative lateral cephalogram concerning the shape of underlying hard tissue structure of the patient and data from a pre-operative 3D scan of the patient may be acquired for the shape of the soft tissue structure.
The invention also includes a computer program to be run on a computer to perform the aforesaid method and data processing apparatus configured to perform the method.
In another aspect the invention provides a medical analysis tool comprising a processor operable to provide a statistical model configuration of at least one mode of variation of a first set of variables relating to shape characteristics of a relatively hard tissue part of a living body, and at least one mode of variation of a second set of variables relating to shape characteristics of a relatively soft tissue part of the body that overlies the relatively hard tissue part, an input operable to plan a change to the first set of variables for a patient, such that the processor utilises the model configuration to predict corresponding changes to the second set of variables for the patient, whereby to predict changes in shape of the soft tissue part that correspond to changes planned for the hard tissue part.
The tool may include a model fitting system operable to fit the first and second models to the corresponding pre-operative hard and soft tissue shape characteristics of a patient to provide parameterised shape data for the pre-operative hard and soft tissue shape characteristics, and a planning input system operable to define a post-operative hard tissue configuration for the patient using the third model to provide parameterised shape data for the post-operative hard tissue configuration.
The processor may be operable to utilise the parameterised shape data and the predictive model to provide parameterised data for the fourth model to predict post-operative soft tissue shape characteristics for the patient corresponding to the planned post-operative hard tissue configuration. The statistical model configuration may include at least one point distribution model.
A display device may be configured to provide a visual display of the predicted post-operative soft tissue configuration and at least one of the pre-operative soft and hard tissue configurations and the planned post-operative hard tissue configuration so that the outcome of the planned procedure can be reviewed and shown to the patient if desired.
Brief description of the drawings
In order that the invention may be more fully understood, an embodiment thereof will now be described with reference to the accompanying drawings in which:
Figure 1 is a schematic illustration of a hardware configuration for carrying out a predictive method according to the invention for predicting the outcome of a bimaxillary osteotomy,
Figure 2 illustrates the relationship between process components of a model used in predicting the outcome of the surgery,
Figure 3 is a lateral cephalogram of a patient's head with landmark points shown marked on it,
Figure 4 illustrates a camera arrangement for capturing 3D data,
Figure 5 is an example of a 2D rendering of a 3D image captured by the camera arrangement of Fig. 4 with landmark points thereon,
Figure 6 is a flow chart of a process for training the models,
Figure 7 is a flow chart of a process for predicting the outcome of a bimaxillary osteotomy, using the trained models,
Figure 8a illustrates a display of a 2D lateral cephalogram of the bony tissue of a patient before surgery is carried out,
Figure 8b illustrates a display of a proposed surgical treatment plan for the patient,
Figure 9a illustrates a display of a 3D model instance for the soft tissue shape of the head of the patient before surgery is carried out, and
Figure 9b illustrates a display of a 3D predicted model of the soft tissue shape of the head of the patient after surgery is carried out according to the proposed treatment plan shown in Figure 8b.
Detailed description
In the example of the invention described hereinafter, 2D and 3D shape-modelling techniques are used to build a statistical model of the relationship between hard and soft tissue during maxillo-facial surgery. This model can then be used to predict 3D soft-tissue changes that occur as a result of maxillo-facial surgery. For example, a surgeon may propose to break and move a patient's jawbone to improve facial function and aesthetics and the model provides a prediction of the resulting 3D shape of the head produced by the proposed surgery. The method can be split into 2 general stages:
Model-Building - this involves building a statistical model which expresses the relationship between hard tissue and soft tissue, for both pre and post-operative maxillo-facial patient data.
Soft-Tissue Prediction - given pre-operative data for an individual patient, plus knowledge of the surgeon's treatment plan, the statistical model is used to predict the post-operative soft-tissue appearance for that patient.
These two stages will now be discussed individually in detail.
Model building
A number of statistical models are constructed using a hardware configuration shown in Figure 1. A conventional personal computer 1 with a processor unit 2, display screen 3, keyboard 4 and mouse 5 is coupled to a scanner 6. The scanner 6 permits X-ray side-view images of the patient's head, known as lateral cephalograms, to be scanned, digitised and fed to the processor unit 2. The resulting cephalogram data thus provides data concerning the bony or hard tissue configuration in the patient's head. It will be understood that this data can alternatively be obtained directly from digital X-ray equipment and the invention is not restricted to any particular method of hard tissue data capture. The processor unit 2 is also configured to receive data concerning the external or soft tissue appearance of the patient's head. This data may be captured using a 3D scanner 7 shown schematically. One example of 3D scanner 7 is the Tricorder DSP Series 3D device supra.
The processor unit 2 includes a central digital processor, RAM, ROM and data storage media such as a hard disc and floppy disc connected on a common bus, in a conventional manner. The central processor can execute programs stored on the data storage media, so as to build the statistical models and display results obtained from them on the screen 3, and allow manipulation of the displayed data using the keyboard 4 and mouse 5. The programs build statistical models for the aforesaid model building and also execute the soft tissue prediction as will become apparent hereinafter.
Using this configuration, a statistical model is built that allows a prediction of post-operative soft-tissue appearance to be made from the following data: pre-operative soft-tissue appearance, pre-operative hard-tissue appearance, and knowledge of the surgical treatment plan i.e. knowledge of a proposed post-operative hard tissue appearance. The model building utilises the following components shown in Figure 2:
• A standard 2D PDM 10 with grey-level models describing the variability of the position and grey-level appearance of key bony landmarks identifiable in the pre- operative lateral cephalograms.
• A 3D PDM 11 describing the variability in shape of pre-operative 3D facial soft-tissue appearance, modelled from 3D surfaces acquired using the 3D scanner 7.
• A standard 2D PDM 12 with grey-level models describing the variability of the position and grey-level appearance of bony landmarks in the post-operative lateral cephalograms.
• A 3D PDM 13 describing the variability in shape of post-operative 3D facial soft-tissue, modelled from 3D surfaces acquired using the scanner 7.
• A predictive model 14 which links the data from the models 10 - 13 together, and describes the relationship between data from models 10 - 12 and data from model 13.
These models will now be considered in more detail.
2D PDM models of lateral cephalograms (models 10 & 12)
A training set of pre- and post-operative lateral cephalograms is obtained for human patients who have already undergone maxillo-facial surgery. The cephalograms thus constitute historical data for maxillo-facial procedures previously carried out and can be used to train the pre- and post-operative 2D PDMs 10, 12. The cephalograms are individually scanned using the scanner 6 and individually displayed on the screen 3 of the computer 1. The positions and appearance of key anatomical landmarks and structures present in both the pre- and post-operative lateral cephalograms are identified and modelled using a standard 2D PDM with multi-resolution grey-level models as described in Cootes T.F. et al "Building and Using Flexible Models Incorporating Grey-Level Information" and "Active Shape Models : Evaluation of a Multi-Resolution Method for Improving Image Search", supra. Each of the pre- and post-operative models includes a number of standard anatomical landmarks useful to maxillo-facial surgeons (Nasion, Sella, Porion, Orbitale, Gonion, Pogonion, Menton, Gnathion, Upper Incisor Root, Upper Incisor Tip, Lower Incisor Root, Lower Incisor Tip, ANS, PNS, A Point, B Point). Figure 3 shows the structures modelled.
Pre-operative 2D Model 10
Considering the pre-operative cephalogram model 10, by analogy with Equation (3) a shape instance in the pre-operative cephalogram model can be described by the equation:

x_CephPre = x̄_CephPre + P_CephPre b_CephPre    (8)

(where x_CephPre is a vector of pre-op cephalogram 2D landmark data, x̄_CephPre is the mean pre-op cephalogram 2D landmark data averaged over the training set, P_CephPre is a matrix of the most significant eigenvectors of the pre-op cephalogram training data covariance matrix, and b_CephPre is a set of weights, one for each eigenvector.)

Post-operative 2D Model 12
Similarly, for the post-operative cephalogram model 12, a shape instance can be described by the equation:
x_CephPost = x̄_CephPost + P_CephPost b_CephPost    (9)

(where x_CephPost is a vector of post-op cephalogram 2D landmark data, x̄_CephPost is the mean post-op cephalogram 2D landmark data averaged over the training set, P_CephPost is a matrix of the most significant eigenvectors of the post-op cephalogram training data covariance matrix, and b_CephPost is a set of weights, one for each eigenvector.)

In this example, identical anatomical landmarks are used in the post-operative cephalogram model to those in the pre-operative cephalogram model.
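The linear form shared by equations (8) and (9) (and by equations (10) and (11) below) can be sketched with a standard principal component analysis. The function names and the 98% variance threshold in this sketch are illustrative assumptions, not taken from the text:

```python
import numpy as np

def build_pdm(X):
    """Build a simple point distribution model from a training matrix X
    of shape (n_examples, n_coords), keeping the most significant modes.
    Returns (mean, P) where P holds one eigenvector per column."""
    mean = X.mean(axis=0)
    # Eigenvectors of the covariance matrix via SVD of the centred data.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s**2 / len(X)
    # Keep enough modes to explain 98% of the training-set variance
    # (the threshold is an assumption for this sketch).
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.98)) + 1
    return mean, Vt[:k].T

def shape_instance(mean, P, b):
    """x = x_mean + P b -- the form of equations (8)-(11)."""
    return mean + P @ b

def shape_params(mean, P, x):
    """Invert the model: b = P^T (x - x_mean), since P is orthonormal."""
    return P.T @ (x - mean)
```

Because the columns of P are orthonormal, converting a shape to parameters and back is lossless within the retained modes, which is what "inverting" equations (8)-(11) relies on later in the text.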
3D Models of Facial Shape (models 11 &13)
The 3D shapes of the pre- and post-operative facial soft-tissue are each modelled using a 3D PDM. This involves capturing a training set of images of pre- and post-operative facial shape using the scanner 7 shown in Figure 1. The basic modelling technique used is standard, as described by Hill et al. supra, but an improved method for marking up 3D training data is used, which addresses two problems with the standard method of Hill et al, as will now be explained.
Hill's method is time-consuming, requiring of the order of 1000 landmarks to be marked on each training example, and for the present application a final 3D model is required that describes facial soft-tissue at a resolution comparable to that of the originally acquired 3D surfaces. However, it is difficult to identify manually a large number of reproducible 3D landmarks on facial surfaces. To overcome this problem, a method is used according to the invention which takes a smaller number of manually marked 3D facial landmarks, and uses them to interpolate a large number of landmarks over the whole facial surface. The improved method exploits an assumption that the captured facial surface can be represented as a visible surface representation whereby, in a particular co-ordinate frame, the facial surface height z can be described as a single-valued function of x and y. This turns the landmark mark-up problem into a 2.5D (or 2D with depth) problem. The improved method used in accordance with this example of the invention extends the 2D face-modelling technique of Lanitis et al, supra, from 2D into 2.5D. The following steps are carried out:
1) A texture-mapped, triangulated 3D facial surface is acquired for each training example using the Tricorder DSP Series 3D capture system. The acquisition is done with each person face-on to the capture system as shown in Figure 4. The system includes an array of digital cameras C1-C4 directed face-on to the patient's face which is illuminated with a spatially textured light from a source (not shown) and the outputs of the cameras are processed to produce data corresponding to a texture-mapped, triangulated 3D facial surface.
2) Each texture-mapped, triangulated 3D facial surface is converted into a 2.5D depth-map, and an image of the corresponding texture. This is done by calculating a virtual pin-hole camera model which is the average of the 4 (pre-calibrated) Tricorder DSP Series camera models shown in Figure 4, and re-projecting the 3D facial surface using this camera model to give a 2.5D depth-map and texture image. A depth-map is defined to be a 2D array D of 3D points D(i, j) = (x, y, z) orthogonal to the depth direction z. Thus each point D can be considered to lie at a depth z from a common plane (x, y). The values of x and y are stored as well as the corresponding depth z.
3) Each depth-map texture image is then treated as a simple image and a relatively small (~80) set of reproducible 2D points are manually marked on each image.
Figure 5 shows an example marked-up texture image. The marked points consist of two types: i) landmark points (shown as filled dots 15) - distinctive facial features or positions which can be reliably marked on each example image, and ii) pseudo-landmark points (shown as unfilled dots 16) - intermediate points which are equally spaced along the shape boundary between the distinctive landmark points.
4) Using the method of Lanitis supra, the marked 2D points are used to warp each image and depth-map into a common 'shape-free' frame using 2D thin-plate spline (TPS) interpolation. In the 'shape-free' frame, any pixel (x,y) in a given training example depth-map is nominally in correspondence with the same pixel in every other example depth-map. Thus, a small number of 2D landmark points have been used to produce texture-map and depth-map correspondences over the whole face.
5) Dense 2D re-sampling of the 'shape-free' depth-maps produces a set of 3D 'landmark' points for each example. Only points for which a data-point exists in all training examples are included in this example of the model.
6) A standard 3D PDM is built from the training data.
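As a rough illustration of step 2 above, the sketch below bins 3D surface points into a 2D depth-map array D(i, j) = (x, y, z) under the single-valued-z assumption. The grid resolution, the rounding scheme and the choice of the largest z as the "visible" point are assumptions for this sketch; a full implementation would first re-project through the averaged virtual pin-hole camera model:

```python
import numpy as np

def depth_map(points, grid_w, grid_h):
    """Bin 3D surface points (n, 3) into a 2D array D with
    D[i, j] = (x, y, z), under the 'visible surface' assumption that z
    is single-valued over the (x, y) plane."""
    D = np.full((grid_h, grid_w, 3), np.nan)
    x, y = points[:, 0], points[:, 1]
    # Map the x, y coordinates onto integer grid indices.
    j = np.rint((x - x.min()) / (np.ptp(x) + 1e-12) * (grid_w - 1)).astype(int)
    i = np.rint((y - y.min()) / (np.ptp(y) + 1e-12) * (grid_h - 1)).astype(int)
    # Write points in ascending z so the largest z (taken here as the
    # point nearest the viewer) wins in each cell.
    order = np.argsort(points[:, 2])
    D[i[order], j[order]] = points[order]
    return D
```

Cells left as NaN correspond to regions of the face not seen by the virtual camera; step 5 of the method only keeps points for which data exists in every training example.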
It will be understood that in a modification of the described method, the data markup process could be straightforwardly extended from 2.5D into full 3D by using 3D mark-up and 3D TPS.
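The 2D thin-plate spline interpolation of step 4 can be sketched directly from its standard definition (radial kernel U(r) = r² log r plus an affine part). The function names here are illustrative, and this shows only the landmark-driven warp, not the image and depth-map resampling itself:

```python
import numpy as np

def _tps_kernel(r):
    # U(r) = r^2 log r, with U(0) = 0 by convention.
    with np.errstate(divide='ignore', invalid='ignore'):
        u = r**2 * np.log(r)
    return np.nan_to_num(u)

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping landmark points src -> dst
    (as in step 4, after Lanitis et al).  Returns a function that
    evaluates the warp at arbitrary 2D points."""
    n = len(src)
    K = _tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    coef = np.linalg.solve(A, rhs)                 # kernel weights + affine terms
    w, a = coef[:n], coef[n:]

    def warp(pts):
        U = _tps_kernel(np.linalg.norm(pts[:, None] - src[None, :], axis=-1))
        return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
    return warp
```

The same system extends to the full-3D variant mentioned above by adding a z column to the affine part and using the 3D TPS kernel.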
Two 3D facial soft-tissue models are produced, as follows:
Pre-operative 3D soft-tissue Model 11
By analogy with Equation (3), a shape instance in the pre-operative 3D soft -tissue model 11 can be described by the equation:
x_3DPre = x̄_3DPre + P_3DPre b_3DPre    (10)

(where x_3DPre is a vector of pre-op 3D soft-tissue data, x̄_3DPre is the mean pre-op 3D soft-tissue data averaged over the training set, P_3DPre is a matrix of the most significant eigenvectors of the pre-op 3D soft-tissue training data covariance matrix, and b_3DPre is a set of weights, one for each eigenvector.)
Post-operative 3D soft-tissue model 13
By analogy with Equation (3), a shape instance in the post-operative 3D soft-tissue model can be described by the equation:

x_3DPost = x̄_3DPost + P_3DPost b_3DPost    (11)

(where x_3DPost is a vector of post-op 3D soft-tissue data, x̄_3DPost is the mean post-op 3D soft-tissue data averaged over the training set, P_3DPost is a matrix of the most significant eigenvectors of the post-op 3D soft-tissue training data covariance matrix, and b_3DPost is a set of weights, one for each eigenvector.)
In this example, identical 3D landmarks are used in the post-operative 3D soft-tissue model to those in the pre-operative 3D soft-tissue model.
Referring to Figure 6, the building of the models 10 - 13 is shown schematically as steps S1 - S4.
Predictive Model 14
Once the pre- and post-operative 2D cephalogram and 3D soft-tissue models 10 - 13 have been built, the combined predictive model 14 that describes the relationship between the four individual models is prepared. This involves steps S5 and S6 shown in Figure 6, which will now be described in detail.
Each training example for the predictive model 14 consists of a measurement vector x_Predict that is the concatenation of 4 blocks of data:

1) a vector b_CephPre of length n_CephPre representing the pre-operative 2D bony structure of the face in parametric form. b_CephPre is calculated from the raw 2D landmark point data x_CephPre by inverting equation (8),

2) a vector b_3DPre of length n_3DPre representing the pre-operative 3D soft-tissue structure of the face in parametric form. b_3DPre is calculated from the raw 3D landmark point data x_3DPre by inverting equation (10),

3) a vector b_CephPost of length n_CephPost representing the post-operative 2D bony structure of the face in parametric form. b_CephPost is calculated from the raw 2D landmark point data x_CephPost by inverting equation (9), and

4) a vector b_3DPost of length n_3DPost representing the post-operative 3D soft-tissue of the face in parametric form. b_3DPost is calculated from the raw 3D landmark point data x_3DPost by inverting equation (11).
A concatenation of these blocks of data is carried out at step S5 in Figure 6.
In the manner described previously in relation to prior predictive models, each block of data making up x_Predict is normalised by dividing by its total training set variance, so that each type of data gives a contribution of equal weight to the combined model i.e.:
x_Predict = ( b_CephPre,1/σ_CephPre, …, b_CephPre,n_CephPre/σ_CephPre,
              b_3DPre,1/σ_3DPre, …, b_3DPre,n_3DPre/σ_3DPre,
              b_CephPost,1/σ_CephPost, …, b_CephPost,n_CephPost/σ_CephPost,
              b_3DPost,1/σ_3DPost, …, b_3DPost,n_3DPost/σ_3DPost )    (12)

(where the normalisation factors σ_CephPre, σ_3DPre, σ_CephPost and σ_3DPost are given by the total training set variance of measurement vectors b_CephPre, b_3DPre, b_CephPost and b_3DPost respectively.)
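Equation (12) amounts to block-wise normalisation followed by concatenation, which can be sketched as follows (the array shapes and names are illustrative):

```python
import numpy as np

def build_predict_vectors(blocks):
    """Sketch of equation (12): `blocks` is a list of four arrays, each
    of shape (n_examples, n_params_k), holding b_CephPre, b_3DPre,
    b_CephPost and b_3DPost for every training example.  Each block is
    divided by its total training-set variance (the sum of per-parameter
    variances) so all four contribute equal weight, then the blocks are
    concatenated into one x_Predict row per example."""
    sigmas = [B.var(axis=0).sum() for B in blocks]
    return np.hstack([B / s for B, s in zip(blocks, sigmas)]), sigmas
```

The same σ values must be reused when normalising a new patient's measurements at prediction time, so they are returned alongside the training matrix.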
The combined predictive model is then (in step S6 of Fig. 6) built from the training data by Principal Component Analysis, using the method described previously in relation to prior predictive models. Thus, an instance of the predictive model can be described by the equation:

x_Predict = x̄_Predict + P_Predict b_Predict    (13)

(where x_Predict is the predictive model instance, x̄_Predict is the mean predictive model data averaged over the training set, P_Predict is a matrix of the most significant eigenvectors of the predictive model training data covariance matrix, and b_Predict is a set of weights, one for each eigenvector.)
A useful predictive model can be built from of the order of 100 (or more) training examples, each example containing the data for a single example of a bimaxillary osteotomy procedure. Adding further training data improves the accuracy of the predictive model.
Soft-Tissue Prediction

Once the trained predictive model has been produced, it can be used to predict the outcome of a surgical maxillo-facial procedure. For example, a surgeon may propose a procedure which involves breaking a patient's jaw and moving the jaw-line by resetting the jaw. The resulting change in the 3D physical appearance of the face produced by the procedure depends on the rearrangement of bony material produced by the surgery and has been difficult to predict, explain and demonstrate to the patient. The actual and the perceived success of the procedure depends greatly on the skill, experience and communication skills of the surgeon. In this example, the method according to the invention allows the surgeon to input a proposed procedure making reference to a 2D cephalogram of the patient and predict the 3D soft tissue outcome, i.e. the facial appearance after carrying out the surgery.
The main process steps are shown in Figure 7. At step S7, a standard pre-operative lateral cephalogram of the patient is acquired by conventional X-ray techniques, which is scanned by means of the scanner 6 and the resulting data is supplied to the processor 2 shown in Figure 1. Then, at step S8, the 2D captured data for the pre-operative lateral cephalogram is converted into a parametric form by fitting the 2D pre-operative lateral cephalogram model 10 to the cephalogram of the patient.
At step S9, a pre-operative 3D facial soft-tissue surface image of the patient is acquired using the 3D Tricorder DSP Series device. The corresponding data is sent from scanner 7 to the processor 2. At step S10 the captured pre-operative 3D facial soft-tissue surface data is converted into a parametric form by fitting the 3D facial soft-tissue model 11 to the 3D facial soft-tissue surface.
At step S11, the surgical treatment plan is set up by manipulating the 2D landmarks on the pre-operative lateral cephalogram. This process is used to define an instance of the post-operative 2D cephalogram model 12. The resulting data are supplied as inputs to the predictive model 14 which, at steps S12 and S13, uses the pre-op lateral cephalogram parameters, pre-op 3D soft-tissue parameters and surgical treatment plan to predict post-op 3D soft-tissue shape and appearance.
These steps will now be described in more detail.
Fitting 2D Pre-Operative Lateral Cephalogram Model to Cephalogram Data (Step S8)
The 2D pre-operative lateral cephalogram model is fitted to the pre-operative lateral cephalogram using the standard multi-resolution ASM of Cootes et al "Active Shape Models : Evaluation of a Multi-Resolution Method for Improving Image Search", supra. The fitting algorithm determines the pre-operative cephalogram model shape parameters b_CephPre which best fit the given cephalogram, and also the 2D location, orientation and scaling of the model instance in the cephalogram. This permits the cephalogram to be characterised in terms of a small set of shape parameters b_CephPre, from which the aforementioned corresponding anatomical landmark point positions x_CephPre can be calculated.
The fitting algorithm is run on the processor unit 2 in Figure 1 and the resulting location of the landmark points relative to the cephalogram of the patient may be displayed on the screen 3 of the computer to provide the user with confirmation that the 2D pre-operative model has been satisfactorily fitted to the bony tissue image of the patient.
If the automatic fit of the cephalogram model to the cephalogram is not acceptable to the clinician, the results of the fitting can be manually improved by moving any incorrectly positioned model landmark points, and updating b_CephPre accordingly using the iterative method described in "Active Shape Models - Smart Snakes" by Cootes et al supra. This process may be carried out using the mouse 5 (Figure 1) selectively to drag the display of landmark points of the 2D model so as to get a better fit.

Fitting 3D Pre-Operative Facial Soft-Tissue Model to 3D Facial Surface Data (Step S10)
The 3D pre-operative facial soft-tissue model is fitted to the pre-operative 3D facial soft-tissue surface using an algorithm run on the processor unit 2 which is a variant of the Iterated Closest Point (ICP) algorithm described in "A method for registration of 3-D shapes", Besl, P. J. and McKay, N. D., IEEE PAMI, 14(2), pp 239-256, 1992. The original search algorithm of Hill et al described in "Model-Based Interpretation of 3D Medical Images", supra, was developed for deforming 3D models to fit 3D volumetric image data, whereas the modified version of the ICP algorithm deforms an initial 3D PDM in both pose and shape to produce the best local fit to 3D surface data. The algorithm proceeds as follows.
1) Initialise the position and shape of the 3D pre-operative facial soft-tissue model 11. A reasonable initialisation is found by calculating the centroid and scale of the pre-operative 3D facial soft-tissue surface, and initialising a 3D pre-operative facial soft-tissue model instance of mean shape with this position and scale, and an identity rotation matrix.
2) For each 3D PDM landmark point, find the closest point on the pre-operative surface. This gives a vector of updated model points x'_3DPre which indicates to where each 3D PDM landmark point should be moved.
3) Update the 3D pre-operative facial model pose and shape parameters to produce an instance of model 11 which gives the best least squares fit to x'_3DPre, and which is also within 3 standard deviations of the mean model shape as described by Hill et al in "Model-Based Interpretation of 3D Medical Images", supra.
4) Iterate Steps 2) and 3) until convergence occurs.
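The four steps above can be sketched as a small iterative loop. This simplified version omits the pose parameters (s, t, R) and assumes unit mode variances for the 3-standard-deviation limit, so it is an outline of the ICP-variant rather than the full algorithm; all names are illustrative:

```python
import numpy as np

def fit_pdm_to_surface(mean, P, surface, n_iter=20):
    """Sketch of the ICP-variant fit: alternately move each model
    landmark to its closest surface point (step 2), then update the
    shape parameters b by least squares, clamped to +/-3 standard
    deviations per mode (step 3).  `mean` and P come from a PDM as in
    equation (10); `surface` is an (m, 3) array of surface points."""
    mode_var = np.ones(P.shape[1])          # per-mode variances (assumed)
    b = np.zeros(P.shape[1])
    instance = lambda b: (mean + P @ b).reshape(-1, 3)
    for _ in range(n_iter):
        x = instance(b)
        # Step 2: closest surface point for each model landmark.
        d = np.linalg.norm(x[:, None] - surface[None, :], axis=-1)
        target = surface[d.argmin(axis=1)].ravel()
        # Step 3: least-squares shape update, limited to 3 s.d.
        b = P.T @ (target - mean)
        b = np.clip(b, -3 * np.sqrt(mode_var), 3 * np.sqrt(mode_var))
    return b
```

In the full algorithm, the pose update and the shape update are interleaved so that the model can translate, rotate and scale towards the surface as well as deform.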
A display of the resulting parameterised data may be provided on the screen 3 of the computer. The process allows the pre-operative 3D facial soft-tissue surface to be characterised automatically in terms of a small set of shape parameters b_3DPre, and a 3D model pose defined in terms of an isotropic scaling s, a translation vector t and a rotation matrix R.

Input Surgical Treatment Plan (Step S11)
The surgical treatment plan is input using a similar User Interface to that of existing systems such as OTP and QuickCeph supra. The pre-operative lateral cephalogram acquired at step S7 is displayed on the screen 3 with the anatomical landmark point positions x_CephPre marked on it. The surgeon then indicates the proposed changes to make during surgery by manipulating the bony landmark points with the mouse 5 or by means of the keyboard 4 to give a new set of landmark point positions x_CephPost, indicating how the mandible and/or maxilla will move during surgery. Figure 8a is a schematic illustration of the pre-operative lateral cephalogram of the patient and Figure 8b illustrates the planned post-operative configuration to be achieved by surgery. This involves breaking the jaw and moving it forward, and this is simulated by making corresponding changes to the location of the landmark points on the screen 3. The resulting configuration of the landmark points is then input into the post-operative 2D model 12 such that x_CephPost can then be used to calculate the best-fit 2D post-operative lateral cephalogram model parameters b_CephPost using the iterative method described in Cootes et al "Active Shape Models - Smart Snakes", supra.
Prediction (Step S12)
The parameterised form of the pre-operative data (b_CephPre, b_3DPre and 3D surface model pose s, t, R), and the parameterised form of the treatment plan (b_CephPost), are used to calculate a prediction of post-operative soft-tissue shape and appearance. This is done as follows:
1. Use the combined predictive model 14 described by equation (13), and the methods described above generally in relation to prior predictive models, to use the measurements b_CephPre, b_3DPre and b_CephPost to predict b_3DPost by solution of a weighted linear least squares problem. The resulting instance of b_3DPost is thus a prediction of the post-operative 3D soft-tissue appearance in parametric form.
2. Convert b_3DPre into a corresponding set of 3D surface model points x_3DPre using equation (10). Transform x_3DPre into the correct 3D frame of reference by applying the 3D pre-operative surface model pose s, t, R to each 3D point in x_3DPre to give x'_3DPre.
3. Convert b_3DPost into a corresponding set of 3D surface model points x_3DPost using equation (11). Transform x_3DPost into the correct 3D frame of reference by applying the 3D pre-operative surface model pose s, t, R to each 3D point in x_3DPost to give x'_3DPost. Although x'_3DPost itself gives a reasonable prediction of 3D soft-tissue post-operative shape, a more accurate method is given below.
4. Calculate the change in parametric model points dx_3D between pre- and post-operative 3D surface model points:

dx_3D = x'_3DPost - x'_3DPre    (14)
5. Apply the change in parametric model points dx_3D to the original texture-mapped 3D pre-operative facial surface. For each point p in the pre-operative facial surface:
5.1 Calculate p', the closest point to p in x'_3DPre.
5.2 Extract the corresponding change dp' in the position of p' from dx_3D.
5.3 Add dp' to p.
The output of this algorithm is a version of the 3D pre-operative facial surface which has been modified to simulate the required maxillo-facial surgery.
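Steps 1, 4 and 5 of this predictive process can be sketched in outline. The function names and index bookkeeping below are illustrative, and the model matrices are assumed to come from a PCA as in equation (13):

```python
import numpy as np

def predict_missing_block(mean, P, known, known_idx, unknown_idx):
    """Step 1 (sketched): given the combined model x = mean + P c of
    equation (13), solve a linear least-squares problem for the model
    weights c from the known, normalised blocks (b_CephPre, b_3DPre,
    b_CephPost), then read off the unknown block (b_3DPost)."""
    c, *_ = np.linalg.lstsq(P[known_idx], known - mean[known_idx], rcond=None)
    return mean[unknown_idx] + P[unknown_idx] @ c

def warp_dense_surface(dense_pts, model_pre, dx):
    """Steps 4-5 (sketched): carry the model-point displacements
    dx = x'_3DPost - x'_3DPre over to the dense pre-operative surface
    by moving each dense point with the displacement of its closest
    pre-operative model point."""
    d = np.linalg.norm(dense_pts[:, None] - model_pre[None, :], axis=-1)
    return dense_pts + dx[d.argmin(axis=1)]
```

Because the dense surface keeps its original texture mapping, adding the interpolated displacements yields a texture-mapped simulation of the planned surgery rather than a bare model instance.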
The resulting post-operative 3D data produced either by step 3 or step 5 of this predictive process is then displayed to the surgeon on the screen 3. Figure 9a shows the display of the instance of pre-operative 3D model 11 for the patient, and Figure 9b illustrates the predicted 3D post-operative shape predicted by the predictive model 14 for the surgeon's treatment plan. Thus the surgery planned in 2D as shown in Figures 8a and 8b is predicted to produce changes in 3D as shown in Figures 9a and 9b. The surgeon can then, if desired, modify the planned surgery in the screen display of Figure 8b and observe the outcome in the display of Figure 9b. This enables the surgical procedure to be optimised to achieve the desired aesthetic outcome. The displays of Figures 8 and 9 may be shown to the patient to explain and seek approval for the proposed procedure.
Many modifications and variations to the described method fall within the scope of the invention. Whilst in the described example a hybrid 2D-3D predictive model is employed, a number of variants on this scheme could also be used, depending on the available training data and/or treatment planning protocol. For example, it is possible to link pre- and post-operative 2D cephalogram data to pre- and post-operative 2D soft-tissue shape extracted from a 2D photograph of the patient. Also, it would be possible to link pre- and post-operative 3D X-ray CT data to 3D soft-tissue shape extracted from a 3D surface scan. Other possibilities will be evident to those skilled in the art.
Also, the training of the predictive model 14 may be carried out on an ongoing basis. In the described example, the model training was carried out as an initial step, but in addition, the data for subsequent surgical procedures may be used to update the training of the models.
Furthermore, the invention is not restricted to maxillo-facial and craniofacial surgery and can be used for other procedures where it is useful to predict changes in soft tissue shape resulting from a proposed operation to change a corresponding relatively hard tissue configuration, and is not restricted to human surgery. The invention may also be used for operations on non-animate objects for which a statistical correlation occurs between an inner structure and an outer structure covering the inner structure, so as to predict changes in the shape of the outer structure produced by a proposed operation to change the inner structure. Conditions other than the shape of the object may be predicted by means of the invention.

Claims

1. A method of predicting changes for an object with first and second characteristics that are distinct from but statistically correlated with one another, comprising: providing a statistical model configuration of at least one mode of variation of a first set of variables relating to the first characteristic of the object, and at least one mode of variation of a second set of variables relating to the second characteristic of the object, planning a change to the first set of variables for the object, and using the model configuration to predict a corresponding change to the second set of variables for the object from data corresponding to the planned change to the first set.
2. A method according to claim 1 wherein the statistical model configuration includes a first parametric model of the first characteristic of the object, a second parametric model of the second characteristic of the object and a predictive model that characterises a statistical correlation between the models, the method comprising: fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning a change to the condition for the object so as to provide parameterised data for the first characteristic of the object in a second different condition, and utilising the parameterised data and the predictive model to provide parameterised data corresponding to a prediction of the change of second characteristic of the object in the second condition.
3. A method according to claim 2 wherein the statistical model configuration includes a first parametric model of the first characteristic of the object in the first object condition, a second parametric model of the second characteristic of the object in the first object condition, a third parametric model of the first characteristic of the object in the second object condition, a fourth parametric model of the second characteristic of the object in the second object condition, and the predictive model characterises a statistical correlation between the models, the method comprising: fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning the second condition for the object using the third model to provide parameterised data for the first characteristic of the object in the second condition, and utilising the parameterised data and the predictive model to provide parameterised data for the fourth model to predict the second characteristics of the object in the second condition.
4. A method according to claim 3 including acquiring data concerning the first characteristic of the object in its first condition and fitting the first parametric model to the acquired data.
5. A method according to claim 4 wherein the planning of the second condition for the object includes manipulating the third parametric model relative to the acquired data concerning the first characteristic of the object in its first condition.
6. A method according to claim 5 including displaying the acquired data concerning the first characteristic of the object in its first condition and manipulating a display of the third model relative to the displayed data.
7. A method according to any one of claims 3 to 6 including displaying an instance of the fourth model corresponding to the parameterised data therefor produced by means of the predictive model to display a prediction of the second characteristic of the object in the second object condition.
8. A method according to any one of claims 3 to 7 including acquiring data concerning the second characteristic of the object in its first condition and fitting the second model thereto.
9. A method according to claim 8 including modifying the data in accordance with the parameterised data produced by the fourth model and displaying the modified data to display a prediction of the second characteristic of the object in the second object condition.
10. A method according to any preceding claim wherein the first and second characteristics relate to the shape of the object.
11. A method according to any preceding claim wherein the first characteristic relates to information concerning an interior structure of the object.
12. A method according to any preceding claim wherein the second characteristic relates to information concerning an outer structure of the object.
13. A method according to any preceding claim carried out to predict the outcome of a medical operative procedure wherein the object comprises a patient, the first characteristic corresponds to the shape of underlying hard tissue structure of the patient and the second characteristic corresponds to the shape of a soft tissue structure that covers the hard tissue structure.
14. A method according to claim 13 wherein the first condition relates to the shape of the patient before carrying out the operative procedure and the second condition relates to the shape of the patient after carrying out the operative procedure.
15. A method according to claim 13 or 14 including acquiring data from a pre- operative lateral cephalogram concerning the shape of underlying hard tissue structure of the patient.
16. A method according to claim 13, 14 or 15 including acquiring data from a pre-operative 3D scan of the patient for the shape of the soft tissue structure.
17. A computer program to be run on a computer to perform a method as claimed in any preceding claim.
18. Apparatus configured to perform a method as claimed in any one of claims 1 to 16.
19. A computer software package to be run on a computer to predict changes for an object that has first and second characteristics that are distinct from but statistically correlated with one another, the package being operable to provide a statistical model configuration of at least one mode of variation of a first set of variables relating to the first characteristic of the object, and at least one mode of variation of a second set of variables relating to a second characteristic of the object, such that by planning a change to the first set of variables for the object, the model configuration is operable to predict a corresponding change to the second set of variables for the object from data corresponding to the planned change to the first set.
20. A package according to claim 19 wherein the statistical model configuration includes a first parametric model of the first characteristic of the object, a second parametric model of the second characteristic of the object and a predictive model that characterises a statistical correlation between the models, such that by fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, and planning a change to the condition for the object so as to provide parameterised data for the first characteristic of the object in a second different condition, the parameterised data and the predictive model are operable to provide parameterised data corresponding to a prediction of the change of the second characteristic of the object in the second condition.
21. A system for predicting shape changes for an object that has first and second shape characteristics that are distinct from but statistically correlated with one another, comprising a statistical model configuration including a first parametric model (10) of the first shape characteristics of the object in a first condition of the object, a second parametric model of the second shape characteristics of the object in the first object condition, a third parametric model of the first shape characteristics of the object in a second different object condition, a fourth parametric model of the second shape characteristics of the object in the second object condition, and a predictive model (14) that characterises a statistical correlation between the models, a model fitting system operable to fit the first and second models to the corresponding shape characteristics of an object in the first shape condition to provide parameterised shape data for the first and second shape characteristics of the object in the first condition, a planning input system operable to define a second shape condition for the object using the third model to provide parameterised shape data for the first shape characteristics of the object in the second condition, and a processor operable to utilise the parameterised shape data and the predictive model to provide parameterised data for the fourth model to predict the second shape characteristics of the object in the second condition.
22. A medical analysis tool comprising a processor operable to provide a statistical model configuration of at least one mode of variation of a first set of variables relating to shape characteristics of a relatively hard tissue part of a living body, and at least one mode of variation of a second set of variables relating to shape characteristics of a relatively soft tissue part of the body that overlies the relatively hard tissue part, an input operable to plan a change to the first set of variables for a patient, such that the processor utilises the model configuration to predict corresponding changes to the second set of variables for the patient, whereby to predict changes in shape of the soft tissue part that correspond to changes planned for the hard tissue part.
23. A tool according to claim 22 wherein the statistical model configuration includes a first parametric model of pre-operative hard tissue shape characteristics of the patient, a second parametric model of pre-operative soft tissue shape characteristics of the patient, a third parametric model of post-operative hard tissue shape characteristics of the patient, a fourth parametric model of post-operative soft tissue shape characteristics of the patient, and a predictive model that characterises a statistical correlation between the models.
24. A tool according to claim 23 including: a model fitting system operable to fit the first and second models to the corresponding pre-operative hard and soft tissue shape characteristics of a patient to provide parameterised shape data for the pre-operative hard and soft tissue shape characteristics, and a planning input system operable to define a post-operative hard tissue condition for the patient using the third model to provide parameterised shape data for the post-operative hard tissue condition.
25. A tool according to claim 24 wherein the processor is operable to utilise the parameterised shape data and the predictive model to provide parameterised data for the fourth model to predict post-operative soft tissue shape characteristics for the patient corresponding to the planned post-operative hard tissue condition.
26. A tool according to any one of claims 22 to 25 wherein the statistical model configuration includes at least one point distribution model.
27. A tool according to any one of claims 22 to 26 including a display device configured to provide a visual display of the predicted post-operative soft tissue condition.
28. A tool according to claim 27 wherein the display device is configured to provide a visual display of at least one of the pre-operative soft and hard tissue condition and the planned post-operative hard tissue condition.
29. A tool according to any one of claims 22 to 28 including an input to receive data corresponding to a 2D representation of the pre-operative hard tissue condition for the patient, and an input to receive data corresponding to a 3D representation of the pre-operative soft tissue condition for the patient.
30. A computer program to be run by the processor claimed in any one of claims 22 to 29 to provide said statistical model configuration.
31. A method of training a medical analysis tool as claimed in any one of claims 22 to 30 including acquiring a set of training data corresponding to the model configuration and determining modes of variation thereof.
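Claim 31's training step — acquiring a set of training shapes and determining their modes of variation — corresponds to constructing a point distribution model of the kind named in claim 26. A sketch under stated assumptions (synthetic 2D landmark sets, centroid-only alignment, modes found by PCA via SVD; a real pipeline would use annotated patient scans and full Procrustes alignment):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: 30 shapes of 10 two-dimensional landmarks each.
shapes = rng.normal(size=(30, 10, 2))

# Crude alignment: remove each shape's centroid. (A full point
# distribution model would use Procrustes alignment to also remove
# rotation and scale differences between training shapes.)
shapes = shapes - shapes.mean(axis=1, keepdims=True)
X = shapes.reshape(30, -1)

# Modes of variation: the mean shape plus the leading principal
# components of the aligned landmark vectors.
mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
n_modes = 5
modes = Vt[:n_modes]                       # principal modes of variation
variances = s[:n_modes] ** 2 / (len(X) - 1)

# Fitting the trained model to a shape = projecting onto the modes.
b = (X[0] - mean_shape) @ modes.T          # shape parameters
reconstructed = mean_shape + b @ modes     # model approximation of X[0]
```

The shape parameters `b` obtained this way are the "parameterised shape data" that the claimed predictive model correlates across the hard- and soft-tissue models.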
32. A system for predicting shape changes for an object substantially as hereinbefore described with reference to the accompanying drawings.
33. A method of predicting shape changes for an object substantially as hereinbefore described with reference to the accompanying drawings.
PCT/GB2001/002828 2000-06-30 2001-06-26 Predicting changes in characteristics of an object WO2002003304A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001266169A AU2001266169A1 (en) 2000-06-30 2001-06-26 Predicting changes in characteristics of an object

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0016151.3 2000-06-30
GB0016151A GB2364494A (en) 2000-06-30 2000-06-30 Predicting changes in characteristics of an object

Publications (2)

Publication Number Publication Date
WO2002003304A2 true WO2002003304A2 (en) 2002-01-10
WO2002003304A3 WO2002003304A3 (en) 2003-03-13

Family

ID=9894812

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2001/002828 WO2002003304A2 (en) 2000-06-30 2001-06-26 Predicting changes in characteristics of an object

Country Status (3)

Country Link
AU (1) AU2001266169A1 (en)
GB (1) GB2364494A (en)
WO (1) WO2002003304A2 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007034346A2 (en) * 2005-09-23 2007-03-29 Koninklijke Philips Electronics N.V. A method, a system and a computer program for image segmentation
WO2008115368A2 (en) * 2007-03-16 2008-09-25 Carestream Health, Inc. Digital system for plastic and cosmetic surgery
EP2471483A1 (en) * 2005-03-01 2012-07-04 Kings College London Surgical planning
WO2012117122A1 (en) * 2011-03-01 2012-09-07 Dolphin Imaging Systems, Llc System and method for generating profile change using cephalometric monitoring data
WO2012138624A2 (en) 2011-04-07 2012-10-11 Dolphin Imaging Systems, Llc System and method for three-dimensional maxillofacial surgical simulation and planning
US8417004B2 (en) 2011-04-07 2013-04-09 Dolphin Imaging Systems, Llc System and method for simulated linearization of curved surface
US8650005B2 (en) 2011-04-07 2014-02-11 Dolphin Imaging Systems, Llc System and method for three-dimensional maxillofacial surgical simulation and planning
EP2569755A4 (en) * 2010-05-21 2017-06-28 My Orthodontics Pty Ltd Prediction of post-procedural appearance
EP2680233A4 (en) * 2011-02-22 2017-07-19 Morpheus Co., Ltd. Method and system for providing a face adjustment image
CN116778576A (en) * 2023-06-05 2023-09-19 吉林农业科技学院 Time-space diagram transformation network based on time sequence action segmentation of skeleton

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7359748B1 (en) 2000-07-26 2008-04-15 Rhett Drugge Apparatus for total immersion photography
KR102475962B1 (en) * 2020-08-26 2022-12-09 주식회사 어셈블써클 Method and apparatus for simulating clinical image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998018970A1 (en) * 1996-10-30 1998-05-07 Voest-Alpine Industrieanlagenbau Gmbh Process for monitoring and controlling the quality of rolled products from hot-rolled processes

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
D. SHEN ET AL.: "A Hierarchical Deformable Model Using Statistical and Geometric Information" PROC. MATHEMATICAL METHODS IN BIOMEDICAL IMAGE ANALYSIS, 11 June 2000 (2000-06-11), pages 146-153, XP002222534 SC, USA *
KEEVE E ET AL: "Deformable modeling of facial tissue" BMES/EMBS CONFERENCE, 1999. PROCEEDINGS OF THE FIRST JOINT ATLANTA, GA, USA 13-16 OCT. 1999, PISCATAWAY, NJ, USA,IEEE, US, 13 October 1999 (1999-10-13), page 502 XP010357640 ISBN: 0-7803-5674-8 *
R. BOWDEN ET AL.: "Reconstructing 3D Pose and Motion from a Single Camera View" PROC. 9TH BRITISH MACHINE VISION CONFERENCE, vol. 2, 14 September 1998 (1998-09-14), pages 904-913, XP008010723 Southampton, UK cited in the application *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2471483A1 (en) * 2005-03-01 2012-07-04 Kings College London Surgical planning
WO2007034346A3 (en) * 2005-09-23 2008-12-04 Koninkl Philips Electronics Nv A method, a system and a computer program for image segmentation
WO2007034346A2 (en) * 2005-09-23 2007-03-29 Koninklijke Philips Electronics N.V. A method, a system and a computer program for image segmentation
WO2008115368A2 (en) * 2007-03-16 2008-09-25 Carestream Health, Inc. Digital system for plastic and cosmetic surgery
WO2008115368A3 (en) * 2007-03-16 2008-12-11 Carestream Health Inc Digital system for plastic and cosmetic surgery
EP2569755A4 (en) * 2010-05-21 2017-06-28 My Orthodontics Pty Ltd Prediction of post-procedural appearance
EP2680233A4 (en) * 2011-02-22 2017-07-19 Morpheus Co., Ltd. Method and system for providing a face adjustment image
US8711178B2 (en) 2011-03-01 2014-04-29 Dolphin Imaging Systems, Llc System and method for generating profile morphing using cephalometric tracing data
WO2012117122A1 (en) * 2011-03-01 2012-09-07 Dolphin Imaging Systems, Llc System and method for generating profile change using cephalometric monitoring data
WO2012138624A2 (en) 2011-04-07 2012-10-11 Dolphin Imaging Systems, Llc System and method for three-dimensional maxillofacial surgical simulation and planning
EP2693976A2 (en) * 2011-04-07 2014-02-12 Dolphin Imaging Systems, LLC System and method for three-dimensional maxillofacial surgical simulation and planning
EP2693976A4 (en) * 2011-04-07 2015-01-07 Dolphin Imaging Systems Llc System and method for three-dimensional maxillofacial surgical simulation and planning
US8650005B2 (en) 2011-04-07 2014-02-11 Dolphin Imaging Systems, Llc System and method for three-dimensional maxillofacial surgical simulation and planning
US8417004B2 (en) 2011-04-07 2013-04-09 Dolphin Imaging Systems, Llc System and method for simulated linearization of curved surface
CN116778576A (en) * 2023-06-05 2023-09-19 吉林农业科技学院 Time-space diagram transformation network based on time sequence action segmentation of skeleton

Also Published As

Publication number Publication date
GB2364494A (en) 2002-01-23
GB0016151D0 (en) 2000-08-23
WO2002003304A3 (en) 2003-03-13
AU2001266169A1 (en) 2002-01-14

Similar Documents

Publication Publication Date Title
KR102018565B1 (en) Method, apparatus and program for constructing surgical simulation information
JP7110120B2 (en) Method for estimating at least one of shape, position and orientation of dental restoration
EP3100236B1 (en) Method and system for constructing personalized avatars using a parameterized deformable mesh
Mollemans et al. Predicting soft tissue deformations for a maxillofacial surgery planning system: from computational strategies to a complete clinical validation
EP2537111B1 (en) Method and system for archiving subject-specific, three-dimensional information about the geometry of part of the body
US7929745B2 (en) Method and system for characterization of knee joint morphology
US7822246B2 (en) Method, a system and a computer program for integration of medical diagnostic information and a geometric model of a movable body
US8948484B2 (en) Method and system for automatic view planning for cardiac magnetic resonance imaging acquisition
WO2002003304A2 (en) Predicting changes in characteristics of an object
Desvignes et al. 3D semi-landmarks based statistical face reconstruction
EP1851721B1 (en) A method, a system and a computer program for segmenting a surface in a multidimensional dataset
Buchaillard et al. 3D statistical models for tooth surface reconstruction
CN113302660A (en) Method for visualizing dynamic anatomical structures
Tiddeman et al. Construction and visualisation of three-dimensional facial statistics
JP2022111705A (en) Leaning device, image processing apparatus, medical image pick-up device, leaning method, and program
JP2022111704A (en) Image processing apparatus, medical image pick-up device, image processing method, and program
CN112562070A (en) Craniosynostosis operation cutting coordinate generation system based on template matching
Zhang et al. Performance analysis of active shape reconstruction of fractured, incomplete skulls
Rhee et al. Soft-tissue deformation for in vivo volume animation
Baka et al. Correspondence free 3D statistical shape model fitting to sparse X-ray projections
Wierzbicki et al. Subject-specific models for image-guided cardiac surgery
JP7165541B2 (en) Volume data processing device, method and program
Zhang et al. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach
Soltaninejad et al. Automatic crown surface reconstruction using tooth statistical model for dental prosthesis planning
Magnenat-Thalmann et al. Modeling anatomical-based humans

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP