US20160206205A1 - Method and system for wound assessment and management - Google Patents
- Publication number
- US20160206205A1 (application US 15/083,081)
- Authority
- US
- United States
- Prior art keywords
- injury
- area
- wound
- interest
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0002—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
- A61B5/0015—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
- A61B5/0022—Monitoring a patient using a global network, e.g. telephone networks, internet
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0073—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by tomography, i.e. reconstruction of 3D images from 2D projections
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1077—Measuring of profiles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/44—Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
- A61B5/441—Skin evaluation, e.g. for skin disorder diagnosis
- A61B5/445—Evaluating skin irritation or skin trauma, e.g. rash, eczema, wound, bed sore
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6887—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
- A61B5/6898—Portable consumer electronic devices, e.g. music players, telephones, tablet computers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
- A61B2576/02—Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- A61B5/1079—Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30088—Skin; Dermal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- Chronic and complex wounds, including venous, diabetic, and pressure ulcers, surgical wounds, ostomy and other complex wounds, affect millions of patients in the United States alone. Billions of dollars are spent on the treatment of chronic wounds in the United States annually, including billions on wound care products. The cost of treating chronic wounds continues to grow year after year due to an aging population and the rising incidence of diabetes and obesity. This treatment cost has become a significant financial burden on individuals and on society.
- FIG. 1 illustrates an example of a system diagram according to one embodiment
- FIG. 2 illustrates another example of a system diagram according to one embodiment
- FIG. 3 illustrates an overview of the processing performed at the server using the information obtained from an imaging sensor and a structure sensor according to one embodiment
- FIG. 4 illustrates a flow diagram showing the process for wound segmentation using 2D image information
- FIG. 5 illustrates an exemplary wound in which a foreground area corresponding to the wound and a background area have been designated according to one embodiment
- FIG. 6 illustrates an implementation example according to one embodiment
- FIG. 7 illustrates a flow diagram showing the process for computing the 3D measurements from the structure sensor data and the obtained segmented image
- FIG. 8 illustrates an example of the calibration of the structure sensor with the imaging sensor
- FIG. 9 illustrates that the foreground mask is applied to the depth data such that only the depth information within the area is obtained
- FIG. 10 illustrates the process for classifying the tissue in the wound
- FIGS. 11A-F illustrate a comparison between manual and automatic segmentation
- FIG. 12 illustrates an example of the system interface according to one embodiment
- FIG. 13 illustrates an exemplary computing system according to one embodiment.
- the present disclosure describes a system for determining characteristics of a medical injury.
- the system includes one or more imaging sensors that obtain imaging information and topology information of an area of interest, and circuitry configured to determine a boundary of an injury portion within the imaging information of the area of interest, correlate the imaging information and the topology information, apply the boundary of the injury portion designated within the imaging information to the topology information to designate a mask area, and determine characteristics of the injury portion within the mask area based on the topology information and the imaging information.
- the system further includes an embodiment in which the circuitry is further configured to designate a representative background portion of the area of interest from the imaging information of the area of interest, designate a representative injury portion of the area of interest from the imaging information of the area of interest, and determine the boundary of the injury portion within the imaging information of the area of interest based on the designated representative background and injury portions.
- the system further includes an embodiment in which the circuitry is further configured to designate a representative injury portion of the area of interest from the imaging information of the area of interest based on user input or pixel characteristic differences.
- the system further includes an embodiment in which the circuitry is further configured to classify the injury portion within the mask area based on the imaging information.
- the system further includes an embodiment in which the circuitry is further configured to classify the injury portion within the mask area by being configured to divide the injury portion into tiles, to calculate a measure of central tendency for imaging values of each tile, and to classify each tile using injury type information generated by a previously trained classifier.
- the system further includes an embodiment in which the injury type information includes healthy, slough, and eschar tissue.
- the system further includes an embodiment in which the previously trained support vector machine generates the injury type information using circuitry configured to, for a set of annotated images, divide each image into tiles, to calculate a measure of central tendency for imaging values of each tile, to designate each tile according to an injury type, and to apply cross-validation using a separate test set.
- the system further includes an embodiment in which the characteristics of the injury portion within the mask area include depth, width, and length of the injury.
- the system further includes an embodiment in which the characteristics of the injury portion within the mask area include perimeter, area, and volume of the injury.
- the system further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by utilizing an automatic image segmentation algorithm.
- the system further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest.
- the system further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest and iterating over all contours.
- the system further includes an embodiment in which the medical injury is a wound.
- the device includes circuitry configured to determine a boundary of an injury portion within imaging information of an area of interest, correlate the imaging information and topology information obtained by one or more imaging sensors, apply the boundary of the injury portion designated within the imaging information to the topology information to designate a mask area, and determine characteristics of the injury portion within the mask area based on the topology information and the imaging information.
- the device further includes an embodiment in which the circuitry is further configured to designate a representative background portion of the area of interest from the imaging information of the area of interest, designate a representative injury portion of the area of interest from the imaging information of the area of interest, and determine the boundary of the injury portion within the imaging information of the area of interest based on the designated representative background and injury portions.
- the device further includes an embodiment in which the circuitry is further configured to designate a representative injury portion of the area of interest from the imaging information of the area of interest based on user input or pixel characteristic differences.
- the device further includes an embodiment in which the circuitry is further configured to classify the injury portion within the mask area based on the imaging information.
- the device further includes an embodiment in which the circuitry is further configured to classify the injury portion within the mask area by being configured to divide the injury portion into tiles, to calculate a measure of central tendency for imaging values of each tile, and to classify each tile using injury type information generated by a previously trained support vector machine.
- the device further includes an embodiment in which the injury type information includes healthy, slough, and eschar tissue.
- the device further includes an embodiment in which the previously trained support vector machine generates the injury type information using circuitry configured to, for a set of annotated images, divide each image into tiles, to calculate a measure of central tendency for imaging values of each tile, to designate each tile according to an injury type, and to apply cross-validation using a separate test set.
- the device further includes an embodiment in which the characteristics of the injury portion within the mask area include depth, width, and length of the injury.
- the device further includes an embodiment in which the characteristics of the injury portion within the mask area include perimeter, area and volume of the injury.
- the device further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by utilizing a grab cut algorithm.
- the device further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest.
- the device further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest and iterating over all contours.
- the device further includes an embodiment in which the medical injury is a wound.
- the method includes the steps of determining, using processing circuitry, a boundary of an injury portion within imaging information of an area of interest, correlating, using the processing circuitry, the imaging information and topology information obtained by one or more imaging sensors, applying, using the processing circuitry, the boundary of the injury portion designated within the imaging information to the topology information to designate a mask area, and determining, using the processing circuitry, characteristics of the injury portion within the mask area based on the topology information and the imaging information.
- the method further includes an embodiment including the further steps of designating a representative background portion of the area of interest from the imaging information of the area of interest, designating a representative injury portion of the area of interest from the imaging information of the area of interest, and determining the boundary of the injury portion within the imaging information of the area of interest based on the designated representative background and injury portions.
- the method further includes an embodiment including the further step of designating a representative injury portion of the area of interest from the imaging information of the area of interest based on user input or pixel characteristic differences.
- the method further includes an embodiment including the further step of classifying the injury portion within the mask area based on the imaging information.
- the method further includes an embodiment in which the injury portion within the mask area is classified by dividing the injury portion into tiles, calculating a measure of central tendency for imaging values of each tile, and classifying each tile using injury type information generated by a previously trained support vector machine.
- the method further includes an embodiment in which the injury type information includes healthy, slough, and eschar tissue.
- the method further includes an embodiment in which the previously trained support vector machine generates the injury type information using circuitry configured to, for a set of annotated images, divide each image into tiles, to calculate a measure of central tendency for imaging values of each tile, to designate each tile according to an injury type, and to apply cross-validation using a separate test set.
- the method further includes an embodiment in which the characteristics of the injury portion within the mask area include depth, width, and length of the injury.
- the method further includes an embodiment in which the characteristics of the injury portion within the mask area include perimeter, area and volume of the injury.
- the method further includes an embodiment including the further step of determining the boundary of the injury portion within the imaging information of the area of interest by utilizing a grab cut algorithm.
- the method further includes an embodiment including the further step of determining the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest.
- the method further includes an embodiment including the further step of determining the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest and iterating over all contours.
- the method further includes an embodiment in which the medical injury is a wound.
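The tile-based classification recited in the claims above can be sketched as follows. This is a minimal stand-in that replaces the disclosure's trained support vector machine with a nearest-centroid rule; the RGB centroid values and the tile size are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Per-class mean RGB values, assumed to have been learned in advance from
# annotated tiles. The disclosure trains a support vector machine on
# several hundred annotated images instead of using fixed centroids.
CENTROIDS = {
    "healthy": np.array([180.0, 60.0, 60.0]),    # granulation: reddish
    "slough":  np.array([200.0, 190.0, 120.0]),  # yellowish
    "eschar":  np.array([60.0, 40.0, 30.0]),     # dark brown/black
}

def classify_tiles(image, tile=16):
    """Divide an HxWx3 image into tiles, take each tile's mean colour
    (a measure of central tendency), and label it with the nearest
    class centroid -- a stand-in for the trained classifier."""
    h, w, _ = image.shape
    labels = {}
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            mean_rgb = image[y:y + tile, x:x + tile].reshape(-1, 3).mean(axis=0)
            best = min(CENTROIDS,
                       key=lambda c: np.linalg.norm(mean_rgb - CENTROIDS[c]))
            labels[(y, x)] = best
    return labels

# Synthetic 32x32 image: left half reddish (healthy), right half dark (eschar).
img = np.zeros((32, 32, 3))
img[:, :16] = [180, 60, 60]
img[:, 16:] = [60, 40, 30]
labels = classify_tiles(img)
```

In the disclosed system the per-tile features would instead be fed to the previously trained support vector machine, validated by cross-validation on a separate test set.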
- FIG. 1 illustrates a system for the volumetric assessment of chronic and complex wounds, such as but not limited to pressure ulcers, diabetic ulcers, arterial insufficiency ulcers, venous stasis ulcers, and burn wounds.
- Chronic wounds often require constant monitoring and attention. Beyond the visual information that can be obtained by a traditional single 2D camera, the three dimensional surface data is of particular clinical relevance.
- the capturing, analysis, and transmission of clinical data and imagery using mobile devices, a specialized camera with structured sensor capability, and a cloud/network infrastructure can provide significant improvements over existing technology.
- the present embodiments incorporate the above components to provide a complete platform to capture, evaluate, document, and communicate clinical information for the purpose of wound prevention and treatment.
- FIG. 1 illustrates a system diagram according to an embodiment.
- a mobile device 1 has, either attached/connected thereto or included therein, a 2D imaging sensor 2 and a structure sensor 3.
- the present embodiments are not limited to a mobile device 1 but may be any computing device capable of transferring information between sensors 2 and 3 and a network 20 .
- the mobile device 1 is connected to the server 10 via the network.
- the server 10 is also connected to portal 11 and informatics 12 .
- the mobile device 1 may be a cellular/wireless enabled portable device with processing capacity and embedded 2D photo taking function via an imaging sensor 2 .
- the mobile device 1 may include user interaction through touch screen, stylus, mouse, keyboard, or other means of input.
- the mobile device 1 may also have (3D) structure sensing functionality through connection with a structure sensor 3 connected to the mobile device 1 via an input/output port (e.g., a USB port) or other methods of connectivity such as via Bluetooth or near-field communication (“NFC”).
- the mobile device could be an iPad™ by Apple or a Galaxy Tab™ by Samsung, or any other suitable mobile or tablet device having input/output capability.
- the imaging sensor 2 may be a digital CCD sensor embedded in or included with the mobile device 1, such as an iSight™ camera by Apple.
- the imaging sensor 2 may have any suitable resolution (such as 640×480, for example) and any suitable number of pixels (such as 8 megapixels).
- the challenge with using a 2D camera to make a measurement of a wound is the lack of scaling information and distortion correction.
- photographers often place a reference object (ruler or object of known dimensions, such as a penny) in the same scene for pictures to be taken, so measurements can be derived later.
- this method requires the camera to be perpendicular to the measuring plane, and it is cumbersome and inaccurate.
- the structure sensor 3 may be a 3D imaging sensor such as the Structure Sensor™ developed by Occipital. Because the spatial relationship between the imaging sensor 2 and the structure sensor 3 is known, 2D images taken by the imaging sensor 2 can be mapped to the 3D structure data acquired by the structure sensor 3. Thus, when obtaining information about a wound, images are obtained by both the imaging sensor 2 and the structure sensor 3. In an alternative embodiment, imaging information may be obtained from only the structure sensor 3 or from only the imaging sensor 2.
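The 2D-to-3D mapping relies on the known spatial relationship between the two sensors. A minimal sketch of the underlying pinhole back-projection, assuming the depth data has already been registered to the colour camera, follows; the intrinsic parameters `FX`, `FY`, `CX`, `CY` are illustrative assumptions, and in practice they would come from calibrating the two sensors (see FIG. 8):

```python
import numpy as np

# Illustrative camera intrinsics for a 640x480 image (assumed values;
# real values come from calibration of the imaging/structure sensor pair).
FX, FY = 570.0, 570.0   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point

def backproject(u, v, depth_mm):
    """Map a 2D pixel (u, v) with its sensed depth to a 3D point in the
    camera frame using the pinhole model. With the fixed bracket mount,
    a known rigid transform would then relate the structure sensor frame
    to the colour camera frame; here the sensors are assumed registered."""
    z = depth_mm
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# A pixel at the principal point lies on the optical axis (x = y = 0).
p = backproject(320, 240, 500.0)
```

A pixel segmented in the 2D image can therefore be carried into 3D space, which is how the wound boundary from the 2D segmentation is applied to the structure data.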
- the structure sensor 3 enables accurate 3D measurement using the mobile device 1 without any further specialized device or any complicated process.
- the structure sensor 3 may be mounted to the mobile device 1 using a bracket.
- the structure sensor 3 could also be implemented by a 3D stereo camera.
- the structure sensor 3 may be an apparatus that can be used in tandem with an existing mobile device to enable the capture of stereoscopic images, an apparatus that can be used with an existing camera to generate structured light (such as taught by “three-dimensional scanner for hand-held phones”, J. Ryan Kruse, US20120281087A1), a miniaturized laser range scanner, or any potential apparatus that can be used in tandem with the mobile device 1 to capture the three dimensional information of the wound site.
- the structure sensor 3 may be mounted to the mobile device 1 using, for example, a bracket. Alternatively, the structure sensor 3 may be external and not connected to the mobile device 1 .
- an on-screen guide can be provided that directs the user to take the best possible picture.
- the on-screen guide can be displayed while the user is attempting to take a picture and will alert the user regarding whether the device is at the most optimal position for capturing the image.
- the guide can direct the user, for example, to move up, down, left, or right, in addition to providing information regarding lighting and tilt.
- other physiological information may also be an important part of the clinical diagnosis.
- This other physiological information may be measured with additional apparatuses together with a mobile device 1 .
- near-infrared thermal imaging can be used to detect heat to indicate infection
- hyper spectral imaging techniques can be adapted to a mobile platform and be used for measuring tissue perfusion and necrosis
- sensors can be used to detect and record odor
- other chemical sensors or bacterial detectors can be used in tandem with the current mobile platform.
- These additional sensors can be included in the same attachment as the structure sensor 3 or may be implemented as different structures.
- the additional sensors may also be implemented as external devices which connect to the mobile device 1 via wired or wireless communication.
- the server 10 may be implemented locally within a doctor's office or at a hospital, or may be implemented in the cloud via a server implementing a cloud service such as Amazon™ AWS. Any server handling private medical information may be implemented as a secured, HIPAA-compliant server.
- a HIPAA compliant server is one that is compliant with The Health Insurance Portability and Accountability Act of 1996 (HIPAA; Pub. L. 104-191, 110 Stat. 1936, enacted Aug. 21, 1996).
- the server may include a database, or be connected to a database, in which information regarding the wound is stored.
- the server 10 may execute processing for analyzing the information obtained by the mobile device 1 . The processing will be described in detail below.
- the practitioner portal 11 is connected to the server 10 and is designed to provide information to the practitioner regarding the patient's wound.
- the wound information can be combined and integrated with a practitioner's existing electronic health record for the patient.
- the informatics 12 provides a pathway for anonymized clinical data in the database to be accessed to support clinical research and health informatics to advance wound care.
- FIG. 2 illustrates a diagram illustrating the flow process of the system according to an embodiment.
- an image of the patient is captured by the mobile device 1 , thereby generating image information.
- This information may include information from the imaging sensor 2 and the structure sensor 3 or other sensors as is discussed above.
- 2D and 3D information of a wound is captured by the mobile device 1 .
- Table 1 shown below provides an example of the 3D measurements obtained by the structure sensor 3 .
- the practitioner 5 enters information via the mobile device to augment the imaging information obtained from the patient 4 .
- the practitioner can enter the information via a different interface from the mobile device 1 , which captures the imaging information.
- Table 2 shows an example of the relevant clinical parameters obtained from the practitioner.
- the information found in Tables 1 and 2 is transferred from the mobile device 1 (or other device) to the server 10, where the data is stored in a database together with previously generated wound parameters. Image analysis is then performed on the transferred information.
- the image analysis can be performed at the server 10 or, in an alternative embodiment, the image analysis can be performed locally on the mobile device 1 .
- FIG. 3 illustrates an overview of the processing performed at the server using the information obtained from the imaging sensor 2 and the structure sensor 3 .
- wound segmentation is performed using the 2D image information to obtain wound boundary information and the result is combined with the calculated 3D mesh information.
- the wound boundary segmented in the 2D image can be mapped into the 3D space, and the 3D measurements can be calculated from 3D structure data.
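Applying the 2D foreground mask to the depth data (FIG. 9) and deriving the 3D measurements of FIG. 7 might look like the following sketch. The `MM_PER_PX` scale and the rim-based skin-surface estimate are simplifying assumptions; the disclosure works with a full 3D mesh mapped from the wound boundary rather than a flat depth map:

```python
import numpy as np

MM_PER_PX = 0.5  # physical size of one depth pixel in mm (assumed; derived
                 # in practice from the calibrated 3D structure data)

def dilate(mask):
    """One-pixel 4-neighbourhood dilation of a boolean mask."""
    d = mask.copy()
    d[1:, :] |= mask[:-1, :]
    d[:-1, :] |= mask[1:, :]
    d[:, 1:] |= mask[:, :-1]
    d[:, :-1] |= mask[:, 1:]
    return d

def wound_measurements(depth_mm, mask):
    """Apply the 2D segmentation mask to the depth data so that only depth
    information within the wound area is used. The surrounding skin surface
    is approximated by the mean depth on a one-pixel rim just outside the
    wound boundary (a plane fit to the 3D rim would be more faithful)."""
    rim = dilate(mask) & ~mask
    skin = depth_mm[rim].mean()
    rel = depth_mm[mask] - skin                 # mm below the skin surface
    return {
        "area_mm2":   mask.sum() * MM_PER_PX ** 2,
        "volume_mm3": np.clip(rel, 0, None).sum() * MM_PER_PX ** 2,
        "depth_mm":   rel.max(),
    }

# Synthetic example: flat skin at 100 mm, a 4x4-pixel wound floor 3 mm deeper.
depth = np.full((10, 10), 100.0)
depth[3:7, 3:7] = 103.0
mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 3:7] = True
m = wound_measurements(depth, mask)
```

Length, width, and perimeter would follow analogously from the masked 3D points, e.g. from the extent and boundary of the mask scaled by the per-pixel size.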
- the server 10 is not limited to performing processing using imaging data from the imaging sensor 2 or 3D data from the structure sensor 3 .
- the server 10 may also perform the processing using saved images or images obtained remotely and forwarded to the server 10 .
- the image processing may also be performed on the mobile device 1; in this instance, image acquisition may be performed using the built-in camera, by downloading an image from the internet (a dedicated server), or, if an input/output interface such as a USB interface is available, from a USB flash drive.
- FIG. 4 shows a flow diagram showing the process for wound segmentation using the 2D image information.
- Image segmentation is performed with a semi-automatic algorithm.
- the processing of the image may include user or practitioner interaction during the image segmenting process. Alternatively, the processes may be performed without any user interaction such that any user input described in FIG. 4 would be replaced with automatic predictions or inputs.
- step S 100 the 2D image information is obtained at the server 10 or at the mobile device 1 .
- step S 101 the obtained image is scaled. As computation time depends on image size, in order to assure “real-time” (on-the-fly) segmentation, the image is scaled down.
- step S 102 the obtained image is cropped and the cropped area is stored. In particular, in order to have a closer look at the wound, the system zooms (e.g., ×2) into the image in order to focus on a predefined region in the center of the image. This assumes that the wound is placed approximately in the center of the image. The invisible part of the image defines the cropping region.
- the system could include a wound detection step at the cropping process to detect the location of wound for circumstances when the wound is not in the center of the image, for example.
- the grab cut algorithm is initialized with a rectangle defining the region of interest (ROI).
- the ROI is defined at an offset of 20 pixels from the border.
- the ROI could be defined as any number of different shapes or wound locations or any offset number of pixels from the border.
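The scaling, cropping, and ROI-initialisation steps above can be sketched as follows. The stride-based downscale is an illustrative simplification (a production implementation would use a proper interpolating resize), while the ×2 centre zoom and the 20-pixel ROI offset come from the description:

```python
import numpy as np

def prepare_for_grabcut(image, crop_zoom=2, roi_offset=20):
    """Scale the image down for on-the-fly segmentation (step S 101),
    centre-crop to zoom (e.g., x2) on the wound (step S 102), and build
    the rectangle that initialises the grab cut algorithm at a fixed
    offset from the border of the cropped image."""
    # S 101: naive 2x downscale by striding; real code would interpolate.
    scaled = image[::2, ::2]
    # S 102: keep the middle 1/crop_zoom of each dimension, assuming the
    # wound sits roughly at the centre of the image.
    h, w = scaled.shape[:2]
    ch, cw = h // (2 * crop_zoom), w // (2 * crop_zoom)
    cropped = scaled[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw]
    # ROI rectangle (x, y, width, height) inset 20 px from the border,
    # as used to initialise grab cut.
    rh, rw = cropped.shape[:2]
    roi = (roi_offset, roi_offset, rw - 2 * roi_offset, rh - 2 * roi_offset)
    return cropped, roi

img = np.zeros((480, 640, 3), dtype=np.uint8)
cropped, roi = prepare_for_grabcut(img)
```

The resulting `roi` tuple is in the `(x, y, w, h)` form that rectangle-initialised grab cut implementations (e.g., OpenCV's `cv2.grabCut` with `GC_INIT_WITH_RECT`) expect.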
- step S 104 the acquired image and directions for segmentation are shown to the user.
- the user is shown the cropped and zoomed image.
- the user first indicates parts of the object and parts of the background (or vice versa) using a finger or a stylus pen on a touchscreen, a mouse, or a touchpad.
- the wound (foreground area) and the background are identified by user interaction.
- FIG. 5 illustrates an exemplary wound in which the foreground area 51 corresponding to the wound and the background area 50 have been designated by the user.
- the area 53 has been designated as a background area to ensure that this area is not detected as part of the wound.
- the system distinguishes this area from the main wound based on its distance from the center of the picture, the existence of the other wound in the image, and/or the relative size of the wounds.
- the detection of the foreground and background portions of the image may be automatically performed at the server 10 or the mobile device 1 .
- detection of wounds is a difficult task that requires smart algorithms, such as the grab cut algorithm, which weighs the homogeneity of the wound against border detection, or machine learning methods, which learn to classify pixel regions as wound or not wound. Both methods can be adjusted to work automatically, without user interaction, to provide an initial result.
- the inherent difficulty of segmenting wounds often requires that the segmentation process utilize post-processing, which can be performed by the user or by another algorithm to correct under- and/or over-segmentation.
- the first exemplary algorithm is grab cut based, where the user is shown the wound and an overlay of a rectangle in the center of the image. The user is asked to align the wound inside the rectangle and take an image. Everything outside the rectangle is considered as background and everything inside the rectangle is assigned with probabilities of being background or foreground. After initialization of the grab cut algorithm, an initial result will be calculated automatically and shown to the user.
- the second exemplary approach is machine learning based, which requires the system to learn, from several hundred images, the background (skin, hand, or other objects) and the foreground (granulation, slough, or eschar tissue, etc.). After training the machine learning algorithm, a new image is divided into tiles and classified as background or foreground.
- Both exemplary approaches may also give the user the possibility to correct any errors and adjust the segmentation afterwards.
- the further image processing starts automatically after, or in response to, the definition of both the object (foreground) and the background.
- the further image processing is performed after the user provides an indication that the designation is complete.
- the further image processing begins in step S 105 in which the center pixel position inside the foreground definition is obtained.
- the grab cut algorithm can find several unconnected patches as wounds. When the user defines a region as wound, this indicates the user's intention to segment this region as the wound, so one pixel inside this region is used as the foreground pixel. The system can then iterate over the wound patches and discard the patches not including the foreground pixel. The iteration starts after S 107 , depending on whether more than one contour has been found.
- step S 106 the output mask is filtered for foreground pixels.
- the grab cut algorithm outputs a mask defining the background as 0, the foreground as 1, likely background as 2, and likely foreground as 3. This mask is filtered for only foreground pixels. All other assignments (0, 2, 3) are replaced by 0. The result is a binary mask which contains 1 for foreground pixels and 0 for background pixels.
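The mask filtering in step S106 reduces to a single comparison; a minimal sketch of the binarization described above:

```python
import numpy as np

def foreground_only(grabcut_mask):
    """Keep only definite-foreground pixels (value 1).

    All other GrabCut assignments (0 = background, 2 = likely background,
    3 = likely foreground) are replaced by 0, yielding a binary mask.
    """
    return (grabcut_mask == 1).astype(np.uint8)
```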
- step S 107 the contours in the foreground mask are detected and it is determined whether there is more than one contour in the foreground mask.
- step S 109 the system iterates over all contours and detects whether each contour includes a foreground pixel.
- the result of one segmentation iteration can be several foreground patches on the image.
- the binary mask is used to detect the contours of these patches.
- step S 109 it is determined whether the foreground pixel is inside one of the contours. If so, this contour is defined as the wound of interest; otherwise, the area is determined not to be the wound.
- For each contour that does not include a foreground pixel, the contour is filled with the background value in step S 111 .
- the system detects again for contours in the modified binary mask and addresses any additional contours by iterating again over the contours. In the case that only one contour is left, the flow proceeds to step S 112 .
- step S 112 the next iteration of the grab cut algorithm is performed. This process generates an initial segmentation that delineates the object border using a prominent polygon overlay.
- step S 113 it is determined whether the user is satisfied with the result. If not, the flow returns to step S 104 whereby the user can refine the segmentation by using additional indications for object or background or both until satisfied.
- step S 114 the resulting images are uncropped using the stored cropped area.
- the resulting image is output as a segmented image.
- This semi-automatic segmentation algorithm can be implemented using the grab cut algorithm as described in Carsten Rother, Vladimir Kolmogorov, and Andrew Blake, "'GrabCut': interactive foreground extraction using iterated graph cuts," ACM SIGGRAPH 2004 Papers (SIGGRAPH '04), Joe Marks (Ed.), 2004, DOI 10.1145/1186562.1015720, herein incorporated by reference, or another segmentation algorithm such as a graph cut algorithm.
- the user specifies the seed regions for wound and non-wound areas using simple finger swipes on a touchscreen.
- the segmentation result is displayed in real-time, and the user also has the flexibility to fine-tune the segmentation if needed.
- This algorithm requires minimal supervision and delivers a very fast performance.
- Exemplary evidence of the effectiveness of the image segmentation was obtained using a selection of 60 wound images which were used for validation. Five clinicians were asked to trace wound boundaries using a stylus on a Windows tablet running the Matlab program. The results were compared against the present wound border segmentation process using a normalized overlap score. As shown in FIG. 6 , the present implementation of the segmentation algorithm showed very good overlap with the experts' manual segmentations (overlap score of around 90%). The algorithm also reduced task time from around 40 seconds to about 4 seconds.
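The text does not give the formula for the "normalized overlap score"; one common choice with the described behavior is the Dice coefficient, sketched here as an assumption:

```python
import numpy as np

def overlap_score(mask_a, mask_b):
    """Dice coefficient between two binary masks (an assumed stand-in for
    the unspecified 'normalized overlap score'): 1.0 = identical, 0.0 = disjoint."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks overlap perfectly by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```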
- FIG. 7 shows a flow diagram of the process for computing the 3D measurements from the structure sensor 3 data and the segmented image obtained from the process shown in FIG. 4 .
- the structure sensor 3 data is topology information that provides information about the medical injury (wound).
- the segmentation can be mapped into 3D space, enabling the 3D wound model to be extracted.
- Dimensions such as width and length can be calculated by applying Principal Component Analysis (PCA) to the point cloud.
- the rotated rectangle of the minimum area enclosing the wound can be found.
- the width and the length of the rectangle define the extent of the wound i.e. width and length, respectively.
- the perimeter can be computed by adding the line segments delineating the wound boundary.
- a reference plane is first created using paraboloid fitting to close the 3D wound model.
- This reference plane follows the anatomical shape of the surrounding body curvature, representing what normal skin surface should be without the wound.
- the area of the wound can be calculated as the surface area of the reference plane enclosed within the wound boundary.
- the volume is the space encapsulated by the reference plane and the wound surface; depth is the maximum distance between these two surfaces.
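A possible least-squares paraboloid fit for the reference surface, with depth and volume derived as described. The quadratic form z = ax² + by² + cxy + dx + ey + f and the discrete volume summation (per-point gap times per-pixel area) are illustrative assumptions, not the exact method:

```python
import numpy as np

def fit_paraboloid(boundary_xyz):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*xy + d*x + e*y + f to the
    3D wound boundary points; this acts as the 'virtual skin' reference surface."""
    x, y, z = np.asarray(boundary_xyz, float).T
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_paraboloid(coeffs, x, y):
    a, b, c, d, e, f = coeffs
    return a * x * x + b * y * y + c * x * y + d * x + e * y + f

def depth_and_volume(coeffs, wound_xyz, pixel_area):
    """Depth = maximum gap between reference surface and wound surface;
    volume = summed gaps times the area each sample represents."""
    x, y, z = np.asarray(wound_xyz, float).T
    gap = np.clip(eval_paraboloid(coeffs, x, y) - z, 0.0, None)
    return float(gap.max()), float(gap.sum() * pixel_area)
```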
- Another important aspect is the aligning of the structure sensor 3 with the imaging sensor 2 .
- these two sensors have a rigid 6DOF transform between them because of the fixed mounting bracket.
- a chessboard target and a stereo calibration algorithm, such as the one found in OpenCV, are used to determine the transformation.
- the individual sensors are calibrated using a zero distortion model for the structure sensor 3 , and a distortion and de-centering model for the imaging sensor 2 .
- the external transformation is calculated between the two sensors using a stereo calibration function such as the OpenCV stereoCalibrate function. As shown in FIG.
- both sensors observe the same planar surface, allowing the computation of the extrinsic calibration, similar to that of calibrating a Kinect depth camera with its own RGB camera.
- an automated calibration method of a color camera with a depth camera can be used. With good calibration, the segmented wound border in the color image can be more accurately mapped onto the 3D structure data, and accurate wound dimensions can be computed.
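Once a calibration routine such as OpenCV's stereoCalibrate has produced the rotation R and translation T between the two sensors, mapping structure-sensor points into the color camera's frame is a rigid transform; a minimal sketch:

```python
import numpy as np

def depth_to_color_frame(points_depth, R, T):
    """Express structure-sensor 3D points in the color camera's frame using
    the calibrated rigid 6-DOF transform: X_color = R @ X_depth + T."""
    P = np.asarray(points_depth, float)          # (N, 3) points
    R = np.asarray(R, float)                     # (3, 3) rotation
    T = np.asarray(T, float).reshape(1, 3)       # (3,) translation
    return P @ R.T + T
```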
- step S 200 of FIG. 7 depth maps obtained by the structure sensor 3 , and the foreground mask corresponding to the segmented image, are obtained.
- the foreground mask is a binary image with the same size as the color image but encodes the wound as foreground by assigning 1 to pixels belonging to the wound and 0 otherwise (background).
- step S 201 all depth maps are combined into a single depth map. This step is performed to ensure a uniform depth map. In particular, depth maps are very noisy, so some depth values in the depth map are missing. By storing several consecutive depth maps, it is possible to fill in the gaps by combining all the depth maps into one depth map. Although this step removes the majority of missing depth values, some gaps can still remain; S 202 applies another method to fill in gaps using the depth values of neighboring pixels.
- step S 202 any missing depth information is filled by interpolating from neighboring values.
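Steps S201-S202 could be sketched as follows. The median merge and the single-pass 4-neighbour fill are illustrative choices, assuming a depth value of 0 encodes a missing reading:

```python
import numpy as np

def combine_depth_maps(depth_maps):
    """Merge consecutive depth frames: take the median of the valid (non-zero)
    samples at each pixel, which both denoises and fills many missing values."""
    stack = np.stack([np.asarray(d, float) for d in depth_maps])
    stack[stack == 0] = np.nan               # 0 encodes 'no reading'
    with np.errstate(all="ignore"):
        merged = np.nanmedian(stack, axis=0)
    return np.nan_to_num(merged, nan=0.0)    # pixels with no valid sample stay 0

def fill_gaps(depth):
    """Fill remaining holes with the mean of the valid 4-neighbours (one pass)."""
    d = depth.copy()
    holes = d == 0
    padded = np.pad(d, 1, mode="edge")
    neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
                      padded[1:-1, :-2], padded[1:-1, 2:]])  # left, right
    valid = neigh != 0
    means = neigh.sum(0) / np.maximum(valid.sum(0), 1)
    d[holes] = means[holes]
    return d
```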
- step S 203 the depth information within the area represented by the foreground mask is used for further processing.
- FIG. 9 illustrates that the foreground mask 32 is applied to the depth data such that only the depth information within the area is obtained.
- step S 204 the pixel position in the 2D image space is transformed to corresponding 3D coordinates in camera space using the depth map and intrinsic parameters of the structure sensor 3 .
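The transformation in step S204 is the standard pinhole back-projection using the per-pixel depth and the sensor's intrinsic parameters (focal lengths fx, fy and principal point cx, cy); the numeric values in the usage example are illustrative:

```python
def backproject(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth z to camera-space (X, Y, Z)
    using the pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy."""
    X = (u - cx) * z / fx
    Y = (v - cy) * z / fy
    return X, Y, z
```

For example, a pixel at the principal point maps to (0, 0, z) regardless of the focal length.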
- step S 205 the contour within the foreground mask area is determined and in step S 206 , the determined contour is projected into 3D space whereby the perimeter is calculated.
- This perimeter is the total length of the wound border (like the circumference of a circle), but in 3D space.
- step S 207 the minimal enclosing rectangle is projected into 3D space and the length and width of the wound are calculated.
- S 206 calculates the perimeter of the wound and S 207 calculates the maximum length and width of the wound.
- step S 208 the segmentation (foreground mask) is projected into 3D space and the area is calculated.
- step S 209 a parabolic shape is fit to estimate the surface using the contour depth information.
- step S 210 the deepest point within the wound is calculated and in step S 211 , the volume of the wound is calculated.
- step S 212 width, length, perimeter, area, volume, depth (deepest) and segmentation of the wound, determined from the previous steps, are output.
- the server 10 may also perform wound tissue classification processing. Alternatively, this processing can also be performed at the mobile device 1 .
- FIG. 10 illustrates the process for classifying the tissue in the wound. In this process, after extracting the wound border, the wound tissue can be classified into granulation, and/or slough, and/or eschar tissues.
- a tile-based multi-class Support Vector Machine (SVM) classifier can be used to automate the task.
- the SVM may be trained on 100 images of different wounds, each providing hundreds of tiles to learn the features for the classifier.
- Cross validation and grid search can be used to optimize the learning process and the quality of generalization. Experimental testing showed good overlap (Overlap Score >80%) between manual and automatic segmentation as is shown in FIGS. 11A-F .
- FIG. 11A shows the original wound image
- FIG. 11B shows the overlay with classification algorithm output
- FIG. 11C shows automatic classification for granulation
- FIG. 11D shows automatic classification for slough tissues.
- FIG. 11E shows manual classification by an expert for granulation
- FIG. 11F shows manual classification by an expert for slough tissues.
- step S 300 of FIG. 10 the color image obtained by the image sensor 2 is obtained along with the foreground mask.
- step S 301 the color information within the area of the color image corresponding to the foreground mask is obtained.
- step S 302 the color information within the mask is divided into tiles, such as square tiles. Tiles of other shapes may also be used.
- the features are calculated for each tile; the features are elements extracted from the tile's image data.
- step S 304 each tile is classified using a trained support vector machine. The training process is shown in steps S 400 -S 406 .
- step S 400 a set of images are obtained that annotate areas of healthy, slough, and/or eschar tissue in the wound. These images are then further processed in step S 401 so that the respective color information, within the annotated area, are linked to the respective annotation.
- step S 402 the images are each divided into square tiles. The features noted above are then calculated for each tile in step S 403 .
- step S 404 each tile is labeled according to the tissue class to which it belongs.
- step S 405 cross validation is applied to find the best parameters for the support vector machine, using a separate test set.
- step S 406 a SVM model is generated.
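Steps S400-S406 could be sketched with scikit-learn. The per-tile feature set (channel means and standard deviations) and the parameter grid are illustrative assumptions; the original does not enumerate them:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def tile_features(tile_rgb):
    """Illustrative per-tile features: mean and standard deviation per
    color channel (6 values); a stand-in for the unspecified feature set."""
    t = np.asarray(tile_rgb, float).reshape(-1, 3)
    return np.concatenate([t.mean(0), t.std(0)])

def train_tissue_svm(tiles, labels):
    """Grid-search an RBF SVM over C and gamma with cross-validation,
    mirroring steps S403-S406 (feature calculation, labeling, CV, model)."""
    X = np.stack([tile_features(t) for t in tiles])
    grid = {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01]}
    clf = GridSearchCV(make_pipeline(StandardScaler(), SVC()), grid, cv=3)
    clf.fit(X, labels)
    return clf  # exposes .predict() for classifying new tiles (step S304)
```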
- The model generated in step S 406 is used in step S 304 to classify each tile.
- step S 305 each tile is colored with a predetermined color according to the class to which it belongs, such that each class has a different color.
- step S 306 the classified image having the colors highlighted thereon is output.
- the server 10 transfers data back to the mobile device 1 for display to the practitioner.
- the data may also be transmitted to a different device in place of the mobile device 1 .
- FIG. 12 illustrates an example of the interface provided to the practitioner.
- the information populating this interface is either generated within the mobile device 1 or sent to the mobile device 1 or an alternative device from the server 10 .
- the interface displays the color image 60 and provides the user with the ability to mark the wound 51 by selecting toggle 61 and to mark the background 50 by selecting toggle 62 .
- Button 63 enables the user to erase markings. Once the markings 50 and 51 are placed, the user may select the segmentation button 64 , which initiates the processing shown in FIG. 4 . Once this processing is complete, the wound border information is returned to the interface to be displayed as border 72 .
- the user may then select the measurement button 65 which initiates the processing shown in FIG. 7 .
- the user may also select the wound tissue classification button 67 which classifies different portions of the wound based on the classes 68 using different colors. Once the button 67 is selected the processing shown in FIG. 10 is performed. The result of the processing is overlaid onto the wound.
- the data generated by the server 10 in addition to being forwarded to the mobile device 1 , is also stored in a database.
- the database may be local to the server 10 or in a remote or cloud location.
- as the data may be medically sensitive, the data may be stored in encrypted form and/or in a protected way to ensure that privacy is maintained.
- the data generated by the server 10 or locally at the mobile device 1 can be stored in the database.
- This data includes the wound image and the relevant clinical data both manually entered and automatically generated using the image processing methods.
- the patient's wound healing progress can be analyzed to output parameters similar to those listed on Table 3 shown below.
- this information together with other visual features of the wound can then be integrated to support clinical decisions.
- clinical information including the information listed on Tables 1, 2, and 3, can be accessed for reporting on the wound management or practitioner portal 11 .
- the information stored in the database can be incorporated into a patient's existing electronic health record managed by the practitioner.
- Using the management portal 11 , a practitioner (a physician, a nurse, a researcher, or anyone with the proper authorization and credentials) can access the information to provide wound management in a HIPAA-compliant manner. This portal can also be used for care coordination.
- the present embodiments provide significant advantages.
- the present system is able to effectively ensure that wounds are measured uniformly.
- the uniformity of the system makes consulting and cross-referencing much more feasible.
- Another advantage of the system is that the audit process for documentation with health insurers and Medicare/Medicaid fiscal intermediaries is significantly enhanced in the event of a chart audit for services rendered.
- the stored images substantiate treatment plans, skin substitute applications, and the progression (or lack) of healing.
- Another advantage of this system is the ability to supplement internal organizational audits regarding wound benchmark healing programs.
- the mobile device 1 could further include educational materials for patients and reference materials for care providers, such as a guide for classifying wounds and front-line treatment modalities, current CPT coding guidelines for skin substitutes, updates on pharmaceuticals relevant to wound management/infection control, and the ability to securely link to existing electronic health record (EHR) systems.
- the system is also able to enable a patient to grant access to medical history data when being transferred between different facilities, and to allow the care team to collectively contribute to patient medical records along with the patient themselves through self-monitoring and self-reporting.
- the present embodiments can also be applied to other applications besides wound measurement.
- the present embodiments can also be used as a preventive measure for populations with a higher risk of developing chronic wounds, such as diabetic patients who are prone to diabetic foot ulcers, immobilized patients who are prone to developing pressure ulcers, and patients with peripheral vascular diseases.
- a main reason for developing ulcers is the poor blood supply leading to ischemic tissue, which eventually develops into necrosis and ulcers.
- the present embodiments, incorporated with multi-spectrum imaging or other advanced imaging technology and/or image analysis algorithms, can be used to assess the blood supply or blood perfusion on body surfaces.
- a band-pass, band-stop, low-pass, or high-pass filter can be used to take images under different wavelengths of light, which can be used to analyze the blood oxygen content in the superficial layers of the skin, similar to the technology used in pulse oximetry.
- a light source in the near-infrared range with two different wavelengths can be used.
- the light filter together with the light source can be combined together and outfitted to an existing camera phone to enhance its multi-spectrum imaging capability for measuring blood perfusion.
- the present embodiments can be used to monitor conditions in the ears, nose, throat, mouth, and eyes.
- an auxiliary light source could be used.
- a light guide could also be used to take a picture of a location that is hard to reach, or in some cases or situations, stabilization and magnification could also be used.
- the present embodiments can be used to monitor for a disease condition that has a visible bulging, swelling, or protruding feature on a body surface, including but not limited to peripheral vascular disease, skin lumps, hernia, and hemorrhoids.
- the size and shape of those lesions are of clinical relevance and can be easily measured and tracked with the present embodiments.
- the present embodiments can be used for plastic reconstructive surgery or weight loss regimen, where changes of body shape can be measured and documented.
- the present embodiments can be used to monitor patient excretion, such as defecation and urination, the visual characteristics of which might have clinical relevance.
- the present embodiments can be used to monitor patient caloric intake based on volume and identification of food groups for a number of medical conditions in which both fluid and solid intake must be monitored.
- the present disclosure may be applied to methods and system for chronic disease management based on patient's self-monitoring and self-reporting using mobile devices.
- the disclosure utilizes a camera-enabled mobile device (with or without special add-on device, such as stereo camera, structured light, multi-spectrum light imager, or other light source or light guide) to obtain visual information for the site of interest, including but not limited to ostomy, wound, ulcer, skin conditions, dental, ear, nose, throat, and eyes.
- this task could be achieved by utilizing a webcam enabled laptop, or a combination of a camera and a computer system.
- Visual information, including but not limited to the size, shape, color, hue, saturation, contrast, texture, pattern, 3D surface, or volumetric information, is of great clinical value in monitoring disease progression.
- the present embodiments disclose a technique of using a camera-enabled mobile device (with or without additional apparatus to enhance, improve, or add imaging capabilities) to acquire images, analyze them, and extract clinically relevant features in the acquired image, which are later transmitted to a remote server. Alternatively, all the image analysis and feature recognition could be performed on the remote server site. Medical professionals or a computer-automated algorithm can access those patient data and determine the risk of deteriorating conditions, early warning signs for complications, or patients' compliance with treatments. In one embodiment, computer automation will serve as the first line of defense. This system is able to screen all patient data for early warning signs.
- a care provider and/or patients can be alerted when a certain indication is out of the normal range or trending unfavorably.
- Care providers can then evaluate the case, either confirm or dismiss the alert and take appropriate action to address the alert, including communicating with patients, adjusting therapy, or reminding patient to adhere to treatment.
- the system can be used to place targeted advertisement and make product recommendations.
- the depth information and the image information can be obtained by a single imaging device or sensor.
- the imaging device or sensor is able to capture depth information of the wound in addition to capturing an image of the wound.
- the computer processor can be implemented as discrete logic gates, as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Complex Programmable Logic Device (CPLD).
- An FPGA or CPLD implementation can be coded in VHDL, Verilog or any other hardware description language and the code can be stored in an electronic memory directly within the FPGA or CPLD, or as a separate electronic memory.
- the electronic memory can be non-volatile, such as ROM, EPROM, EEPROM or FLASH memory.
- the electronic memory can also be volatile, such as static or dynamic RAM, and a processor, such as a microcontroller or microprocessor, can be provided to manage the electronic memory as well as the interaction between the FPGA or CPLD and the electronic memory.
- the computer processor can execute a computer program including a set of computer-readable instructions that perform the functions described herein, the program being stored in any of the above-described non-transitory electronic memories and/or a hard disk drive, CD, DVD, FLASH drive or any other known storage media.
- the computer-readable instructions can be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with a processor, such as a Xeon processor from Intel of America or an Opteron processor from AMD of America, and an operating system, such as Microsoft VISTA, UNIX, Solaris, LINUX, Apple MAC-OSX and other operating systems known to those skilled in the art.
- the computer 1000 includes a bus B or other communication mechanism for communicating information, and a processor/CPU 1004 coupled with the bus B for processing the information.
- the computer 1000 also includes a main memory/memory unit 1003 , such as a random access memory (RAM) or other dynamic storage device (e.g., dynamic RAM (DRAM), static RAM (SRAM), and synchronous DRAM (SDRAM)), coupled to the bus B for storing information and instructions to be executed by processor/CPU 1004 .
- the memory unit 1003 can be used for storing temporary variables or other intermediate information during the execution of instructions by the CPU 1004 .
- the computer 1000 can also further include a read only memory (ROM) or other static storage device (e.g., programmable ROM (PROM), erasable PROM (EPROM), and electrically erasable PROM (EEPROM)) coupled to the bus B for storing static information and instructions for the CPU 1004 .
- the computer 1000 can also include a disk controller coupled to the bus B to control one or more storage devices for storing information and instructions, such as mass storage 1002 , and drive device 1006 (e.g., read-only compact disc drive, read/write compact disc drive, compact disc jukebox, and removable magneto-optical drive).
- the storage devices can be added to the computer 1000 using an appropriate device interface (e.g., small computer system interface (SCSI), integrated device electronics (IDE), enhanced-IDE (E-IDE), direct memory access (DMA), or ultra-DMA).
- the computer 1000 can also include special purpose logic devices (e.g., application specific integrated circuits (ASICs)) or configurable logic devices (e.g., simple programmable logic devices (SPLDs), complex programmable logic devices (CPLDs), and field programmable gate arrays (FPGAs)).
- the computer 1000 can also include a display controller coupled to the bus B to control a display, for displaying information to a computer user.
- the computer system includes input devices, such as a keyboard and a pointing device, for interacting with a computer user and providing information to the processor.
- the pointing device for example, may be a mouse, a trackball, or a pointing stick for communicating direction information and command selections to the processor and for controlling cursor movement on the display.
- a printer can provide printed listings of data stored and/or generated by the computer system.
- the computer 1000 performs at least a portion of the processing steps of the invention in response to the CPU 1004 executing one or more sequences of one or more instructions contained in a memory, such as the memory unit 1003 .
- Such instructions can be read into the memory unit from another computer readable medium, such as the mass storage 1002 or a removable media 1001 .
- One or more processors in a multi-processing arrangement can also be employed to execute the sequences of instructions contained in memory unit 1003 .
- hard-wired circuitry can be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
- the computer 1000 includes at least one computer readable medium 1001 or memory for holding instructions programmed according to the teachings of the invention and for containing data structures, tables, records, or other data described herein.
- Examples of computer readable media are compact discs (e.g., CD-ROM), hard disks, magneto-optical disks, PROMs (EPROM, EEPROM, flash EPROM), DRAM, SRAM, SDRAM, any other magnetic medium, or any other medium from which a computer can read.
- the present invention includes software for controlling the main processing unit 1004 , for driving a device or devices for implementing the invention, and for enabling the main processing unit 1004 to interact with a human user.
- software can include, but is not limited to, device drivers, operating systems, development tools, and applications software.
- Such computer readable media further includes the computer program product of the present invention for performing all or a portion (if processing is distributed) of the processing performed in implementing the invention.
- the computer code elements on the medium of the present invention can be any interpretable or executable code mechanism, including but not limited to scripts, interpretable programs, dynamic link libraries (DLLs), Java classes, and complete executable programs. Moreover, parts of the processing of the present invention can be distributed for better performance, reliability, and/or cost.
- Non-volatile media includes, for example, optical disks, magnetic disks, and magneto-optical disks, such as the mass storage 1002 or the removable media 1001 .
- Volatile media includes dynamic memory, such as the memory unit 1003 .
- Various forms of computer readable media can be involved in carrying out one or more sequences of one or more instructions to the CPU 1004 for execution.
- the instructions can initially be carried on a magnetic disk of a remote computer.
- An input coupled to the bus B can receive the data and place the data on the bus B.
- the bus B carries the data to the memory unit 1003 , from which the CPU 1004 retrieves and executes the instructions.
- the instructions received by the memory unit 1003 can optionally be stored on mass storage 1002 either before or after execution by the CPU 1004 .
- the computer 1000 also includes a communication interface 1005 coupled to the bus B.
- the communication interface 1005 provides a two-way data communication coupling to a network that is connected to, for example, a local area network (LAN), or to another communications network such as the Internet.
- the communication interface 1005 can be a network interface card to attach to any packet switched LAN.
- the communication interface 1005 can be an asymmetrical digital subscriber line (ADSL) card, an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of communications line.
- Wireless links can also be implemented.
- the communication interface 1005 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
- the network typically provides data communication through one or more networks to other data devices.
- the network can provide a connection to another computer through a local network (e.g., a LAN) or through equipment operated by a service provider, which provides communication services through a communications network.
- the local network and the communications network use, for example, electrical, electromagnetic, or optical signals that carry digital data streams, and the associated physical layer (e.g., CAT 5 cable, coaxial cable, optical fiber, etc.).
- the network can provide a connection to a mobile device such as a laptop computer or a cellular telephone.
- any processes, descriptions or blocks in flowcharts should be understood as representing modules, segments or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process, and alternate implementations are included within the scope of the exemplary embodiments of the present advancements in which functions can be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending upon the functionality involved, as would be understood by those skilled in the art.
Abstract
Described herein is a system and method for determining characteristics of a wound. The system includes a first imaging sensor that obtains imaging information of a wound area and a second imaging sensor that obtains topology information of the wound area. The system further includes circuitry that designates a representative background portion of the wound area from the imaging information of the wound area, that designates a representative wound portion of the wound area from the imaging information of the wound area, that determines the boundary of the wound portion within the imaging information of the wound area based on the designated representative background and wound portions, that correlates the imaging information and the topology information, that applies the boundary of the wound portion designated within the imaging information to the topology information to designate a mask area, and that determines characteristics of the wound portion within the mask area based on the topology information and the imaging information.
Description
- This application is a Divisional of U.S. application Ser. No. 14/491,794, filed Sep. 19, 2014, which is based upon and claims the benefit of priority under 35 U.S.C. §119(e) from U.S. Ser. No. 61/983,022, filed Apr. 23, 2014, and U.S. Ser. No. 61/911,162, filed Dec. 3, 2013, the entire contents of each of which are incorporated herein by reference.
- Chronic and complex wounds, including venous, diabetic, and pressure ulcers, surgical wounds, ostomy and other complex wounds, affect millions of patients in the United States alone. Billions of dollars are spent on the treatment of chronic wounds in the United States annually, including billions on wound care products alone. The cost of treating chronic wounds continues to grow year after year due to an aging population and the rising incidence of diabetes and obesity. The treatment cost for chronic wounds has become a significant financial burden to individuals and society.
- While advances in medical technology have helped bring new treatment modalities to the various wound types, there is a large unmet need for accurate and objective assessment of a wound and of wound healing progress, including wound depth, volume, area, and circumference measurements and classification of the wound. Objective assessment of wound healing progress is the basis for determining the effectiveness of a treatment pathway, and is critical in selecting the best treatment plan. However, in the clinical setting, such measurements are usually inaccurate, subjective, and inconsistent (performed manually with a ruler, transparency tracing, and Q-tips for depth). Because of the lack of standards and reliability, these measurements are often difficult to apply to clinical practice for medical decision making. Furthermore, the inconsistencies in wound assessment also render remote management or multidisciplinary wound care coordination difficult to implement. To address this issue, many wound measurement tools have been developed using structured light and stereo-photography. These systems, however, require specialized and expensive equipment, are difficult to use, lack integration with medical record management systems, and are overall inefficient at accurately assessing a wound. As a result, these wound measurement tools are not practical in point-of-care settings.
- Another unmet need in wound and many other disease management areas (including but not limited to dermatology, aesthetics, cosmetics, oncology, ophthalmology, and otolaryngology) is the growing need for an efficient, secure, and collaborative system for managing visual and other multimedia information.
- The “background” description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present invention.
- A more complete appreciation of the disclosed embodiments and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
-
FIG. 1 illustrates an example of a system diagram according to one embodiment; -
FIG. 2 illustrates another example of a system diagram according to one embodiment; -
FIG. 3 illustrates an overview of the processing performed at the server using the information obtained from an imaging sensor and a structure sensor according to one embodiment; -
FIG. 4 illustrates a flow diagram showing the process for wound segmentation using 2D image information; -
FIG. 5 illustrates an exemplary wound in which a foreground area corresponding to the wound and a background area have been designated according to one embodiment; -
FIG. 6 illustrates an implementation example according to one embodiment; -
FIG. 7 illustrates a flow diagram showing the process for computing the 3D measurements from the structure sensor data and the obtained segmented image; -
FIG. 8 illustrates an example of the calibration of the structure sensor with the imaging sensor; -
FIG. 9 illustrates that the foreground mask is applied to the depth data such that only the depth information within the area is obtained; -
FIG. 10 illustrates the process for classifying the tissue in the wound; -
FIGS. 11A-F illustrate a comparison between manual and automatic segmentation; -
FIG. 12 illustrates an example of the system interface according to one embodiment; and -
FIG. 13 illustrates an exemplary computing system according to one embodiment. - The present disclosure describes a system for determining characteristics of a medical injury. The system includes one or more imaging sensors that obtain imaging information and topology information of an area of interest, and circuitry configured to determine a boundary of an injury portion within the imaging information of the area of interest, correlate the imaging information and the topology information, apply the boundary of the injury portion designated within the imaging information to the topology information to designate a mask area, and determine characteristics of the injury portion within the mask area based on the topology information and the imaging information.
- The system further includes an embodiment in which the circuitry is further configured to designate a representative background portion of the area of interest from the imaging information of the area of interest, designate a representative injury portion of the area of interest from the imaging information of the area of interest, and determine the boundary of the injury portion within the imaging information of the area of interest based on the designated representative background and injury portions.
- The system further includes an embodiment in which the circuitry is further configured to designate a representative injury portion of the area of interest from the imaging information of the area of interest based on user input or pixel characteristic differences.
- The system further includes an embodiment in which the circuitry is further configured to classify the injury portion within the mask area based on the imaging information.
- The system further includes an embodiment in which the circuitry is further configured to classify the injury portion within the mask area by being configured to divide the injury portion into tiles, to calculate a measure of central tendency for imaging values of each tile, and to classify each tile using injury type information generated by a previously trained classifier.
- The system further includes an embodiment in which the injury type information includes healthy, slough, and eschar tissue.
- The system further includes an embodiment in which the previously trained support vector machine generates the injury type information using circuitry configured to, for a set of annotated images, divide each image into tiles, to calculate a measure of central tendency for imaging values of each tile, to designate each tile according to an injury type, and to apply cross-validation using a separate test set.
- The system further includes an embodiment in which the characteristics of the injury portion within the mask area include depth, width, and length of the injury.
- The system further includes an embodiment in which the characteristics of the injury portion within the mask area include perimeter, area, and volume of the injury.
- The system further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by utilizing an automatic image segmentation algorithm.
- The system further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest.
- The system further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest and iterating over all contours.
- The system further includes an embodiment in which the medical injury is a wound.
- Further described is an embodiment of a device for determining characteristics of a medical injury. The device includes circuitry configured to determine a boundary of an injury portion within imaging information of an area of interest, correlate the imaging information and topology information obtained by one or more imaging sensors, apply the boundary of the injury portion designated within the imaging information to the topology information to designate a mask area, and determine characteristics of the injury portion within the mask area based on the topology information and the imaging information.
- The device further includes an embodiment in which the circuitry is further configured to designate a representative background portion of the area of interest from the imaging information of the area of interest, designate a representative injury portion of the area of interest from the imaging information of the area of interest, and determine the boundary of the injury portion within the imaging information of the area of interest based on the designated representative background and injury portions.
- The device further includes an embodiment in which the circuitry is further configured to designate a representative injury portion of the area of interest from the imaging information of the area of interest based on user input or pixel characteristic differences.
- The device further includes an embodiment in which the circuitry is further configured to classify the injury portion within the mask area based on the imaging information.
- The device further includes an embodiment in which the circuitry is further configured to classify the injury portion within the mask area by being configured to divide the injury portion into tiles, to calculate a measure of central tendency for imaging values of each tile, and to classify each tile using injury type information generated by a previously trained support vector machine.
- The device further includes an embodiment in which the injury type information includes healthy, slough, and eschar tissue.
- The device further includes an embodiment in which the previously trained support vector machine generates the injury type information using circuitry configured to, for a set of annotated images, divide each image into tiles, to calculate a measure of central tendency for imaging values of each tile, to designate each tile according to an injury type, and to apply cross-validation using a separate test set.
- The device further includes an embodiment in which the characteristics of the injury portion within the mask area include depth, width, and length of the injury.
- The device further includes an embodiment in which the characteristics of the injury portion within the mask area include perimeter, area and volume of the injury.
- The device further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by utilizing a grab cut algorithm.
- The device further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest.
- The device further includes an embodiment in which the circuitry is configured to determine the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest and iterating over all contours.
- The device further includes an embodiment in which the medical injury is a wound.
- Also described is an embodiment of a method of determining characteristics of a medical injury. The method includes the steps of determining, using processing circuitry, a boundary of an injury portion within imaging information of an area of interest, correlating, using the processing circuitry, the imaging information and topology information obtained by one or more imaging sensors, applying, using the processing circuitry, the boundary of the injury portion designated within the imaging information to the topology information to designate a mask area, and determining, using the processing circuitry, characteristics of the injury portion within the mask area based on the topology information and the imaging information.
- The method further includes an embodiment including the further steps of designating a representative background portion of the area of interest from the imaging information of the area of interest, designating a representative injury portion of the area of interest from the imaging information of the area of interest, and determining the boundary of the injury portion within the imaging information of the area of interest based on the designated representative background and injury portions.
- The method further includes an embodiment including the further step of designating a representative injury portion of the area of interest from the imaging information of the area of interest based on user input or pixel characteristic differences.
- The method further includes an embodiment including the further step of classifying the injury portion within the mask area based on the imaging information.
- The method further includes an embodiment in which the injury portion within the mask area is further classified by dividing the injury portion into tiles, calculating a measure of central tendency for imaging values of each tile, and classifying each tile using injury type information generated by a previously trained support vector machine.
- The method further includes an embodiment in which the injury type information includes healthy, slough, and eschar tissue.
- The method further includes an embodiment in which the previously trained support vector machine generates the injury type information using circuitry configured to, for a set of annotated images, divide each image into tiles, to calculate a measure of central tendency for imaging values of each tile, to designate each tile according to an injury type, and to apply cross-validation using a separate test set.
- The method further includes an embodiment in which the characteristics of the injury portion within the mask area include depth, width, and length of the injury.
- The method further includes an embodiment in which the characteristics of the injury portion within the mask area include perimeter, area and volume of the injury.
- The method further includes an embodiment including the further step of determining the boundary of the injury portion within the imaging information of the area of interest by utilizing a grab cut algorithm.
- The method further includes an embodiment including the further step of determining the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest.
- The method further includes an embodiment including the further step of determining the boundary of the injury portion within the imaging information of the area of interest by detecting contours in the representative injury portion of the area of interest and iterating over all contours.
- The method further includes an embodiment in which the medical injury is a wound.
- Referring now to the drawings wherein like reference numbers designate identical or corresponding parts throughout the several views,
FIG. 1 illustrates a system for the volumetric assessment of chronic and complex wounds, such as but not limited to pressure ulcers, diabetic ulcers, arterial insufficiency ulcers, venous stasis ulcers, and burn wounds. Chronic wounds often require constant monitoring and attention. Beyond the visual information that can be obtained by a traditional single 2D camera, the three dimensional surface data is of particular clinical relevance. Thus, the capturing, analysis, and transmission of clinical data and imagery using mobile devices, a specialized camera with structured sensor capability, and a cloud/network infrastructure, can provide significant improvements over existing technology. The present embodiments incorporate the above components to provide a complete platform to capture, evaluate, document, and communicate clinical information for the purpose of wound prevention and treatment. -
FIG. 1 illustrates a system diagram according to an embodiment. In FIG. 1, there is included a mobile device 1 having either attached/connected thereto or included therein a 2D imaging sensor 2 and a structure sensor 3. The present embodiments are not limited to a mobile device 1 but may be any computing device capable of transferring information between the sensors 2, 3 and the network 20. The mobile device 1 is connected to the server 10 via the network. The server 10 is also connected to the portal 11 and the informatics 12. - The
mobile device 1 may be a cellular/wireless enabled portable device with processing capacity and an embedded 2D photo taking function via an imaging sensor 2. The mobile device 1 may accept user interaction through a touch screen, stylus, mouse, keyboard, or other means of input. The mobile device 1 may also have (3D) structure sensing functionality through connection with a structure sensor 3 connected to the mobile device 1 via an input/output port (e.g., a USB port) or other methods of connectivity such as via Bluetooth or near-field communication (“NFC”). The mobile device could be an iPad™ by Apple Computer or a Galaxy Tab™ by Samsung or any other suitable mobile or tablet device having input/output capability. - The
imaging sensor 2 may be a digital CCD sensor embedded or included with the mobile device 1, such as an iSight Camera™ by Apple Computer. The imaging sensor 2 may have any suitable resolution (such as 640×480, for example) and any suitable number of pixels (such as 8 megapixels). However, the challenge with using a 2D camera to make a measurement of a wound is the lack of scaling information and distortion correction. In practice, photographers often place a reference object (a ruler or an object of known dimensions, such as a penny) in the same scene so that measurements can be derived later. However, this method requires the camera to be perpendicular to the measuring plane, and is cumbersome and not accurate. Several techniques have been developed that address this issue, including an embodiment in which the structure sensor 3 is included and is used in addition to the imaging sensor 2, as well as an embodiment in which the structure sensor 3 is absent and an on-screen guide is utilized together with information from the imaging sensor 2 to provide scaling information and distortion correction. - The
structure sensor 3 may be a 3D imaging sensor such as the Occipital Structure Sensor™ developed by Occipital. Because the spatial relationship of the imaging sensor 2 and the structure sensor 3 is known, 2D images taken from the imaging sensor 2 can be mapped to the 3D structure data acquired by the structure sensor 3. Thus, when obtaining information of a wound, images are obtained by both the imaging sensor 2 and the structure sensor 3. In an alternative embodiment, imaging information may be obtained from only the structure sensor 3 or from only the imaging sensor 2. - The
structure sensor 3 enables accurate 3D measurement using the mobile device 1 without any further specialized device or any complicated process. The structure sensor 3 may be mounted to the mobile device 1 using a bracket. In addition to being implemented using the Occipital Structure Sensor™, the structure sensor 3 could also be implemented by a 3D stereo camera. Alternatively, the structure sensor 3 may be an apparatus that can be used in tandem with an existing mobile device to enable the capture of stereoscopic images, an apparatus that can be used with an existing camera to generate structured light (such as taught by “three-dimensional scanner for hand-held phones”, J. Ryan Kruse, US20120281087A1), a miniaturized laser range scanner, or any potential apparatus that can be used in tandem with the mobile device 1 to capture the three dimensional information of the wound site. - The
structure sensor 3 may be mounted to the mobile device 1 using, for example, a bracket. Alternatively, the structure sensor 3 may be external and not connected to the mobile device 1. - When obtaining the image information and 3D information using the
imaging sensor 2 and thestructure sensor 3, an on-screen guide can be provided which directs the image obtaining user to take the best possible picture. The on-screen guide can be displayed while the user is attempting to take a picture and will alert the user regarding whether the device is at the most optimal position for capturing the image. In addition, the guide can direct the user, for example, to more up or down or left or right in addition to information regarding lighting and tilt. - In addition to the 2D visual information and 3D measurements obtained by the
imaging sensor 2 and thestructure sensor 3, other physiological information may also be an important part of the clinical diagnosis. This other physiological information may be measured with additional apparatuses together with amobile device 1. For instance, near-infrared thermal imaging can be used to detect heat to indicate infection, hyper spectral imaging techniques can be adapted to a mobile platform and be used for measuring tissue perfusion and necrosis, sensors can be used to detect and record odor, and other chemical sensors or bacterial detectors can be used in tandem with the current mobile platform. These additional sensors can be included in the same attachment as thestructure sensor 3 or may be implemented as different structures. The additional sensors may also be implemented as external devices which connect to themobile device 1 via wired or wireless communication. - The
server 10 may be implemented locally within a doctor's office or at a hospital, or may be implemented in the cloud via a server implementing a cloud service such as Amazon™ AWS. Any server handling private medical information may be implemented as a secured HIPAA compliant server. A HIPAA compliant server is one that is compliant with The Health Insurance Portability and Accountability Act of 1996 (HIPAA; Pub. L. 104-191, 110 Stat. 1936, enacted Aug. 21, 1996). The server may include a database or be connected to a database in which information regarding the wound is stored. The server 10 may execute processing for analyzing the information obtained by the mobile device 1. The processing will be described in detail below. - The
practitioner portal 11 is connected to the server 10 and is designed to provide information to the practitioner regarding the patient's wound. The wound information can be combined and integrated with the practitioner's existing electronic health record for the patient. - The
informatics 12 provides a pathway for anonymized clinical data in the database to be accessed to support clinical research and health informatics to advance wound care. -
FIG. 2 is a diagram illustrating the flow process of the system according to an embodiment. In the process, an image of the patient is captured by the mobile device 1, thereby generating image information. This information may include information from the imaging sensor 2 and the structure sensor 3, or from other sensors as discussed above. - Using the
mobile device 1 and the associated imaging sensor 2 and structure sensor 3 functionalities, 2D and 3D information of a wound is captured by the mobile device 1. Table 1 below provides an example of the 3D measurements obtained by the structure sensor 3. -
TABLE 1
Width (cm)
Length (cm)
Circumference (cm)
Area (cm²)
Depth (deepest, cm)
Volume (cm³)
Segmentation (granulation, slough, necrotic): Percent (%), Area (cm²)
Measurement Date - The
practitioner 5 enters information via the mobile device to augment the imaging information obtained from the patient 4. In an alternative embodiment, the practitioner can enter the information via a different interface from the mobile device 1, which captures the imaging information. - Table 2 shows an example of the relevant clinical parameters obtained from the practitioner.
-
TABLE 2
Location (figure)
Type: Traumatic, Pressure ulcer (Stage I/II/III/IV/unstageable), Venous stasis, Diabetic ulcer, Surgical wound, Burn, Other
Pain: At rest, With Movement, None, Scale (0-10)
Characteristics: Undermining (Yes/No, direction, length), Tunneling (Yes/No, direction, length), Odor (Yes/No)
Drainage: Serous, Serosanguineous, Purulent, Amount (min/mod/large)
Peri-Wound skin: Edema, Erythema, Excoriation, Maceration
Co-morbidities
Demographics
Current Treatment: Topical Agent, Irrigation, Negative Pressure, Secondary intention, Debridement (Surgical/chemical), Other
Clinical: Blood glucose level, Doppler signal, Ankle brachial index - The
server 10 where the data is stored in a database together with previously generated wound parameters. Image analysis is then performed on the transferred information. The image analysis can be performed at theserver 10 or, in an alternative embodiment, the image analysis can be performed locally on themobile device 1. -
FIG. 3 illustrates an overview of the processing performed at the server using the information obtained from the imaging sensor 2 and the structure sensor 3. In particular, wound segmentation is performed using the 2D image information to obtain wound boundary information, and the result is combined with the calculated 3D mesh information. Thus, the wound boundary segmented in the 2D image can be mapped into the 3D space, and the 3D measurements can be calculated from the 3D structure data. The server 10 is not limited to performing processing using imaging data from the imaging sensor 2 and the structure sensor 3. The server 10 may also perform the processing using saved images or images obtained remotely and forwarded to the server 10. As noted above, the image processing may also be performed on the mobile device 1; in this instance, the image acquisition may be performed by using the built-in camera, by downloading an image from the internet (dedicated server) or, if available through an input/output interface such as a USB interface, by using a USB flash drive. -
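By way of illustration, the combination of a segmented 2D boundary with the 3D structure data can be sketched as below. This is a minimal sketch, assuming a per-pixel depth map already aligned with the 2D image, a known physical pixel size at the wound plane, and an approximately flat surrounding skin surface; the function name and units are illustrative, not the system's actual interface.

```python
import numpy as np

def wound_measurements(mask, depth_mm, mm_per_px):
    """Apply the segmented 2D mask to the depth map and derive 3D measures.

    mask      -- binary array, 1 inside the wound boundary
    depth_mm  -- per-pixel distance from the sensor, in millimeters
    mm_per_px -- assumed physical size of one pixel at the wound plane
    """
    ys, xs = np.nonzero(mask)
    length = (ys.max() - ys.min() + 1) * mm_per_px
    width = (xs.max() - xs.min() + 1) * mm_per_px
    area = mask.sum() * mm_per_px ** 2
    # Approximate the intact skin surface by the depth just outside the
    # boundary, then measure wound depth relative to that surface.
    rim = np.median(depth_mm[mask == 0])
    rel = np.clip(depth_mm - rim, 0, None) * mask
    deepest = rel.max()
    volume = rel.sum() * mm_per_px ** 2
    return {"length_mm": length, "width_mm": width, "area_mm2": area,
            "depth_mm": deepest, "volume_mm3": volume}
```

A synthetic 4x4-pixel "wound" that is 5 mm deeper than its surroundings would yield a 16 mm² area and an 80 mm³ volume under these assumptions.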
FIG. 4 shows a flow diagram showing the process for wound segmentation using the 2D image information. Image segmentation is performed with a semi-automatic algorithm. The processing of the image may include user or practitioner interaction during the image segmenting process. Alternatively, the process may be performed without any user interaction, such that any user input described in FIG. 4 would be replaced with automatic predictions or inputs. - In step S100, the 2D image information is obtained at the
server 10 or at the mobile device 1. In step S101, the obtained image is scaled. As computation time depends on image size, the image is scaled down in order to assure “real-time” (on-the-fly) segmentation. In step S102, the obtained image is cropped and the cropped area is stored. In particular, in order to have a closer look at the wound, the system zooms (e.g., ×2) into the image in order to focus on a predefined region in the center of the image. This assumes that the image places the wound approximately in its center. The invisible part of the image defines the cropping region. Alternatively, the system could include a wound detection step in the cropping process to detect the location of the wound for circumstances when the wound is not in the center of the image, for example. In step S103, the grab cut algorithm is initialized with a rectangle defining the region of interest (ROI). The ROI is defined at an offset of 20 pixels from the border. Alternatively, the ROI could be defined as any number of different shapes or wound locations, or at any offset number of pixels from the border. - In step S104, the acquired image and directions for segmentation are shown to the user. Specifically, the user is shown the cropped and zoomed image. The user first indicates parts of the object and parts of the background, or vice versa, using his/her finger or a stylus pen on a touchscreen, the mouse, or a touchpad. Thus, the wound (foreground area) and the background are identified by user interaction.
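Steps S101 through S103 can be sketched as follows. This is a simplified illustration: the target width, the integer-stride downscaling, and the fixed ×2 center crop are assumptions standing in for the actual scaling logic, and the ROI uses the 20-pixel offset described above.

```python
import numpy as np

def prepare_for_segmentation(img, target_w=400, offset=20):
    """Sketch of steps S101-S103: scale down, zoom x2 on the image center,
    and define the grab-cut ROI rectangle offset pixels in from the border."""
    h, w = img.shape[:2]
    # S101: scale down by an integer stride for on-the-fly segmentation
    stride = max(1, w // target_w)
    img = img[::stride, ::stride]
    h, w = img.shape[:2]
    # S102: zoom x2 by cropping the central half of the image
    # (assumes the wound is approximately centered)
    y0, x0 = h // 4, w // 4
    crop = img[y0:y0 + h // 2, x0:x0 + w // 2]
    # S103: ROI rectangle (x, y, width, height) offset from the crop border
    ch, cw = crop.shape[:2]
    roi = (offset, offset, cw - 2 * offset, ch - 2 * offset)
    return crop, roi
```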
FIG. 5 illustrates an exemplary wound in which the foreground area 51 corresponding to the wound and the background area 50 have been designated by the user. As is illustrated in FIG. 5, the area 53 has been designated as a background area to ensure that this area is not detected as part of the wound. However, in an alternative embodiment, the system distinguishes this area from the main wound based on its distance from the center of the picture, on the existence of the other wound in the image, and/or on the relative size between wounds. As noted previously, the detection of the foreground and background portions of the image may be automatically performed at the server 10 or the mobile device 1. Given the diversity of wound shape, color, and place, the patient's position, and other objects inside the image (blanket, hand, etc.), detection of wounds is a difficult task that requires smart algorithms, such as the grab cut algorithm, which weighs homogeneity of the wound against border detection, or machine learning methods, which learn to classify pixel regions as wound or not wound. Both methods can be adjusted to work automatically, without user interaction, to provide an initial result. The inherent difficulty of segmenting wounds often requires that the segmentation process utilize post-processing, which can be performed by the user, to correct under- and/or over-segmentation, or by another algorithm. - Described here are two exemplary algorithms that perform automatic wound image segmentation. Other algorithms could also be used. The first exemplary algorithm is grab cut based, where the user is shown the wound and an overlay of a rectangle in the center of the image. The user is asked to align the wound inside the rectangle and take an image. Everything outside the rectangle is considered background, and everything inside the rectangle is assigned probabilities of being background or foreground.
After initialization of the grab cut algorithm, an initial result will be calculated automatically and shown to the user.
- The second exemplary approach is machine learning based, which requires the system to learn, from several hundred images, the background (skin, hand, or other objects) and the foreground (granulation, slough, or eschar tissue, etc.). After training the machine learning algorithm, a new image is divided into tiles and classified as background or foreground.
- Both exemplary approaches may also give the user the possibility to correct any errors and adjust the segmentation afterwards.
- The further image processing starts automatically after, or in response to, the definition of both the object (foreground) and the background. Alternatively, the further image processing is performed after the user provides an indication that the designation is complete.
- The further image processing begins in step S105 in which the center pixel position inside the foreground definition is obtained. The grab cut algorithm can find several unconnected patches as wounds. When the user defines a region as wound, this indicates the user's intention to segment this region as the wound, and the system thereby uses one pixel inside this region as the foreground pixel. The system can then iterate over the wound patches and discard the patches not including the foreground pixel. The iteration starts after S107, depending on whether more than one contour has been found.
- In step S106, the output mask is filtered for foreground pixels. After each iteration, the grab cut algorithm outputs a mask defining the background as 0, the foreground as 1, likely background as 2, and likely foreground as 3. This mask is filtered for only foreground pixels. All other assignments (0, 2, 3) are replaced by 0. The result is a binary mask which contains 1 for foreground pixels and 0 for background pixels.
- In step S107, the contours in the foreground mask are detected and it is determined whether there is more than one contour in the foreground mask. When more than one contour is found, in step S109, the system iterates over all contours and detects whether each contour includes a foreground pixel. As described above, the result of one segmentation iteration can be several foreground patches on the image. In S107 the binary mask is used to detect the contours of these patches. In step S109, it is determined whether the foreground pixel is inside one of the contours; if so, this contour is defined as the wound of interest; otherwise, if the foreground pixel is not inside the contour, that area is determined not to be the wound.
- For each contour that does not include a foreground pixel, the contour is filled with the background value in step S111. To ensure that only one contour enters S112, the system again detects contours in the modified binary mask and addresses any additional contours by iterating over the contours again. When only one contour is left, the flow proceeds to step S112.
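Steps S107-S111 can be approximated with a flood fill from the foreground seed pixel instead of explicit contour detection — a simplified sketch of the idea, not the contour-based implementation described above:

```python
from collections import deque

def keep_seed_patch(mask, seed):
    """Sketch of S107-S111: flood-fill from the user's foreground seed
    pixel and implicitly fill every other foreground patch with the
    background value (0), leaving only the patch of interest."""
    h, w = len(mask), len(mask[0])
    keep = [[0] * w for _ in range(h)]
    sy, sx = seed
    if mask[sy][sx] != 1:
        return keep  # seed does not lie on a foreground pixel
    queue = deque([seed])
    keep[sy][sx] = 1
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 1 and not keep[ny][nx]:
                keep[ny][nx] = 1
                queue.append((ny, nx))
    return keep

# two foreground patches; only the one containing the seed survives
mask = [[1, 1, 0, 1],
        [0, 1, 0, 1],
        [0, 0, 0, 0]]
result = keep_seed_patch(mask, (0, 0))
```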
- In step S112, the next iteration of the grab cut algorithm is performed. This process generates an initial segmentation that delineates the object border using a prominent polygon overlay.
- In step S113, it is determined whether the user is satisfied with the result. If not, the flow returns to step S104 whereby the user can refine the segmentation by using additional indications for object or background or both until satisfied.
- In step S114, the resulting images are uncropped using the stored cropped area. The resulting image is output as a segmented image.
- This semi-automatic segmentation algorithm can be implemented using the grab cut algorithm as described by Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. 2004. “GrabCut”: interactive foreground extraction using iterated graph cuts. In ACM SIGGRAPH 2004 Papers (SIGGRAPH '04), Joe Marks (Ed.). ACM, New York, N.Y., USA, 309-314. DOI=10.1145/1186562.1015720, herein incorporated by reference, or another segmentation algorithm such as a graph cut algorithm. In this process, the user specifies the seed regions for wound and non-wound areas using simple finger swipes on a touchscreen. The segmentation result is displayed in real-time, and the user also has the flexibility to fine-tune the segmentation if needed. This algorithm requires minimal supervision and delivers very fast performance.
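Segmentation quality is assessed below with a normalized overlap score. The patent does not define the score, so the sketch below assumes the Jaccard index (intersection over union) of two same-sized binary masks; other measures such as the Dice coefficient are also common:

```python
def overlap_score(mask_a, mask_b):
    """Normalized overlap of two binary masks as the Jaccard index:
    |A intersect B| / |A union B|, a value in [0, 1]."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a and b   # 1 only where both masks are foreground
            union += a or b    # 1 where either mask is foreground
    return inter / union if union else 1.0

expert = [[1, 1, 0], [1, 1, 0]]   # hypothetical manual tracing
auto   = [[1, 1, 0], [1, 0, 0]]   # hypothetical automatic result
score = overlap_score(expert, auto)  # 3 shared pixels / 4 total
```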
- Exemplary evidence of the effectiveness of the image segmentation was obtained using a selection of 60 wound images which were used for validation. Five clinicians were asked to trace wound boundaries using a stylus on a Windows tablet running a Matlab program. The results were compared against the present wound border segmentation process using a normalized overlap score. As shown in
FIG. 6, the present implementation of the segmentation algorithm showed very good overlap with the experts' manual segmentations (overlap score of around 90%). The algorithm also reduced task time from around 40 seconds to <4 seconds. -
FIG. 7 shows a flow diagram showing the process for computing the 3D measurements from the structure sensor 3 data and the obtained segmented image from the process shown in FIG. 4. The structure sensor 3 data is topology information that provides information about the medical injury (wound). In particular, once the wound border is segmented in the 2D color image, the segmentation can be mapped into 3D space, enabling the 3D wound model to be extracted. Dimensions such as width and length can be calculated by applying Principal Component Analysis (PCA) to the point cloud. Alternatively, the rotated rectangle of the minimum area enclosing the wound can be found. The width and the length of the rectangle define the extent of the wound, i.e., its width and length, respectively. The perimeter can be computed by adding the line segments delineating the wound boundary. For area, volume, and depth, a reference plane is first created using paraboloid fitting to close the 3D wound model. This reference plane follows the anatomical shape of the surrounding body curvature, representing what the normal skin surface would be without the wound. The area of the wound can be calculated as the surface area of the reference plane enclosed within the wound boundary. The volume is the space encapsulated by the reference plane and the wound surface; depth is the maximum distance between these two surfaces. These automated algorithms can be implemented, for instance, in OpenCV. - Another important aspect is the aligning of the
structure sensor 3 with the imaging sensor 2. In one embodiment, these two sensors have a rigid 6DOF transform between them because of the fixed mounting bracket. In this embodiment, a chessboard target and a stereo calibration algorithm, such as is found in OpenCV, are used to determine the transformation. To do so, the individual sensors are calibrated using a zero distortion model for the structure sensor 3, and a distortion and de-centering model for the imaging sensor 2. Then, with all internal sensor parameters fixed (including focal length), the external transformation is calculated between the two sensors using a stereo calibration function such as the OpenCV stereoCalibrate function. As shown in FIG. 8, both sensors observe the same planar surface, allowing the computation of the extrinsic calibration, similar to that of calibrating a Kinect depth camera with its own RGB camera. Alternatively, an automated calibration method of a color camera with a depth camera can be used. With good calibration, the segmented wound border in the color image can be more accurately mapped onto the 3D structure data, and accurate wound dimensions can be computed. - In step S200 of
FIG. 7, depth maps obtained by the structure sensor 3, and the foreground mask corresponding to the segmented image, are obtained. The foreground mask is a binary image with the same size as the color image, but it encodes the wound as foreground by assigning 1 to pixels belonging to the wound and 0 otherwise (background). - In step S201, all depth maps are combined into a single depth map. This step is performed to ensure a uniform depth map. In particular, depth maps are very noisy, and therefore some depth values in the depth map are missing. By storing several consecutive depth maps, it is possible to fill in the gaps by combining all the depth maps into one depth map. Although this step removes the majority of missing depth values, some gaps can still remain. S202 applies another method to fill in gaps using the depth values of neighboring pixels.
- In step S202, any missing depth information is filled by interpolating from neighboring values.
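Steps S201 and S202 can be sketched as follows. Missing depth is encoded here as None; how the real sensor encodes holes (often as 0) is an assumption, as is the use of averaging rather than, e.g., taking the most recent valid sample:

```python
def combine_depth_maps(maps):
    """S201: merge consecutive noisy depth maps by averaging the valid
    (non-None) samples at each pixel; a pixel stays None only if it is
    missing in every map."""
    h, w = len(maps[0]), len(maps[0][0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [m[y][x] for m in maps if m[y][x] is not None]
            if vals:
                out[y][x] = sum(vals) / len(vals)
    return out

def fill_gaps(depth):
    """S202: fill remaining holes with the mean of the valid 4-neighbors."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if depth[y][x] is None:
                nbrs = [depth[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and depth[ny][nx] is not None]
                if nbrs:
                    out[y][x] = sum(nbrs) / len(nbrs)
    return out

maps = [[[10, None], [None, 20]],
        [[12, None], [None, 22]]]
combined = combine_depth_maps(maps)
filled = fill_gaps(combined)
```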
- In step S203, the depth information within the area represented by the foreground mask is used for further processing. For instance,
FIG. 9 illustrates that the foreground mask 32 is applied to the depth data such that only the depth information within the area is obtained. - In step S204, the pixel position in the 2D image space is transformed to corresponding 3D coordinates in camera space using the depth map and intrinsic parameters of the
structure sensor 3. - In step S205, the contour within the foreground mask area is determined and in step S206, the determined contour is projected into 3D space, whereby the perimeter is calculated. This perimeter is the total length of the wound border (like the circumference of a circle), but in 3D space.
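The 2D-to-3D transform of step S204 is the standard pinhole back-projection using the sensor intrinsics; a sketch, with focal lengths fx, fy (in pixels) and principal point (cx, cy) assumed known from calibration:

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Step S204: back-project pixel (u, v) with depth z (in the sensor's
    unit, e.g. millimeters) into 3D camera-space coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# hypothetical example: a pixel 100 px right of the principal point,
# 500 mm from the sensor, with fx = fy = 500 px
point = pixel_to_camera(420, 240, 500.0, 500.0, 500.0, 320.0, 240.0)
```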
- In step S207, the minimal enclosing rectangle is projected into 3D space and the length and width of the wound are calculated. Thus, S206 calculates the perimeter of the wound and S207 calculates the maximum length and width of the wound.
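The perimeter calculation of step S206 — summing the 3D line segments along the projected contour — can be sketched as:

```python
import math

def perimeter_3d(boundary):
    """Step S206: sum of 3D segment lengths along a closed wound
    boundary given as an ordered list of (x, y, z) points."""
    total = 0.0
    for i, p in enumerate(boundary):
        q = boundary[(i + 1) % len(boundary)]  # wrap around to close the contour
        total += math.dist(p, q)
    return total

# unit square in the z = 0 plane -> perimeter 4
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
p = perimeter_3d(square)
```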
- In step S208, the segmentation (foreground mask) is projected into 3D space and the area is calculated.
- In step S209, a parabolic shape is fit to estimate the surface using the contour depth information.
- In step S210, the deepest point within the wound is calculated and in step S211, the volume of the wound is calculated.
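Given the fitted reference surface of step S209 and the measured wound surface sampled on the same pixel grid, steps S210-S211 reduce to per-pixel differences. A sketch under the simplifying assumption that each pixel covers a constant 3D area (pixel_area); the actual implementation would integrate over the fitted surface:

```python
def depth_and_volume(reference, wound, pixel_area):
    """S210-S211: deepest point = maximum gap between the fitted
    reference surface and the wound surface; volume = sum of per-pixel
    gaps times the area each pixel covers. Pixels outside the wound
    are encoded as None."""
    deepest = 0.0
    volume = 0.0
    for ref_row, wnd_row in zip(reference, wound):
        for ref, wnd in zip(ref_row, wnd_row):
            if ref is None or wnd is None:
                continue
            gap = max(0.0, wnd - ref)  # wound surface lies below the reference
            deepest = max(deepest, gap)
            volume += gap * pixel_area
    return deepest, volume

reference = [[10.0, 10.0], [10.0, 10.0]]   # fitted "normal skin" depths
wound     = [[12.0, 13.0], [11.0, None]]   # measured wound depths
deepest, volume = depth_and_volume(reference, wound, pixel_area=1.0)
```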
- In step S212, width, length, perimeter, area, volume, depth (deepest) and segmentation of the wound, determined from the previous steps, are output.
- The
server 10 may also perform wound tissue classification processing. Alternatively, this processing can also be performed at the mobile device 1. FIG. 10 illustrates the process for classifying the tissue in the wound. In this process, after extracting the wound border, the wound tissue can be classified into granulation, and/or slough, and/or eschar tissues. A tile-based multi-class Support Vector Machine (SVM) classifier can be used to automate the task. The SVM may be trained on 100 images of different wounds, each providing hundreds of tiles to learn the features for the classifier. Cross validation and grid search can be used to optimize the learning process and the quality of generalization. Experimental testing showed good overlap (Overlap Score >80%) between manual and automatic segmentation as is shown in FIGS. 11A-F. FIG. 11A shows the original wound image, FIG. 11B shows the overlay with classification algorithm output, FIG. 11C shows automatic classification for granulation and FIG. 11D shows automatic classification for slough tissues. FIG. 11E shows manual classification by an expert for granulation and FIG. 11F shows manual classification by an expert for slough tissues. - In step S300 of
FIG. 10, the color image obtained by the image sensor 2 is obtained along with the foreground mask. - In step S301, the color information within the area of the color image corresponding to the foreground mask is obtained.
- In step S302, the color information within the mask is divided into tiles, such as square tiles. Other shaped tiles may also be used. In step S303, the features are calculated for each tile. The features are elements extracted from each tile. In particular, in one example, when an image is given in RGB color format, the image may be converted from RGB to HSV, LAB, and/or grayscale formats and the following features extracted: a) average and standard deviation of H and S values in each tile, respectively; b) average and standard deviation of L, A, and B values in each tile, respectively; and c) average and standard deviation of gray values, respectively. In step S304, each tile is classified using a trained support vector machine. The training process is shown in steps S400-S406.
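The per-tile features of step S303 — means and standard deviations of the channel values — can be sketched in plain Python. Whether the implementation uses the population or sample standard deviation is not specified; the population form (statistics.pstdev) is assumed here:

```python
from statistics import mean, pstdev

def tile_features(tile_channels):
    """Step S303: for each color channel of one tile (e.g. H, S, L, A,
    B, gray), compute the mean and standard deviation of its pixel
    values and concatenate them into one feature vector."""
    features = []
    for values in tile_channels:
        features.append(mean(values))
        features.append(pstdev(values))
    return features

# hypothetical 2x2 tile with two channels (e.g. H and S)
tile = [[10, 10, 20, 20],   # channel 1 pixel values
        [5, 5, 5, 5]]       # channel 2 pixel values
feats = tile_features(tile)
```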
- In step S400, a set of images is obtained in which areas of healthy, slough, and/or eschar tissue in the wound are annotated. These images are then further processed in step S401 so that the respective color information within each annotated area is linked to the respective annotation. In step S402, the images are each divided into square tiles. The features noted above are then calculated for each tile in step S403. In step S404, each tile is labeled according to the tissue class to which it belongs. In step S405, cross validation is applied to find the best parameters for the support vector machine using a separate test set. In step S406 an SVM model is generated.
- The model generated in step S406 is used in step S304 to classify each tile. In step S305, each tile is colored with a predetermined color according to the class to which it belongs, such that each class has a different color. In step S306, the classified image having the colors highlighted thereon is output.
- When the processing is performed at the
server 10, the server 10 transfers data back to the mobile device 1 for display to the practitioner. The data may also be transmitted to a different device in place of the mobile device 1. -
FIG. 12 illustrates an example of the interface provided to the practitioner. The information populating this interface is either generated within the mobile device 1 or sent to the mobile device 1 or an alternative device from the server 10. - The interface displays the
color image 60 and provides the user with the ability to mark the wound 51 by selecting toggle 61 and to mark the background 50 by selecting toggle 62. Button 63 enables the user to erase markings. Once the markings are complete, the user selects the segmentation button 64, which initiates the processing shown in FIG. 4. Once this processing is complete, the wound border information is returned to the interface to be displayed as border 72. - The user may then select the
measurement button 65 which initiates the processing shown in FIG. 7. Once this processing is complete and the result of the processing obtained, a minimal enclosing rectangle 71 is displayed and the wound measurements 66 are shown. - The user may also select the wound
tissue classification button 67 which classifies different portions of the wound based on the classes 68 using different colors. Once the button 67 is selected the processing shown in FIG. 10 is performed. The result of the processing is overlaid onto the wound. - The data generated by the
server 10, in addition to being forwarded to the mobile device 1, is also stored in a database. The database may be local to the server 10 or in a remote or cloud location. As the data may be medically sensitive, the data may be stored in encrypted form and/or in a protected way to ensure that privacy is maintained. - The data generated by the
server 10 or locally at the mobile device 1 can be stored in the database. This data includes the wound image and the relevant clinical data, both manually entered and automatically generated using the image processing methods. Using historical data in the database for a particular patient, the patient's wound healing progress can be analyzed to output parameters similar to those listed in Table 3 shown below. Thus, this information together with other visual features of the wound can then be integrated to support clinical decisions. From the database, clinical information, including the information listed in Tables 1, 2, and 3, can be accessed for reporting on the wound management or practitioner portal 11. The information stored in the database can be incorporated into a patient's existing electronic health record managed by the practitioner. Using the management portal 11, a practitioner, who may be a physician, a nurse, a researcher, or anyone with the proper authorization and credentials, can access the information to provide wound management in a HIPAA compliant manner. This portal can also be used for care co-ordination. -
TABLE 3
% Change (/week, vs. last measurement, vs. specific measurement date): Area; Volume; Depth (deepest); Tissue classification
Absolute Change (/week, vs. last measurement, vs. specific measurement date): Area; Volume; Depth (deepest); Tissue classification
Benchmark scope: individual, practice, institution, region, national
Similarly, anonymized clinical data in the database can be accessed via an informatics interface 12 to support clinical research and health informatics to advance wound care. - The present embodiments provide significant advantages. For example, the present system is able to effectively ensure that wounds are measured uniformly. The uniformity of the system makes consulting and cross-referencing much more feasible. Another advantage of the system is that the audit process for documentation with health insurers and Medicare/Medicaid fiscal intermediaries is significantly enhanced in the event of a chart audit for services rendered. The stored images prove treatment plans, skin substitute applications, and progression (or lack) of healing. Another advantage of this system is the ability to supplement internal organizational audits regarding wound benchmark healing programs.
- Additionally, the
mobile device 1 could further include educational materials for patients and reference materials for care providers, such as a guide for classifying wounds and front-line treatment modalities, current CPT coding guidelines for skin substitutes, updates on pharmaceuticals relevant to wound management/infection control, and the ability to securely link to existing electronic health record (EHR) systems. - To support collaboration between caregivers, the system is also able to enable a patient to grant access to medical history data when being transferred between different facilities, and to allow the care team to collectively contribute to patient medical records along with the patients themselves through self-monitoring and self-reporting.
- The present embodiments can also be applied to other applications besides wound measurement.
- In another embodiment, the present embodiments can also be used as a preventive measure for populations with a higher risk of developing chronic wounds, such as diabetic patients who are prone to diabetic foot ulcers, immobilized patients who are prone to developing pressure ulcers, and patients with peripheral vascular diseases. A main reason for developing ulcers is poor blood supply leading to ischemic tissue, which eventually develops into necrosis and ulcers. The present embodiments, incorporated with multi-spectrum imaging or other advanced imaging technology and/or image analysis algorithms, can be used to assess the blood supply or blood perfusion on body surfaces. For instance, a band-pass, band-stop, low-pass, or high-pass filter can be used to take images under different wavelengths of light, which can be used to analyze the blood oxygen content in the superficial layers of the skin, similar to the technology used in pulse oximetry. For instance, a light source in the near-infrared range with two different wavelengths can be used. The light filter together with the light source can be combined and outfitted to an existing camera phone to enhance its multi-spectrum imaging capability for measuring blood perfusion.
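As an illustration of the two-wavelength principle borrowed from pulse oximetry, the ratio-of-ratios that correlates with blood oxygenation can be computed from the pulsatile (AC) and baseline (DC) light intensities at the two wavelengths. This is a simplified sketch of the general technique, not the patent's algorithm:

```python
def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Two-wavelength index as used in pulse oximetry:
    R = (AC_red / DC_red) / (AC_ir / DC_ir). In practice R is then
    mapped to an oxygen-saturation estimate via an empirical
    calibration curve."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

# hypothetical intensities at a red and a near-infrared wavelength
r = ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0)
```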
- In yet another embodiment, the present embodiments can be used to monitor conditions in the ears, nose, throat, mouth, and eyes. To enhance the visual images, an auxiliary light source could be used. A light guide could also be used to take a picture of a location that is hard to reach, or in some cases or situations, stabilization and magnification could also be used. These features can be developed to be outfitted to an existing mobile device to allow the patient to take a better quality picture, which enables computers to extract clinical information more accurately, and enables care providers to better monitor disease progression and to detect and address risk for complications.
- In yet another embodiment, the present embodiments can be used to monitor for a disease condition that has a visible bulging, swelling, or protruding feature on the body surface, including but not limited to peripheral vascular disease, skin lumps, hernia, and hemorrhoids. The size and shape of those lesions are of clinical relevance and can be easily measured and tracked with the present embodiments.
- In yet another embodiment, the present embodiments can be used for plastic reconstructive surgery or a weight loss regimen, where changes of body shape can be measured and documented.
- In yet another embodiment, the present embodiments can be used to monitor patient excretion, such as defecation and urination, the visual characteristics of which might have clinical relevance.
- In yet another embodiment, the present embodiments can be used to monitor patient caloric intake based on volume and identification of food groups for a number of medical conditions in which both fluid and solid intake must be monitored.
- In summary, the present disclosure may be applied to methods and systems for chronic disease management based on a patient's self-monitoring and self-reporting using mobile devices. Specifically, the disclosure utilizes a camera-enabled mobile device (with or without a special add-on device, such as a stereo camera, structured light, multi-spectrum light imager, or other light source or light guide) to obtain visual information for the site of interest, including but not limited to ostomy, wound, ulcer, skin conditions, dental, ear, nose, throat, and eyes. Alternatively, this task could be achieved by utilizing a webcam-enabled laptop, or a combination of a camera and a computer system. Visual information, including but not limited to the size, shape, color, hue, saturation, contrast, texture, pattern, 3D surface, or volumetric information, is of great clinical value in monitoring disease progression. The present embodiments disclose a technique of using a camera-enabled mobile device (with or without additional apparatus to enhance, improve, or add imaging capabilities) to acquire images, analyze them, and extract clinically relevant features in the acquired image, which are later transmitted to a remote server. Alternatively, all the image analysis and feature recognition could be performed on the remote server site. Medical professionals or a computer-automated algorithm can access those patient data and determine the risk of deteriorating conditions, early warning signs for complications, or patients' compliance with treatments. In one embodiment, computer automation will serve as the first line of defense. This system is able to screen all patient data for early warning signs. Once a risk is identified, a care provider and/or the patient can be alerted when a certain indication is out of normal range or trending unfavorably.
Care providers can then evaluate the case, either confirming or dismissing the alert, and take appropriate action to address it, including communicating with the patient, adjusting therapy, or reminding the patient to adhere to treatment. Based on the clinical information extracted from the image, patient communications, the patient's disease profile, and the provider's treatment plans, the system can be used to place targeted advertisements and make product recommendations.
- In an alternative embodiment, the depth information and the image information can be obtained by a single imaging device or sensor. In this embodiment, the imaging device or sensor is able to capture depth information of the wound in addition to capturing an image of the wound. In addition, it is also possible to determine the distance of portions of the image such as the wound based on auto-focusing information. For instance, by identifying the distance differences between the wound and the background it is possible to determine depth information of the wound.
- At least certain portions of the processing described above, such as the processes shown in
FIGS. 4, 7 and 10, for example, can be implemented or aided by using some form of embedded or external computer having at least one microprocessor or by using circuitry/processing circuitry. Any of the above described processes may be performed using a computer or circuitry or processing circuitry. As one of ordinary skill in the art would recognize, the computer processor can be implemented as discrete logic gates, as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Complex Programmable Logic Device (CPLD). An FPGA or CPLD implementation can be coded in VHDL, Verilog or any other hardware description language and the code can be stored in an electronic memory directly within the FPGA or CPLD, or as a separate electronic memory. Further, the electronic memory can be non-volatile, such as ROM, EPROM, EEPROM or FLASH memory. The electronic memory can also be volatile, such as static or dynamic RAM, and a processor, such as a microcontroller or microprocessor, can be provided to manage the electronic memory as well as the interaction between the FPGA or CPLD and the electronic memory. - Alternatively, the computer processor can execute a computer program including a set of computer-readable instructions that perform the functions described herein, the program being stored in any of the above-described non-transitory electronic memories and/or a hard disk drive, CD, DVD, FLASH drive or any other known storage media. Further, the computer-readable instructions can be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with a processor, such as a Xeon processor from Intel of America or an Opteron processor from AMD of America and an operating system, such as Microsoft VISTA, UNIX, Solaris, LINUX, Apple, MAC-OSX and other operating systems known to those skilled in the art.
- In addition, certain features of the embodiments can be implemented using a computer-based system (
FIG. 13). The computer 1000 includes a bus B or other communication mechanism for communicating information, and a processor/CPU 1004 coupled with the bus B for processing the information. The computer 1000 also includes a main memory/memory unit 1003, such as a random access memory (RAM) or other dynamic storage device (e.g., dynamic RAM (DRAM), static RAM (SRAM), and synchronous DRAM (SDRAM)), coupled to the bus B for storing information and instructions to be executed by processor/CPU 1004. In addition, the memory unit 1003 can be used for storing temporary variables or other intermediate information during the execution of instructions by the CPU 1004. The computer 1000 can also further include a read only memory (ROM) or other static storage device (e.g., programmable ROM (PROM), erasable PROM (EPROM), and electrically erasable PROM (EEPROM)) coupled to the bus B for storing static information and instructions for the CPU 1004. - The
computer 1000 can also include a disk controller coupled to the bus B to control one or more storage devices for storing information and instructions, such as mass storage 1002, and drive device 1006 (e.g., read-only compact disc drive, read/write compact disc drive, compact disc jukebox, and removable magneto-optical drive). The storage devices can be added to the computer 1000 using an appropriate device interface (e.g., small computer system interface (SCSI), integrated device electronics (IDE), enhanced-IDE (E-IDE), direct memory access (DMA), or ultra-DMA). - The
computer 1000 can also include special purpose logic devices (e.g., application specific integrated circuits (ASICs)) or configurable logic devices (e.g., simple programmable logic devices (SPLDs), complex programmable logic devices (CPLDs), and field programmable gate arrays (FPGAs)). - The
computer 1000 can also include a display controller coupled to the bus B to control a display, for displaying information to a computer user. The computer system includes input devices, such as a keyboard and a pointing device, for interacting with a computer user and providing information to the processor. The pointing device, for example, may be a mouse, a trackball, or a pointing stick for communicating direction information and command selections to the processor and for controlling cursor movement on the display. In addition, a printer can provide printed listings of data stored and/or generated by the computer system. - The
computer 1000 performs at least a portion of the processing steps of the invention in response to the CPU 1004 executing one or more sequences of one or more instructions contained in a memory, such as the memory unit 1003. Such instructions can be read into the memory unit from another computer readable medium, such as the mass storage 1002 or a removable media 1001. One or more processors in a multi-processing arrangement can also be employed to execute the sequences of instructions contained in memory unit 1003. In alternative embodiments, hard-wired circuitry can be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software. - As stated above, the
computer 1000 includes at least one computer readable medium 1001 or memory for holding instructions programmed according to the teachings of the invention and for containing data structures, tables, records, or other data described herein. Examples of computer readable media are compact discs, hard disks, magneto-optical disks, PROMs (EPROM, EEPROM, flash EPROM), DRAM, SRAM, SDRAM, or any other magnetic medium, compact discs (e.g., CD-ROM), or any other medium from which a computer can read. - Stored on any one or on a combination of computer readable media, the present invention includes software for controlling the
main processing unit 1004, for driving a device or devices for implementing the invention, and for enabling the main processing unit 1004 to interact with a human user. Such software can include, but is not limited to, device drivers, operating systems, development tools, and applications software. Such computer readable media further includes the computer program product of the present invention for performing all or a portion (if processing is distributed) of the processing performed in implementing the invention. - The computer code elements on the medium of the present invention can be any interpretable or executable code mechanism, including but not limited to scripts, interpretable programs, dynamic link libraries (DLLs), Java classes, and complete executable programs. Moreover, parts of the processing of the present invention can be distributed for better performance, reliability, and/or cost.
- The term “computer readable medium” as used herein refers to any medium that participates in providing instructions to the
CPU 1004 for execution. A computer readable medium can take many forms, including but not limited to, non-volatile media, and volatile media. Non-volatile media includes, for example, optical, magnetic disks, and magneto-optical disks, such as the mass storage 1002 or the removable media 1001. Volatile media includes dynamic memory, such as the memory unit 1003. - Various forms of computer readable media can be involved in carrying out one or more sequences of one or more instructions to the
CPU 1004 for execution. For example, the instructions can initially be carried on a magnetic disk of a remote computer. An input coupled to the bus B can receive the data and place the data on the bus B. The bus B carries the data to the memory unit 1003, from which the CPU 1004 retrieves and executes the instructions. The instructions received by the memory unit 1003 can optionally be stored on mass storage 1002 either before or after execution by the CPU 1004. - The
computer 1000 also includes a communication interface 1005 coupled to the bus B. The communication interface 1005 provides a two-way data communication coupling to a network that is connected to, for example, a local area network (LAN), or to another communications network such as the Internet. For example, the communication interface 1005 can be a network interface card to attach to any packet switched LAN. As another example, the communication interface 1005 can be an asymmetric digital subscriber line (ADSL) card, an integrated services digital network (ISDN) card, or a modem to provide a data communication connection to a corresponding type of communications line. Wireless links can also be implemented. In any such implementation, the communication interface 1005 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. - The network typically provides data communication through one or more networks to other data devices. For example, the network can provide a connection to another computer through a local network (e.g., a LAN) or through equipment operated by a service provider, which provides communication services through a communications network. The local network and the communications network use, for example, electrical, electromagnetic, or optical signals that carry digital data streams, and the associated physical layer (e.g.,
CAT 5 cable, coaxial cable, optical fiber, etc.). Moreover, the network can provide a connection to a mobile device such as a laptop computer or a cellular telephone. - In the above description, any processes, descriptions, or blocks in flowcharts should be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations, in which functions can be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending upon the functionality involved, are included within the scope of the exemplary embodiments of the present advancements, as would be understood by those skilled in the art.
- While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods, apparatuses and systems described herein can be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods, apparatuses and systems described herein can be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Claims (17)
1. A method of treatment of a medical injury, comprising:
obtaining, using a two-dimensional image sensor, a two-dimensional image of an area of interest;
obtaining, using a depth camera with structured-light sensor, a depth map of the area of interest, the depth map representing a three-dimensional surface topography;
determining, using processing circuitry, a boundary of an injury portion within the two-dimensional image of the area of interest;
correlating, using the processing circuitry, the two-dimensional image and the depth map;
applying, using the processing circuitry, the boundary of the injury portion designated within the two-dimensional image to the depth map to designate a mask area;
determining, using the processing circuitry, characteristics of the injury portion within the mask area based on both the depth map and the two-dimensional image; and
instructing, using the processing circuitry, treatment of the medical injury based on the determined characteristics of the injury portion within the mask area.
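The pipeline of claim 1 (segmenting an injury in the two-dimensional image, applying the boundary to the depth map as a mask, and deriving characteristics from both) can be sketched as follows. This is a minimal pure-Python illustration, not the patented implementation: the threshold-based segmentation, the millimeter units, and all function names are assumptions.

```python
# Sketch of the claim-1 pipeline: segment an injury in a 2D image, apply
# the resulting mask to a co-registered depth map of equal resolution,
# and derive characteristics from both. All names/thresholds are illustrative.

def segment_injury(image_2d, threshold):
    """Designate the injury portion: here, pixels darker than `threshold`."""
    return [[1 if px < threshold else 0 for px in row] for row in image_2d]

def injury_characteristics(mask, depth_map, pixel_area_mm2, skin_level_mm):
    """Combine mask area (from the 2D image) with the depth deficit
    (from the depth map) inside the mask area."""
    area_px = sum(sum(row) for row in mask)
    depths = [depth_map[r][c] - skin_level_mm
              for r, row in enumerate(mask)
              for c, m in enumerate(row) if m]
    max_depth = max(depths) if depths else 0.0
    volume = sum(depths) * pixel_area_mm2
    return {"area_mm2": area_px * pixel_area_mm2,
            "max_depth_mm": max_depth,
            "volume_mm3": volume}

# Toy 4x4 example: darker pixels (<100) mark the wound; the depth map is
# distance from the sensor, so the wound bed lies below skin level (10 mm).
image = [[200, 200, 200, 200],
         [200,  50,  60, 200],
         [200,  55, 200, 200],
         [200, 200, 200, 200]]
depth = [[10.0, 10.0, 10.0, 10.0],
         [10.0, 13.0, 12.0, 10.0],
         [10.0, 12.5, 10.0, 10.0],
         [10.0, 10.0, 10.0, 10.0]]
mask = segment_injury(image, threshold=100)
chars = injury_characteristics(mask, depth, pixel_area_mm2=1.0, skin_level_mm=10.0)
```

In a real system the threshold step would be replaced by the interactive or automatic boundary determination the later claims describe, and the two sensors would first be registered to each other as in the correlating step.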
2. The method according to claim 1, further comprising:
treating the medical injury based on the determined characteristics of the injury portion within the mask area using one or more of a pressure-ulcer treatment, a diabetic-ulcer treatment, an arterial-insufficiency-ulcer treatment, a venous-stasis ulcer treatment, and a burn-wound treatment.
3. The method according to claim 1, further comprising:
storing, using a non-transitory computer readable storage, the two-dimensional image and the depth map to document progress over time of the medical injury;
comparing, using the processing circuitry, the determined characteristics of the injury portion within the mask area to wound-healing benchmarks to generate a comparison; and
revising, using the processing circuitry, the instructing of the treatment of the medical injury based on the comparison.
4. The method according to claim 1, wherein the instructing of the treatment of the medical injury further includes recommending a skin substitute and providing a Current Procedural Terminology (CPT) coding guideline for the recommended skin substitute.
5. The method according to claim 1, wherein the instructing of the treatment of the medical injury further includes
providing the two-dimensional image and the depth map to a remote location that is remote from a location of the medical injury,
generating a treatment plan at the remote location, and
transmitting the treatment plan to the location of the medical injury, wherein
the instructing, using the processing circuitry, of the treatment of the medical injury is performed using the transmitted treatment plan.
6. The method according to claim 1, further comprising:
designating, using the processing circuitry, a representative background portion of the area of interest from the two-dimensional image of the area of interest; and
designating, using the processing circuitry, a representative injury portion of the area of interest from the two-dimensional image of the area of interest, wherein
the determining of the boundary of the injury portion within the two-dimensional image is performed based on the designated representative background portion and the designated representative injury portion.
7. The method according to claim 6, further comprising designating, using the processing circuitry, a representative injury portion of the area of interest from the two-dimensional image of the area of interest based on user input or pixel characteristic differences.
8. The method according to claim 1, further comprising classifying, using the processing circuitry, the injury portion within the mask area by dividing the injury portion into tiles, calculating a measure of central tendency for imaging values of each tile, and classifying each tile using injury type information generated by a previously trained support vector machine, wherein the injury type information includes healthy, slough, and eschar tissue.
9. The method according to claim 8, wherein the previously trained support vector machine generates the injury type information using circuitry configured to, for a set of annotated images, divide each image into tiles, calculate a measure of central tendency for imaging values of each tile, designate each tile according to an injury type, and apply cross-validation using a separate test set.
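The tile-based classification of claims 8 and 9 can be illustrated as follows. For self-containment this sketch replaces the previously trained support vector machine with a nearest-center rule over assumed per-class reference intensities; the tile size, class centers, and function names are hypothetical, not taken from the patent.

```python
from statistics import median

TILE = 2  # tile edge length in pixels (illustrative)

# Illustrative per-class reference intensities standing in for a trained
# support vector machine; a real system would load a fitted classifier
# trained on annotated, tiled images as claim 9 describes.
CLASS_CENTERS = {"healthy": 180.0, "slough": 120.0, "eschar": 40.0}

def tile_medians(image, tile=TILE):
    """Divide the image into tiles and take a measure of central
    tendency (here, the median) of each tile's imaging values."""
    h, w = len(image), len(image[0])
    out = []
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            vals = [image[i][j]
                    for i in range(r, min(r + tile, h))
                    for j in range(c, min(c + tile, w))]
            out.append(median(vals))
    return out

def classify_tiles(image):
    """Label each tile with the nearest class center (SVM stand-in)."""
    return [min(CLASS_CENTERS, key=lambda k: abs(CLASS_CENTERS[k] - m))
            for m in tile_medians(image)]

# Toy 4x4 image: four 2x2 tiles with bright, mid, dark, bright intensities.
img = [[185, 175, 118, 122],
       [180, 178, 121, 119],
       [ 42,  38, 182, 179],
       [ 41,  39, 177, 183]]
labels = classify_tiles(img)
```

The median is one reasonable choice of central tendency because it is robust to specular highlights within a tile; the claims leave the exact measure open.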
10. The method according to claim 1, wherein the characteristics of the injury portion within the mask area include a depth, a width, and a length of the injury.
11. The method according to claim 1, wherein the characteristics of the injury portion within the mask area include a perimeter, an area, and a volume of the injury.
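Given a mask such as the one designated in claim 1, the length, width, and perimeter named in claims 10 and 11 can be approximated directly on the pixel grid. This is a hedged sketch: the bounding-box extents, the border-pixel perimeter, the uniform pixel scale, and the helper names are illustrative assumptions, not the patent's method.

```python
def mask_geometry(mask, pixel_mm=1.0):
    """Length/width from the mask's bounding box; perimeter from the
    count of masked pixels having a non-masked 4-neighbour. Assumes a
    non-empty mask and square pixels of side `pixel_mm`."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, m in enumerate(row) if m]
    rows = [r for r, _ in pts]
    cols = [c for _, c in pts]
    length = (max(rows) - min(rows) + 1) * pixel_mm
    width = (max(cols) - min(cols) + 1) * pixel_mm

    def outside(r, c):
        # True when (r, c) falls off the image or on a background pixel.
        return not (0 <= r < len(mask) and 0 <= c < len(mask[0]) and mask[r][c])

    border = sum(1 for r, c in pts
                 if any(outside(r + dr, c + dc)
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))))
    return length, width, border * pixel_mm

# 2x2 wound blob inside a 4x4 frame: every wound pixel touches background.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
geom = mask_geometry(mask)
```

Depth and volume, the remaining claimed characteristics, additionally need the depth map; area is the masked pixel count times the per-pixel area.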
12. The method according to claim 1, wherein the method further comprises determining, using the processing circuitry, the boundary of the injury portion within the two-dimensional image of the area of interest by utilizing a grab cut algorithm.
13. The method according to claim 1, wherein the method further comprises determining, using the processing circuitry, the boundary of the injury portion within the two-dimensional image of the area of interest by detecting contours in the representative injury portion of the area of interest.
14. The method according to claim 1, wherein the method further comprises determining, using the processing circuitry, the boundary of the injury portion within the two-dimensional image of the area of interest by detecting contours in the representative injury portion of the area of interest and iterating over all the contours.
15. The method according to claim 1, wherein the medical injury is a wound.
16. The method according to claim 1, wherein the depth camera with structured-light sensor includes a transmitter configured to transmit structured light and a receiver configured to sense reflections and backscatter of the structured light.
17. The method according to claim 16, wherein the structured light has an infrared spectrum.
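Claims 16 and 17 describe a structured-light sensor with an infrared transmitter and receiver. Such sensors commonly recover depth by triangulation: the projected pattern shifts between transmitter and receiver by a disparity inversely proportional to depth. The sketch below shows that relation with assumed camera parameters (focal length in pixels, transmitter-receiver baseline in millimeters); none of these values come from the patent.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Triangulated depth for a structured-light transmitter/receiver
    pair: depth = focal_length * baseline / disparity. Parameter values
    used below are illustrative assumptions."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Assumed rig: 580 px focal length, 75 mm baseline, 87 px pattern shift.
z_mm = depth_from_disparity(focal_px=580.0, baseline_mm=75.0, disparity_px=87.0)
```

Repeating this per pixel of the sensed infrared pattern yields the depth map of the area of interest used throughout the claims.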
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/083,081 US20160206205A1 (en) | 2013-12-03 | 2016-03-28 | Method and system for wound assessment and management |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361911162P | 2013-12-03 | 2013-12-03 | |
US201461983022P | 2014-04-23 | 2014-04-23 | |
US14/491,794 US11337612B2 (en) | 2013-12-03 | 2014-09-19 | Method and system for wound assessment and management |
US15/083,081 US20160206205A1 (en) | 2013-12-03 | 2016-03-28 | Method and system for wound assessment and management |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/491,794 Division US11337612B2 (en) | 2013-12-03 | 2014-09-19 | Method and system for wound assessment and management |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160206205A1 true US20160206205A1 (en) | 2016-07-21 |
Family
ID=53264054
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/491,794 Active US11337612B2 (en) | 2013-12-03 | 2014-09-19 | Method and system for wound assessment and management |
US15/083,081 Abandoned US20160206205A1 (en) | 2013-12-03 | 2016-03-28 | Method and system for wound assessment and management |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/491,794 Active US11337612B2 (en) | 2013-12-03 | 2014-09-19 | Method and system for wound assessment and management |
Country Status (8)
Country | Link |
---|---|
US (2) | US11337612B2 (en) |
EP (1) | EP3077956B1 (en) |
JP (1) | JP6595474B2 (en) |
KR (1) | KR102317478B1 (en) |
CN (1) | CN106164929B (en) |
AU (2) | AU2014357720A1 (en) |
CA (1) | CA2930184A1 (en) |
WO (1) | WO2015084462A1 (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9704265B2 (en) * | 2014-12-19 | 2017-07-11 | SZ DJI Technology Co., Ltd. | Optical-flow imaging system and method using ultrasonic depth sensing |
CN107071071A (en) * | 2017-06-15 | 2017-08-18 | 深圳市创艺工业技术有限公司 | A kind of medical treatment & health system based on mobile terminal and cloud computing |
US9955910B2 (en) | 2005-10-14 | 2018-05-01 | Aranz Healthcare Limited | Method of monitoring a surface feature and apparatus therefor |
US10013527B2 (en) | 2016-05-02 | 2018-07-03 | Aranz Healthcare Limited | Automatically assessing an anatomical surface feature and securely managing information related to the same |
WO2018185560A3 (en) * | 2017-04-04 | 2019-02-28 | Aranz Healthcare Limited | Anatomical surface assessment methods, devices and systems |
WO2019241288A1 (en) * | 2018-06-11 | 2019-12-19 | The General Hospital Corporation | Skin construct transfer system and method |
US10874302B2 (en) | 2011-11-28 | 2020-12-29 | Aranz Healthcare Limited | Handheld skin measuring or monitoring device |
US20210137453A1 (en) * | 2019-11-12 | 2021-05-13 | Md Ortho Systems Llc | Systems and methods for self-guided injury treatment |
US20210142888A1 (en) * | 2019-11-11 | 2021-05-13 | Healthy.Io Ltd. | Image processing systems and methods for caring for skin features |
WO2021155010A1 (en) * | 2020-01-28 | 2021-08-05 | Zebra Technologies Corporation | System and method for lesion monitoring |
US11116407B2 (en) | 2016-11-17 | 2021-09-14 | Aranz Healthcare Limited | Anatomical surface assessment methods, devices and systems |
US11308618B2 (en) | 2019-04-14 | 2022-04-19 | Holovisions LLC | Healthy-Selfie(TM): a portable phone-moving device for telemedicine imaging using a mobile phone |
WO2022106672A1 (en) * | 2020-11-23 | 2022-05-27 | Roche Diagnostics Gmbh | Method and devices for point-of-care applications |
US20220211438A1 (en) * | 2021-01-04 | 2022-07-07 | Healthy.Io Ltd | Rearranging and selecting frames of medical videos |
EP3899988A4 (en) * | 2018-12-18 | 2022-09-14 | Mölnlycke Health Care AB | A method for selecting a wound product for a patient |
Families Citing this family (81)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9286537B2 (en) * | 2014-01-22 | 2016-03-15 | Cognizant Technology Solutions India Pvt. Ltd. | System and method for classifying a skin infection |
WO2015123468A1 (en) * | 2014-02-12 | 2015-08-20 | Mobile Heartbeat Llc | System for setting and controlling functionalities of mobile devices |
US10531977B2 (en) | 2014-04-17 | 2020-01-14 | Coloplast A/S | Thermoresponsive skin barrier appliances |
US9959486B2 (en) * | 2014-10-20 | 2018-05-01 | Siemens Healthcare Gmbh | Voxel-level machine learning with or without cloud-based support in medical imaging |
US9990472B2 (en) * | 2015-03-23 | 2018-06-05 | Ohio State Innovation Foundation | System and method for segmentation and automated measurement of chronic wound images |
CA2994024C (en) * | 2015-07-29 | 2019-03-05 | Synaptive Medical (Barbados) Inc. | Handheld scanner for rapid registration in a medical navigation system |
FR3046692B1 (en) * | 2016-01-07 | 2018-01-05 | Urgo Recherche Innovation Et Developpement | DIGITAL ANALYSIS OF A DIGITAL IMAGE REPRESENTING A WOUND FOR ITS AUTOMATIC CHARACTERIZATION |
KR102508831B1 (en) | 2016-02-17 | 2023-03-10 | 삼성전자주식회사 | Remote image transmission system, display apparatus and guide displaying method of thereof |
DE102016111327A1 (en) * | 2016-06-21 | 2017-12-21 | Jonathan Volker Herrmann | Method and system for assessing wounds |
US10769786B2 (en) * | 2016-06-28 | 2020-09-08 | Kci Licensing, Inc. | Semi-automated system for real-time wound image segmentation and photogrammetry on a mobile platform |
CN110192390A (en) | 2016-11-24 | 2019-08-30 | 华盛顿大学 | The light-field capture of head-mounted display and rendering |
US10425633B2 (en) * | 2016-12-30 | 2019-09-24 | Konica Minolta Laboratory U.S.A., Inc. | Method and system for capturing images for wound assessment with moisture detection |
CN106691821A (en) * | 2017-01-20 | 2017-05-24 | 中国人民解放军第四军医大学 | Infrared fast healing device of locally-supplying-oxygen-to-wound type |
US10366490B2 (en) * | 2017-03-27 | 2019-07-30 | Siemens Healthcare Gmbh | Highly integrated annotation and segmentation system for medical imaging |
SG10201706752XA (en) | 2017-08-17 | 2019-03-28 | Iko Pte Ltd | Systems and methods for analyzing cutaneous conditions |
US11244456B2 (en) | 2017-10-03 | 2022-02-08 | Ohio State Innovation Foundation | System and method for image segmentation and digital analysis for clinical trial scoring in skin disease |
CN111225611B (en) * | 2017-10-17 | 2024-03-05 | 克罗尼卡雷私人有限公司 | Systems and methods for facilitating analysis of wounds in a target object |
US10909684B2 (en) * | 2017-11-20 | 2021-02-02 | University Of Iowa Research Foundation | Systems and methods for airway tree segmentation |
US10500084B2 (en) | 2017-12-22 | 2019-12-10 | Coloplast A/S | Accessory devices of an ostomy system, and related methods for communicating leakage state |
WO2019120441A1 (en) | 2017-12-22 | 2019-06-27 | Coloplast A/S | Sensor assembly part and a base plate for an ostomy appliance and a method for manufacturing a sensor assembly part and a base plate |
US11559423B2 (en) | 2017-12-22 | 2023-01-24 | Coloplast A/S | Medical appliance system, monitor device, and method of monitoring a medical appliance |
WO2019120443A1 (en) | 2017-12-22 | 2019-06-27 | Coloplast A/S | Sensor assembly part and a base plate for an ostomy appliance and a method for manufacturing a base plate or a sensor assembly part |
JP7422074B2 (en) | 2017-12-22 | 2024-01-25 | コロプラスト アクティーゼルスカブ | Ostomy system base plate and sensor assembly with leakage sensor |
DK3727232T3 (en) | 2017-12-22 | 2022-04-19 | Coloplast As | OUTDOOR DEVICE WITH SELECTIVE SENSOR POINTS AND ASSOCIATED PROCEDURE |
WO2019120427A1 (en) | 2017-12-22 | 2019-06-27 | Coloplast A/S | Sensor assembly part for an ostomy appliance and a method for manufacturing a sensor assembly part |
EP4275663A3 (en) | 2017-12-22 | 2024-01-17 | Coloplast A/S | Moisture detecting base plate for an ostomy appliance and a system for determining moisture propagation in a base plate and/or a sensor assembly part |
US10849781B2 (en) | 2017-12-22 | 2020-12-01 | Coloplast A/S | Base plate for an ostomy appliance |
US11589811B2 (en) | 2017-12-22 | 2023-02-28 | Coloplast A/S | Monitor device of a medical system and associated method for operating a monitor device |
US11628084B2 (en) | 2017-12-22 | 2023-04-18 | Coloplast A/S | Sensor assembly part and a base plate for a medical appliance and a device for connecting to a base plate or a sensor assembly part |
US11707376B2 (en) | 2017-12-22 | 2023-07-25 | Coloplast A/S | Base plate for a medical appliance and a sensor assembly part for a base plate and a method for manufacturing a base plate and sensor assembly part |
US11707377B2 (en) | 2017-12-22 | 2023-07-25 | Coloplast A/S | Coupling part with a hinge for a medical base plate and sensor assembly part |
CN111447896B (en) | 2017-12-22 | 2023-03-28 | 科洛普拉斯特公司 | Base plate for an ostomy appliance, monitoring device and system for an ostomy appliance |
WO2019120439A1 (en) | 2017-12-22 | 2019-06-27 | Coloplast A/S | Calibration methods for ostomy appliance tools |
EP3727241A1 (en) | 2017-12-22 | 2020-10-28 | Coloplast A/S | Data collection schemes for an ostomy appliance and related methods |
WO2019120438A1 (en) | 2017-12-22 | 2019-06-27 | Coloplast A/S | Tools and methods for placing an ostomy appliance on a user |
US10799385B2 (en) | 2017-12-22 | 2020-10-13 | Coloplast A/S | Ostomy appliance with layered base plate |
DK3727234T3 (en) | 2017-12-22 | 2022-04-19 | Coloplast As | OSTOMY APPARATUS WITH ANGLE LEAK DETECTION |
BR112020015435A2 (en) * | 2018-02-02 | 2020-12-08 | Moleculight Inc. | WOUND IMAGE AND ANALYSIS |
EP3755282A1 (en) | 2018-02-20 | 2020-12-30 | Coloplast A/S | Sensor assembly part and a base plate for an ostomy appliance and a device for connecting to a base plate and/or a sensor assembly part |
CN108596232B (en) * | 2018-04-16 | 2022-03-08 | 杭州睿珀智能科技有限公司 | Automatic insole classification method based on shape and color characteristics |
CN108606782A (en) * | 2018-04-28 | 2018-10-02 | 泰州市榕兴医疗用品股份有限公司 | A kind of surface of a wound imaging system |
CN109009134A (en) * | 2018-07-06 | 2018-12-18 | 上海理工大学 | A kind of scanning means of body surface three-dimensional information |
CN109087285A (en) * | 2018-07-13 | 2018-12-25 | 中国人民解放军海军工程大学 | Surgery wound detects debridement robot |
CN109065151A (en) * | 2018-07-13 | 2018-12-21 | 中国人民解放军海军工程大学 | Intelligence treats Non-surgical wound processing system |
CN109410318B (en) * | 2018-09-30 | 2020-09-08 | 先临三维科技股份有限公司 | Three-dimensional model generation method, device, equipment and storage medium |
CN109330566A (en) * | 2018-11-21 | 2019-02-15 | 佛山市第人民医院(中山大学附属佛山医院) | Wound monitoring method and device |
IT201800010536A1 (en) * | 2018-11-23 | 2020-05-23 | Torino Politecnico | Device and method for the detection and monitoring of skin diseases |
JP6531273B1 (en) * | 2018-11-30 | 2019-06-19 | Arithmer株式会社 | Dimension data calculation apparatus, program, method, product manufacturing apparatus, and product manufacturing system |
US11922649B2 (en) | 2018-11-30 | 2024-03-05 | Arithmer Inc. | Measurement data calculation apparatus, product manufacturing apparatus, information processing apparatus, silhouette image generating apparatus, and terminal apparatus |
KR102282348B1 (en) * | 2018-12-04 | 2021-07-27 | 주식회사 하이로닉 | Apparatus, method and system for providing procedure information of beauty procedure |
KR20210110805A (en) | 2018-12-20 | 2021-09-09 | 컬러플라스트 에이/에스 | Classification of stoma/urostomy status using image data conversion, apparatus and related method |
EP3897481B1 (en) * | 2018-12-20 | 2023-08-09 | Coloplast A/S | Ostomy condition classification with masking, devices and related methods |
CN109700465A (en) * | 2019-01-07 | 2019-05-03 | 广东体达康医疗科技有限公司 | A kind of mobile three-dimensional wound scanning device and its workflow |
US11612512B2 (en) | 2019-01-31 | 2023-03-28 | Coloplast A/S | Moisture detecting base plate for an ostomy appliance and a system for determining moisture propagation in a base plate and/or a sensor assembly part |
US10957043B2 (en) * | 2019-02-28 | 2021-03-23 | Endosoftllc | AI systems for detecting and sizing lesions |
US11756681B2 (en) | 2019-05-07 | 2023-09-12 | Medtronic, Inc. | Evaluation of post implantation patient status and medical device performance |
KR102165699B1 (en) * | 2019-05-24 | 2020-10-14 | 동서대학교 산학협력단 | Skin disease care System for user specific real time service |
CN110151141A (en) * | 2019-06-20 | 2019-08-23 | 上海市肺科医院 | A kind of pressure injury intelligent evaluation system |
US11324401B1 (en) | 2019-09-05 | 2022-05-10 | Allscripts Software, Llc | Computing system for wound tracking |
US20220361952A1 (en) * | 2019-10-07 | 2022-11-17 | Intuitive Surgical Operations, Inc. | Physical medical element placement systems |
US20210153959A1 (en) * | 2019-11-26 | 2021-05-27 | Intuitive Surgical Operations, Inc. | Physical medical element affixation systems, methods, and materials |
US20210181930A1 (en) * | 2019-12-17 | 2021-06-17 | Palantir Technologies Inc. | Image tiling and distributive modification |
CN113119103B (en) * | 2019-12-31 | 2022-10-14 | 深圳富泰宏精密工业有限公司 | Method and computer device for determining depth standard value of marker |
CN111184517A (en) * | 2020-01-14 | 2020-05-22 | 南方医科大学珠江医院 | Wound measuring and recording system |
US11484245B2 (en) * | 2020-03-05 | 2022-11-01 | International Business Machines Corporation | Automatic association between physical and visual skin properties |
US11659998B2 (en) | 2020-03-05 | 2023-05-30 | International Business Machines Corporation | Automatic measurement using structured lights |
KR102192953B1 (en) * | 2020-03-31 | 2020-12-18 | 신현경 | A system for treating skin damage based on artificial intelligence and providing remote medical service |
DE102020118976A1 (en) * | 2020-05-26 | 2021-12-16 | Medical & Science Aktiengesellschaft | Method and arrangement for determining the surface-spatial temperature distribution in the mouth and throat of a test person |
KR102304370B1 (en) * | 2020-09-18 | 2021-09-24 | 동국대학교 산학협력단 | Apparatus and method of analyzing status and change of wound area based on deep learning |
CN112155553B (en) * | 2020-09-27 | 2023-05-23 | 甘肃省人民医院 | Wound surface evaluation system and method based on structured light 3D measurement |
CN112151177B (en) * | 2020-09-27 | 2023-12-15 | 甘肃省人民医院 | Evaluation management system and method for chronic wound surface |
EP3979258A1 (en) * | 2020-10-05 | 2022-04-06 | Hill-Rom Services, Inc. | Wound healing analysis and tracking |
US11908154B2 (en) * | 2021-02-04 | 2024-02-20 | Fibonacci Phyllotaxis Inc. | System and method for evaluating tumor stability |
KR102550631B1 (en) * | 2021-03-16 | 2023-07-03 | (주)파인헬스케어 | Apparatus for providing evaluation of bedsore stages and treatment recommendations using artificial intelligence and operation method thereof |
KR102540755B1 (en) * | 2021-04-30 | 2023-06-07 | 성균관대학교산학협력단 | Method of estimating hemoglobin concentration using skin image, or health information and body information, and hemoglobin concentration estimating device performing method |
WO2022248964A1 (en) * | 2021-05-28 | 2022-12-01 | Kci Manufacturing Unlimited Company | Method to detect and measure a wound site on a mobile device |
TWI801311B (en) * | 2021-09-30 | 2023-05-01 | 賴飛羆 | Method and system for analyzing image of chronic wound by deep learning model |
EP4202946A1 (en) * | 2021-12-21 | 2023-06-28 | Bull SAS | Method and system for tracking the evolution of a wound |
US20230255492A1 (en) * | 2022-02-13 | 2023-08-17 | National Cheng Kung University | Wound analyzing system and method |
EP4282330A1 (en) * | 2022-05-25 | 2023-11-29 | AI Labs Group, S.L. | Ai marker device, method for standardising an image using the ai marker device and method for grading the severity of a skin disease using both |
CN117442190B (en) * | 2023-12-21 | 2024-04-02 | 山东第一医科大学附属省立医院(山东省立医院) | Automatic wound surface measurement method and system based on target detection |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5301105A (en) * | 1991-04-08 | 1994-04-05 | Desmond D. Cummings | All care health management system |
US7450783B2 (en) * | 2003-09-12 | 2008-11-11 | Biopticon Corporation | Methods and systems for measuring the size and volume of features on live tissues |
US20100203135A1 (en) * | 2005-03-14 | 2010-08-12 | Paul Kemp | Skin Equivalent Culture |
US20130053677A1 (en) * | 2009-11-09 | 2013-02-28 | Jeffrey E. Schoenfeld | System and method for wound care management based on a three dimensional image of a foot |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5967979A (en) * | 1995-11-14 | 1999-10-19 | Verg, Inc. | Method and apparatus for photogrammetric assessment of biological tissue |
US6081612A (en) * | 1997-02-28 | 2000-06-27 | Electro Optical Sciences Inc. | Systems and methods for the multispectral imaging and characterization of skin tissue |
US6208749B1 (en) * | 1997-02-28 | 2001-03-27 | Electro-Optical Sciences, Inc. | Systems and methods for the multispectral imaging and characterization of skin tissue |
US6081739A (en) * | 1998-05-21 | 2000-06-27 | Lemchen; Marc S. | Scanning device or methodology to produce an image incorporating correlated superficial, three dimensional surface and x-ray images and measurements of an object |
CN1907225B (en) * | 2005-08-05 | 2011-02-02 | Ge医疗系统环球技术有限公司 | Process and apparatus for dividing intracerebral hemorrhage injury |
DE102006013476B4 (en) * | 2006-03-23 | 2012-11-15 | Siemens Ag | Method for positionally accurate representation of tissue regions of interest |
US20070276309A1 (en) * | 2006-05-12 | 2007-11-29 | Kci Licensing, Inc. | Systems and methods for wound area management |
US8063915B2 (en) * | 2006-06-01 | 2011-11-22 | Simquest Llc | Method and apparatus for collecting and analyzing surface wound data |
US20080045807A1 (en) | 2006-06-09 | 2008-02-21 | Psota Eric T | System and methods for evaluating and monitoring wounds |
US8000777B2 (en) | 2006-09-19 | 2011-08-16 | Kci Licensing, Inc. | System and method for tracking healing progress of tissue |
US8213695B2 (en) * | 2007-03-07 | 2012-07-03 | University Of Houston | Device and software for screening the skin |
WO2008130906A1 (en) | 2007-04-17 | 2008-10-30 | Mikos, Ltd. | System and method for using three dimensional infrared imaging to provide psychological profiles of individuals |
US8155405B2 (en) | 2007-04-20 | 2012-04-10 | Siemens Aktiengsellschaft | System and method for lesion segmentation in whole body magnetic resonance images |
EP2239675A1 (en) | 2009-04-07 | 2010-10-13 | BIOCRATES Life Sciences AG | Method for in vitro diagnosing a complex disease |
US20120206587A1 (en) * | 2009-12-04 | 2012-08-16 | Orscan Technologies Ltd | System and method for scanning a human body |
CN106498076A (en) * | 2010-05-11 | 2017-03-15 | 威拉赛特公司 | For diagnosing the method and composition of symptom |
US20120078113A1 (en) * | 2010-09-28 | 2012-03-29 | Point of Contact, LLC | Convergent parameter instrument |
DE102011006398A1 (en) * | 2011-03-30 | 2012-10-04 | Siemens Aktiengesellschaft | Method, image processing device and computer tomography system for determining a proportion of necrotic tissue and computer program product with program code sections for determining a proportion of necrotic tissue |
CN102930552B (en) * | 2012-11-22 | 2015-03-18 | 北京理工大学 | Brain tumor automatic extraction method based on symmetrically structured subtraction |
-
2014
- 2014-09-19 AU AU2014357720A patent/AU2014357720A1/en not_active Abandoned
- 2014-09-19 JP JP2016536570A patent/JP6595474B2/en active Active
- 2014-09-19 US US14/491,794 patent/US11337612B2/en active Active
- 2014-09-19 KR KR1020167017586A patent/KR102317478B1/en active IP Right Grant
- 2014-09-19 CA CA2930184A patent/CA2930184A1/en active Granted
- 2014-09-19 WO PCT/US2014/056587 patent/WO2015084462A1/en active Application Filing
- 2014-09-19 CN CN201480070811.7A patent/CN106164929B/en active Active
- 2014-09-19 EP EP14868672.8A patent/EP3077956B1/en active Active
-
2016
- 2016-03-28 US US15/083,081 patent/US20160206205A1/en not_active Abandoned
-
2020
- 2020-05-13 AU AU2020203128A patent/AU2020203128A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5301105A (en) * | 1991-04-08 | 1994-04-05 | Desmond D. Cummings | All care health management system |
US7450783B2 (en) * | 2003-09-12 | 2008-11-11 | Biopticon Corporation | Methods and systems for measuring the size and volume of features on live tissues |
US20100203135A1 (en) * | 2005-03-14 | 2010-08-12 | Paul Kemp | Skin Equivalent Culture |
US20130053677A1 (en) * | 2009-11-09 | 2013-02-28 | Jeffrey E. Schoenfeld | System and method for wound care management based on a three dimensional image of a foot |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9955910B2 (en) | 2005-10-14 | 2018-05-01 | Aranz Healthcare Limited | Method of monitoring a surface feature and apparatus therefor |
US10827970B2 (en) | 2005-10-14 | 2020-11-10 | Aranz Healthcare Limited | Method of monitoring a surface feature and apparatus therefor |
US11850025B2 (en) | 2011-11-28 | 2023-12-26 | Aranz Healthcare Limited | Handheld skin measuring or monitoring device |
US10874302B2 (en) | 2011-11-28 | 2020-12-29 | Aranz Healthcare Limited | Handheld skin measuring or monitoring device |
US9704265B2 (en) * | 2014-12-19 | 2017-07-11 | SZ DJI Technology Co., Ltd. | Optical-flow imaging system and method using ultrasonic depth sensing |
US11923073B2 (en) | 2016-05-02 | 2024-03-05 | Aranz Healthcare Limited | Automatically assessing an anatomical surface feature and securely managing information related to the same |
US10013527B2 (en) | 2016-05-02 | 2018-07-03 | Aranz Healthcare Limited | Automatically assessing an anatomical surface feature and securely managing information related to the same |
US11250945B2 (en) | 2016-05-02 | 2022-02-15 | Aranz Healthcare Limited | Automatically assessing an anatomical surface feature and securely managing information related to the same |
US10777317B2 (en) | 2016-05-02 | 2020-09-15 | Aranz Healthcare Limited | Automatically assessing an anatomical surface feature and securely managing information related to the same |
US11116407B2 (en) | 2016-11-17 | 2021-09-14 | Aranz Healthcare Limited | Anatomical surface assessment methods, devices and systems |
US11903723B2 (en) * | 2017-04-04 | 2024-02-20 | Aranz Healthcare Limited | Anatomical surface assessment methods, devices and systems |
WO2018185560A3 (en) * | 2017-04-04 | 2019-02-28 | Aranz Healthcare Limited | Anatomical surface assessment methods, devices and systems |
CN107071071A (en) * | 2017-06-15 | 2017-08-18 | 深圳市创艺工业技术有限公司 | A kind of medical treatment & health system based on mobile terminal and cloud computing |
WO2019241288A1 (en) * | 2018-06-11 | 2019-12-19 | The General Hospital Corporation | Skin construct transfer system and method |
EP3899988A4 (en) * | 2018-12-18 | 2022-09-14 | Mölnlycke Health Care AB | A method for selecting a wound product for a patient |
US11308618B2 (en) | 2019-04-14 | 2022-04-19 | Holovisions LLC | Healthy-Selfie(TM): a portable phone-moving device for telemedicine imaging using a mobile phone |
US20210142888A1 (en) * | 2019-11-11 | 2021-05-13 | Healthy.Io Ltd. | Image processing systems and methods for caring for skin features |
US11961608B2 (en) * | 2019-11-11 | 2024-04-16 | Healthy.Io Ltd. | Image processing systems and methods for caring for skin features |
US20210137453A1 (en) * | 2019-11-12 | 2021-05-13 | Md Ortho Systems Llc | Systems and methods for self-guided injury treatment |
WO2021155010A1 (en) * | 2020-01-28 | 2021-08-05 | Zebra Technologies Corporation | System and method for lesion monitoring |
WO2022106672A1 (en) * | 2020-11-23 | 2022-05-27 | Roche Diagnostics Gmbh | Method and devices for point-of-care applications |
US20220211438A1 (en) * | 2021-01-04 | 2022-07-07 | Healthy.Io Ltd | Rearranging and selecting frames of medical videos |
US11551807B2 (en) * | 2021-01-04 | 2023-01-10 | Healthy.Io Ltd | Rearranging and selecting frames of medical videos |
Also Published As
Publication number | Publication date |
---|---|
CN106164929A (en) | 2016-11-23 |
CA2930184A1 (en) | 2015-06-11 |
KR20160092013A (en) | 2016-08-03 |
AU2014357720A1 (en) | 2016-05-26 |
US20150150457A1 (en) | 2015-06-04 |
WO2015084462A1 (en) | 2015-06-11 |
KR102317478B1 (en) | 2021-10-25 |
EP3077956A4 (en) | 2017-04-05 |
US11337612B2 (en) | 2022-05-24 |
CN106164929B (en) | 2021-02-19 |
AU2020203128A1 (en) | 2020-06-04 |
JP6595474B2 (en) | 2019-10-23 |
EP3077956B1 (en) | 2023-10-11 |
JP2017504370A (en) | 2017-02-09 |
EP3077956A1 (en) | 2016-10-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11337612B2 (en) | Method and system for wound assessment and management | |
US11783480B2 (en) | Semi-automated system for real-time wound image segmentation and photogrammetry on a mobile platform | |
US20180279943A1 (en) | System and method for the analysis and transmission of data, images and video relating to mammalian skin damage conditions | |
Lucas et al. | Wound size imaging: ready for smart assessment and monitoring | |
US11749399B2 (en) | Cross section views of wounds | |
Sirazitdinova et al. | System design for 3D wound imaging using low-cost mobile devices | |
Pires et al. | Wound Area Assessment using Mobile Application. | |
Casas et al. | Imaging technologies applied to chronic wounds: a survey | |
US11195281B1 (en) | Imaging system and method for assessing wounds | |
Zenteno et al. | Volumetric monitoring of cutaneous leishmaniasis ulcers: can camera be as accurate as laser scanner? | |
CA2930184C (en) | Method and system for wound assessment and management | |
Lucas et al. | Optical imaging technology for wound assessment: a state of the art | |
Zenteno et al. | Volume estimation of skin ulcers: Can cameras be as accurate as laser scanners? | |
US11538157B1 (en) | Imaging system and method for assessing wounds | |
Friesen et al. | An mHealth technology for chronic wound management | |
US11967412B2 (en) | Selective reaction to failure to complete medical action | |
AU2021205077B2 (en) | Dermal image capture | |
KR20230024234A (en) | Method and apparatus for remote skin disease diagnosing using augmented and virtual reality | |
Dixit et al. | Systematic foot ulcers analysis system for diabetes patient
Chin | Investigating a systems approach to the predictive modeling and analysis of time-varying wound progression and healing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |