WO2005104656A2 - An apparatus for combining data from a stereoscopic digital source with additional digital imaging devices - Google Patents

An apparatus for combining data from a stereoscopic digital source with additional digital imaging devices

Info

Publication number
WO2005104656A2
WO2005104656A2 (PCT/IL2005/000449)
Authority
WO
WIPO (PCT)
Prior art keywords
stream
real-time
input stream
decoded
Prior art date
Application number
PCT/IL2005/000449
Other languages
French (fr)
Other versions
WO2005104656A3 (en)
Inventor
Michael Bendkowski
Ehud Katznelson
Original Assignee
Surgivision Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Surgivision Ltd. filed Critical Surgivision Ltd.
Priority to JP2007510236A priority Critical patent/JP2007536953A/en
Priority to EP05737670A priority patent/EP1751989A2/en
Priority to AU2005237325A priority patent/AU2005237325A1/en
Priority to CA002564867A priority patent/CA2564867A1/en
Publication of WO2005104656A2 publication Critical patent/WO2005104656A2/en
Publication of WO2005104656A3 publication Critical patent/WO2005104656A3/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/74 Circuits for processing colour signals for obtaining special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N 5/445 Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N 5/45 Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen

Definitions

  • the present invention relates to dynamic imaging during an observation and manipulation session in general, and to an apparatus that combines images from a surgery navigation system with images from an electronic imaging system, such as a stereoscopic digital microscope, in particular.
  • a navigation system being aware of the area of the manipulated or observed objects to be displayed, and supplying images or streams of images of the relevant area.
  • a navigation system typically tracks the location of a predetermined object, such as a wand, the digital microscope, an endoscope, fixed points on the body of the patient or the like, and presents pre-acquired volumetric information depicting the relevant area.
  • the relevant area depends on the tracked equipment, be it the part of the objects captured by the digital microscope, which is determined by the location of the microscope and its current zoom and magnification parameters, the area of a marker attached to the object being manipulated upon and the like.
  • the user can also be interested in viewing other areas, for example volumetric information depicting the substance underneath the surface exposed to the user.
  • the navigation system extracts relevant volumetric information from pre-acquired data, such as MRI or CT scans, 2D images such as US, projection images such as digital X-ray, or the like and presents it on a screen visible to the user.
  • the endoscope, for another example, presents the real-time viewed image on the same screen as the navigation system or on yet another screen.
  • Additional visual information can arrive directly from medical imaging devices such as CT or MRI, or from image archives, or the like and be presented on yet another screen.
  • the outcome is that the user has a great deal of relevant visual information; some of it changes in real-time, while other parts are acquired prior to the operation and are therefore constant, but the desired views and information vary during the operation. Therefore, the available information is rather distributed.
  • In order to view, for example, the volumetric data depicting the substance underneath the surface viewed by the user, as retrieved by the navigation system, the user has to do the following: lift his or her head from the optical microscope, or alternatively, remove the head mounted display (HMD) used to view the area operated upon with the digital microscope.
  • the user should also make sure he or she is not injuring the patient with held tools, most likely by lifting his or her hands and then lowering them back. All in all, looking at a source of information other than the HMD during an operation causes a substantial distraction from the course of the operation, and therefore implies sub-optimal use of these sources.
  • the present invention relates to a method and apparatus for combining data from a stereoscopic digital source with additional digital imaging devices, and presenting the combined data.
  • One aspect of the present invention regards a method for generating in real-time one or more combination streams and converting in real-time the one or more combination streams for presentation during an observation and manipulation session, said one or more combination streams combining one or more first real-time input streams of images taken by a digital source during an observation and manipulation session with one or more second streams of images, the method comprises the steps of: receiving the first real-time input stream from a digital source; decoding the first real-time input stream to generate a decoded first real-time input stream; digitizing the decoded first real-time input stream to generate a digitized first real-time input stream; receiving the one or more second input streams; decoding the one or more second input streams to generate one or more decoded second input streams; digitizing the one or more decoded second input streams to generate one or more second digitized input streams; receiving one or more control commands from a user; de-interlacing in real-time the one or more digitized first real-time input streams; combining in real-time at least one image from the one or more decoded first real-time input stream with one or more images of the one or more decoded second input streams to generate one or more output streams; and encoding in real-time the one or more output streams.
  • the first one or more real-time input streams or the one or more combination streams can comprise stereoscopic images.
  • the encoding of the output stream can generate an S-VGA format stream, or an S-Video format stream.
  • the one or more first real-time input stream can be in S-Video format.
  • the one or more control commands can be in USB format.
  • the one or more first real-time input streams can be in RGB format.
  • the one or more second input streams are in S-Video format or in S-VGA format.
  • the one or more first real-time input streams can be supplied by a left channel of the digital source or by a right channel of the digital source.
  • the method can further comprise the steps of: routing the one or more first input streams and the one or more second input streams to generate one or more output channels according to the one or more control commands; and executing the steps of decoding, digitizing, de-interlacing, combining and encoding separately for each output stream.
  • the routing can be performed separately for S-Video signals and for RGB signals.
  • the one or more output streams can be presented as a left channel or as a right channel of a main head mounted display or of a rotated head mounted display.
  • the one or more output streams can be in S-Video format.
  • the one or more output streams can be used for recording.
  • the combining step can replace one or more parts of one or more images from the one or more decoded first real-time input streams with one or more parts of one or more images from the one or more decoded second input streams.
  • the combining step can replace one or more parts of one or more images from the one or more decoded second input streams with one or more parts of one or more images from the one or more decoded first real-time input streams.
  • the combining step can create one or more combination images, said combination images comprise one or more first images from the one or more decoded first input streams, summed with one or more second images from the one or more decoded second input streams, where the one or more first images and the one or more second images are presented with a positive degree of transparency.
  • the method can further comprise the steps of: rotating the one or more first decoded real-time input streams to generate one or more rotated streams; encoding the one or more rotated streams into S-Video format; routing the one or more rotated streams and the one or more combination streams; and encoding the output of the routing step for generating one or more main S-Video output streams.
  • Another aspect of the disclosed invention relates to an apparatus for generating in real-time one or more combination streams and converting in real-time the one or more combination streams for presentation during an observation and manipulation session, said one or more combination streams combining one or more first real-time input streams of images taken by one or more digital sources during an observation and manipulation session with one or more second streams of images
  • the apparatus comprises: a control component for routing the one or more first and the one or more second input streams and the one or more combination streams according to one or more user's commands; a local bus for transferring the one or more commands and data among components of the apparatus; and one or more channel handling components, the one or more channel handling components comprising: a first decoder for generating one or more first decoded input streams; a first analog to digital converter for converting the one or more first decoded input streams; a second decoder for generating one or more second decoded input streams; a second analog to digital converter for converting the one or more second decoded input streams; a de-interlacer component for de-interlacing in real-time the one or more first input streams; a combining component for generating the one or more combination streams by combining the one or more first decoded input streams with the one or more second decoded input streams; and an encoder for encoding in real-time the one or more combination streams.
  • the one or more first real-time input streams or the one or more combination streams can comprise stereoscopic images.
  • the one or more digital sources can be a digital microscope or an add-on camera on an optical microscope.
  • the one or more encoders can generate an S-VGA format stream or an S-Video format stream.
  • the one or more first real-time input stream can be in S-Video format or in RGB format.
  • the one or more second input stream can be in S-Video format or in S-VGA format.
  • the one or more first real-time input stream can be supplied by a left channel or by a right channel of the one or more digital sources.
  • the apparatus can further comprise one or more video matrices for routing the one or more first real-time input streams and the one or more second input streams to generate one or more output streams according to the one or more commands.
  • the one or more video matrices can be a 16*8 cross point switch, or an RGB 8*8 cross point switch.
  • the one or more output streams can be presented as a left channel or as a right channel of a main head mounted display or of a rotated head mounted display.
  • the one or more output streams can be in S-Video format.
  • the one or more output streams can be used for recording.
  • the combining step can replace one or more parts of one or more images from the one or more decoded first real-time input streams with one or more parts of one or more images from the one or more decoded second input streams.
  • the combining step can replace one or more parts of one or more images from the one or more decoded second input streams with one or more parts of one or more images from the one or more decoded first real-time input streams.
  • the combining component can output one or more combination images, the one or more combination images comprise one or more first images from the one or more decoded first input streams summed with one or more second images from the one or more decoded second input streams, where the one or more first images and the one or more second images are presented with a positive degree of transparency.
  • the apparatus can further comprise: a rotating component for rotating the one or more first decoded real-time input streams to generate one or more rotated streams; an encoder for encoding the one or more rotated streams into S-Video format; a router for routing the one or more rotated streams and the one or more combination streams; and an encoder for encoding the one or more rotated streams and the one or more combination streams for generating one or more main S-Video output streams.
  • Yet another aspect of the present invention regards an apparatus for capturing and presenting during an observation and manipulation session real-time and off-line visual information
  • the apparatus comprises: one or more digital sources supplying one or more real-time streams of images; one or more additional sources supplying one or more streams of images; a video interface unit for generating one or more combination streams by combining in real-time the one or more real-time streams of images with the one or more streams of images supplied by the one or more additional sources; one or more first head mounted display devices; and a system management device for receiving user input and one or more commands and transferring said input and one or more commands to the video interface unit.
  • the one or more digital sources can be a digital microscope or an add-on camera on an optical microscope.
  • the apparatus can further comprise a display device for displaying streams in S-Video format or a recording device for recording streams in S-Video format.
  • the apparatus can further comprise a second head mounted display device for displaying the same one or more combination streams as the first head mounted display, wherein each image is rotated by a predefined angle.
  • the one or more additional sources of streams of images can be a navigation and visualization system.
  • the one or more additional sources can be a CT scanner or an MRI scanner or a picture archive.
  • Fig. 1 is a schematic illustration of an exemplary environment in which the proposed invention is used
  • Fig. 2 is a schematic illustration of the video interface unit
  • Figs. 3 and 4 show a block diagram of the first and the second channel boards.
  • the apparatus presents real-time images taken by a digital source, such as a digital microscope or an add-on system added to a conventional microscope or any other source of stereoscopic video images, combined with visual presentations of volumetric data acquired prior to the operation by medical scanners and retrieved by a navigation system or with images retrieved from an archive.
  • the presented images are complementary, i.e., depict the same area or related areas. For example, while the digital microscope shows the surface exposed to the surgeon, the navigation system can show the substance under the surface, which the surgeon cannot see.
  • the apparatus also constructs the updated model of the surface, as was changed by the surgeon removing substance, and retrieves the volumetric information matching the newly created surface.
  • Another aspect of the disclosed invention relates to changing the color-scheme of the presented image.
  • the image captured by the optical microscope or by the digital microscope is presented with the colors as captured.
  • certain types of tumors are better distinguished from their environment if a tumor-type-specific color scheme is used. Once the surgeon recognizes the tumor type, the correct color scheme can be used and applied to the digital image, which will present the tumor in a fashion that will help the surgeon distinguish it from the surrounding tissues (an illustrative color-mapping sketch follows this list).
  • Fig. 1 shows an exemplary environment in which the proposed apparatus and associated methods are implemented.
  • the core of the system is a Video Interface Unit (VIU) 104.
  • Unit 104 manipulates the images arriving from the various sources and sends them to the various presentation devices.
  • VIU 104 receives the streams and images from the various sources, processes and combines them, and creates the output streams.
  • the input for VIU 104 comes from one or more of the following components: a digital source 108, preferably the camera module of a digital microscope or two add-on cameras added to a conventional microscope, for example 10 Systems manufactured by Surgivision, which sends the signals in a streaming format, such as S-Video or RGB, along communication line 112.
  • the output of digital source 108 comprises a stream of real-time images of the objects captured by the digital microscope.
  • the system further comprises real-time imaging tools 124 such as an endoscope, which send their signals in S-Video format as well, along communication line 128.
  • Real time imaging tools 124 also supply real-time images of the objects undergoing the observation and manipulation session.
  • Navigation or other visualization system 116, such as Treon manufactured by Medtronic, sends the images in S-VGA format along communication line 120.
  • the images supplied by navigation or visualization system 116 are generally taken prior to or during the session, and are typically a few days to a few seconds old.
  • VIU 104 receives control signals and data from system computer 160, being the system management device. Additionally, system computer 160 transfers to VIU 104 image files from off-line imaging tools 133 such as CT or MRI scanners, and additional images from archive 132.
  • the data is received by any appropriate protocol, such as, for a non-limiting example, a USB format.
  • the control signals include, for a non-limiting example, an indication of which sources of images the user would like to use, the way he or she would like to see the pictures of the various sources, and other relevant parameters.
  • the image files provided by picture archives, and additional visual information, such as graphic elements related to the images, are transferred in the appropriate format.
  • VIU 104 outputs the images to one or more displays, of one or more types. One or multiple devices of each type can be used simultaneously.
  • One type of display is an HMD device 136, for example Callisto manufactured by Surgivision. This is the device worn by the head surgeon, displaying the images as captured by the digital microscope, possibly fused or blended with other images.
  • Another type of display is a rotated HMD device 144, possibly worn by another staff member standing on the opposite side of the person being operated on, relative to the digital microscope.
  • Head mounted displays 136 and 144 receive the stream of images in S-Video format.
  • a recording/displaying device 152 such as DVD-R manufactured by Philips
  • the recording/display device receives the stream in S-VGA format.
  • System computer 160 takes input from the user as to which sources of images should be presented, which presentation devices should be used, and in what presentation mode. The user's commands are transferred to VIU 104 in the appropriate format.
  • VIU 104 receives two sets of streams and images, one from the left (first) channel of each input source and one from the right (second) channel of each input source. For some output devices, VIU 104 also generates two streams. In a non-limiting manner, they are referred to as the left (first) channel and the right (second) channel. It is not necessary that the images of the first output channel are the products of images and streams of the first input channel. Both output streams can contain the same or different visual data from one or more of the input channels. Referring now to Fig. 2, showing the internal structure of VIU 104.
  • VIU 104 comprises a panel board 164, a first channel board 168 and a second channel board 172.
  • Local bus 173 connects the three boards.
  • Panel board 164 comprises an interface component such as a USB interface component, receiving the input in the appropriate format.
  • the input includes control signals about the preferred settings for the environment, image files from archives and additional information related to the images.
  • the image information is not related to a certain side or a certain board; therefore, the apparatus comprises a single panel board.
  • the first channel board produces the output for the first channel of the output devices and the second channel board produces the output for the second channel of the output devices. Referring now to Figs. 3 and 4, describing the internal structure of the first and second channel boards.
  • a line 400 separates the components of the first channel board, which are shown above the line, from the components of the second channel board shown under the line.
  • the first and the second channel boards are identical, so that their components are a replica of each other.
  • This replication excludes video matrices 208 and 216 detailed below which are common to both channels, and therefore only one instance of each of them is used in the system.
  • a Main1 first input 176 to the system is the real-time stream arriving from the first channel of the camera module of the digital microscope.
  • Main1 first input 176 is in S-Video format.
  • An aux1 first input 180 optionally carries another real-time stream in S-Video format.
  • this aux1 is generated by real-time viewing tool 124 of Fig. 1, such as an endoscope. If the real-time viewing tool is stereoscopic, aux1 first input 180 brings in the first stream, while aux1 second 280 brings in the second stream; otherwise both aux1 first 180 and aux1 second 280 bring the same signal.
  • An aux2 first input 184 introduces input signals in S-VGA format. This input comes, for example, from navigation or visualization system 116 of Fig. 1, or from off-line imaging tools 133 of Fig. 1.
  • the S-VGA signal undergoes conversion to S-Video format in the VGA to NTSC/PAL converter 200.
  • a main2 first input 188 introduces the first channel from a digital microscope or from add-on cameras added to an optical microscope, in the case the digital microscope outputs an RGB signal.
  • Main1 first input 176, aux1 first input 180 and aux2 first input 184, together with the corresponding second inputs, namely Main1 second input 276, aux1 second input 280 and aux2 second input 284, together with signal 189, enter a first video matrix 208, which is preferably but not limited to a 16*8 cross point switch.
  • First video matrix 208 routes the input channels to the output channels according to the instructions coming from system computer 160.
  • since first video matrix 208 receives both the first channel and the second channel inputs of all the images and stream sources in the apparatus, it can mix them in any way the user sees fit, for example, choose images or streams of the desired sources, and pass all first channel input to first channel output and all second channel input to second channel output. Another example is to pass the first channel input to both outputs, or any other combination. People skilled in the art will appreciate the possibilities of combining multiple sources and two or more sets of inputs.
  • Main2 first input 188 and main2 second input 288, carrying signals in RGB format are routed by second video matrix 216, which is preferably but not limited to an RGB 8*8 cross point switch. The RGB signals are routed by second video matrix 216 rather than by first video matrix 208 due to the different formats.
  • similarly to first video matrix 208, second video matrix 216 routes the first and the second RGB signals to the first and the second outputs according to the user's instructions, as received through signal 189. All inputs and all outputs of the switches are analog.
  • first video matrix 208 outputs two signals.
  • the first output is a first S-Video signal 213, containing the data from the digital microscope.
  • Signal 213 undergoes NTSC/PAL decoding, analog to digital (A/D) conversion, and compression into the YUV 4:2:2 format in the main NTSC/PAL decoder 220 and becomes main first S-Video signal 221.
  • the second output of first video matrix 208 is the first s-video2 214, which undergoes NTSC/PAL decoding, A/D conversion, and compression into the YUV 4:2:2 format in the secondary NTSC/PAL decoder 224 and becomes the secondary first YUV 4:2:2 signal 235.
  • the output of second video matrix 216 is an RGB first input 215, containing the signal of the alternative digital microscope.
  • NTSC/PAL decoders 220 and 224 extract the audio, video and synchronization components of input signals 213 and 214, respectively, and the A/D converter converts the analog audio and video signals to digital.
  • Main first RGB signal 234 enters a real-time de-interlacer rotator chip 232, such as Matisse, manufactured by OPLUS, Yokneam, Israel.
  • De-interlacer rotator chip 232 converts the signal from interlaced format into progressive (i.e., line after line) format and rotates the image if instructed to do so by the USB signal.
  • Chip 232 outputs a rotated first progressive RGB signal 248 at 50 Hz, which is the input to an encoder 264.
  • the output of encoder 264 is the rotated S-VGA first signal, directed to the first channel of the rotated HMD.
  • De-interlacer and rotator chip 232 also outputs a main first progressive RGB signal 240 at 50 Hz.
  • Signal 240 is non-interlaced and non-rotated.
  • Signal 240, together with a secondary first YUV 4:2:2 signal 235, are sent to a real-time Picture-in-Picture (PIP) or blending chip 244, such as Rembrandt manufactured by OPLUS, Yokneam, Israel.
  • Chip 244 combines signals 240 and 235 according to the instructions transferred by the USB.
  • the combination takes one of a number of forms: a PIP presentation, where the images of one signal are presented within a rectangle occupying part of the images of the other signal.
  • Another possibility is the presentation of both streams on the same display real estate, where the images of each stream have a certain degree of transparency, so that both streams are viewed concurrently.
  • Persons skilled in the art will appreciate the fact that other presentation modes combining images from both streams can be generated as well.
  • the output of chip 244 is an out first progressive RGB signal 252 at 85 Hz.
  • Main first progressive RGB signal 240 and out first progressive signal 252 are input into a switch 256, which routes the input signals to the outputs as instructed by the USB input commands.
  • Output 260, which is directed to the display and recording device, needs to be in S-Video format, and is therefore directed into encoder 269 that converts the progressive signal to an interlaced signal.
  • Output 261 of switch 256, which needs to be directed to the main S-VGA display, is encoded in encoder 268 and directed to the first channel of the main HMD of the apparatus.
  • any one of the outputs of the system, namely rotated S-VGA first 275, main S-VGA first 277 and main S-Video first 279, can be directed to multiple presentation devices, provided the devices can display the relevant format.
  • the second channel board is identical to the first channel board. Referring now to the bottom parts of Figs. 3 and 4, describing the second channel board of the VIU.
  • the Main1 second input 276 to the system is the real-time stream arriving from the second channel of the camera module of the digital microscope.
  • Main1 second input 276 is in S-Video format.
  • Aux1 second input 280 optionally carries another real-time stream in S-Video format. For a non-limiting example, this input is generated by real-time imaging tool 124 of Fig. 1, such as an endoscope.
  • aux1 first input 180 brings in the first stream
  • aux1 second input 280 brings in the second stream
  • both aux1 first 180 and aux1 second 280 bring the same signal.
  • Aux2 second input 284 introduces input signals in S-VGA format. This input comes, for example, from navigation or visualization system 116 of Fig. 1, or from off-line imaging tool 133 of Fig. 1.
  • the S-VGA signal undergoes conversion to S-Video format in a VGA to NTSC/PAL converter 300.
  • Main2 second input 288 introduces the second channel of the digital microscope, in the case the digital microscope outputs an RGB signal.
  • first video matrix 208 and second video matrix 216 route the first and second channels of the multiple sources of images and streams.
  • the first output is a second S-Video signal 313, containing the data from the digital microscope.
  • Signal 313 undergoes NTSC/PAL decoding, analog to digital (A/D) conversion, and compression into the YUV 4:2:2 format in main NTSC/PAL decoder 320 and becomes a main second S-Video signal 321.
  • the second output of first video matrix 208 is a second s-video2 314, which undergoes NTSC/PAL decoding, A/D conversion, and compression into the YUV 4:2:2 format in secondary NTSC/PAL decoder 324 and becomes secondary second YUV 4:2:2 signal 335.
  • the output of second video matrix 216 is RGB second input 315, containing the signal of the alternative digital microscope.
  • NTSC/PAL decoders 320 and 324 extract the audio, video and synchronization components of input signals 313 and 314, respectively, and the A/D conversion converts the analog audio and video signals to digital.
  • Second RGB input 315, if present, undergoes the A/D conversion in RGB A/D converter 328, which outputs a second RGB signal 333.
  • main second RGB signal 334 enters real-time de-interlacer rotator chip 332, such as Matisse, manufactured by OPLUS, Yokneam, Israel.
  • De-interlacer rotator chip 332 converts the signal from interlaced format into progressive (i.e., line after line) format and rotates the image if instructed to do so by the USB signal.
  • Chip 332 outputs a rotated second progressive RGB signal 348 at 50 Hz, which is the input to encoder 364.
  • the output of encoder 364 is the rotated S-VGA second signal, directed to the second channel of the rotated HMD.
  • De-interlacer and rotator chip 332 also outputs a main second progressive RGB signal 340 at 50 Hz.
  • Signal 340 is non-interlaced and non-rotated.
  • Signal 340, together with secondary second YUV 4:2:2 signal 335, are sent to a real-time Picture-in-Picture (PIP) or blending chip 344, such as Rembrandt manufactured by OPLUS, Yokneam, Israel.
  • the stream can take the same forms as discussed for the first channel, i.e., a PIP presentation or a blended, partially transparent presentation.
  • the output of chip 344 is an out second progressive RGB signal 352 at 85 Hz.
  • Main second progressive RGB signal 340 and out second progressive signal 352 are input into a switch 356, which routes the input signals to the outputs as instructed by the USB input commands.
  • Output 360, which is directed to the display and recording device, needs to be in S-Video format, and is therefore directed into encoder 369 that converts the progressive signal to an interlaced signal.
  • Output 361 of switch 356, which needs to be directed to the main S-VGA display, is encoded in encoder 368 and directed to the second channel of the main HMD of the apparatus.
  • the presented apparatus and method provides real-time display of the view field captured by a digital microscope.
  • the available display devices are a stereoscopic HMD device capable of presenting S-VGA streams, a stereoscopic rotated HMD device capable of presenting S-VGA streams for the benefit of a person viewing the view field from another point of view, and a computer or a recording device that supports S-Video format.
  • the captured stream is presented as is, or combined with images or streams of another source, such as a navigation and visualization system, a medical imaging device such as an MRI or a CT scanner, image archives and the like.
  • the combined presentation options include picture- in-picture presentation, blending of images of different sources presented on the same display where each picture can be partially transparent, stereoscopic, non- stereoscopic, or the like.
  • the sources of the presented streams and images, the desired displays and display modes are determined by the user, and presented to the system in the appropriate format.
  • Persons skilled in the art will appreciate that the present invention can also be used with different formats of streams and images, and additional types of display. It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims which follow.
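The color-scheme aspect mentioned above can be illustrated, for orientation only, by a small Python/NumPy sketch: a per-tumor-type lookup table maps each pixel's luminance to a replacement color. The lookup table values, the tumor-type key and the function name are invented for the example and are not taken from the patent; in the apparatus any such mapping would be applied to the decoded digital image before display.

```python
import numpy as np

# Hypothetical tumor-type-specific lookup table: 256 luminance levels,
# each mapped to an RGB triple meant to make the tissue stand out.
# The values below are placeholders, not clinically derived.
EXAMPLE_LUTS = {
    "example_tumor_type": np.stack(
        [np.arange(256),            # red ramps up with brightness
         np.full(256, 64),          # green held constant
         255 - np.arange(256)],     # blue ramps down
        axis=1).astype(np.uint8),
}

def apply_color_scheme(image_rgb: np.ndarray, tumor_type: str) -> np.ndarray:
    """Re-color a captured RGB image by mapping each pixel's luminance
    through the tumor-type-specific lookup table."""
    lut = EXAMPLE_LUTS[tumor_type]
    # integer luminance approximation (ITU-R BT.601 weights)
    luma = (0.299 * image_rgb[..., 0]
            + 0.587 * image_rgb[..., 1]
            + 0.114 * image_rgb[..., 2]).astype(np.uint8)
    return lut[luma]                # shape (H, W, 3) re-colored image
```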

Abstract

An apparatus and method for combining in real-time a first real-time input stream of images taken by a digital source (108), such as a digital microscope or an add-on camera for an optical microscope, during an observation and manipulation session (104), with a second stream of images. The combined stream is converted in real-time and presented on various display devices (136, 144 and 152) in various formats and display options.

Description

AN APPARATUS FOR COMBINING DATA FROM A STEREOSCOPIC DIGITAL SOURCE WITH ADDITIONAL DIGITAL IMAGING DEVICES
BACKGROUND OF THE INVENTION

FIELD OF THE INVENTION

The present invention relates to dynamic imaging during an observation and manipulation session in general, and to an apparatus that combines images from a surgery navigation system with images from an electronic imaging system, such as a stereoscopic digital microscope, in particular.
DISCUSSION OF THE RELATED ART

Environments for performing observation and manipulation sessions, such as surgeries in general and neurosurgeries in particular, are currently equipped with various types of imaging, capturing, and presentation equipment. For example, electro-optical capturing tools such as digital microscopes provide real-time,
variably magnified, high-resolution stereoscopic images, and present the captured images stereoscopically on a dedicated display, mounted on the head of a user, who is performing or participating in the observation and manipulation session. Another presentation option involves the use of a navigation system, being aware of the area of the manipulated or observed objects to be displayed, and supplying images or streams of images of the relevant area. For a non-limiting example, a navigation system typically tracks the location of a predetermined object, such as a wand, the digital microscope, an endoscope, fixed points on the body of the patient or the like, and presents pre-acquired volumetric information depicting the relevant area. The relevant area depends on the tracked equipment, be it the part of the objects captured by the digital microscope, which is determined by the location of the microscope and its current zoom and magnification parameters, the area of a marker attached to the object being manipulated upon and the like. The user can also be interested in viewing other areas, for example volumetric information depicting the substance underneath the surface exposed to the user. The navigation system extracts relevant volumetric information from pre-acquired data, such as MRI or CT scans, 2D images such as US, projection images such as digital X-ray, or the like, and presents it on a screen visible to the user. The endoscope, for another example, presents the real-time viewed image on the same screen as the navigation system or on yet another screen. Additional visual information can arrive directly from medical imaging devices such as CT or MRI, or from image archives, or the like, and be presented on yet another screen. The outcome is that the user has a great deal of relevant visual information; some of it changes in real-time, while other parts are acquired prior to the operation and are therefore constant, but the desired views and information vary during the operation. Therefore, the available information is rather distributed. In order to view, for example, the volumetric data depicting the substance underneath the surface viewed by the user, as retrieved by the navigation system, the user has to do the following: lift his or her head from the optical microscope, or alternatively, remove the head mounted display (HMD) used to view the area operated upon with the digital microscope if an HMD is used, lift his or her head, re-adjust his or her eyes to the zoom and light conditions, examine the presented pictures, lower the head back to the optical microscope or put on the HMD, and readjust to the light and zoom conditions of the optical microscope or the HMD. Throughout the abovementioned series of actions, the user should also make sure he or she is not injuring the patient with held tools, most likely by lifting his or her hands and then lowering them back. All in all, looking at a source of information other than the HMD during an operation causes a substantial distraction from the course of the operation, and therefore implies sub-optimal use of these sources. There is therefore a need in the art for a system that will enable a person performing an observation and manipulation session, such as a neurosurgeon, to view real-time stereoscopic visual information, such as from a digital microscope, fused or otherwise coordinated on the same display, with additional visual information from diverse sources.
SUMMARY OF THE PRESENT INVENTION

The present invention relates to a method and apparatus for combining data from a stereoscopic digital source with additional digital imaging devices, and presenting the combined data. One aspect of the present invention regards a method for generating in real-time one or more combination streams and converting in real-time the one or more combination streams for presentation during an observation and manipulation session, said one or more combination streams combining one or more first real-time input streams of images taken by a digital source during an observation and manipulation session with one or more second streams of images, the method comprises the steps of: receiving the first real-time input stream from a digital source; decoding the first real-time input stream to generate a decoded first real-time input stream; digitizing the decoded first real-time input stream to generate a digitized first real-time input stream; receiving the one or more second input streams; decoding the one or more second input streams to generate one or more decoded second input streams; digitizing the one or more decoded second input streams to generate one or more second digitized input streams; receiving one or more control commands from a user; de-interlacing in real-time the one or more digitized first real-time input streams; combining in real-time at least one image from the one or more decoded first real-time input stream with one or more images of the one or more decoded second input streams to generate one or more output streams; and encoding in real-time the one or more output streams. The first one or more real-time input streams or the one or more combination streams can comprise stereoscopic images. The encoding of the output stream can generate an S-VGA format stream, or an S-Video format stream. The one or more first real-time input streams can be in S-Video format. The one or more control commands can be in USB format. The one or more first real-time input streams can be in RGB format. The one or more second input streams are in S-Video format or in S-VGA format. The one or more first real-time input streams can be supplied by a left channel of the digital source or by a right channel of the digital source. The method can further comprise the steps of: routing the one or more first input streams and the one or more second input streams to generate one or more output channels according to the one or more control commands; and executing the steps of decoding, digitizing, de-interlacing, combining and encoding separately for each output stream. The routing can be performed separately for S-Video signals and for RGB signals. The one or more output streams can be presented as a left channel or as a right channel of a main head mounted display or of a rotated head mounted display. The one or more output streams can be in S-Video format. The one or more output streams can be used for recording. The combining step can replace one or more parts of one or more images from the one or more decoded first real-time input streams with one or more parts of one or more images from the one or more decoded second input streams. The combining step can replace one or more parts of one or more images from the one or more decoded second input streams with one or more parts of one or more images from the one or more decoded first real-time input streams.
The combining step can create one or more combination images, said combination images comprise one or more first images from the one or more decoded first input streams, summed with one or more second images from the one or more decoded second input streams, where the one or more first images and the one or more second images are presented with a positive degree of transparency. The method can further comprise the steps of: rotating the one or more first decoded real-time input streams to generate one or more rotated streams; encoding the one or more rotated streams into S-Video format; routing the one or more rotated streams and the one or more combination streams; and encoding the output of the routing step for generating one or more main S-Video output streams. Another aspect of the disclosed invention relates to an apparatus for generating in real-time one or more combination streams and converting in real-time the one or more combination streams for presentation during an observation and manipulation session, said one or more combination streams combining one or more first real-time input streams of images taken by one or more digital sources during an observation and manipulation session with one or more second streams of images, the apparatus comprises: a control component for routing the one or more first and the one or more second input streams and the one or more combination streams according to one or more user's commands; a local bus for transferring the one or more commands and data among components of the apparatus; and one or more channel handling components, the one or more channel handling components comprising: a first decoder for generating one or more first decoded input streams; a first analog to digital converter for converting the one or more first decoded input streams; a second decoder for generating one or more second decoded input streams; a second analog to digital converter for converting the one or more second decoded input streams; a de-interlacer component for de-interlacing in real-time the one or more first input streams; a combining component for generating the one or more combination streams by combining the one or more first decoded input streams with the one or more second decoded input streams; and an encoder for encoding in real-time the one or more combination streams. The one or more first real-time input streams or the one or more combination streams can comprise stereoscopic images. The one or more digital sources can be a digital microscope or an add-on camera on an optical microscope. The one or more encoders can generate an S-VGA format stream or an S-Video format stream. The one or more first real-time input streams can be in S-Video format or in RGB format. The one or more second input streams can be in S-Video format or in S-VGA format. The one or more first real-time input streams can be supplied by a left channel or by a right channel of the one or more digital sources. The apparatus can further comprise one or more video matrices for routing the one or more first real-time input streams and the one or more second input streams to generate one or more output streams according to the one or more commands. The one or more video matrices can be a 16*8 cross point switch, or an RGB 8*8 cross point switch. The one or more output streams can be presented as a left channel or as a right channel of a main head mounted display or of a rotated head mounted display. The one or more output streams can be in S-Video format.
The one or more output streams can be used for recording. The combining step can replace one or more parts of one or more images from the one or more decoded first real-time input streams with one or more parts of one or more images from the one or more decoded second input streams. The combining step can replace one or more parts of one or more images from the one or more decoded second input streams with one or more parts of one or more images from the one or more decoded first real-time input streams. The combining component can output one or more combination images, the one or more combination images comprise one or more first images from the one or more decoded first input streams summed with one or more second images from the one or more decoded second input streams, where the one or more first images and the one or more second images are presented with a positive degree of transparency. The apparatus can further comprise: a rotating component for rotating the one or more first decoded real-time input streams to generate one or more rotated streams; an encoder for encoding the one or more rotated streams into S-Video format; a router for routing the one or more rotated streams and the one or more combination streams; and an encoder for encoding the one or more rotated streams and the one or more combination streams for generating one or more main S-Video output streams.
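As an informal illustration of the per-channel ordering recited by the method and apparatus above, the following Python/NumPy sketch processes one frame of one channel: the real-time (first) frame is de-interlaced and then combined with a frame from the second stream. It assumes the decoding, digitizing and encoding steps have already been performed by the hardware decoders, A/D converters and encoders described later, so that frames are plain arrays; the function names are illustrative and are not components defined by the patent.

```python
import numpy as np

def de_interlace(frame: np.ndarray) -> np.ndarray:
    """Very simple 'bob' de-interlacing: keep the even field and repeat
    each of its lines, turning an interlaced frame into a progressive one."""
    progressive = np.empty_like(frame)
    progressive[0::2] = frame[0::2]
    progressive[1::2] = frame[0::2]
    return progressive

def process_channel_frame(first_frame: np.ndarray,
                          second_frame: np.ndarray,
                          combine) -> np.ndarray:
    """One channel, one frame: de-interlace the real-time input, then
    merge it with the second input using the caller-supplied rule
    (picture-in-picture, blending, or anything else)."""
    return combine(de_interlace(first_frame), second_frame)

if __name__ == "__main__":
    # Synthetic PAL-sized frames stand in for the decoded, digitized inputs.
    microscope = np.random.randint(0, 256, (576, 720, 3), dtype=np.uint8)
    navigation = np.random.randint(0, 256, (576, 720, 3), dtype=np.uint8)
    out = process_channel_frame(microscope, navigation,
                                combine=lambda a, b: a)  # trivial combiner
    print(out.shape)
```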
Yet another aspect of the present invention regards an apparatus for capturing and presenting during an observation and manipulation session real-time and off-line visual information. The apparatus comprises: one or more digital sources supplying one or more real-time streams of images; one or more additional sources supplying one or more streams of images; a video interface unit for generating one or more combination streams by combining in real-time the one or more real-time streams of images with the one or more streams of images supplied by the one or more additional sources; one or more first head mounted display devices; and a system management device for receiving user input and one or more commands and transferring said input and one or more commands to the video interface unit. The one or more digital sources can be a digital microscope or an add-on camera on an optical microscope. The apparatus can further comprise a display device for displaying streams in S-Video format or a recording device for recording streams in S-Video format. The apparatus can further comprise a second head mounted display device for displaying the same one or more combination streams as the first head mounted display, wherein each image is rotated by a predefined angle. The one or more additional sources of streams of images can be a navigation and visualization system. The one or more additional sources can be a CT scanner or an MRI scanner or a picture archive.
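The rotated second head mounted display mentioned above simply receives every frame turned by a predefined angle. A minimal sketch follows, assuming the angle is a multiple of 90 degrees (180 degrees for a viewer standing directly opposite); the function name and default angle are assumptions for illustration, not part of the patent.

```python
import numpy as np

def rotate_for_rotated_hmd(frame: np.ndarray, quarter_turns: int = 2) -> np.ndarray:
    """Rotate a frame by quarter_turns * 90 degrees before it is sent to
    the rotated head mounted display; two quarter turns (180 degrees)
    suit a viewer standing opposite the main surgeon."""
    return np.rot90(frame, k=quarter_turns)
```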
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which: Fig. 1 is a schematic illustration of an exemplary environment in which the proposed invention is used; Fig. 2 is a schematic illustration of the video interface unit; and Figs. 3 and 4 show a block diagram of the first and the second channel boards.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

Disclosed are an apparatus and methods for presenting, on one display, real-time relevant information from various sources during an observation and manipulation session, such as a surgery in general, and a neurosurgery in particular. The apparatus presents real-time images taken by a digital source, such as a digital microscope or an add-on system added to a conventional microscope or any other source of stereoscopic video images, combined with visual presentations of volumetric data acquired prior to the operation by medical scanners and retrieved by a navigation system, or with images retrieved from an archive. The presented images are complementary, i.e., depict the same area or related areas. For example, while the digital microscope shows the surface exposed to the surgeon, the navigation system can show the substance under the surface, which the surgeon cannot see. The apparatus also constructs the updated model of the surface, as was changed by the surgeon removing substance, and retrieves the volumetric information matching the newly created surface. Another aspect of the disclosed invention relates to changing the color-scheme of the presented image. Currently, the image captured by the optical microscope or by the digital microscope is presented with the colors as captured. However, certain types of tumors are better distinguished from their environment if a tumor-type-specific color scheme is used. Once the surgeon recognizes the tumor type, the correct color scheme can be used and applied to the digital image, which will present the tumor in a fashion that will help the surgeon distinguish it from the surrounding tissues. Referring now to Fig. 1, which shows an exemplary environment in which the proposed apparatus and associated methods are implemented. The core of the system is a Video Interface Unit (VIU) 104. Unit 104 manipulates the images arriving from the various sources and sends them to the various presentation devices. VIU 104 receives the streams and images from the various sources, processes and combines them, and creates the output streams. The input for VIU 104 comes from one or more of the following components: a digital source 108, preferably the camera module of a digital microscope or two add-on cameras added to a conventional microscope, for example 10 Systems manufactured by Surgivision, which sends the signals in a streaming format, such as S-Video or RGB, along communication line 112. The output of digital source 108 comprises a stream of real-time images of the objects captured by the digital microscope. The system further comprises real-time imaging tools 124 such as an endoscope, which send their signals in S-Video format as well, along communication line 128. Real-time imaging tools 124 also supply real-time images of the objects undergoing the observation and manipulation session. Navigation or other visualization system 116, such as Treon manufactured by Medtronic, sends the images in S-VGA format along communication line 120. The images supplied by navigation or visualization system 116 are generally taken prior to or during the session, and are typically a few days to a few seconds old. VIU 104 receives control signals and data from system computer 160, being the system management device. Additionally, system computer 160 transfers to VIU 104 image files from off-line imaging tools 133 such as CT or MRI scanners, and additional images from archive 132. The data is received by any appropriate protocol, such as,
for a non-limiting example, a USB format. The control signals include, for a non-limiting example, an indication of which sources of images the user would like to use, the way he or she would like to see the pictures of the various sources and other relevant parameters. The image files provided by picture archives, and additional visual information, such as graphic elements related to the images, are transferred in the appropriate format. VIU 104 outputs the images to one or more displays, of one or more types. One or multiple devices of each type can be used simultaneously. One type of display is an HMD device 136, for example Callisto manufactured by Surgivision. This is the device worn by the head surgeon, displaying the images as captured by the digital microscope, possibly fused or blended with other images. Another type of display is a rotated HMD device 144, possibly worn by another staff member standing on the opposite side of the person being operated on, relative to the digital microscope. Head mounted displays 136 and 144 receive the stream of images in S-Video format. Yet another destination for the output images is a recording/displaying device 152, such as DVD-R manufactured by Philips. The recording/display device receives the stream in S-VGA format. System computer 160 takes input from the user as to which sources of images should be presented, which presentation devices should be used, and in what presentation mode. The user's commands are transferred to VIU 104 in the appropriate format. In a typical non-limiting environment, VIU 104 receives two sets of streams and images, one from the left (first) channel of each input source and one from the right (second) channel of each input source. For some output devices, VIU 104 also generates two streams. In a non-limiting manner, they are referred to as the left (first) channel and the right (second) channel. It is not necessary that the images of the first output channel are the products of images and streams of the first input channel. Both output streams can contain the same or different visual data from one or more of the input channels. Referring now to Fig. 2, showing the internal structure of VIU 104.
VIU 104 comprises a panel board 164, a first channel board 168 and a second channel board 172. Local bus 173 connects the three boards. Panel board 164 comprises an interface component such as a USB interface component, receiving the input in the appropriate format. The input includes control signals about the preferred settings for the environment, image files from archives and additional information related to the images. The image information is not related to a certain side or a certain board; therefore, the apparatus comprises a single panel board. In a typical non-limiting environment, the first channel board produces the output for the first channel of the output devices and the second channel board produces the output for the second channel of the output devices. Referring now to Figs. 3 and 4, describing the internal structure of the first and second channel boards. The points marked by A, B, C, and D at the right hand side of Fig. 3 and at the left hand side of Fig. 4 denote the connection points between Fig. 3 and Fig. 4. A line 400 separates the components of the first channel board, which are shown above the line, from the components of the second channel board shown under the line. As can be seen in Fig. 3, the first and the second channel boards are identical, so that their components are a replica of each other. This replication excludes video matrices 208 and 216, detailed below, which are common to both channels, and therefore only one instance of each of them is used in the system. Referring now to the top parts of Figs. 3 and 4, describing the first channel board of the VIU. A Main1 first input 176 to the system is the real-time stream arriving from the first channel of the camera module of the digital microscope. Main1 first input 176 is in S-Video format. An aux1 first input 180 optionally carries another real-time stream in S-Video format. For a non-limiting example, this aux1 input is generated by real-time viewing tool 124 of Fig. 1, such as an endoscope. If the real-time viewing tool is stereoscopic, aux1 first input 180 brings in the first stream, while aux1 second input 280 brings in the second stream; otherwise both aux1 first 180 and aux1 second 280 bring the same signal. An aux2 first input 184 introduces input signals in S-VGA format. This input comes, for example, from navigation or visualization system 116 of Fig. 1, or from off-line imaging tools 133 of Fig. 1. The S-VGA signal undergoes conversion to S-Video format in the VGA to NTSC/PAL converter 200. A main2 first input 188 introduces the first channel from a digital microscope or from add-on cameras added to an optical microscope, in the case the digital microscope outputs an RGB signal. Main1 first input 176, aux1 first input 180 and aux2 first input 184, together with the corresponding second inputs, namely Main1 second input 276, aux1 second input 280 and aux2 second input 284, together with signal 189, enter a first video matrix 208, which is preferably but not limited to a 16*8 cross point switch. First video matrix 208 routes the input channels to the output channels according to the instructions coming from system computer 160. Since first video matrix 208 receives both the first channel and the second channel inputs of all the images and stream sources in the apparatus, it can mix them in any way the user sees fit, for example, choose images or streams of the desired sources, and pass all first channel input to first channel output and all second channel input to second channel output. Another example is to pass the first channel input to both outputs, or any other combination. People skilled in the art will appreciate the possibilities of combining multiple sources and two or more sets of inputs.
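For readers who prefer code to block diagrams, the routing role of the cross point switches can be modelled as a small table that maps each output channel to the one input currently selected for it; rewriting the table corresponds to the instructions arriving from system computer 160. The following Python sketch is illustrative only: the channel names are invented for the example, and the real matrices are analog hardware rather than software.

```python
from typing import Dict

def route(inputs: Dict[str, object], routing: Dict[str, str]) -> Dict[str, object]:
    """Return, for every output channel, the input currently selected for
    it by the routing table (a software analogue of one cross point
    switch setting)."""
    return {out: inputs[src] for out, src in routing.items()}

# Stereoscopic pass-through: left source to left output, right to right.
stereo_routing = {
    "main_out_first": "main1_first",
    "main_out_second": "main1_second",
}

# Alternative: show the first (left) camera on both output channels.
mono_routing = {
    "main_out_first": "main1_first",
    "main_out_second": "main1_first",
}
```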
Another example is to pass the first channel input to both outputs, or any other combination. People skilled in the art will appreciate the possibilities of combining multiple sources and two or more sets of inputs.

Main2 first input 188 and main2 second input 288, carrying signals in RGB format, are routed by second video matrix 216, which is preferably but not limited to an RGB 8*8 cross point switch. The RGB signals are routed by second video matrix 216 rather than by first video matrix 208 due to the different formats. Similarly to first video matrix 208, second video matrix 216 routes the first and the second RGB signals to the first and the second outputs according to the user's instructions, as received through signal 189. All inputs and all outputs of the switches are analog.

On the first channel board, first video matrix 208 outputs two signals. The first output is a first S-Video signal 213, containing the data from the digital microscope. Signal 213 undergoes NTSC/PAL decoding, analog to digital (A/D) conversion, and compression into the YUV 4:2:2 format in the main NTSC/PAL decoder 220 and becomes main first S-Video signal 221. The second output of first video matrix 208 is the first s-video2 214, which undergoes NTSC/PAL decoding, A/D conversion, and compression into the YUV 4:2:2 format in the secondary NTSC/PAL decoder 224 and becomes the secondary first YUV 4:2:2 signal 235. The output of second video matrix 216 is an RGB first input 215, containing the signal of the alternative digital microscope. NTSC/PAL decoders 220 and 224 extract the audio, video and synchronization components of input signals 213 and 214, respectively, and the A/D converter converts the analog audio and video signals to digital. First RGB input 215, if present, undergoes A/D conversion in RGB A/D converter 228, which outputs a first RGB signal 233.

Since generally the apparatus employs one digital microscope, which outputs either an S-Video or an RGB signal, output signal 221 of main NTSC/PAL decoder 220 and output signal 233 of RGB A/D converter 228 are alternately switched by a switch 222 to generate a main first RGB signal 234. Main first RGB signal 234 enters a real-time de-interlacer rotator chip 232, such as Matisse, manufactured by OPLUS, Yokneam, Israel. De-interlacer rotator chip 232 converts the signal from interlaced format into progressive (i.e., line after line) format and rotates the image if instructed to do so by the USB signal. Chip 232 outputs a rotated first progressive RGB signal 248 at 50 Hz, which is the input to an encoder 264. The output of encoder 264 is the rotated S-VGA first signal, directed to the first channel of the rotated HMD. De-interlacer and rotator chip 232 also outputs a main first progressive RGB signal 240 at 50 Hz. Signal 240 is non-interlaced and non-rotated. Signal 240, together with the secondary first YUV 4:2:2 signal 235, is sent to a real-time Picture-in-Picture (PIP) or blending chip 244, such as Rembrandt, manufactured by OPLUS, Yokneam, Israel. Chip 244 combines signals 240 and 235 according to the instructions transferred by the USB. The combination takes one of a number of forms: one is a PIP presentation, where the images of one signal are presented in a rectangle occupying part of the area of the images of the other signal. Another possibility is the presentation of both streams on the same display area, where the images of each stream have a certain degree of transparency, so that both streams are viewed concurrently.
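Both combination modes of chip 244 can be described as simple per-pixel operations on the digitized frames. The sketch below, using NumPy arrays as stand-ins for the frames, is a minimal illustration of the two modes named above; the function names, window position and scale factor are assumptions made for the example and do not describe the Rembrandt chip itself.

# Minimal sketch of the two combination modes described for chip 244:
# picture-in-picture and blending with a degree of transparency.
# Function names and the use of NumPy are illustrative assumptions.
import numpy as np

def picture_in_picture(main, secondary, top=20, left=20, scale=0.25):
    """Place a reduced copy of `secondary` in a rectangle inside `main`."""
    out = main.copy()
    h = int(main.shape[0] * scale)
    w = int(main.shape[1] * scale)
    # Nearest-neighbour shrink of the secondary stream's frame.
    rows = np.arange(h) * secondary.shape[0] // h
    cols = np.arange(w) * secondary.shape[1] // w
    out[top:top + h, left:left + w] = secondary[rows][:, cols]
    return out

def blend(main, secondary, alpha=0.5):
    """Show both frames on the same display area with partial transparency."""
    return (alpha * main.astype(np.float32)
            + (1.0 - alpha) * secondary.astype(np.float32)).astype(main.dtype)

# Usage with two synthetic PAL-sized RGB frames:
main_frame = np.zeros((576, 720, 3), dtype=np.uint8)
nav_frame = np.full((576, 720, 3), 128, dtype=np.uint8)
combined = blend(picture_in_picture(main_frame, nav_frame), nav_frame, alpha=0.7)

In the apparatus the equivalent operations are performed in hardware at the frame rate of the streams, and the choice of mode is conveyed through the USB control commands.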
Persons skilled in the art will appreciate the fact that other presentation modes combining images from both streams can be generated as well. The output of chip 244 is an out first progressive RGB signal 252 at 85 Hz. Main first progressive RGB signal 240 and out first progressive signal 252 are input into a switch 256, which routes the input signals to the outputs as instructed by the USB input commands. Output 260, which is directed to the display and recording device, needs to be in S-Video format, and is therefore directed into encoder 269, which converts the progressive signal to an interlaced signal. Output 261 of switch 256, which needs to be directed to the main S-VGA display, is encoded in encoder 268 and directed to the first channel of the main HMD of the apparatus. People skilled in the art will appreciate the fact that any one of the outputs of the system, namely rotated S-VGA first 275, main S-VGA first 277 and main S-Video first 279, can be directed to multiple presentation devices, provided the devices can display the relevant format.

The second channel board is identical to the first channel board. Referring now to the bottom parts of Figs. 3 and 4, describing the second channel board of the VIU. Main1 second input 276 to the system is the real-time stream arriving from the second channel of the camera module of the digital microscope. Main1 second input 276 is in S-Video format. Aux1 second input 280 optionally carries another real-time stream in S-Video format. For a non-limiting example, this input is generated by real-time viewing tool 124 of Fig. 1, such as an endoscope. If the real-time viewing tool is stereoscopic, aux1 first input 180 brings in the first stream, while aux1 second input 280 brings in the second stream; otherwise both aux1 first input 180 and aux1 second input 280 bring the same signal. Aux2 second input 284 introduces input signals in S-VGA format. This input comes, for example, from navigation or visualization system 116 of Fig. 1, or from off-line imaging tools 133 of Fig. 1. The S-VGA signal undergoes conversion to S-Video format in a VGA to NTSC/PAL converter 300. Main2 second input 288 introduces the second channel of the digital microscope, in the case the digital microscope outputs an RGB signal.

As described above, first video matrix 208 and second video matrix 216 route the first and second channels of the multiple sources of images and streams. On the second channel board, first video matrix 208 outputs two signals. The first output is a second S-Video signal 313, containing the data from the digital microscope. Signal 313 undergoes NTSC/PAL decoding, analog to digital (A/D) conversion, and compression into the YUV 4:2:2 format in main NTSC/PAL decoder 320 and becomes a main second S-Video signal 321. The second output of first video matrix 208 is a second s-video2 314, which undergoes NTSC/PAL decoding, A/D conversion, and compression into the YUV 4:2:2 format in secondary NTSC/PAL decoder 324 and becomes secondary second YUV 4:2:2 signal 335. The output of second video matrix 216 is RGB second input 315, containing the signal of the alternative digital microscope. NTSC/PAL decoders 320 and 324 extract the audio, video and synchronization components of input signals 313 and 314, respectively, and the A/D converter converts the analog audio and video signals to digital. Second RGB input 315, if present, undergoes A/D conversion in RGB A/D converter 328, which outputs a second RGB signal 333.
Since generally the apparatus employs one digital microscope, which outputs either an S-Video or an RGB signal, output signal 321 of main NTSC/PAL decoder 320 and output signal 333 of RGB A/D converter 328 are alternately switched by switch 322 to generate main second RGB signal 334. Main second RGB signal 334 enters real-time de-interlacer rotator chip 332, such as Matisse, manufactured by OPLUS, Yokneam, Israel. De-interlacer rotator chip 332 converts the signal from interlaced format into progressive (i.e., line after line) format and rotates the image if instructed to do so by the USB signal. Chip 332 outputs a rotated second progressive RGB signal 348 at 50 Hz, which is the input to encoder 364. The output of encoder 364 is the rotated S-VGA second signal, directed to the second channel of the rotated HMD. De-interlacer and rotator chip 332 also outputs a main second progressive RGB signal 340 at 50 Hz. Signal 340 is non-interlaced and non-rotated. Signal 340, together with secondary second YUV 4:2:2 signal 335, is sent to a real-time Picture-in-Picture (PIP) or blending chip 344, such as Rembrandt, manufactured by OPLUS, Yokneam, Israel. Chip 344 combines signals 340 and 335 according to the instructions transferred by the USB. The combined stream can take the same forms as discussed for the first channel, i.e., PIP, blending with a degree of transparency, or any other type. The output of chip 344 is an out second progressive RGB signal 352 at 85 Hz. Main second progressive RGB signal 340 and out second progressive signal 352 are input into a switch 356, which routes the input signals to the outputs as instructed by the USB input commands. Output 360, which is directed to the display and recording device, needs to be in S-Video format, and is therefore directed into encoder 369, which converts the progressive signal to an interlaced signal. Output 361 of switch 356, which needs to be directed to the main S-VGA display, is encoded in encoder 368 and directed to the second channel of the main HMD of the apparatus.

The presented apparatus and method provide real-time display of the view field captured by a digital microscope. The available display devices are a stereoscopic HMD device capable of presenting S-VGA streams, a stereoscopic rotated HMD device capable of presenting S-VGA streams for the benefit of a person viewing the view field from another point of view, and a computer or a recording device that supports the S-Video format. The captured stream is presented as is, or combined with images or streams of another source, such as a navigation and visualization system, a medical imaging device such as an MRI or a CT scanner, image archives and the like. The combined presentation options include picture-in-picture presentation and blending of images of different sources presented on the same display, where each picture can be partially transparent, stereoscopic, non-stereoscopic, or the like. The sources of the presented streams and images, the desired displays and the display modes are determined by the user, and presented to the system in the appropriate format. Persons skilled in the art will appreciate that the present invention can also be used with different formats of streams and images, and with additional types of display.

It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims which follow.
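The later stages of one channel described above (de-interlacing, optional rotation, combination and encoding) can be condensed into the following illustrative sketch. The helper functions are hypothetical placeholders for the hardware blocks named in the description, not an implementation of them.

# Purely illustrative recap of the later stages of one VIU channel.
# Each helper stands in for a hardware block described above (the Matisse
# de-interlacer/rotator and the Rembrandt combiner); none of this is the
# actual firmware of the apparatus.

def weave_deinterlace(top_field, bottom_field):
    """Interleave the two fields of an interlaced frame into one progressive
    frame (a simple 'weave' scheme, assumed here as one of several possible
    de-interlacing methods)."""
    frame = []
    for top_line, bottom_line in zip(top_field, bottom_field):
        frame.append(top_line)
        frame.append(bottom_line)
    return frame

def rotate_180(frame):
    """Assumed example of the rotation applied for the rotated HMD."""
    return [list(reversed(line)) for line in reversed(frame)]

def process_channel(top_field, bottom_field, secondary_frame,
                    rotate, combine, encode):
    # 1. De-interlace the main real-time stream (chips 232 / 332).
    progressive = weave_deinterlace(top_field, bottom_field)
    # 2. Optionally rotate, as instructed over the USB control path.
    if rotate:
        progressive = rotate_180(progressive)
    # 3. Combine with the secondary stream (chips 244 / 344): PIP or blend.
    combined = combine(progressive, secondary_frame)
    # 4. Encode for the chosen display: S-VGA for the head mounted displays,
    #    S-Video for the recording device (encoders 264-269).
    return encode(combined)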

Claims

What is claimed is:
1. A method for generating in real-time an at least one combination stream and converting in real-time the at least one combination stream for presentation during an observation and manipulation session, said at least one combination stream combining at least one first real-time input stream of images taken by an at least one digital source during an observation and manipulation session with an at least one second stream of images, the method comprises the steps of: receiving the at least one first real-time input stream from the at least one digital source; decoding the first real-time input stream to generate an at least one decoded first real-time input stream; digitizing the at least one decoded first real-time input stream to generate an at least one digitized first real-time input stream; receiving the at least one second input stream; decoding the at least one second input stream to generate an at least one decoded second input stream; digitizing the at least one decoded second input stream to generate an at least one second digitized input stream; receiving an at least one control command from a user; de-interlacing in real-time the at least one digitized first real-time input stream; combining in real-time at least one image from the at least one decoded first real-time input stream with at least one image of the at least one decoded second input stream to generate at least one output stream; and encoding in real-time the at least one output stream.
2. The method of claim 1 wherein the first real-time input stream comprises stereoscopic images.
3. The method of claim 1 wherein the at least one combination stream comprises stereoscopic images.
4. The method of claim 1 wherein the encoding of the at least one output stream generates an S-VGA format stream.
5. The method of claim 1 wherein the encoding of the at least one output stream generates an S-Video format stream.
6. The method of claim 1 wherein the at least one first real-time input stream is in S-Video format.
7. The method of claim 1 wherein the at least one control command is in USB format.
8. The method of claim 1 wherein the at least one first real-time input stream is in RGB format.
9. The method of claim 1 wherein the at least one second input stream is in S-Video format.
10. The method of claim 1 wherein the at least one second input stream is in S-VGA format.
11. The method of claim 1 wherein the at least one first real-time input stream is supplied by a left channel of the at least one digital source.
12. The method of claim 1 wherein the at least one first real-time input stream is supplied by a right channel of the at least one digital source.
13. The method of claim 1 further comprising the steps of: routing the at least one first input stream and the at least one second input stream to generate at least one output channel according to the at least one control command; and executing the steps of decoding, digitizing, de-interlacing, combining and encoding separately for each output stream.
14. The method of claim 13 wherein the routing is to be performed separately for S-Video signals and for RGB signals.
15. The method of claim 13 wherein the at least one output stream is presented as the left channel of a main head mounted display.
16. The method of claim 13 wherein the at least one output stream is presented as a right channel of a main head mounted display.
17. The method of claim 13 wherein the at least one output stream is presented as a left channel of a rotated head mounted display.
18. The method of claim 13 wherein the at least one output stream is presented as the right channel of a rotated head mounted display.
19. The method of claim 13 wherein the at least one output stream is in S-Video format.
20. The method of claim 13 wherein the at least one output stream is used for recording.
21. The method of claim 1 wherein the combining step replaces at least one part of at least one image from the decoded first real-time input stream with at least one part of at least one image from the decoded second input stream.
22. The method of claim 1 wherein the combining step replaces at least one part of at least one image from the at least one decoded second input stream with at least one part of at least one image from the at least one decoded first real-time input stream.
23. The method of claim 1 wherein the combining step creates at least one combination image, said combination image comprises an at least one first image from the at least one decoded first input stream summed with an at least one second image from the at least one decoded second input stream, where the at least one first image and the at least one second image are presented with a positive degree of transparency.
24. The method of claim 1 further comprising the steps of: rotating the at least one first decoded real-time input stream to generate an at least one rotated stream; encoding the at least one rotated stream into S-Video format; routing the at least one rotated stream and the at least one combination stream; and encoding the output of the routing step for generating an at least one main S-Video output stream.
25. An apparatus for generating in real-time an at least one combination stream and converting in real-time the at least one combination stream for presentation during an observation and manipulation session, said at least one combination stream combining at least one first real-time input stream of images taken by an at least one digital source during an observation and manipulation session with an at least one second stream of images, the apparatus comprises: a control component for routing the at least one first and the at least one second input streams and the at least one combination stream according to an at least one user's command; a local bus for transferring the at least one user's command and data among components of the apparatus; at least one channel handling component, the at least one channel handling component comprising: a first decoder for generating an at least one first decoded input stream; a first analog to digital converter for converting the at least one first decoded input stream; a second decoder for generating an at least one second decoded input stream; a second analog to digital converter for converting the at least one second decoded input stream; a de-interlacer component for de-interlacing in real-time the first input stream; a combining component for generating the at least one combination stream by combining the first decoded input stream with the at least one second decoded input stream; and at least one encoder for encoding in real-time the at least one combination stream.
26. The apparatus of claim 25 wherein the at least one first real-time input stream comprises stereoscopic images.
27. The apparatus of claim 25 wherein the at least one combination stream comprises stereoscopic images.
28. The apparatus of claim 25 wherein the at least one digital source is a digital microscope.
29. The apparatus of claim 25 wherein the at least one digital source is an add-on camera on an optical microscope.
30. The apparatus of claim 25 wherein the at least one encoder generates an S-VGA format stream.
31. The apparatus of claim 25 wherein the at least one encoder generates an S-Video format stream.
32. The apparatus of claim 25 wherein the at least one first real-time input stream is in S-Video format.
33. The apparatus of claim 25 wherein the at least one first real-time input stream is in RGB format.
34. The apparatus of claim 25 wherein the at least one second input stream is in S-Video format.
35. The apparatus of claim 25 wherein the at least one second input stream is in S-VGA format.
36. The apparatus of claim 25 wherein the at least one first real-time input stream is supplied by a left channel of the at least one digital source.
37. The apparatus of claim 25 wherein the at least one first real-time input stream is supplied by a right channel of the at least one digital source.
38. The apparatus of claim 25 further comprising an at least one video matrix for routing the at least one first real-time input stream and the at least one second input stream to generate at least one output stream according to the control commands.
39. The apparatus of claim 38 wherein the video matrix is a 16*8 cross point switch.
40. The apparatus of claim 38 wherein the video matrix is an RGB 8*8 cross point switch.
41. The apparatus of claim 38 wherein the at least one output stream is presented as a left channel of a main head mounted display.
42. The apparatus of claim 25 wherein the at least one output stream is presented as a right channel of a main head mounted display.
43. The apparatus of claim 25 wherein the at least one output stream is presented as a left channel of a rotated head mounted display.
44. The apparatus of claim 25 wherein the at least one output stream is presented as a right channel of a rotated head mounted display.
45. The apparatus of claim 25 wherein the at least one output stream is in S-Video format.
46. The apparatus of claim 25 wherein the at least one output stream is used for recording.
47. The apparatus of claim 25 wherein the combining step replaces at least one part of at least one image from the at least one decoded first real-time input stream with at least one part of at least one image from the at least one decoded second input stream.
48. The apparatus of claim 25 wherein the combining component replaces at least one part of at least one image from the at least one decoded second input stream with at least one part of at least one image from the at least one decoded first real-time input stream.
49. The apparatus of claim 25 wherein the combining component outputs at least one combination image, said at least one combination image comprises an at least one first image from the at least one decoded first input stream summed with an at least one second image from the at least one decoded second input stream, where the at least one first image and the at least one second image are presented with a positive degree of transparency.
50. The apparatus of claim 25 further comprising: a rotating component for rotating the at least one first decoded real-time input stream to generate an at least one rotated stream; an encoder for encoding the at least one rotated stream into S-Video format; a router for routing the at least one rotated stream and the at least one combination stream; and an encoder for encoding the at least one rotated stream and the at least one combination stream for generating an at least one main S-Video output stream.
51. An apparatus for capturing and presenting during an observation and manipulation session real-time and off-line visual information, the apparatus comprises: at least one digital source supplying at least one real-time stream of images; at least one additional source supplying at least one stream of images; a video interface unit for generating an at least one combination stream by combining in real-time the at least one real-time stream of images with the at least one stream of images supplied by the at least one additional source; at least one first head mounted display device; and a system management device for receiving user input and at least one command and transferring said input and at least one command to the video interface unit.
52. The apparatus of claim 51 wherein the at least one digital source is a digital microscope.
53. The apparatus of claim 51 wherein the at least one digital source is an add-on camera on an optical microscope.
54. The apparatus of claim 51 further comprising a display device for displaying streams in S-Video format.
55. The apparatus of claim 51 further comprising a recording device for recording streams in S-Video format.
56. The apparatus of claim 51 further comprising a second head mounted display device for displaying the at least one combination stream as the first head mounted display, wherein each image is rotated by a predefined angle.
57. The apparatus of claim 51 wherein the at least one additional source is a navigation and visualization system.
58. The apparatus of claim 51 wherein the at least one additional source is a CT scanner.
59. The apparatus of claim 51 wherein the at least one additional source is an MRI scanner.
60. The apparatus of claim 51 wherein the at least one additional source of stream of images is a picture archive.
PCT/IL2005/000449 2004-04-30 2005-05-01 An apparatus for combining data from a stereoscopic digital source with additional digital imaging devices WO2005104656A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2007510236A JP2007536953A (en) 2004-04-30 2005-05-01 Apparatus for combining data from a stereoscopic video source with an additional digital imaging device
EP05737670A EP1751989A2 (en) 2004-04-30 2005-05-01 An apparatus for combining data from a stereoscopic digital source with additional digital imaging devices
AU2005237325A AU2005237325A1 (en) 2004-04-30 2005-05-01 An apparatus for combining data from a stereoscopic digital source with additional digital imaging devices
CA002564867A CA2564867A1 (en) 2004-04-30 2005-05-01 An apparatus for combining data from a stereoscopic digital source with additional digital imaging devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US56641504P 2004-04-30 2004-04-30
US60/566,415 2004-04-30

Publications (2)

Publication Number Publication Date
WO2005104656A2 true WO2005104656A2 (en) 2005-11-10
WO2005104656A3 WO2005104656A3 (en) 2006-04-27

Family

ID=35242094

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2005/000449 WO2005104656A2 (en) 2004-04-30 2005-05-01 An apparatus for combining data from a stereoscopic digital source with additional digital imaging devices

Country Status (5)

Country Link
EP (1) EP1751989A2 (en)
JP (1) JP2007536953A (en)
AU (1) AU2005237325A1 (en)
CA (1) CA2564867A1 (en)
WO (1) WO2005104656A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010114843A1 (en) * 2009-03-31 2010-10-07 Charles Luley Associates, Inc. Dba Cla Medical Injection of secondary images into microscope viewing fields
US9083958B2 (en) 2009-08-06 2015-07-14 Qualcomm Incorporated Transforming video data in accordance with three dimensional input formats

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013216409A1 (en) * 2013-08-19 2015-02-19 Carl Zeiss Microscopy Gmbh microscope

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5510832A (en) * 1993-12-01 1996-04-23 Medi-Vision Technologies, Inc. Synthesized stereoscopic imaging system and method
US5524194A (en) * 1990-09-17 1996-06-04 Canon Kabushiki Kaisha Data communication apparatus
US6046712A (en) * 1996-07-23 2000-04-04 Telxon Corporation Head mounted communication system for providing interactive visual communications with a remote system
US6456340B1 (en) * 1998-08-12 2002-09-24 Pixonics, Llc Apparatus and method for performing image transforms in a digital display system
US20030156188A1 (en) * 2002-01-28 2003-08-21 Abrams Thomas Algie Stereoscopic video

Also Published As

Publication number Publication date
JP2007536953A (en) 2007-12-20
WO2005104656A3 (en) 2006-04-27
EP1751989A2 (en) 2007-02-14
CA2564867A1 (en) 2005-11-10
AU2005237325A1 (en) 2005-11-10

Similar Documents

Publication Publication Date Title
US20070146478A1 (en) Stereoscopic 3D rig calibration and viewing device
AU683336B2 (en) Synthesized stereoscopic imaging system and method
US7907166B2 (en) Stereo telestration for robotic surgery
JP4890489B2 (en) General-purpose camera control unit
JP5904620B2 (en) Information display device
US20130021438A1 (en) 3d video processing unit
US20100245557A1 (en) Injection of secondary images into microscope viewing fields
US8217981B2 (en) Configuring videoconferencing systems to create video sessions with realistic presence
CN111480332B (en) Controller and control method
WO2005104656A2 (en) An apparatus for combining data from a stereoscopic digital source with additional digital imaging devices
US8823877B2 (en) Video signal processing apparatus and video signal processing method
US8780171B2 (en) Video signal processor and video signal processing method with markers for indicating correct component connection
JPH0670040A (en) Display form changing system for multi-position video conference system
EP2085904A2 (en) Medical support control system
JPH07184849A (en) Image processing display system
JPH07184850A (en) Image processor
EP4275645A1 (en) Improved multimedia dental station
JP2591439B2 (en) Video synthesis method for video conference
JPH06141312A (en) Video conference equipment
US20090189907A1 (en) Medical support control system
JP2001053947A (en) Image information transmitting device and system therefor
JPH08321992A (en) Image processing unit
JP2012128223A (en) Image display device
Salmimaa et al. 22‐1: Distinquished Paper and Invited Paper: Live Delivery of Neurosurgical Operating Theatre Experience in Virtual Reality
JPH10164541A (en) Multi-spot video conference system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 2564867

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 2007510236

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Ref document number: DE

WWE Wipo information: entry into national phase

Ref document number: 4283/CHENP/2006

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 2005237325

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 2005737670

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 200580017626.2

Country of ref document: CN

ENP Entry into the national phase

Ref document number: 2005237325

Country of ref document: AU

Date of ref document: 20050501

Kind code of ref document: A

WWP Wipo information: published in national office

Ref document number: 2005237325

Country of ref document: AU

WWP Wipo information: published in national office

Ref document number: 2005737670

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2005737670

Country of ref document: EP