WO2007042923A2 - Image acquisition, processing and display apparatus and operating method thereof - Google Patents


Info

Publication number
WO2007042923A2
WO2007042923A2 (PCT/IB2006/002854)
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
acquisition
subject
processing
Prior art date
Application number
PCT/IB2006/002854
Other languages
French (fr)
Other versions
WO2007042923A3 (en)
Inventor
Stefano Giomo
Original Assignee
Stefano Giomo
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Stefano Giomo filed Critical Stefano Giomo
Publication of WO2007042923A2 publication Critical patent/WO2007042923A2/en
Publication of WO2007042923A3 publication Critical patent/WO2007042923A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/141 Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142 Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • H04N7/144 Constructional details of the terminal equipment, with camera and display on the same optical axis, e.g. optically multiplexing the camera and display for eye to eye contact
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H04N5/272 Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N2005/2726 Means for inserting a foreground image in a background image for simulating a person's appearance, e.g. hair style, glasses, clothes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Endoscopes (AREA)
  • Microscopes, Condenser (AREA)
  • Image Processing (AREA)

Abstract

The present invention refers to an apparatus for acquiring, processing and displaying images, as well as the method of operation thereof. The image acquisition, processing and display apparatus according to the invention comprises an acquisition system (1) for the acquisition of images (10, 25), which defines an optical axis (2) of acquisition of a principal image (10); a processing unit (5) for processing said images (10, 25) to generate a processed image (17); a display system (6) for visually reproducing said processed image (17) on an area (A) of an image plane (18). The apparatus is characterized in that the acquisition system (1) is disposed so that said area (A) is visible in its entirety, and in that said optical axis (2) intersects said image plane (18) at a point contained within said area (A). Also claimed is a method of operation of the image acquisition, processing and display apparatus according to the invention.

Description

IMAGE ACQUISITION, PROCESSING AND DISPLAY APPARATUS AND
OPERATING METHOD THEREOF
DESCRIPTION
The present invention refers to an apparatus for acquiring, processing and displaying images, as well as the method of operation thereof.
Considerable efforts have been made up to now to provide equipment and methods enabling images to be acquired, the original features of such images to be altered as desired, and the thus altered images to be finally displayed.
An example of equipment and methods of the above-cited kind is disclosed in US patent No. 6,692,127, in which the basic operation of the described system may be summed up in two distinct phases, i.e. an "acquisition" phase and a "consultation" phase. In the "acquisition" phase, a certain number of photos of the subject are simultaneously snapped from different shooting angles; the images thus obtained are necessary for the three-dimensional pattern, i.e. mesh and texture, of the face of the subject to be reconstructed in an accurate manner. In the "consultation" phase, by making use of the data gathered in the "acquisition" phase, the user is then able to interact with the three-dimensional pattern of his/her own face by simulating his/her own look through the addition of virtual objects, such as eyeglasses, jewels, and the like. This interaction occurs through a display or an appropriate selection interface.
However, the apparatus described in US 6,692,127 has a number of drawbacks. One of these lies in the fact that the images of the subject that are acquired by the system and stored therein must be updated periodically if the user changes any of his/her own physical features, as may for instance occur following a change in hairstyle. In cases like this, the subject is disadvantageously forced to undergo a new process of acquisition of his/her image by going again to the site where the image acquisition system is located.
Another drawback, which is more particularly connected with the way in which the three-dimensional model of the subject is acquired, lies in the fact that the simulation of any virtual object being worn by the subject is done by using an image of the same subject that has been acquired and stored at a previous time. Although it can be rotated into various viewing angles, the composite image of the subject and the virtually worn object will anyway remain tied to the basic features of the stored image, without any possibility to simulate in real time a different posture of the body of the subject, such as for example a different way of looking or a different hairstyle in the case of an image of a face. As a result, the apparatus proposed in US 6,692,127 is scarcely versatile; further, it can be considered as being interactive only to a limited extent, since it is poorly apt to dynamically interact with the image of the subject. In addition, the operating method of the apparatus is found to be rather invasive, since it forces the user to travel to the shop or, anyway, the point of sale in order to have the three-dimensional pattern of his/her face duly acquired; he/she further needs a special physical interface in order to be able to interact with the apparatus. Furthermore, all operations to be performed for altering and displaying the composite image on a screen again require the use of special equipment provided to this purpose, such as for instance a computer running special loaded programmes.
Therefore it is the object of the present invention to provide an apparatus for acquiring, processing and displaying images of one or more objects, as well as an operating method thereof, which effectively overcome the afore-indicated drawbacks and disadvantages of prior-art apparatuses and related operating methods.
Within this general object, it is a purpose of the present invention to provide an apparatus for acquiring, processing and displaying images of one or more objects, as well as an operating method thereof, which enables any virtual object to be simulated in the worn state thereof, and which is further able to let a user make an appropriate choice among a plurality of objects that potentially meet his/her particular requirements or correspond to the desired features.
Another purpose of the present invention is to provide an image acquisition, processing and display apparatus, the operating method of which does not necessarily require the user to undergo any particular procedure in view of having his/her own image acquired, nor necessarily requires the user to come into contact with or wear feature or marker elements of any kind.
Yet another purpose of the present invention is to provide an image acquisition, processing and display apparatus that is capable of simulating a given object being virtually worn, in which the user can view, on a real-time basis, i.e. as his/her own image is in the process of being acquired, the outcome of the composite image formed by his/her own image and the virtual representation of the object selected for montage.
Again, a further purpose of the present invention is to provide an image acquisition, processing and display apparatus and an operating method thereof that enable images to be acquired and altered on the basis of a choice made by the user or on the basis of a pre-defined choice.
Yet a further purpose of the present invention is to provide an apparatus that is suitable to store the biometric data concerning the physical features of the users, as well as the data concerning the particular choices made and other information related to the users, and to make these data available for analyses to be subsequently done, e.g. in view of identifying the number of people who have interacted with the apparatus, the preferences of the users, or the like.
A further purpose of the present invention is to provide an apparatus that increases the interaction with the user and can act on the purchase choices of the user. To meet this purpose, the system can be provided with suitable known means for acquiring/reproducing and transmitting/receiving a sound signal. With such an apparatus the user can receive music or vocal information associated with the product the user is trying. Alternatively, when trying on the products, the user can be directed by a "personal shopper" (real or virtual) that suggests the most suitable products for the user.
According to the present invention, these aims as set forth above are reached in an image acquisition, processing and display apparatus incorporating the features and characteristics as recited in the appended claims 1 to 23, and in an operating method thereof according to claims 24 to 32.
Features and advantages of the present invention will anyway be more readily understood from the description that is given below by way of non-limiting examples with reference to the accompanying drawings, in which:
- Figure 1 is a side elevation view of a first embodiment of the image acquisition, processing and display apparatus according to the present invention;
- Figure 2 is a side elevation of a first modified embodiment of the image acquisition, processing and display apparatus shown in Figure 1;
- Figure 3 is a side elevation view of a second modified embodiment of the image acquisition, processing and display apparatus shown in Figure 1;
- Figure 4 is a side elevation view of a second embodiment of the image acquisition, processing and display apparatus according to the present invention;
- Figure 5 is a side elevation view of a third embodiment of the image acquisition, processing and display apparatus according to the present invention;
- Figure 6 is a schematic view of the operating method of an image acquisition, processing and display apparatus according to the present invention; and
- Figure 7 is a schematic view of a further embodiment of the image acquisition, processing and display apparatus according to the present invention.
With reference to Figure 1, the inventive image acquisition, processing and display apparatus comprises an acquisition system 1 that defines such an optical acquisition axis 2 as to make it possible for a principal image of a subject 3, who or which is situated within the viewing range of the acquisition system 1, to be acquired. As considered and discussed throughout the following description of the present invention, the subject 3 may consist of either one or more living beings or one or more objects. In Figure 1, the acquisition system 1 is comprised of a single real video camera 21, the optical pickup axis of which coincides with the optical acquisition axis 2.
As explained hereinafter, the acquisition system 1 may comprise more than a single real camera 21, each one of which will then have its own optical pickup axis, and which will be arranged so that the principal or main image acquired by the system 1, i.e. the image resulting from the processing, where provided, of support or auxiliary images picked up by each such camera 21, is similar to the image that would be acquired by a single camera (i.e. the ideal camera 20) virtually situated in front of the subject to be picked up, and having the optical acquisition axis 2 as its optical pickup axis. In Figure 1, the acquisition system 1 is shown in the simplest configuration thereof, i.e. in the configuration comprising a single camera 21 located in front of the subject 3. In this case, the camera 21 coincides with the ideal camera 20 as defined above.
The acquisition system 1 comprises means for generating a first signal that is representative of the principal image acquired. The signal being output may for example be representative of an image involving information falling within the visible spectrum, i.e. the range of wavelengths of radiation visible to the human eye, the infrared spectrum, the ultraviolet spectrum, or the distance between the subject 3 and the acquisition system 1 itself. The camera 21 is situated behind a composite or semitransparent mirror 4. This mirror is made of such a material - of a type largely known as such in the art - that, if the mirror is disposed so as to separate two spaces or environments where different luminosities prevail, only the light radiating from the brighter space will permeate the material, whose surface facing this brighter space will then look like a mirror. In other words, the composite or semitransparent mirror 4 will behave as a conventional mirror in the brighter space and as a transparent glass in the darker one. In Figure 1, the subject 3 stands in the brighter space, i.e. the environment with a greater luminosity, and the camera 21 in the darker space, i.e. the environment with a lower luminosity. In this manner, the subject 3 can be picked up or photographed by the acquisition system 1 and, at the same time, he/she can view, duly reflected on the mirror 4, the image being displayed on the screen of the display system 6, without on the contrary being able to see what is standing behind the same mirror.
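The two-sided behaviour of the semitransparent mirror described above can be illustrated with a simple, purely hypothetical luminance model (the coefficients below are placeholder values, not measured properties of any real material): at the mirror surface an observer sees a superposition of light reflected from his/her own side and light transmitted from the opposite side, so whichever component is stronger determines whether the surface looks like a mirror or like a window.

```python
def surface_components(own_side, other_side, reflectance=0.5, transmittance=0.5):
    """Illustrative model of the semitransparent mirror 4.

    Returns the two luminance components an observer sees at the
    mirror surface: light reflected from his/her own side and light
    transmitted from the opposite side.  Coefficients are assumed,
    not measured values.
    """
    reflected = reflectance * own_side
    transmitted = transmittance * other_side
    return reflected, transmitted

# Subject in the bright space (luminance 100), camera in the dark
# space (luminance 5): for the subject the reflected component
# dominates (mirror behaviour), while on the camera side the
# transmitted light from the bright space dominates (the mirror
# behaves as a transparent glass).
subject_reflected, subject_transmitted = surface_components(100, 5)
camera_reflected, camera_transmitted = surface_components(5, 100)
print(subject_reflected > subject_transmitted)
print(camera_transmitted > camera_reflected)
```

This is only a sketch of the qualitative argument made in the text; a real treatment would use the actual reflectance and transmittance of the mirror coating and the photometry of the two environments.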
The output signal from the acquisition system 1 is received by a processing unit 5, in which the principal image of the subject 3 is processed in the manner that shall be illustrated in greater detail hereinafter, and is in turn output by said unit 5 in the form of a second signal that is representative of the thus processed image.
A display system 6, e.g. in the form of a screen, receives the second signal being output by the processing unit 5 and reproduces it visually on the mirror 4 so that the subject 3 is able to see his/her own image as this looks out after processing in the processing unit 5. To this purpose, the mirror 4 is arranged in a position that is inclined at an angle of substantially 45° relative to the screen of the display system 6.
As explained in greater detail hereinafter, the image appearing on the mirror 4 in general does not coincide with the principal image as picked up by the ideal camera 20, but derives instead from an electronic modification (i.e. a modification that is generally termed "enhancement" in the art) of such image, so as to give an overall impression that is different from the one caused by the principal image itself. The processing unit 5 may be configured so as to avoid any processing of the image being acquired, while outputting - as a processed image - the same acquired image in an unaltered state. In the case that the two images coincide, the apparatus according to the present invention performs almost in the same way as a traditional mirror. The processed image displayed by the display system 6 forms itself on an image plane 18 that constitutes the surface, possibly virtual, on which the processed image, seen by the subject 3, is considered to be represented. In Figure 1, the display system 6 projects the processed image onto the mirror 4, which is inclined towards the subject 3. The processed image is reflected on the mirror and reaches the eyes of the subject 3, who perceives it as if it were represented on a virtual plane, i.e. the image plane 18, extending behind the inclined mirror 4.
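The configurable pass-through behaviour described above can be sketched as follows; the function name and the brightening enhancement are hypothetical, chosen purely for illustration of how an optional enhancement step reduces to mirror-like behaviour when absent:

```python
def processing_unit(image, enhancement=None):
    """Sketch of processing unit 5: applies an optional enhancement
    to the acquired image.  When no enhancement is configured, the
    acquired image is output unaltered, so the apparatus behaves
    almost like a traditional mirror."""
    if enhancement is None:
        return image
    return enhancement(image)

# A tiny 2x2 grayscale "frame" as nested lists.
frame = [[10, 20], [30, 40]]

# Pass-through: the processed image coincides with the acquired one.
assert processing_unit(frame) == frame

# Hypothetical enhancement: brighten every pixel by 50.
brighter = processing_unit(frame, lambda img: [[p + 50 for p in row] for row in img])
print(brighter)
```

The point of the design is that the enhancement is a pluggable stage: the same display path serves both the "traditional mirror" case and the electronically altered one.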
A portion of this image plane 18 is constituted by an area A, in which the processed image is represented. In order to prevent the view of the processed image from being impeded by the presence of component parts of the apparatus between the subject 3 and the area A, the acquisition system 1 is disposed such that the area A is visible in its entirety. The optical axis 2 defined by the acquisition system 1 intersects the image plane 18 at a point contained within the area A.
Shown in Figure 2 is a first modified embodiment of the image acquisition, processing and display apparatus illustrated in Figure 1. In this case, the acquisition system 1, which includes a single camera 21, comprises a mirror 19 - of the type adapted to only reflect an image - that reflects the image of the subject 3 and allows the camera 21 to be located behind the semitransparent mirror 4 in such position as to have the optical axis thereof coinciding with the optical axis of the ideal camera 20 thanks to the image being so reflected by the mirror 19. In this modified embodiment of the inventive apparatus, the optical image acquisition axis 2 turns out as being defined by an ideal camera 20 that is virtually placed behind the semitransparent mirror 4 so as to acquire the same principal image of the subject 3 that is actually acquired as reflected by the real camera 21.
It will be readily appreciated that further mirrors 19 may be used in order to have the image of the subject 3 reflected a corresponding number of times, and to direct the same image towards a camera 21 that has been placed in the most suitable position within the apparatus.
The semitransparent mirror 4 is of the kind described above with reference to Figure 1, and is inclined at an angle of substantially 45° relative to a screen, which the display system 6 is provided with, so as to be able to reflect the image being produced by the display system 6 onto such screen. The image plane 18 is defined by the processed image in the same way as illustrated hereinbefore with reference to Figure 1. A portion of this image plane 18 is constituted by an area A, in which the processed image is visible. The optical axis 2 defined by the acquisition system 1 intersects the image plane 18 at a point contained within the area A.
Figure 3 shows a second modified embodiment of the image acquisition, processing and display apparatus illustrated in Figure 1. In departure from what has been described with reference to the embodiment illustrated in Figure 1, the positions of the screen, which the display system 6 is provided with, and the camera 21, which constitutes the acquisition system 1, are in this case exchanged. The semitransparent mirror 4 reflects the image of the subject 3 in the direction of the optical image pickup axis of the camera 21, whereas the position of the optical image acquisition axis 2 remains unaltered as compared with the one that has been described above with reference to Figure 1. The brightness, i.e. luminosity of the screen is such that, in the portion of semitransparent mirror 4 that is hit by the light beam emitted by the screen, the image being displayed by the latter is fully visible to the subject 3.
Illustrated in Figure 4 is a second embodiment of the image acquisition, processing and display apparatus according to the present invention. A semitransparent mirror 4 of the kind described above with reference to Figure 3 is arranged in an inclined position so as to reflect the image of a subject 3 towards the real camera 21 of the image acquisition system 1. Owing to the inclination of the mirror 4, the optical axis of the camera 21 is brought to coincide with the optical acquisition axis 2, which - as this has already been described with reference to Figure 3 - can be considered as being defined by an ideal camera 20 that is virtually placed behind the semitransparent mirror 4 so as to acquire the same principal image of the subject 3 that is actually acquired as reflected by the real camera 21.
Located behind the semitransparent mirror 4 there is a display system 6 comprising a screen. Similarly to what has been described above with reference to Figure 3, the brightness, i.e. luminosity of the screen is such that, in the portion of semitransparent mirror 4 that is hit by the light beam emitted by the screen, the image being displayed by the latter is fully visible to the subject 3.
In view of reducing the inclination of the mirror 4 to an angle of less than 45° relative to the surface of the screen, and in order to prevent the camera 21 from also picking up the image being displayed on the screen and filtering through the mirror 4, provided on the same screen there is a filter 22 that allows the screen to be fully viewed only when it is looked at from certain particular viewing angles. An example of a filter 22 that may suit the application is the one produced by 3M Company under the type designation PF500L, or PF400L, and known on the marketplace under the trade-name "Privacy Filter". Thanks to an inclination of the semitransparent mirror 4 at an angle of less than 45° relative to the surface of the screen, the overall size - and, hence, the overall space requirement - of the whole image acquisition, processing and display apparatus can be suitably reduced.
As this has already been described hereinbefore, a processing unit 5 receives from the acquisition system 1 a signal that is representative of the principal image of the subject 3, processes this signal in the manner that shall be illustrated in greater detail hereinafter with reference to Figure 5, and outputs in turn a second signal that is representative of the thus processed image.
The processed image generated by the processing unit 5 is displayed by the display system 6 on a screen, the surface of which coincides with the image plane 18 defined by the processed image. On this plane there is present an area A that is fully visible to the subject 3, and in which the processed image is displayed for viewing; the optical axis 2 defined by the acquisition system 1 intersects the image plane 18 at a point contained within this area A.
Illustrated in Figure 5 is a third embodiment of the image acquisition, processing and display apparatus according to the present invention. In this case, the acquisition system 1 comprises a computing unit 24 and a pair of cameras 21 disposed at the sides of a screen constituting the display system 6. The acquisition system 1 defines an optical acquisition axis 2 corresponding to the optical axis of an ideal camera 20 located behind or close by the screen of the display system 6 represented in Figure 5. In this embodiment, the principal image is obtained through a processing step - performed by the computing unit 24 - of the signals produced by the two cameras 21, and will be substantially similar to the image signal that would be issued by an ideal camera 20 located behind or close by the screen of the display system 6. The two cameras 21 are used to virtually create - starting from the support images 25 acquired by them - a frontal pickup of the subject 3 as this would be done by an ideal camera 20, owing to it being practically impossible for the screen of the display system 6 - situated in front of the subject 3 - to act as an image display means and an image acquisition means at the same time.
Although two cameras are used in this particular embodiment under examination, it will of course be possible for just a single one to be used, preferably in combination with appropriate algorithms of any known type, such as those suitable to perform a homographic image transformation. These algorithms are capable of reconstructing an image that is substantially similar to the principal image which would be obtained with a real or even an ideal camera disposed in front of the subject 3, by starting from a single support image corresponding to a non-frontal view of the subject 3.
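As a rough illustration of the homographic transformation mentioned above, the sketch below applies a 3x3 homography matrix to image points in homogeneous coordinates. The matrices used here are hypothetical examples; in practice the homography relating the non-frontal support image to the desired frontal view would be estimated from point correspondences, and the warp would be applied to every pixel with interpolation rather than to a handful of points.

```python
def apply_homography(H, points):
    """Map 2-D points through a 3x3 homography H (row-major nested lists).

    Each point (x, y) is lifted to homogeneous coordinates (x, y, 1),
    multiplied by H, and projected back by dividing by the third
    coordinate.  A full view reconstruction would warp every pixel
    this way; this sketch only maps individual points.
    """
    warped = []
    for x, y in points:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        warped.append((xh / w, yh / w))
    return warped

# Identity homography leaves points unchanged; a real H would be
# estimated from at least four point correspondences between the
# support image and the target frontal view.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(I, [(10, 20), (30, 40)]))
```

A homography is sufficient only for planar scenes or pure camera rotation; reconstructing a frontal view of a three-dimensional face from side views, as in the two-camera embodiment, requires more elaborate view-synthesis techniques, which is why the text speaks of a "substantially similar" image.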
Since the principal image is obtained by processing the signals produced by the cameras 21 by means of the computing unit 24, such image will be capable of being constructed so as to represent the subject 3 from a plurality of viewing angles, i.e. as if the subject 3 were photographed by a single ideal camera 20 located in various positions. These positions may be selected and set by the subject him/herself with the help of a control interface, which the image acquisition, processing and display apparatus will be duly provided with, or may each time be changed by the apparatus itself on the basis of the movements performed by the subject 3 while being shot by the acquisition system 1.
It will of course be readily appreciated that, for a desired principal image to be provided, use can also be made of more than two cameras 21.
Through the computing unit 24, the acquisition system 1 will be capable of providing the processing unit 5 not only with the principal image 10, but also - or even solely - with one or more of the individual support images 25 acquired by the two or more cameras 21. This third embodiment of the inventive apparatus - as explained in greater detail hereinafter - allows the apparatus to operate not only with the principal image, but also - or even solely - with one or more of the support images 25 supplied by the cameras 21.
In this third embodiment, the processed image 17 provided by the processing unit 5 is displayed on an image plane 18, which in this case coincides with the plane of the screen, since no reflection of the processed image is actually contemplated. The area A in the plane 18 is the area on which the processed image is displayed. The acquisition system 1 is arranged so as to ensure that the area A is visible in its entirety, and the optical acquisition axis 2 intersects the image plane 18 at a point contained within the area A. In this way, the subject 3 will be able to view his/her own image as it is processed by the processing unit 5, without having the view of the processed image impeded or obstructed by component parts of the apparatus standing between the subject 3 and the area A. The image being displayed by the apparatus may correspond to an - electronically altered - image that substantially reproduces the subject 3 as if the latter were looking at him/herself in a mirror, or it may consist of an image reproducing the subject 3 from a "non-frontal" viewing direction, i.e. as if the same subject 3 were viewed from a direction inclined relative to the frontal viewing direction. These images may be displayed simultaneously (Figure 6) or separately. In this way, the user will be able to view his/her own image from various viewing angles, which turns out to be particularly advantageous when the effect of virtually wearing a garment, attire or the like is to be seen.
If it should be necessary to provide the device with a front protecting element 30 (such as a glass) and/or to locate the device in front of a transparent surface 31 (such as for instance a window), it will be convenient that one or more cameras of the acquisition system 1 are provided with a filter 29 (such as for instance a polarizing filter) in order to limit or eliminate any glare and reflections caused by the display system 6 and generated on the transparent surfaces 30 and 31. This prevents the acquisition system from picking up, besides the image of the subject, also the reflections formed on the surfaces 30 and 31, which are sources of noise caused by the display screen.
For the purpose of reducing the reflections on surfaces 30 and 31, instead of or in addition to the filter 29, the transparent element 30 and/or 31 may be made of a material suitable for limiting such phenomena, such as for example an antireflection glass.
Figure 7 shows a further embodiment of the invention, in which the real-time video enhancement 5 is carried out by two different and separate modules, one being a local module 27 and the other being a remote module 28, these modules being reciprocally connected by suitable connection means such as, for example, the Internet or a GSM/H3G network. The range of operations performed respectively locally or remotely by this embodiment of the invention depends on the specific implementation.
The local module 27 should be able to acquire at least one image (principal or support) and in the meantime to display at least one processed image 17 (according to the method described below with reference to Figure 6).
An extreme possibility is the use of a multimedia system (such as a personal computer with a videocamera), in which, according to the available computing power, it is possible to carry out the image processing 5 in a complete or quasi-complete local fashion. At the opposite extreme, a solution is foreseen that is based on terminals of limited processing/computing power, such as palm devices (PDAs) or cellular phones, in which the local module 27 only performs acquisition and display of images. Other embodiments that are intermediate between the two extremes described above may comprise, for example, a first local image processing step at module 27 and a further image refinement performed by the remote module 28.
For example, with reference to a cellular videophone, a possible method is to make a videocall between the local module 27 and the remote processing module 28, a videocall in which the local module 27 sends a group of non-processed images of the subject and the remote processing module 28 answers with the corresponding flow of processed images. The local module 27, in addition to images, can contextually send and receive other signals such as for example voice, music, control characters, DTMF codes and so on. Moreover, when trying on the products, the user can be directed by a "personal shopper" (real or virtual) that suggests the most suitable products for the user.
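The round trip between the local module 27 and the remote processing module 28 described above can be sketched as follows. This is a minimal illustration, not the patented implementation: `remote_enhance` is a hypothetical stand-in for the remote enhancing pipeline, and the frame format (rows of RGB tuples) is assumed for illustration only; a real deployment would stream the frames over a videocall connection.

```python
def remote_enhance(frame):
    """Stand-in for the remote module 28: returns a 'processed' frame.

    Here we merely tag each pixel; the real module would apply the full
    tracking/rendering pipeline of Figure 6 before answering.
    """
    return [[(r, g, b, "processed") for (r, g, b) in row] for row in frame]

def local_module_loop(frames):
    """Local module 27: acquires raw frames, sends them away for
    enhancement, and displays the processed frames that come back."""
    displayed = []
    for frame in frames:                   # acquisition (camera of device 27)
        processed = remote_enhance(frame)  # round trip to remote module 28
        displayed.append(processed)        # display system shows the answer
    return displayed

raw = [[[(10, 20, 30)]]]                   # a single 1x1-pixel frame
out = local_module_loop(raw)
```

The split of work between the two calls mirrors the two extremes discussed above: moving logic from `remote_enhance` into `local_module_loop` corresponds to a more capable local terminal.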
It is to be noted therefore that the above-described invention allows a device that can perform a videocall, such as a cellular phone, to visualize in real time the processing of the image 5 of the subject while operating in a normal fashion, without the necessity that specific software be installed in the device. This embodiment is therefore apt for any physical device able to acquire and contextually visualize at least one image, be it a multimedia system, a cellular phone, a videophone, a videoconference system or similar. It is to be noted that, in general, a device such as the ones just mentioned (mobile phone, PC with webcam, videoconferencing system, and the like) is not able to acquire the Principal Image 10, but only one or more Support Images 25.
Figure 6 schematically illustrates the operating method of an image acquisition, processing and display apparatus according to the present invention. One or more signals issued by the acquisition system 1 are received by the processing unit 5. Prior to starting the processing procedures described below, the signals input from the acquisition system 1, representative of the principal image 10 and/or one or more of the support images 25, may undergo a pre-processing step 26 for the purpose of having one or more characteristic parameters, or features, of the image altered accordingly. Such features may for instance include the colours of the image, the level of contrast, the brightness, the geometrical characteristics defining the orientation of the image, and the parameters quantifying the degree of distortion of the image. This pre-processing step becomes necessary, for example, whenever it is desired that the processed image displayed by the display system 6 downstream of the whole processing procedure be prevented from appearing non-specular with respect to the image of the subject 3. In other words, the pre-processing step 26 is carried out so that the subject 3 feels as if he/she were sitting or standing in front of a mirror, and so that he/she, when moving while being shot by the acquisition system, does not see his/her own processed image - as displayed by the display system 6 - moving in a direction opposite to the real one. If necessary, the pre-processing step 26 may be performed even in the case that the processing unit 5 is only working on one or more of the support images 25. The signals output by the acquisition system 1 are processed separately in two modules 7 and 8. The first module 7 has the task of applying a graphic style to one or more images.
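The specular pre-processing step 26 just described can be sketched as a simple horizontal mirroring of the acquired image. This is an illustrative sketch only; the image is assumed, for simplicity, to be a list of pixel rows:

```python
def preprocess_specular(image):
    """Pre-processing step 26 (sketch): mirror each row of the acquired
    image so that the processed image later shown by the display system 6
    appears specular, as if the subject 3 stood in front of a mirror."""
    return [list(reversed(row)) for row in image]

img = [[1, 2, 3],
       [4, 5, 6]]
mirrored = preprocess_specular(img)   # [[3, 2, 1], [6, 5, 4]]
```

Applying the step twice restores the original image, which is consistent with it being a pure orientation change rather than a content change.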
In this connection, the module 7 receives a - possibly pre-processed - signal representative of an image that may consist of the principal image 10, one or more of the support images 25 (if the apparatus and, possibly, the user enable such images to be acquired), or both image types, depending on the type of apparatus being used. The second module 8, which receives a signal of the same type as the one received by the module 7, has the task of preparing a two-dimensional representation of one or more virtual objects or items, generated so as to turn out as being consistent with the actual context of the image in which they are due to be inserted or, in other words, represented so as to comply with the dimensional and perspective constraints imposed by the context in which such items must be inserted.
Applying a graphic style to each image received by the module 7 involves a first operation 9, in which the received, possibly pre-processed, image is filtered by means of appropriate graphic filters in order to impart e.g. a chromatic alteration thereto or subtract one or more portions of the image therefrom. Applying a graphic style also comprises a second operation 12, in which graphic elements 15, i.e. other two- or three-dimensional images, or text messages are defined and so set as to enable them to be inserted in the context of the image being processed. The insertion of such graphic elements 15 may be done either in a consistent manner, i.e. in such manner as to enable the dimensional and perspective constraints of the image to be duly complied with, or in a non-consistent manner. The first operation 9 and the second operation 12 can be performed independently of each other, so that the second operation 12 may optionally be performed on the image.
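The chromatic alteration performed by the filtering operation 9 can be sketched as follows. The grey-tone style and the integer luma weights are assumptions chosen for illustration; any other graphic filter could take its place:

```python
def chromatic_filter(image):
    """Filtering operation 9 (sketch): impart a chromatic alteration
    (here, a grey-tone graphic style) to each (r, g, b) pixel of the
    possibly pre-processed image."""
    def grey(pixel):
        r, g, b = pixel
        y = (r * 299 + g * 587 + b * 114) // 1000   # integer luma weights
        return (y, y, y)
    return [[grey(p) for p in row] for row in image]

styled = chromatic_filter([[(255, 0, 0), (0, 0, 255)]])
```

Because operation 9 is independent of operation 12, such a filter can be applied whether or not any graphic elements 15 are later inserted.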
The second module 8 has the task of preparing a two-dimensional representation of one or more virtual objects or items to be inserted in an image, which may consist of the principal image 10 and/or one or more of the support images 25, so as to obtain the processed image 17. The actual aim that the module 8 is designed to reach lies in providing a mathematical description and identifying in a sufficiently accurate manner the position in the three-dimensional space of some physical characteristics of the subject 3, such as for instance - in the case that the subject 3 is a human - the position of the head, the eyes, the ears, the features of the face, and - on the basis of the so-collected data - working out a two-dimensional image of the objects or items by arranging them in view of a consistent insertion thereof in an image. This two-dimensional image is obtained starting from three-dimensional models of said objects, in which all dimensional and behavioural characteristics thereof are known.
The procedure for representing virtual objects in view of obtaining the processed image 17 comprises a first step 13, referred to as "tracking", in which the principal image 10, along with one or more of the support images 25, as possibly pre-processed in the afore-cited step 26, are processed with the help of appropriate algorithms generally known as such in the art, in view of providing an accurate mathematical description of the physical features of the subject 3. For example, if the physical feature to be described mathematically is the head of the subject 3, the tracking step 13 is capable of aligning a three-dimensional model of the head of the subject 3 with the latter and then associating, say, a coordinate system to the same head. A possible method for performing this operation by using a single image is described in L. Vacchetti, V. Lepetit and P. Fua, "Stable Real-Time 3D Tracking using Online and Offline Information" (IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 10, October 2004, pp. 1385-92, ISSN: 0162-8828).
When suitably modified, the above-cited tracking step 13 will of course be able to be applied to searching out both other physical features of the subject, such as for example the eyes, the hands or the feet, and objects or items of any other type, such as boxes, watches or clocks, and the like.
The mathematical description of the physical features of the subject 3 and all other information obtained from the tracking module 13 are absolutely necessary for a correct operation of the system both in the initial phases (i.e. when tracking is started) and under full-running, i.e. steady-state, conditions. In addition, these data can be used to derive important pieces of information about the same subject, such as for instance - in the case that the subject is a human - the height, the colour of the hair, the intraocular distance, and similar biometric data. For example, with reference to the afore-cited tracking procedure (L. Vacchetti, V. Lepetit and P. Fua) and for the purpose of tracking the position of the head of the subject 3 in space, a possible method to have tracking started in a correct manner would entail identifying - within the acquired image - the position of the eyes of the subject and the silhouette of his/her face, so as to be able to adapt a general three-dimensional model or pattern of the face to the particular situation being viewed.
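The initialisation of the tracking step 13 from the detected eye positions can be sketched as below. This is a simplified illustration, not the method of the cited reference: the average model intraocular distance of 63 mm is an assumed value, and only the in-plane part of the head pose is derived.

```python
import math

def init_head_tracking(eye_left, eye_right, model_iod=63.0):
    """Tracking step 13, initialisation (sketch): from the two eye
    positions found in the acquired image, derive a head coordinate
    system - its origin (midpoint of the eyes), the in-plane roll angle,
    and a scale factor adapting a generic 3-D face model whose
    intraocular distance is model_iod (mm, an assumed average)."""
    (xl, yl), (xr, yr) = eye_left, eye_right
    origin = ((xl + xr) / 2.0, (yl + yr) / 2.0)
    dx, dy = xr - xl, yr - yl
    iod_px = math.hypot(dx, dy)              # intraocular distance in pixels
    roll = math.degrees(math.atan2(dy, dx))  # head tilt in the image plane
    scale = iod_px / model_iod               # pixels per model millimetre
    return origin, roll, scale

origin, roll, scale = init_head_tracking((100.0, 200.0), (163.0, 200.0))
```

The derived scale factor also yields the kind of biometric datum mentioned above, since it relates image pixels to real-world millimetres.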
The procedure for representing virtual objects in an image so as to obtain the processed image 17 comprises a second step 14, generally termed "rendering" in the art, which receives at its input the mathematical description of the physical features of the subject 3 resulting from the tracking step 13, and uses this information to work out two-dimensional images of the virtual objects that should desirably be inserted in the image in a consistent manner, i.e. in such manner as to fully comply with and keep close to the perspective and dimensional constraints of the same image. The two-dimensional images of the objects are obtained by knowing the mathematical model that defines the particular item to be inserted three-dimensionally, wherein this model is then adapted on the basis of the data that define the features of the subject 3 mathematically. For example, if it is desired to have the image of a pair of glasses inserted on the face of the subject 3 represented in the principal image 10, and to do this so as to ensure that such insertion is consistent, i.e. so as to ensure that the glasses appear as being worn by the subject in a plausibly real manner, the correct position and orientation to be assigned to the three-dimensional model need to be known mathematically; it will further be necessary to know the perspective constraints and the interactions that such three-dimensional model has to comply with in connection with other objects present in the general context of the principal image, as occurs for instance in the case of the three-dimensional model of the sidepiece of a pair of spectacles when the subject 3 turns his/her head.
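The core of the rendering step 14 can be sketched as placing the 3-D points of a virtual object under the pose found by tracking and projecting them onto the image plane. The pinhole model, the focal length of 800 pixels, and the yaw-only rotation are assumptions made to keep the sketch minimal:

```python
import math

def render_points(model_pts, yaw_deg, t, f=800.0):
    """Rendering step 14 (sketch): pose the 3-D points of a virtual
    object (e.g. a spectacles model, coordinates in head space, mm)
    with a yaw rotation and a translation t, then project them through
    an ideal pinhole camera of focal length f (pixels, assumed)."""
    a = math.radians(yaw_deg)
    ca, sa = math.cos(a), math.sin(a)
    tx, ty, tz = t
    out2d = []
    for (x, y, z) in model_pts:
        xr = ca * x + sa * z              # rotate about the vertical axis
        zr = -sa * x + ca * z
        X, Y, Z = xr + tx, y + ty, zr + tz
        out2d.append((f * X / Z, f * Y / Z))  # perspective division
    return out2d

# a sidepiece endpoint 70 mm from the face centre, head not turned,
# subject one metre from the camera
pts = render_points([(70.0, 0.0, 0.0)], 0.0, (0.0, 0.0, 1000.0))
```

As the subject turns his/her head, updating `yaw_deg` from the tracking data moves the projected sidepiece accordingly, which is exactly the consistency constraint discussed above.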
The first image filtering operation 9, the second operation 12, in which the graphic elements 15 are defined, and each one of the two tracking and rendering steps 13, 14, along with the possibly provided pre-processing step 26, are able to interact with each other so that the results of each one of these procedures can be used to perform another one. Such results can furthermore be used not only by such component parts of the apparatus as the acquisition system 1 and the display system 6, but also by other electronic units that are operatively connected to the apparatus. For example, if a reflecting feature is to be added to the mathematical model of an object in the rendering step 14, it would be possible for the filtered principal image 11 itself to be used as an image for reflection. In addition, the mathematical description of the physical features of the subject 3 resulting from the tracking step 13 can be used to perform the filtering procedure 9. As an example, if the filtering procedure 9 involves shading off some parts of the subject 3, it will be necessary for the position and the orientation of such parts in the overall context of the principal image to be known mathematically. The data coming from the tracking step 13 can also be used as input data for a control interface of the image acquisition, processing and display apparatus. Furthermore, the data obtained from the tracking step 13 can be used to update the position of the ideal camera 20 on the basis of the movements made by the subject 3 while being shot, i.e. photographed by the acquisition system 1.
For instance, in the case that the third embodiment of the apparatus illustrated in Figure 5 is used, such data would be sent to the computing unit 24 so as to enable an appropriate synthesis of the principal image 10 to be done there, or, if the other afore-discussed embodiments of the apparatus are used, such data will enable the optical acquisition axis 2 to be oriented correctly, by shifting the real cameras 21 or orienting the reflective surfaces 4, 19 accordingly. The results of the first procedure 9, i.e. the filtered images 11, are combined, with the help of a graphic combination procedure 23, with the two-dimensional images 16 of the virtual objects obtained through the module 8, so as to compose the processed image 17. If the image received from the module 7 has also been caused to undergo the second procedure 12, i.e. the definition of the graphic elements 15 and the arrangement thereof in view of their insertion in the context of such image, the graphic combination procedure 23 will then also involve including such graphic elements 15 in the composition of the processed image 17. The latter is issued in the form of a signal from the processing unit 5 and sent in this form to the display system 6, which provides for this signal to be made available in the form of a visual representation. When it is the principal image 10 that is processed in the module 7, the processed image 17 that the subject 3 views in front of him/her will appear to the latter as being the result of a kind of superposition of his/her own image, as reflected by a common mirror, and virtual elements; such image may further be represented according to a particular graphic style, e.g. painted in watercolours, featuring outlined contours, and the like.
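The graphic combination procedure 23 can be sketched as an alpha blend of the rendered virtual objects (and, if present, the graphic elements 15) over the filtered image 11. The RGBA overlay format, with alpha in the 0..1 range, is an assumption made for this sketch:

```python
def combine(filtered, overlay):
    """Graphic combination 23 (sketch): compose the processed image 17
    by alpha-blending the 2-D virtual objects / graphic elements
    (overlay, RGBA pixels, alpha in 0..1) over the filtered image 11
    (RGB pixels)."""
    out = []
    for frow, orow in zip(filtered, overlay):
        row = []
        for (fr, fg, fb), (r, g, b, a) in zip(frow, orow):
            row.append((round(a * r + (1 - a) * fr),
                        round(a * g + (1 - a) * fg),
                        round(a * b + (1 - a) * fb)))
        out.append(row)
    return out

# a half-transparent virtual pixel over a black filtered background
composed = combine([[(0, 0, 0)]], [[(100, 200, 50, 0.5)]])
```

Pixels where no virtual object is present would simply carry alpha 0, so the filtered image 11 shows through unchanged there.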
If the module 7 works on the contrary on one or more support images 25, the processed image may correspond to a substantially "non-frontal" view of the figure of the subject 3 as enriched by the addition of virtual elements and possibly represented according to a particular graphic style.
As shown in Figure 6, the processed images 17 may not only comprise the front view of the subject 3 deriving from the processing of the principal image 10, but also views of the same subject 3 as shot, i.e. photographed from various viewing angles, deriving from a processing of the support images 25.
If the processing unit 5 being used can rely upon an adequate computing capacity, it will be possible for the time delay T elapsing from the moment at which the image of the subject 3 is acquired until the corresponding processed image 17 is represented visually, i.e. displayed, to be reduced to such a value as to arouse in the subject 3 the feeling that the image being displayed by the apparatus substantially corresponds to the image that would be produced in a common mirror if the effects that are virtually rendered in the processed image 17 were really present on the scene. Again, if use can be made of an apparatus provided with adequate capacity, the number N of frames per second will be such as to enable the user to interact in a natural manner with the system. Under normal, steady-state operating conditions, the highest value of the time T will be lower than or equal to 15 seconds, while the number N of frames per second displayed by the display system 6 will be higher than or equal to 0.25 (which means that a time of less than or equal to 4 seconds will elapse from one frame to the next).
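The two steady-state bounds just stated can be expressed as a simple check; the function below is an illustrative helper, not part of the apparatus itself:

```python
def meets_realtime_bounds(delay_s, fps):
    """Check the steady-state operating bounds stated above:
    latency T <= 15 seconds, and a display rate N >= 0.25 frames per
    second (i.e. at most 4 seconds from one frame to the next)."""
    return delay_s <= 15.0 and fps >= 0.25
```

For instance, a system with 0.1 s of latency at 25 frames per second satisfies both bounds, while one that takes 20 s to display a processed image does not.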
If the subject 3 being shot by the acquisition system 1 has for example to be simulated as virtually wearing a pair of spectacles, it will be necessary for the processing unit 5 to produce a virtual two-dimensional image of the spectacles and for this image to be updated in the position and orientation thereof, so as to effectively and really follow the movements performed by the head of the subject 3 as the latter is photographed by the acquisition system 1.
For the subject 3 to be able to interact with the image acquisition, processing and display apparatus, the latter may be provided with a control interface, by means of which the subject 3 will then be able to select whether, and set the way in which, the modules 7 and 8 have to intervene on the images being picked up by the acquisition system 1. A plurality of graphic image filters, a plurality of representations of graphic elements, as well as a plurality of mathematical models defining objects in the three-dimensional space are stored in data bases that are operatively connected with the modules 7 and 8. These data bases may each be dedicated to a single data typology, or the various data may all be stored together in a single data base.
Through such control interface, the subject 3 may possibly decide to combine - as much and as far as he/she likes - the graphic styles, the filters 11, the graphic elements 15 and the objects 16 that he/she wants to see in the processed image 17 being displayed by the display system 6. Further to an interface calling for the use of the data supplied by the tracking module 13, other possible control interfaces that may be used in connection with the inventive apparatus are of the keyboard type, the voice-operated type, or of the type capable of recognizing the direction in which the subject 3 is looking, or the like.
The data bases may be resident in the image acquisition, processing and display apparatus itself, or may be provided in a remote site away from the same apparatus, where they would be accessed via appropriate electronic connection means, such as for instance a computer network, the Internet network, and the like. Similarly, even the processing unit 5 may be located at a remote site away from the acquisition system 1 and the display system 6, and may be connected to such systems via similar means as the ones described above for connecting the data bases to the apparatus .
If so desired, the apparatus may comprise first storage means suitable to record the input signals set by the subject 3 via the control interface, so as to enable statistical data to be collected concerning the selections made by the subjects interacting with the apparatus, as well as useful data on the preferences of the users with a view to analysing the same. These data comprise not only the choices made by the user, but also the duration of the interaction, the gaze direction, movements, gestures and any other information that the system can handle and that can be related to the user's preferences and behaviour.
Second storage means may further be provided to record the images 10, 25 of the individual subjects being photographed by the image acquisition system 1 and/or to record the biometric features that univocally identify a subject 3. The so collected data can be combined with the statistical information contained in the first storage means so as to define the preferences of each single user, so that the apparatus will be able to automatically propose again the most favoured selection options of any given subject 3 that happens to again interact with the apparatus.
The operations performed by the modules 7 and 8 on the principal image 10 and/or one or more support images 25, in the case that the latter are available, can of course be scheduled when programming the processing unit 5, and further arranged so as to be unable to be modified by the user.
The apparatus according to the present invention may find valuable application in shops as an aid to potential buyers when trying on garments, spectacles, jewellery items, hairdressings, shoes and any other article or service they intend to buy. The apparatus may be sited even outside the shop or in a window thereof, owing both to the fact that wearing such articles for try-on purposes is simulated virtually and to the fact that the possibility is in this way offered for the buyer to be informed about the products that he/she can find inside the shop. Other possible sites in which the inventive apparatus may find an application include discotheques, airports, arcades, and crowded and passage areas in general. As a result, other typical uses of the inventive apparatus are those connected with activities of entertainment, selling, marketing and merchandising of products, and the like. The operation of the apparatus only requires one or more subjects placing themselves within the viewing, i.e. pickup, range of the image acquisition system and, possibly, said subjects making choices concerning the kind of situation they would like to see simulated. The apparatus may also prove a valuable aid in monitoring users' preferences or in deciding to order goods that are not available in the shop.
Fully apparent from the above description is therefore the ability of the present invention to effectively reach the afore-cited aims and advantages by providing an image acquisition, processing and display apparatus suitable to simplify the process of selecting an object, such as for instance a garment, from a plurality of objects, all of them potentially answering certain desired characteristics. The same apparatus can enable a user to interact therewith in a non-invasive manner, so as to be able to decide which kind of image the apparatus should desirably display. Advantageously, the apparatus can be pre-arranged so as to be able to store the data concerning the choices made by each single user and/or those made by the majority of the users, so as to inform the dealer, retailer or shopkeeper about the products which individual customers or the public in general like most of all.
It should be noticed that the materials used, as well as the shapes and the sizing of the individual items of the apparatus of the invention, may each time be selected so as to more appropriately meet the particular requirements or suit the particular application.
The possibility for an image being picked up by the acquisition system to be assigned any desired style by filtering it, i.e. introducing chromatic and morphologic alterations in it, and adding graphic elements, such as for instance technical specifications or advertising messages, in a not necessarily consistent manner, enables the inventive apparatus not only to inform the user of the material qualities and features of the product he/she is wearing virtually, but also to provide useful indications allowing him/her to most suitably make his/her choices or take buying decisions.
The various items and parts entering the construction of the apparatus of the present invention shall of course not be embodied strictly and solely in the manners that have been described and illustrated above, but can rather be implemented in a number of different embodiments, all of which falling within the scope of the present invention.

Claims

1. Image acquisition, processing and display apparatus comprising:
- an acquisition system (1) for the acquisition of images (10, 25) , which defines an optical axis (2) of acquisition of a principal image (10) , said system being provided with means for generating a first signal that is representative of said images (10, 25);
- a processing unit (5) for processing said images (10, 25) , which is suitable to receive said first signal and output a second signal that is representative of a processed image (17) ; a display system (6) for visually reproducing said processed image (17) on an area (A) of an image plane (18); characterized in that:
- said acquisition system (1) is disposed so that said area (A) is visible in its entirety, and in that said optical axis (2) intersects said image plane (18) at a point contained within said area
(A) .
- said acquisition system (1) comprises at least one video camera (21) .
- said at least one video camera (21) having means for acquiring signals within the visible spectrum and/or within the infrared, near infrared and/or ultraviolet spectrum, and/or signals representative of the distance between the subject (3) and said video camera (21).
- said processed image (17) is an "enhanced image" as the one obtained from the enhancing process, said display system (6) comprising at least a screen.
2. Apparatus according to claim 1, characterized in that it comprises a semitransparent mirror (4) inclined relative to said screen.
3. Apparatus according to claim 1, characterized in that said at least one video camera (21) is provided with a polarizing filter (29) adapted to limit or eliminate any glare and reflection from the surfaces (30, 31) caused by the display system (6).
4. Apparatus according to claim 1, characterized in that the material of said surface (30,31) is suitable to limit or eliminate any glare and reflection from said surface (30,31) caused by the display system (6).
5. Apparatus according to one or more of the preceding claims, characterized in that said acquisition system (1) acquires at least one support image (25) .
6. Apparatus according to claim 5, characterized in that it comprises a computing unit (24) that is suitable to process said at least one support image (25) so as to obtain an image substantially similar to said principal image (10) .
7. Apparatus according to one or more of the preceding claims, characterized in that said acquisition system (1) comprises one or more mirrors (19) for reflecting the image of at least a subject (3) towards said acquisition system (1) .
8. Apparatus according to one or more of the claims 1 to 7 , characterized in that said semitransparent mirror
(4) is inclined at an angle of substantially 45° relative to a screen provided in said display system
(6) being arranged in order to reflect said processed image (17) .
9. Apparatus according to one or more of the claims 1 to 7 , characterized in that said semitransparent mirror
(4) is inclined at an angle of substantially 45° relative to a screen provided in said display system
(6) being arranged to reflect the image of said at least a subject (3) .
10. Apparatus according to one or more of the claims 1 to 7, characterized in that said semitransparent mirror (4) is inclined at an angle of less than 45° relative to a screen provided in said display system
(6) , said mirror (4) being arranged to reflect the image of said at least a subject (3) .
11. Apparatus according to claim 1, characterized in that said screen is provided with a filter (22) of a kind suitable to limit or fully bar the view of the image displayed on said screen as the angle at which said screen is viewed varies .
12. Apparatus according to one or more of the preceding claims, characterized in that it further comprises a control interface suitable to receive input signals entered by a user.
13. Apparatus according to claim 12, characterized in that it further comprises first storage means for recording input signals and other information received from said control interface such as for instance choices made, interaction time, gaze direction, user gestures and movements and the like.
14. Apparatus according to one or more of the of the preceding claims, characterized in that it further comprises second storage means for recording said images (10, 25) or identifying parameters of the images (10, 25) such as biometric data, of at least a subject (3) .
15. Image acquisition, processing and display apparatus including:
- an acquisition system (1) for the acquisition of images (10, 25) , which defines an optical axis (2) of acquisition of a principal image (10) , said system being provided with means for generating a first signal that is representative of said images (10, 25) ;
- a processing unit (5) for processing said images (10, 25), which is suitable to receive said first signal and output a second signal that is representative of an enhanced image (17); a display system (6) for visually reproducing said processed image (17) on an area (A) of an image plane (18); characterized in that:
- said acquisition system (1) comprises at least one video camera (21); said at least one video camera (21) having means for acquiring signals within the visible spectrum and/or within the infrared, near infrared and/or ultraviolet spectrum, and/or signals representative of the distance between the subject (3) and said video camera (21); said processed image (17) is an "enhanced image" as the one obtained from the enhancing process.
- said display system (6) comprising at least a screen.
16. Apparatus according to claim 15, characterized in that said local module (27) is a device chosen from a multimedia system, a cellular phone, a videophone, or a videoconference system, or any other device capable of acquiring at least one image (10,25) and displaying at least one processed image (17) .
17. Apparatus according to one or more of the claims 15 and 16, characterized in that the local module (27) processes said image in a complete or quasi-complete local fashion.
18. Apparatus according to one or more of the claims 15 to 17, characterized in that said local module (27) performs only the acquisition and the display of images .
19. Apparatus according to one or more of the claims 15 to 18, characterized in that a bidirectional video stream, such as a "videocall", can be performed between said local module (27) and said remote processing module (28), said bidirectional video stream being directed in such a way that said local device (27) sends a group of non-processed images of a subject and said remote processing module (28) answers with a corresponding flow of processed images.
20. Apparatus according to one or more of the claims 15 to 19, characterized in that said processing unit (5) is located at a remote site away from said acquisition system (1) and said display system (6) .
21. Apparatus according to one or more of the claims 15 to 20, characterized in that said processing unit (5) comprises a first module (7) provided with a first data base, in which there are stored a plurality of graphic image filters and a plurality of representations of graphic elements for modifying said images (10, 25) .
22. Apparatus according to one or more of the claims 15 to 21, characterized in that said processing unit (5) comprises a second module (8) provided with a second data base, in which there are stored a plurality of mathematical models defining objects in the three-dimensional space.
23. Apparatus according to one or more of the claims 15 to 22, characterized in that said first or said second data base is located at a site remote from said apparatus and is connected thereto via electronic connection means .
24. Method of operation of an image acquisition, processing and displaying apparatus for obtaining an "enhanced image" (17) according to any of the preceding claims, characterized by the steps of:
(a) acquiring at least one image (10, 25) of at least a subject (3), said at least one single image (10,25) being the principal image (10) or one or more support images (25) ;
(b) filtering (9) said at least one image (10, 25) and obtaining at least one filtered image (11) ;
(c) using said at least one image (10, 25) to obtain a mathematical description of at least a physical feature of said at least a subject (3);
(d) using said mathematical description to work out at least a two-dimensional image (16) of one or more virtual objects;
(e) graphically combining (23) said at least one two-dimensional image (16) with the filtered image (11) to obtain a processed image (17);
(f) visually displaying the processed image (17) .
25. Method of operation according to claim 24, characterized in that the step (a) is preceded by a pre-processing step (26) , in which at least a characteristic parameter or feature of said at least an image (10, 25) is changed.
26. Method of operation according to claim 24 or 25, characterized in that the step (a) comprises acquiring a principal image (10), the step (b) comprises filtering said principal image (10) and obtaining a filtered principal image (11) , and the step (c) comprises using said principal image (10) .
27. Method of operation according to one or more of the claims 24 to 25, characterized in that the step (a) comprises acquiring one or more support images (25), the step (b) comprises filtering (9) a principal image (10) obtained by processing said one or more support images (25) and/or filtering (9) said one or more support images (25), and the step (c) comprises using said principal image (10) and/or one or more of said support images (25).
28. Method of operation according to one or more of the claims 24 to 27, characterized by a step for defining graphic elements (15) and arranging said graphic elements (15) so as to be able to insert them in the context of said at least an image (10, 25), and in that the graphic combination step (23) comprises inserting said graphic elements (15) in the composition of the processed image (17).
29. Method of operation according to one or more of the claims 24 to 28, characterized in that the steps of filtering (9) and/or defining the graphic elements (15), mathematically describing a physical feature of at least a subject (3), and working out a two-dimensional image (16) of one or more virtual objects interact with each other, so that the results generated by one of these procedures can be used either to carry out a second one of said procedures or by one or more electronic units operatively connected to said apparatus.
30. Method of operation according to one or more of the claims 24 to 29, characterized in that, in said step (d), said at least a two-dimensional image (16) is generated so as to be consistent with the context of said at least a filtered image (11), i.e. so as to comply with the dimensional and perspective constraints imposed by the context in which such items must be inserted.
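The dimensional constraint of claim 30 can be illustrated by a simple sizing rule: the virtual object's two-dimensional image is scaled from the subject's apparent size, so it occupies the number of pixels it would at the subject's depth in the scene. The function, names, and numbers below are illustrative assumptions, not taken from the patent.

```python
def scale_virtual_object(subject_px_width, subject_real_width_cm,
                         object_real_width_cm):
    # Pixels-per-centimetre at the subject's depth in the scene, inferred
    # from the subject's apparent (pixel) width and known real width.
    px_per_cm = subject_px_width / subject_real_width_cm
    # The virtual object's rendered width, in pixels, at the same depth.
    return object_real_width_cm * px_per_cm

# Example: a face 150 px wide and roughly 15 cm wide gives 10 px/cm, so a
# 14 cm pair of virtual eyeglasses should be drawn about 140 px wide.
glasses_px = scale_virtual_object(150, 15.0, 14.0)
```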
31. Method according to one or more of the claims 24 to 30, characterized in that the maximum value (T) of the time elapsing from the moment at which the images (10, 25) are acquired to the moment at which the corresponding processed image (17) is displayed is less than or equal to 15 seconds.
32. Method according to one or more of the claims 24 to 31, characterized in that, under normal or steady-state operating conditions, the number of frames per second displayed by the display system (6) is greater than or equal to 0.25.
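Claims 31 and 32 together bound the pipeline's timing: end-to-end latency T of at most 15 seconds, and a steady-state display rate of at least 0.25 frames per second (one processed frame at least every 4 seconds). A minimal check of these two limits, assuming a serial pipeline where one frame is produced per full acquire/process/display cycle (the function and timings below are illustrative, not from the patent):

```python
# Numeric limits stated in claims 31 and 32.
MAX_LATENCY_S = 15.0
MIN_FPS = 0.25

def meets_claimed_limits(acquire_s, process_s, display_s):
    # Total acquisition-to-display latency T (claim 31).
    latency = acquire_s + process_s + display_s
    # Serial-pipeline assumption: one displayed frame per full cycle,
    # so the steady-state frame rate is the reciprocal of the latency.
    fps = 1.0 / latency
    return latency <= MAX_LATENCY_S and fps >= MIN_FPS

ok = meets_claimed_limits(0.1, 3.2, 0.05)    # 3.35 s cycle, ~0.30 fps
slow = meets_claimed_limits(2.0, 14.0, 1.0)  # 17 s cycle, fails both limits
```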
PCT/IB2006/002854 2005-10-14 2006-10-09 Image acquisition, processing and display apparatus and operating method thereof WO2007042923A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT000074A ITPN20050074A1 (en) 2005-10-14 2005-10-14 ACQUISITION, PROCESSING AND VISUALIZATION OF IMAGES AND RELATED OPERATING METHOD
ITPN2005A000074 2005-10-14

Publications (2)

Publication Number Publication Date
WO2007042923A2 true WO2007042923A2 (en) 2007-04-19
WO2007042923A3 WO2007042923A3 (en) 2007-10-04

Family

ID=36579124

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/002854 WO2007042923A2 (en) 2005-10-14 2006-10-09 Image acquisition, processing and display apparatus and operating method thereof

Country Status (2)

Country Link
IT (1) ITPN20050074A1 (en)
WO (1) WO2007042923A2 (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07177481A (en) * 1993-12-20 1995-07-14 Victor Co Of Japan Ltd Two-way video image communication equipment
US6151009A (en) * 1996-08-21 2000-11-21 Carnegie Mellon University Method and apparatus for merging real and synthetic images
DE19635753A1 (en) * 1996-09-03 1998-04-23 Kaufhof Warenhaus Ag Virtual imaging device for selecting clothing from catalogue
WO1999023609A1 (en) * 1997-10-30 1999-05-14 Headscanning Patent B.V. A method and a device for displaying at least part of the human body with a modified appearance thereof
US6944327B1 (en) * 1999-11-04 2005-09-13 Stefano Soatto Method and system for selecting and designing eyeglass frames
US20050018140A1 (en) * 2003-06-18 2005-01-27 Pioneer Corporation Display apparatus and image processing system
WO2005057398A2 (en) * 2003-12-09 2005-06-23 Matthew Bell Interactive video window display system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DARRELL T ET AL: "A virtual mirror interface using real-time robust face tracking" PROCEEDINGS THIRD IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (CAT. NO.98EX107) IEEE COMPUT. SOC LOS ALAMITOS, CA, USA, 1998, pages 616-621, XP002083775 ISBN: 0-8186-8344-9 *
REICHER T: "A FRAMEWORK FOR DYNAMICALLY ADAPTABLE AUGMENTED REALITY SYSTEMS, related work, UbiCom" INTERNET CITATION, [Online] 16 April 2004 (2004-04-16), XP002386581 Retrieved from the Internet: URL:http://tumb1.biblio.tu-muenchen.de/publ/diss/in/2004/reicher.pdf> [retrieved on 2006-06-21] *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2577966A4 (en) * 2010-06-03 2017-07-12 Mebe Viewcom AB A studio for life-size videoconferencing
GB2582161A (en) * 2019-03-13 2020-09-16 Csba Ltd Video conferencing device
GB2582161B (en) * 2019-03-13 2021-04-28 Csba Ltd Video conferencing device

Also Published As

Publication number Publication date
ITPN20050074A1 (en) 2007-04-15
WO2007042923A3 (en) 2007-10-04

Similar Documents

Publication Publication Date Title
AU2019246856B2 (en) Devices, systems and methods of capturing and displaying appearances
US6633289B1 (en) Method and a device for displaying at least part of the human body with a modified appearance thereof
RU2668408C2 (en) Devices, systems and methods of virtualising mirror
US8982109B2 (en) Devices, systems and methods of capturing and displaying appearances
US20170323374A1 (en) Augmented reality image analysis methods for the virtual fashion items worn
CN109840825A (en) The recommender system of physical features based on user
US7948481B2 (en) Devices, systems and methods of capturing and displaying appearances
US7500755B2 (en) Display apparatus and image processing system
CN108427498A (en) A kind of exchange method and device based on augmented reality
US20220044311A1 (en) Method for enhancing a user's image while e-commerce shopping for the purpose of enhancing the item that is for sale
CN102201099A (en) Motion-based interactive shopping environment
WO2010042990A1 (en) Online marketing of facial products using real-time face tracking
CA2979228A1 (en) Holographic interactive retail system
CN107211165A (en) Devices, systems, and methods for automatically delaying video display
CN108537628A (en) Method and system for creating customed product
KR20130027801A (en) User terminal for style matching, style matching system using the user terminal and method thereof
US20190066197A1 (en) System and Method for Clothing Promotion
WO2012054983A1 (en) Eyewear selection system
WO2007042923A2 (en) Image acquisition, processing and display apparatus and operating method thereof
KR20070050165A (en) Business method & system related to a fashionable items utilizing internet.
KR20190045740A (en) A method for operating a glasses fitting system using a smart mirror
CN114758106A (en) Online simulation shopping system
CN106774838A (en) The method and device of intelligent glasses and its display information
KR20220079274A (en) Method of glasses wearing simulation
JP2003030494A (en) Support system for selecting spectacles

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06809006

Country of ref document: EP

Kind code of ref document: A2