WO2016045924A1 - A background light enhancing apparatus responsive to a remotely generated video signal - Google Patents

A background light enhancing apparatus responsive to a remotely generated video signal

Info

Publication number
WO2016045924A1
Authority
WO
WIPO (PCT)
Prior art keywords
video signal
image
region
video
display panel
Prior art date
Application number
PCT/EP2015/070079
Other languages
French (fr)
Inventor
Anton Werner Keller
Fabian Nicola SCHLUMBERGER
Original Assignee
Thomson Licensing
Priority date
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of WO2016045924A1 publication Critical patent/WO2016045924A1/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/34 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters by control of light from an independent source
    • G09G3/3406 Control of illumination source
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56 Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N23/66 Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661 Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • G09G2320/00 Control of display operating conditions
    • G09G2320/06 Adjustment of display parameters
    • G09G2320/0626 Adjustment of display parameters for control of overall brightness
    • G09G2360/00 Aspects of the architecture of display systems
    • G09G2360/16 Calculation or use of calculated indices related to luminance levels in display data
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4318 Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Abstract

A camera of a smart phone generates a video signal containing an image of a first participant captured by the camera. A first transmitter drive video signal containing a head/face image portion of the first participant is transmitted via a communication network. An input video signal containing an image of a remote, second participant is received via the communication network for display in a display device of the smart phone. A video processor of the smart phone is responsive to the input video signal for valuating brightness content of the input video signal and for generating a second transmitter drive video signal having brightness content that is regulated in a feedback manner in accordance with the valuated brightness content of the input video signal. The second transmitter drive video signal is capable of enhancing background lighting produced by a remote display device of the second participant.

Description

A BACKGROUND LIGHT ENHANCING APPARATUS RESPONSIVE TO A REMOTELY GENERATED VIDEO SIGNAL
CROSS REFERENCES
This application claims priority to a U.S. Provisional Application, Serial No. 62/054720, filed on September 24, 2014, which is herein incorporated by reference in its entirety.
Field of the Invention
The present embodiment is directed to video telephony and, in particular, to an arrangement for enhancing light conditions to which a participant in a video conference is exposed.
Background of the Invention
Videoconferencing using, for example, Skype or Facetime has become a common tool in the home environment. In a video conference, video cameras are used to allow participants at remote endpoints to view and hear each other. When a participant is exposed to insufficient light conditions, the image of the participant displayed at the receiving end may be of low quality. It may be desirable to utilize the light produced in a display screen for illuminating or lightening up a head/face of a participant in the videoconference.
A first participant and, for example, a remote second participant may participate in the video conference. A videotelephony device employed by the remote, second participant may generate a video signal containing, for example, a camera captured image of the face of the remote, second participant. The generated video signal is transmitted via a communication network such as the internet.
In an advantageous embodiment, the transmitted video signal is received via the communication network as an input video signal to a video processor used in a videotelephony device, for example, a smart phone, a television receiver, a personal computer or a tablet, employed by the first participant. The video processor generates a display drive video signal containing, for example, the captured image of the head/face of the remote, second participant for display in a display panel of the videotelephony device of the first participant.
The video processor also generates a first transmitter drive video signal. The first transmitter drive video signal is applied via the
communication network and is configured to be displayed in a first region of a display panel of the videotelephony device of the second participant for enhancing the background lighting to which an object of a camera of the videotelephony device of the remote second participant is exposed.
The video processor is additionally responsive to the input video signal for valuating illumination, for example, brightness content of the received input video signal and for regulating the brightness content of the first transmitter drive video signal in a negative feedback manner in accordance with the valuated brightness content of the received input video signal. For example, if the input video signal has low brightness, the first transmitter drive video signal will cause an increase in the light produced in the first region of the display. Consequently, the background light that is applied to the object of the camera of the second participant will increase in a manner to increase the brightness associated with the input video signal.
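As a rough numerical illustration of this negative feedback behaviour, the following sketch (in Python, with a hypothetical update function, gain and target value that are not part of the disclosure) raises the drive level commanded for the remote light producing region when the received video is valuated as dark, and lowers it when the received video is already bright.

    # Hypothetical sketch of the negative feedback brightness regulation
    # described above; names, gain and target values are illustrative only.

    def update_fill_brightness(current_level: float,
                               valuated_brightness: float,
                               target_brightness: float = 0.6,
                               gain: float = 0.3) -> float:
        """Return the next drive level (0..1) for the light producing region.

        If the received image is darker than the target, the drive level is
        raised; if it is brighter, the level is lowered (negative feedback).
        """
        error = target_brightness - valuated_brightness
        next_level = current_level + gain * error
        # The region can at most be driven to full white, as noted later in the text.
        return max(0.0, min(1.0, next_level))

    # Example: a dark incoming image (valuated brightness 0.2) pushes the fill light up.
    level = 0.5
    for _ in range(5):
        level = update_fill_brightness(level, valuated_brightness=0.2)
    print(round(level, 3))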
The video processor is further responsive to a camera generated video signal containing, for example, the image of the face/head of the first participant, for generating a second transmitter drive video signal. The second transmitter drive video signal is applied to the communication network and is configured to be displayed in a second region that is separated from the first region of the display panel of the remote videotelephony device of the second participant. The image produced in the first region is, for example, unrelated to the image produced in the second region.
Summary of the Invention
A video communication apparatus for employing an advantageous method includes an interface capable of receiving an input video signal from a communication network containing a first image capable of being displayed in a first display panel. A video processor is configured to valuate illumination content of the input video signal and to generate, in accordance with the illumination content valuation, a first output transmitter drive video signal that is capable of being transmitted in the communication network and of being displayed in a first region of a second display panel, when received from the communication network, to produce in the first region background light that is regulated in accordance with the illumination content valuation. The video processor is responsive to a second video signal containing a second image for generating a second output transmitter drive video signal containing the second image that is capable of being transmitted in the communication network and of being displayed in a second region of the second display panel, when received from the communication network.
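As a structural sketch of the apparatus summarized above, the hypothetical Python class below models its two outputs: a fill light drive for the first region, regulated by the valuated illumination of the incoming video, and a pass-through of the locally captured image for the second region. The class and field names and the simple regulation step are illustrative assumptions, not the patent's implementation.

    # Hypothetical structural sketch of the apparatus described in this summary.
    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class TransmitterDriveSignals:
        fill_light_level: float  # first output: drive for the background light region
        local_image: Any         # second output: image shown in the second region

    class VideoCommunicationApparatus:
        def __init__(self, target_brightness: float = 0.6, gain: float = 0.3):
            self.target = target_brightness
            self.gain = gain
            self.fill_level = 0.5  # current drive of the remote light region

        def process(self, valuated_input_brightness: float,
                    local_image: Any) -> TransmitterDriveSignals:
            # First output: regulate the fill light from the valuated
            # illumination of the received video (negative feedback).
            error = self.target - valuated_input_brightness
            self.fill_level = min(1.0, max(0.0, self.fill_level + self.gain * error))
            # Second output: the locally captured image, passed through.
            return TransmitterDriveSignals(self.fill_level, local_image)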
Brief Description of the Drawings
FIGURE 1A illustrates a block diagram of a phone, for example, a prior art phone, used in a video conference;
FIGURE 1B illustrates a block diagram of a smart phone, embodying an advantageous feature, operated by a first participant that is engaged in the video conference with a second participant of FIGURE 1A;
FIGURE 2A illustrates an image of the second participant of Figure 1A that is captured in a camera of the participant of Figure 1A;
FIGURE 2B illustrates a display panel of the smart phone of Figure 1B having a light enhancing region;
FIGURE 3 illustrates an image of the first participant of Figure 1B that is captured in the camera of Figure 1B;
Figures 4A, 4B and 4C illustrate three examples, respectively, of asymmetric lighting of the participant of Figure 1B;
Figure 5 illustrates a so called selfie image captured in the camera and displayed in the display of Figure 1B;
FIGURE 6 illustrates a display panel of Figure 1A having a light enhancing region controlled by the phone of Figure 1B; and
Figure 7 illustrates asymmetric lighting of the participant of Figure 1A controlled by the phone of Figure 1B.
Detailed Description
FIGURE 1A illustrates a block diagram of, for example, a prior art smart phone 300 operated by a participant A that is engaged in a video conference with a participant B of Figure 1B located, for example, remotely from participant A. FIGURE 1B illustrates a block diagram of a smart phone 200, providing an advantageous feature, and operated by participant B. Similar symbols and numerals in FIGURES 1A and 1B indicate similar items or functions. FIGURE 2A illustrates an image 101a of participant A of Figure 1A that is captured in a camera 307 of phone 300 for producing a video signal 307a containing captured image 101a of Figure 2A. Image 101a includes an image portion 101b depicting a head/face of participant A, an image portion 101c depicting a body of participant A and a background portion 101d that excludes the other two image portions. Similar symbols and numerals in FIGURES 1A, 1B and 2A indicate similar items or functions.
Video signal 307a of Figure 1A is coupled substantially without video or picture image content modification via a conventional video processor 302, implemented in, for example, a microprocessor, not shown, to a conventional receiver-transmitter stage 303 of phone 300. Receiver-transmitter stage 303 of phone 300 transmits the content of video signal 307a in a conventional manner via a phone or data/internet communication network 400. A conventional receiver-transmitter stage 205 of Figure 1B receives via network 400 the signal transmitted by receiver-transmitter stage 303 of Figure 1A that contains image 101a of Figure 2A, forming an input video signal 205a of Figure 1B. Input video signal 205a contains the same video or picture image content as image 101a of Figure 2A.
Advantageously, a video processor 206 of Figure 1B, implemented in, for example, a microprocessor, not shown, detects or recognizes in input video signal 205a a portion signal, not shown, forming an image portion 101b of Figure 2A depicting the head/face image of participant A of Figure 1A, using a well known pattern recognition technique. Alternatively, the detected or recognized portion may also include a band 101e of what would otherwise be background portion 101d, in addition to the signal portion associated with head/face image portion 101b. Detecting or recognizing the head/face contained in image portion 101b is performed using a method similar to recognition methods explained, for example, in US 6,661,907, in US 6,343,141 and in the article entitled "Detecting Faces in Images: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 1, January 2002.
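For concreteness, the head/face extraction step could be approximated with an off-the-shelf detector; the sketch below uses OpenCV's Haar cascade face detector as a stand-in for the recognition methods cited above, and the optional surrounding band (analogous to band 101e) is modelled by a margin parameter. This is an illustrative assumption, not the technique of the cited references.

    # Illustrative sketch only: an off-the-shelf face detector stands in for
    # the pattern recognition techniques cited in the text.
    import cv2

    def extract_head_face_region(frame, margin=0.15):
        """Return the bounding box (x, y, w, h) of the head/face portion,
        expanded by a band of surrounding background, or None if no face."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
        # Expand by a band (analogous to band 101e) around the detected face.
        dx, dy = int(w * margin), int(h * margin)
        H, W = frame.shape[:2]
        x0, y0 = max(0, x - dx), max(0, y - dy)
        x1, y1 = min(W, x + w + dx), min(H, y + h + dy)
        return x0, y0, x1 - x0, y1 - y0

    # Usage (hypothetical file name): box = extract_head_face_region(cv2.imread("frame.png"))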
The aforementioned portion, not shown, of video signal 205a that contains head/face image portion 101b of Figure 2A of participant A is extracted and applied in a video processor 203 to generate a display drive video signal 203b. In addition, video processor 203 of Figure 1B synthesizes a display drive video signal 203c that is combined with display drive video signal 203b to form a combined display drive video signal 203a. Thus, signal 203a contains both display drive video signals 203c and 203b. Display drive video signals 203c and 203b are applied to a conventional display device 204c having a display panel 204. In display panel 204, display drive video signal 203b produces in a region 204b of display panel 204 of Figure 2B an image portion having, for example, the same picture image content as head/face image portion 101b of Figure 2A and is referred to in Figure 2B using the same symbol 101b. Similar symbols and numerals in FIGURES 1A, 1B, 2A and 2B indicate similar items or functions.
Advantageously, synthesized display drive video signal 203c of Figure 1B produces light in a region 204a of display panel 204 of Figure 2B that excludes head/face image portion 101b and is non-overlapping with region 204b of display panel 204. Light producing region 204a is used for generating and regulating illumination in region 204a to lighten up, for example, the head/face of participant B forming the object of a camera 207.
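A minimal sketch of how such a combined display frame, playing the role of signal 203a, might be composed: the extracted head/face image occupies one region (like region 204b) and the remainder of the panel is driven as a uniform fill light (like region 204a). The NumPy frame representation, the centred layout and the uniform fill are assumptions for illustration only.

    # Sketch of composing a combined display frame: a face region (like 204b)
    # plus a synthesized light producing region (like 204a).
    import numpy as np

    def compose_display_frame(face_image, panel_h, panel_w, fill_level):
        """face_image: HxWx3 uint8 crop; fill_level: 0..1 drive for the fill light."""
        frame = np.full((panel_h, panel_w, 3),
                        int(255 * fill_level), dtype=np.uint8)  # light producing region
        fh, fw = face_image.shape[:2]
        y0 = (panel_h - fh) // 2
        x0 = (panel_w - fw) // 2
        frame[y0:y0 + fh, x0:x0 + fw] = face_image              # head/face region
        return frame

    # Example with a synthetic 100x80 gray "face" crop on a 480x640 panel.
    demo = compose_display_frame(np.full((100, 80, 3), 128, np.uint8), 480, 640, 0.9)
    print(demo.shape)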
FIGURE 3 illustrates an image 201a captured in camera 207 of Figure 1B of participant B forming the object of camera 207. Similar symbols and numerals in FIGURES 1A, 1B, 2A, 2B and 3 indicate similar items or functions.
Image 201a of Figure 3 depicts a head/face image portion 201b of participant B, a body image portion 201c of participant B and a background image portion 201d that excludes at least head/face image portion 201b of participant B. A video signal 207a of Figure 1B contains image 201a of Figure 3. Video signal 207a of Figure 1B is processed in a video processor 202, implemented in, for example, the same microprocessor, not shown, which also implements processors 203 and 206 that were mentioned before. Thus, video processor 202, video processor 203 and video processor 206 may be combined to form a single video processor 250.
Advantageously, video processor 202, using a pattern recognition technique referred to before, detects or recognizes and extracts a signal portion, not shown, of video signal 207a that contains the image pattern of head/face image portion 201b of Figure 3 of participant B of Figure 1B, to the exclusion of the rest of image 201a. Optionally, the video content of a band portion 201e may also be included in the detected and extracted portion. Advantageously, video processor 202 of Figure 1B valuates an illumination exposure parameter, for example, brightness or signal-to-noise ratio content of captured image 201a, or other optical characteristics of captured image 201a of Figure 3 such as color, or a combination thereof, by analyzing the detected and extracted portion, not shown, of video signal 207a of Figure 1B that contains head/face image portion 201b of Figure 3. Alternatively, the combination of head/face image portion 201b and, for example, band 201e can be used for such valuation. Illumination or brightness valuation applies well known pixel signal integration processes.
In one alternative, the integration process is applied to captured image 201a of participant B of Figure 3 in its entirety. In other alternatives, the integration process is applied solely to head/ face image portion 201b or to a combination of head/face image portion 201b and portion 20 le of captured image 201a of participant B of Figure IB . An output signal 202a of processor 202 contains a value indicative of the illumination such as the aforementioned brightness content exposure of the image of participant B.
The valuation may indicate that the illumination of head/face image portion 201b of Figure 3 is insufficient, for example, below a threshold level.
Advantageously, a combination of the brightness content, color content and gamma correction values contained in display drive video signal 203c, which excludes the content of head/face image portion 101b of Figure 2A, is regulated in, for example, a closed loop negative feedback manner. The regulation is performed in accordance with the valuated illumination/brightness content in signal 202a of Figure 1B. As a result, illumination of light producing region 204a of Figure 2B, which excludes region 204b, is controlled in a manner to vary the light exposure to which the object of camera 207 of Figure 1B, such as the head/face of participant B, is subjected. The illumination of light producing region 204a of Figure 2B is obtained for enhancing the overall illumination or brightness content, contrast and/or color temperature in a closed loop negative feedback manner. Whenever the lighting circumstances change, adaptive sense signal 202a also changes. The brightness and/or color content of light producing region 204a of Figure 2B may be increased up to white color with maximum light output.
The regulated light output can be controlled by, for example, controlling light valves or cells, not shown, forming pixels of display panel 204, which may be formed using, for example, liquid crystal display (LCD) or organic light-emitting diode (OLED) technology. When, for example, LCD technology is used, the regulated light output may be additionally or alternatively controlled by selectively controlling back lighting, not shown, of display panel 204. Advantageously, an improved or better lighted head/face image portion 201b of participant B of Figure 1B is thereby obtained. For example, if participant B is located in a dark room and an exterior lighting, not shown, directed towards the head/face of participant B is poor, the brightness content of light producing region 204a of Figure 2B is increased in a negative feedback manner to an optimal value for obtaining enhanced illumination of the head/face of participant B of Figure 3. The result is that signal 207a of camera 207 of Figure 1B will contain image 201a of participant B of Figure 3 that is, advantageously, optimally brighter. In one example, signal 207a of camera 207 of Figure 1B is applied in video processor 203 to transmitter-receiver stage 205, which transmits via communication network 400 a transmitter drive video signal 203d of processor 203 containing image 201a of Figure 3. At a remote end, receiver-transmitter stage 303 of Figure 1A receives transmitted drive video signal 203d and displays it, for example, unmodified, in a display panel 304 of a display device 304c of phone 300.
The enhanced lighting conditions produced in light producing region 204a of Figure 2B, controlled by the negative feedback control loop, cause image 201a of Figure 3 to be, advantageously, optimally bright when displayed in display panel 304 of Figure 1A of participant A. Thus, advantageously, even prior art phone 300 benefits from the advantageous features of phone 200 of Figure 1B.
Advantageously, the color temperature of, in particular, the skin of the image of participant B of Figure 1B may be analyzed by processor 202. As explained before, processor 203 can vary the color of light producing region 204a of Figure 2B in accordance with the analysis results of processor 202 of Figure 1B. The result is that image 201a of participant B of Figure 3 that is transmitted to participant A of Figure 1A can become, advantageously, more presentable or so-called healthier looking. If the object of camera 207 of Figure 1B includes more than one person, for example, a family, resulting in more than a single head/face in the picture, each head/face can be detected or recognized and taken into account to be displayed in a manner similar to that described before with respect to single participant B. Advantageously, to alleviate partial exposure to light, an under-exposed head/face portion may be selectively lightened up by light producing region 204a of display 204 to illuminate mainly the darker side in an asymmetric manner. Partial exposure occurs when only one portion of the image is poorly lighted as a result of, for example, an external light source, not shown, such as a lamp that illuminates mainly one side of the face of participant B. Figures 4A, 4B and 4C illustrate three examples of such so-called asymmetric lighting. Similar symbols and numerals in FIGURES 1A, 1B, 2A, 2B, 3, 4A, 4B and 4C indicate similar items or functions.
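One plausible way to realize the color adjustment described above is to estimate the average color cast of the face region and tint the fill light toward the complementary direction. The simple rule below is an assumption for illustration, not the analysis actually performed by processor 202.

    # Illustrative sketch of tinting the light producing region to compensate
    # the color cast analysed from the captured face region.
    import numpy as np

    def fill_color_for_skin_cast(face_crop, fill_level=1.0):
        """Return an (R, G, B) drive color that leans away from the average
        color cast of the face crop (e.g. a bluish face gets a warmer fill)."""
        mean_rgb = face_crop.reshape(-1, 3).mean(axis=0) / 255.0
        gray = mean_rgb.mean()
        correction = gray - mean_rgb           # push each channel toward neutral
        tint = np.clip(0.5 + correction, 0.0, 1.0)
        return tuple(int(255 * fill_level * c) for c in tint)

    # Example: a bluish synthetic face crop yields a warmer (more red) fill.
    bluish = np.zeros((10, 10, 3), np.uint8); bluish[..., 2] = 180; bluish[..., 0] = 90
    print(fill_color_for_skin_cast(bluish))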
In Figures 4A and 4B, light producing region 204a occupies only a right portion of display panel 204 for producing light output at the right side of the image of participant A that is directed to the head/face of participant B of Figure 1B. In the example of Figure 4B, the proportional size of head/face portion 101b is scaled down and relocated to the left and down, whereas the area of light producing region 204a becomes proportionally larger than in Figure 4A to allow for better asymmetric lighting.
In Figure 4C, head/face image portion 101b is relocated to one corner or side, allowing the rest of the area for producing more light output at the lower and right sides of head/face portion 101b that is directed to the head/face of participant B of Figure 1B. The size of the area occupied by light producing region 204a of Figures 4A, 4B and 4C is regulated by processor 203 of Figure 1B in accordance with the valuated illumination distribution content in head/face image portion 201b of Figure 3.
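As a simple model of how the size of the light producing region could track the valuated illumination distribution, the sketch below splits the panel between the two sides in proportion to how dark each half of the head/face image is; the share limits and the neglect of mirror geometry are illustrative assumptions, not the regulation actually performed by processor 203.

    # Sketch of the asymmetric lighting decision: compare left and right
    # brightness of the captured head/face portion and give the darker side a
    # larger share of the light producing region. Thresholds are illustrative.
    def asymmetric_split(luma_left: float, luma_right: float,
                         min_share: float = 0.3, max_share: float = 0.7):
        """Return (left_share, right_share): fraction of the panel devoted to
        fill light on each side, larger on the side facing the darker half of
        the face (mirror geometry is ignored in this simplified sketch)."""
        total = luma_left + luma_right + 1e-6
        left_share = luma_right / total  # a darker left half yields a larger left share
        left_share = max(min_share, min(max_share, left_share))
        return left_share, 1.0 - left_share

    # Example: left half of the face is dark (0.2), right half is bright (0.6).
    print(asymmetric_split(0.2, 0.6))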
Advantageously, phone 200 of Figure 1B is also capable of providing enhanced lighting content of head/face image portion 201b of Figure 3 of participant B of Figure 1B when camera 207 is used, in a manner unrelated to video telephony, to capture and store, for example, a self-portrait photograph referred to as a selfie. For this purpose, the portion, not shown, of video signal 207a that contains mainly head/face image portion 201b of Figure 3 of participant B is extracted and applied in video processor 203 to generate display drive video signal 203b of display drive output video signal 203a of video processor 203. This is done in a manner analogous to that described before with respect to Figure 2B. A main difference is that, instead of displaying head/face image portion 101b of participant A in display panel 204 of Figure 2B, display drive video signal 203b contains mainly the extracted head/face image portion 201b of participant B of Figure 3 for display in display panel 204 of Figure 5. Similar symbols and numerals in FIGURES 1A, 1B, 2A, 2B, 3, 4A, 4B, 4C and 5 indicate similar items or functions. In this way, participant B can view his head/face image portion 201b captured by camera 207 of Figure 1B.
Advantageously, if the aforementioned content illumination valuation performed in processor 202 indicates that the brightness values of head/face image portion 201b of Figure 3 are insufficient, a combination of the brightness content, color content and gamma correction values associated with display drive video signal 203c is regulated in, for example, a closed loop negative feedback manner. The regulation is performed in accordance with the illumination/brightness content in signal 202a of Figure 1B. The illumination of light producing region 204a of Figure 5 is controlled in a manner to vary the light exposure to which the selfie picture taker B is subjected, so as to enhance the overall brightness content, contrast and color temperature in a closed loop feedback manner. It should be understood that the advantageous features described before in connection with Figure 2B and Figures 4A-4C are also applicable in an analogous manner with respect to capturing and storing a selfie. As explained before, when participants A and B of Figures 1A and 1B, respectively, participate in a videotelephony conference, a portion, not shown, of video signal 207a that contains head/face image portion 201b of Figure 3 of participant B is extracted and applied in video processor 203 of Figure 1B to generate a transmitter drive output video signal 203e.
Transmitter drive output video signal 203e is contained in the aforementioned transmitter drive output video signal 203d. Transmitter drive video signal 203e has, for example, substantially the same visual content as the aforementioned extracted portion signal containing just head/face image portion 201b of Figure 3 of participant B of Figure 1B. In addition, video processor 203 of Figure 1B synthesizes a transmitter drive video signal 203f that is also contained in signal 203d. Thus, signal 203d, which contains both signals 203e and 203f, is applied to receiver-transmitter stage 205 that transmits signal 203d via network 400. Receiver-transmitter stage 303 of Figure 1A, which receives signal 203d via network 400, displays it, for example, without modifying its visual contents, in display panel 304 of Figure 6 of phone 300 of Figure 1A. Similar symbols and numerals in FIGURES 1A, 1B, 2A, 2B, 3, 4A, 4B, 4C, 5 and 6 indicate similar items or functions. In display panel 304 of Figure 6, display drive video signal 203e produces an image portion having, for example, the same visual content as head/face image portion 201b of Figure 3 for display in a display region 304b of display panel 304 of Figure 6. Video processor 206 of Figure 1B, for example, in addition to extracting head/face image portion 101b of Figure 2A for display in display panel 204 of Figure 1B in the manner described before, also valuates the illumination exposure of image 101a of Figure 2A received from participant A of Figure 1A. Optionally, video processor 206 of Figure 1B also valuates other optical characteristics, such as color, of received image 101a of Figure 2A contained in video signal 205a of Figure 1B. Such valuation applies well known pixel signal integration processes in the manner described before with respect to head/face image portion 201b of Figure 3.
In one alternative, an integration process is applied to the content of captured image 101a of participant A of Figure 2A in its entirety. In other alternatives, video processor 206 of Figure 1B detects or recognizes the pattern of head/face image portion 101b of Figure 2A. Then, the integration process is applied solely to the portion of signal 205a of Figure 1B that corresponds to head/face image portion 101b of Figure 2A or to a combination of image portion 101b and portion 101e of captured image 101a of participant A. The result of such valuation is contained in an output signal 206a of processor 206 containing brightness values indicative of the extent of illumination exposure on head/face image portion 101b of Figure 2A of participant A.
In carrying out another particularly advantageous feature, if the analysis of the content of signal 206a of Figure 1B indicates that the illumination/brightness content valuation associated with the image of participant A of Figure 2A is insufficient, for example, below a threshold level, video processor 203 of Figure 1B synthesizes transmitter drive video signal 203f to produce light in a region 304a of Figure 6 of display panel 304 of Figure 1A. Region 304a of Figure 6 excludes head/face image portion 201b of region 304b and is non-overlapping with region 304b of display panel 304. Light producing region 304a is used for generating and regulating, in a negative feedback manner, illumination in region 304a to lighten up, for example, the head/face of participant A of Figure 1A forming the object of camera 307. This is done, advantageously, for controlling illumination such as brightness content directed to the head/face of participant A of Figure 1A in a manner analogous to that by which illumination producing region 204a of Figure 2B is lightened up. Thus, the illumination of light producing region 304a of Figure 6 may be controlled to vary the light exposure on participant A of Figure 1A in a manner to enhance the overall brightness content, contrast content and/or color temperature in a closed loop negative feedback manner. Whenever the lighting circumstances change, correction signal 206a of Figure 1B is adaptive to that change. The brightness content of light producing region 304a of Figure 6 might be increased up to optimal light output. In a way analogous to that described before with respect to Figures 4A, 4B and 4C, the head/face of participant A may be lightened up by light producing region 304a of display panel 304 that illuminates a darker side of head/face image portion 201b of Figure 7 in an asymmetric manner. Similar symbols and numerals in FIGURES 1A, 1B, 2A, 2B, 3, 4A, 4B, 4C, 5, 6 and 7 indicate similar items or functions.

Claims

1. A video communication apparatus, comprising:
an interface capable of receiving an input video signal from a communication network containing a first image capable of being displayed in a first display panel; and
a video processor configured to valuate illumination content of said input video signal and to generate, in accordance with the illumination content valuation, a first output transmitter drive video signal that is capable of being transmitted in said communication network and of being displayed in a first region of a second display panel, when received from said communication network, to produce in said first region background light that is regulated, in accordance with the illumination content valuation,
said video processor being responsive to a second video signal containing a second image for generating a second output transmitter drive video signal containing said second image that is capable of being transmitted in said communication network and of being displayed in a second region of said second display panel, when received from said communication network.
2. A video communication apparatus according to Claim 1 wherein said video processor utilizes an image recognition technique for recognizing a particular portion of said second image contained in said second video signal and wherein said video processor is configured to select, in accordance with said recognized particular image portion, a first portion of said second image contained in said second video signal to be included in said second output transmitter drive video signal and a second portion of said second image contained in said second video signal to be excluded from said second output transmitter drive video signal.
3. A video communication apparatus according to Claim 2 wherein said recognized image portion comprises an image of a head/face.
4. A video communication apparatus according to Claim 1 wherein said first output transmitter drive video signal has color content that is regulated in accordance with color content of said input video signal.
5. A video communication apparatus according to Claim 1 wherein said video processor generates said first output transmitter drive video signal in accordance with brightness content distribution throughout a portion of said input video signal associated with at least one-half of an entire image contained in said input video signal.
6. A video communication apparatus according to Claim 1 wherein said video processor is configured to generate a first display drive signal that is capable of being displayed in a first region of said first display panel and wherein said video processor utilizes an image recognition technique for recognizing a particular portion of said first image contained in said input video signal to select, in accordance with said recognized image portion, a first portion of said first image contained in said input video signal that is capable of being displayed in a second region of said first display panel and to exclude a second portion of said first image from being displayed in said second region of said first display panel.
7. A video communication apparatus according to Claim 6 wherein said recognized image portion comprises a head/face image.
8. A video communication apparatus according to Claim 1 wherein said first and second regions of said second display panel are at least partially non-overlapping.
9. A video communication apparatus according to Claim 1 wherein said communication network comprises one of the Internet, a data network and a telephone network.
10. A video communication apparatus according to Claim 1 wherein said video processor varies at least one of a size of said first region, a size of said second region, a location of said first region and a location of said second region in accordance with the brightness content of said input video signal.
11. A video communication apparatus according to Claim 1, further comprising a camera for generating said second video signal containing said second image and said first display panel for displaying said first image contained in said input video signal in said first display panel.
12. A method for performing video communication, comprising: receiving an input video signal from a communication network containing a first image that is capable of being displayed in a first display panel;
valuating illumination content of said input video signal;
generating, in accordance with the illumination content valuation, a first output transmitter drive video signal suitable for transmission in said communication network and capable, when received from said communication network, of being displayed in a first region of a second display panel to produce background light that is regulated, in accordance with the illumination content valuation; and generating a second output transmitter drive video signal containing a second image, said second output transmitter drive signal being suitable for transmission in said communication network and capable of being displayed in a second region of said second display panel, when received from said communication network.
13. A method according to Claim 12 further comprising: recognizing a particular portion of said image contained in a second video signal using an image recognition technique; and
selecting, in accordance with said recognized image portion, a first portion of said image contained in said second video signal to be contained in said second output transmitter drive video signal and a second portion of said image contained in said second video signal to be excluded from said second output transmitter drive video signal.
14. A method according to Claim 13 wherein said recognized particular image portion comprises an image of a head/face.
15. A method according to Claim 12, wherein said first output transmitter drive video signal has color content that is regulated in accordance with color content of said input video signal.
16. A method according to Claim 12 wherein said first output transmitter drive video signal is generated in accordance with brightness content distribution in a portion of said input video signal associated with at least one-half of an entire image contained in said input video signal.
17. A method according to Claim 12 wherein said communication network comprises one of the Internet and a telephone network.
18. A method according to Claim 12 further comprising varying at least one of a size of said first region, a size of said second region, a location of said first region and a location of said second region in accordance with the brightness content of said input video signal.
PCT/EP2015/070079 2014-09-24 2015-09-02 A background light enhancing apparatus responsive to a remotely generated video signal WO2016045924A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462054720P 2014-09-24 2014-09-24
US62/054,720 2014-09-24

Publications (1)

Publication Number Publication Date
WO2016045924A1 true WO2016045924A1 (en) 2016-03-31

Family

ID=54072818

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/070079 WO2016045924A1 (en) 2014-09-24 2015-09-02 A background light enhancing apparatus responsive to a remotely generated video signal

Country Status (2)

Country Link
TW (1) TW201624073A (en)
WO (1) WO2016045924A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5500671A (en) * 1994-10-25 1996-03-19 At&T Corp. Video conference system and method of providing parallax correction and a sense of presence
US20070147700A1 (en) * 2005-12-28 2007-06-28 Samsung Electronics Co., Ltd Method and apparatus for editing images using contour-extracting algorithm
US20090274368A1 (en) * 2007-01-11 2009-11-05 Fujitsu Limited Image correction method and apparatus
US8553103B1 (en) * 2009-09-30 2013-10-08 Hewlett-Packard Development Company, L.P. Compensation of ambient illumination
US20110221933A1 (en) * 2010-03-09 2011-09-15 Xun Yuan Backlight detection device and backlight detection method
US20120268350A1 (en) * 2011-04-20 2012-10-25 Sharp Kabushiki Kaisha Liquid crystal display device, multi-display device, method for determining light intensity, and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108156512A (en) * 2018-01-02 2018-06-12 联想(北京)有限公司 A kind of video playing control method and device
CN108156512B (en) * 2018-01-02 2021-04-13 联想(北京)有限公司 Video playing control method and device

Also Published As

Publication number Publication date
TW201624073A (en) 2016-07-01

Similar Documents

Publication Publication Date Title
TWI689892B (en) Background blurred method and electronic apparatus based on foreground image
US8345082B2 (en) System and associated methodology for multi-layered site video conferencing
CN109804622B (en) Recoloring of infrared image streams
JP5843751B2 (en) Information processing apparatus, information processing system, and information processing method
US8780161B2 (en) System and method for modifying images
US9225916B2 (en) System and method for enhancing video images in a conferencing environment
CN103945121B (en) A kind of information processing method and electronic equipment
US10719704B2 (en) Information processing device, information processing method, and computer-readable storage medium storing a program that extracts and corrects facial features in an image
US8384754B2 (en) Method and system of providing lighting for videoconferencing
US20130050395A1 (en) Rich Mobile Video Conferencing Solution for No Light, Low Light and Uneven Light Conditions
US20070115349A1 (en) Method and system of tracking and stabilizing an image transmitted using video telephony
US8553103B1 (en) Compensation of ambient illumination
US9843761B2 (en) System and method for brightening video image regions to compensate for backlighting
CN110022469A (en) Image processing method, device, storage medium and electronic equipment
US10548465B2 (en) Medical imaging apparatus and medical observation system
CN110266954A (en) Image processing method, device, storage medium and electronic equipment
EP3198332A1 (en) A background light enhancing apparatus responsive to a local camera output video signal
CN106506950A (en) A kind of image processing method and device
US11800048B2 (en) Image generating system with background replacement or modification capabilities
WO2016045924A1 (en) A background light enhancing apparatus responsive to a remotely generated video signal
KR100782505B1 (en) Method and apparatus for display video using contrast tone in mobile phone
US7822247B2 (en) Endoscope processor, computer program product, endoscope system, and endoscope image playback apparatus
KR102519288B1 (en) Method and apparatus for controlling content contrast in content ecosystem
WO2022267186A1 (en) Image processing method and apparatus for e-ink terminal, and storage medium
WO2007097517A1 (en) Adjustive chroma key composition apparatus and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15762533

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15762533

Country of ref document: EP

Kind code of ref document: A1