WO2017050115A1 - Image synthesis method - Google Patents

Image synthesis method Download PDF

Info

Publication number
WO2017050115A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
predetermined scene
luminance value
special effects
Prior art date
Application number
PCT/CN2016/097937
Other languages
French (fr)
Chinese (zh)
Inventor
Dai Xiangdong (戴向东)
Wei Yuxing (魏宇星)
Original Assignee
Nubia Technology Co., Ltd. (努比亚技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co., Ltd. (努比亚技术有限公司)
Publication of WO2017050115A1 publication Critical patent/WO2017050115A1/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof

Definitions

  • the present invention relates to image processing technologies, and in particular, to an image synthesis method and apparatus.
  • the binocular cameras of mobile terminals adopt cooperative shooting techniques between the two lenses to achieve better depth of field, 3D shooting, and other photographic effects.
  • 11 and 12 in FIG. 1 are the two visible light cameras of the mobile terminal device, and 13 is the connecting member of the two cameras.
  • 11 and 12 are fixed to the connecting member 13 with their imaging planes kept as parallel as possible.
  • the mobile terminal can simultaneously obtain the image captured by 11 and the image captured by 12, and then combine the two images.
  • for example, 11 mainly shoots a moving person, 12 photographs the background behind the person, and the mobile terminal combines the images taken by 11 and 12.
  • the composite image produced by the above method is monotonous, which makes the synthesized image uninteresting and thus results in a poor user experience.
  • the embodiments of the present invention are expected to provide an image synthesizing method and apparatus, which can improve the interest of the synthesized image and improve the user experience.
  • an image synthesis method comprising:
  • the i images are subjected to special effects synthesis to generate a composite image.
  • the feature is an overexposed area
  • the acquiring the feature of the first image includes:
  • the N pixel regions are referred to as N overexposed regions, where N is a positive integer.
  • the determining a central luminance value of each pixel region of the first image includes:
  • the center luminance value is calculated according to the mean square error, saturation, sharpness, and fusion weight values of the left and right image pixel regions of each pixel region.
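As a rough illustration of how such a center luminance value might be computed, the sketch below combines the four named cues for a pair of corresponding left/right pixel regions. The weighted-sum combination, the individual estimators, and all parameter values are illustrative assumptions; the claim does not disclose the exact formula.

```python
import numpy as np

def center_luminance(region_l, region_r,
                     w_mse=0.25, w_sat=0.25, w_sharp=0.25, w_fuse=0.25):
    """Hypothetical center-luminance score for corresponding left/right
    pixel regions, built from mean square error, saturation, sharpness,
    and a fusion weight on the mean brightness (all estimators assumed).
    """
    def stats(region):
        region = region.astype(np.float64)
        grey = region.mean(axis=-1)
        mse = ((grey - grey.mean()) ** 2).mean()                  # mean square error
        sat = (region.max(axis=-1) - region.min(axis=-1)).mean()  # saturation proxy
        gy, gx = np.gradient(grey)
        sharp = np.hypot(gx, gy).mean()                           # gradient sharpness
        return grey.mean(), mse, sat, sharp

    lum_l, mse_l, sat_l, sh_l = stats(region_l)
    lum_r, mse_r, sat_r, sh_r = stats(region_r)
    return (w_fuse * (lum_l + lum_r) / 2
            + w_mse * (mse_l + mse_r) / 2
            + w_sat * (sat_l + sat_r) / 2
            + w_sharp * (sh_l + sh_r) / 2)
```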
  • the synthesizing the i images according to the feature, and generating the composite image includes:
  • the overexposed area in each of the i images and the corresponding pixel area of the images other than its own image are combined and processed by the linear attenuation method to generate the composite picture.
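The linear attenuation blending above might look like the following minimal NumPy sketch, where the blend weight rises linearly with brightness so the replacement fades in without a hard seam. The weight model, `threshold`, and `ceiling` values are illustrative assumptions, not the patent's exact method.

```python
import numpy as np

def linear_attenuation_blend(img_a, img_b, threshold=200.0, ceiling=255.0):
    """Replace overexposed pixels of img_a with img_b, fading linearly.

    The weight rises linearly from 0 at `threshold` to 1 at `ceiling`,
    so the transition between the two source images has no hard edge.
    """
    img_a = img_a.astype(np.float64)
    img_b = img_b.astype(np.float64)
    luma = img_a.mean(axis=-1, keepdims=True)   # per-pixel brightness
    w = np.clip((luma - threshold) / (ceiling - threshold), 0.0, 1.0)
    return ((1.0 - w) * img_a + w * img_b).astype(np.uint8)
```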
  • synthesizing the i images by using special effects to generate a composite image includes:
  • Special effects processing is performed on one or more of the predetermined scene images of the i images, and the predetermined scene image after the special effect processing and the predetermined scene image not processed by the special effects are combined to generate the composite picture.
  • the i images are images of the same scene captured by the i cameras from different angles and positions, the feature is a predetermined scene image, and acquiring the features of the i images includes:
  • performing special effects synthesis on the i images, and generating a composite image includes:
  • the predetermined scene images of different depths are respectively segmented from the i images, including:
  • the predetermined scene images of different depths are segmented from the i images.
  • the i is 2
  • the predetermined scene images respectively segmenting different depths from the i images include:
  • Performing special effects processing on one or more of the predetermined scene images of the i images, and synthesizing the predetermined scene image after the special effect processing and the predetermined scene image not processed by the special effect, and obtaining the synthesized image includes:
  • the background image is subjected to special effect processing, and the processed background image is combined with the human image to generate the composite image.
  • the method further includes: when the feature is a background image, the background image may be blurred.
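A minimal sketch of the background blurring step, assuming the person region has already been segmented into a boolean mask; the box filter and kernel size stand in for whatever blur the device actually applies, and the helper names are illustrative.

```python
import numpy as np

def blur_background(image, person_mask, kernel=9):
    """Box-blur the background and keep the person region sharp.

    `person_mask` is a boolean HxW array marking the segmented person.
    """
    img = image.astype(np.float64)
    pad = kernel // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(img)
    # Accumulate the kernel x kernel neighbourhood of every pixel.
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= kernel * kernel
    # Composite: sharp person over the blurred background.
    out = np.where(person_mask[..., None], img, blurred)
    return out.astype(np.uint8)
```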
  • the method further includes:
  • any one of the two images and the composite image is dynamically displayed in an animated form.
  • an image synthesizing apparatus comprising:
  • An acquiring unit configured to acquire images captured by i cameras, wherein i is an integer greater than 1; acquiring features of the i images;
  • the synthesizing unit is configured to perform special effects synthesis on the i images according to the feature to generate a composite image.
  • the feature is an overexposed area
  • the acquiring unit is further configured to:
  • the N pixel regions are referred to as N overexposed regions, where N is a positive integer.
  • the acquiring unit is further configured to: calculate the central brightness value according to a mean square error, a saturation, a sharpness, and a fusion weight value of the left and right image pixel regions of each pixel region.
  • the synthesizing unit is further configured to:
  • the overexposed area in each of the i images and the corresponding pixel area of the images other than its own image are combined and processed by the linear attenuation method to generate the composite picture.
  • the synthesizing unit is further configured to:
  • Special effects processing is performed on one or more of the predetermined scene images of the i images, and the predetermined scene image after the special effect processing and the predetermined scene image not processed by the special effects are combined to generate the composite picture.
  • the i images are images of the same scene captured by the i cameras from different angles and positions, and the features are predetermined scene images, and the acquiring unit is further configured to:
  • the synthesizing unit is further configured to:
  • the obtaining unit is further configured to:
  • the predetermined scene images of different depths are segmented from the i images.
  • the i is 2, and the acquiring unit is further configured to:
  • the synthesizing unit is further configured to:
  • the background image is subjected to special effect processing, and the processed background image is combined with the human image to generate the composite image.
  • the synthesizing unit is further configured to: when the feature is a background image, perform blur processing on the background image.
  • the synthesizing unit is further configured to: after performing special effects processing on the background image, dynamically display any one of the two images and the composite image in an animated form.
  • the embodiment of the invention provides an image synthesizing method and device: images captured by i cameras and the features of those i images are first acquired, and the i images are then subjected to special effects synthesis according to these features to generate a composite image. In this way, after acquiring the features of the captured images, the image synthesizing device can apply different special effects to different features to generate a composite image. This greatly enriches the types of photographs that can be taken with multiple cameras, so that the same photo can be varied in many ways, thereby increasing the interest of the composite photo and improving the user experience.
  • 1 is a schematic structural view of a conventional binocular camera
  • FIG. 2 is a schematic structural diagram of hardware of a mobile terminal that implements various embodiments of the present invention
  • FIG. 3 is a schematic diagram of a wireless communication system of the mobile terminal shown in FIG. 2;
  • FIG. 5 is a flowchart of another image synthesizing method according to an embodiment of the present invention.
  • FIG. 6 is a schematic view of a marked overexposed area in an embodiment of the present invention.
  • FIG. 7 is a flowchart of still another image synthesizing method according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of an image captured by two cameras and a composite image according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of an image synthesizing apparatus according to an embodiment of the present invention.
  • the mobile terminal can be implemented in various forms.
  • the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers.
  • in the following description, it is assumed that the terminal is a mobile terminal.
  • however, the configuration according to an embodiment of the present invention can also be applied to a fixed terminal, except for elements used particularly for mobile purposes.
  • FIG. 2 is a schematic diagram showing the hardware structure of a mobile terminal that implements various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 2 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a data broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast signal can exist in various forms; for example, it may exist in the form of a Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), a Digital Video Broadcasting-Handheld (DVB-H) Electronic Service Guide (ESG), and the like.
  • the broadcast receiving module 111 can receive signals broadcast through various types of broadcast systems.
  • the broadcast receiving module 111 can receive digital broadcasts using digital broadcasting systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), MediaFLO (Media Forward Link Only) data broadcasting, Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like.
  • the broadcast receiving module 111 can be configured to be suitable for various broadcast systems that provide broadcast signals, as well as the above-described digital broadcast systems.
  • the broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or a storage medium of other attributes).
  • the mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless Internet access technologies involved in the module may include Wireless Local Area Network (WLAN, Wi-Fi), Wireless Broadband (Wibro), Worldwide Interoperability for Microwave Access (Wimax), High Speed Downlink Packet Access (HSDPA), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • some examples of short-range communication technology include Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee™, and the like.
  • the location information module 115 is a module for checking or acquiring location information of the mobile terminal.
  • a typical example of a location information module is GPS (Global Positioning System).
  • the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information based on longitude, latitude, and altitude.
  • currently, the widely used method calculates position and time information using three satellites and corrects the error of the calculated position and time information by using another satellite.
  • the GPS module 115 is capable of calculating speed information by continuously calculating current position information in real time.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • the processed audio (voice) data can be converted, in the case of the telephone call mode, into a format that can be transmitted to a mobile communication base station via the mobile communication module 112 for output.
  • the microphone 122 can implement various types of noise canceling (or suppression) algorithms to eliminate (or suppress) noise or interference generated in the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, and the like.
  • in particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of contact (i.e., touch input) by the user with the mobile terminal 100, and the like.
  • the sensing unit 140 can sense whether the slide type phone is turned on or off.
  • the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled to an external device.
  • Sensing unit 140 may include proximity sensor 141 which will be described below in connection with a touch screen.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, and an audio input/output. (I/O) port, video I/O port, headphone port, and more.
  • the identification module may store various information for verifying the user of the mobile terminal 100, and may include a User Identification Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 can be configured to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • the interface unit 170 may function as a path through which power is supplied from the base to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the terminal.
  • Various command signals or power input from the base can be used as signals for identifying whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file download, etc.). When the mobile terminal 100 is in the video call mode or the image capturing mode, the display unit 151 may display the captured image and/or the received image, a UI or GUI showing the video or a picture and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a transparent organic light emitting diode (TOLED) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the audio output module 152 may, when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, and the like, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibration; when a call, message, or some other incoming communication is received, it can provide a tactile output (i.e., vibration) to notify the user. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the mobile phone is in the user's pocket. The alarm unit 153 can also provide an output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application written in a suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • among mobile terminals such as folding type, bar type, swing type, and slide type mobile terminals, a slide type mobile terminal will be described as an example. However, the present invention can be applied to a mobile terminal of any type, and is not limited to the slide type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 2 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
  • Such communication systems may use different air interfaces and/or physical layers.
  • air interfaces used by communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like.
  • the following description relates to a CDMA communication system, but such teachings are equally applicable to systems of other attributes.
  • a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of well known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 3 can include multiple BSCs 275.
  • each BS 270 can serve one or more partitions (or regions), each of which is covered by a multi-directional antenna or an antenna directed in a particular direction radially away from the BS 270. Alternatively, each partition can be covered by two or more antennas for diversity reception.
  • Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell station."
  • each partition of a particular BS 270 may be referred to as a plurality of cellular stations.
  • a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system.
  • a broadcast receiving module 111 as shown in FIG. 2 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • the satellite 500 helps locate at least one of the plurality of mobile terminals 100.
  • a plurality of satellites 500 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the GPS module 115 as shown in Figure 2 is typically configured to cooperate with the satellite 500 to obtain desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 500 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • mobile terminal 100 typically participates in calls, messaging, and other types of communication.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC provides call resource allocation and coordinated mobility management functions including a soft handoff procedure between the BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • An embodiment of the present invention provides an image synthesizing method, which is applied to an image synthesizing device, and the image synthesizing device may be an independent device or a part of the mobile terminal. As shown in FIG. 4, the method includes:
  • Step 301 Acquire images captured by i cameras.
  • i is an integer greater than one.
  • the type of the camera in this embodiment is not limited.
  • the camera is a visible light camera, and the i cameras are fixed to the connecting member, and the photos taken by the i cameras can be displayed on the image synthesizing device.
  • the image captured by each camera may be an original image or a simple processed image, which is not limited in this embodiment.
  • the number of cameras is two.
  • Step 302 Acquire features of i images.
• The feature in this embodiment may be an overexposed region, a predetermined scene image, or the like; any attribute usable for special-effects synthesis, such as a scene image to be synthesized or a parameter value of a pixel region, may serve as a feature.
• Step 302 specifically includes: determining a central luminance value of each pixel region of the first image; determining an average luminance value of the first image according to the central luminance values of its pixel regions; determining whether the difference between the central luminance value of each pixel region and the average luminance value is greater than a preset threshold; and, when the differences for N pixel regions are greater than the preset threshold, taking those N pixel regions as N overexposed regions, where N is a positive integer.
• The pixel regions of an image can be divided into clear regions and overexposed regions. A clear region is one in which the pixels are neither overexposed nor underexposed and the focus is sharp; conversely, an overexposed region is one in which the pixels are overexposed, underexposed, or out of focus. The central luminance value is calculated from the mean square error, saturation, and sharpness (whether the focus is clear) of the pixel region, together with the fusion weights of the corresponding pixel regions of the left and right images.
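As a rough illustration of the detection just described, the following sketch flags pixel regions whose luminance is far above the image average. It is a simplified stand-in: the region's mean gray level replaces the weighted central luminance defined above, and the block size and threshold are arbitrary illustrative choices, not values from this disclosure.

```python
import numpy as np

def find_overexposed_regions(gray, block=16, threshold=60.0):
    """Flag pixel regions whose region luminance exceeds the image
    average by more than `threshold`. The region mean stands in for
    the weighted central luminance described in the text."""
    h, w = gray.shape
    regions = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            regions.append((y, x, gray[y:y + block, x:x + block].mean()))
    avg = sum(c for _, _, c in regions) / len(regions)  # image average luminance
    # Overexposed: central luminance far above the image average.
    return [(y, x) for y, x, c in regions if c - avg > threshold]

# Synthetic image: dark background with one bright (overexposed) block.
img = np.full((64, 64), 40.0)
img[16:32, 16:32] = 250.0
print(find_overexposed_regions(img))  # → [(16, 16)]
```

Only the bright block stands far enough above the image average to be flagged; the dark regions fall below it.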
• When the i images are images of the same scene captured by the i cameras from different angles and positions, and the feature is a predetermined scene image, step 302 may include: segmenting predetermined scene images of different depths from the i images respectively, where the predetermined scene image differs from image to image.
  • the segmentation method may combine the depth information with the original color information and the brightness information, and perform segmentation as a joint feature, so that the segmentation effect is more accurate.
  • step 302 may include: segmenting a person image from one image and segmenting a background image from another image.
  • Step 303 Perform special effects synthesis on the i images according to the feature to generate a composite image.
• Corresponding to this form of step 302, step 303 includes: marking the overexposed regions in the i images; performing brightness reduction processing on the overexposed regions of i-1 of the i images; and, using a linear attenuation method, combining the overexposed regions in the i images with the corresponding pixel regions of the other images to generate the composite image.
• Because the overexposed regions have undergone brightness reduction, when the i images are synthesized the weakened overexposed regions are covered by the corresponding regions of the other images, so that the overexposed regions of the composite image are greatly reduced.
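A minimal sketch of the darken-and-blend step, under stated assumptions: the 50% brightness reduction, the feather width, and the use of repeated box-averaging to soften the blend weight (playing the role of the linear attenuation) are illustrative choices, not the patented method itself.

```python
import numpy as np

def linear_blend(img_a, img_b, mask, ramp=2):
    """Composite the overexposed region of img_a (0/1 mask) with the
    corresponding pixels of img_b. img_a is first darkened inside the
    mask (brightness reduction); the blend weight is then softened near
    the mask border by repeated box-averaging, which roughly plays the
    role of the linear attenuation described in the text."""
    w = mask.astype(float)
    for _ in range(ramp):  # soften the mask edge (illustrative feather)
        w = (np.roll(w, 1, 0) + np.roll(w, -1, 0)
             + np.roll(w, 1, 1) + np.roll(w, -1, 1) + w) / 5.0
    darkened = img_a * (1.0 - 0.5 * mask)  # 50% darkening is an assumption
    return (1.0 - w) * darkened + w * img_b

a = np.full((12, 12), 200.0)          # image with an overexposed patch
b = np.full((12, 12), 90.0)           # image supplying replacement pixels
m = np.zeros((12, 12)); m[3:9, 3:9] = 1.0
out = linear_blend(a, b, m)
print(out[5, 5], out[0, 0])           # → 90.0 200.0
```

Deep inside the mask the output takes the other image's pixels; far outside it the original image is untouched, with a smooth transition between.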
  • the i images may be combined with other special effects, which is not limited in this embodiment.
• When the feature is a predetermined scene image, step 303 corresponding to step 302 may include: performing special-effects processing on one or more of the predetermined scene images of the i images, and combining the processed predetermined scene images with the unprocessed predetermined scene images to generate the composite image.
• The scene images segmented from different images differ from one another, and any scene image that can be segmented may serve as a predetermined scene image.
  • the background image may be subjected to special effects processing, and the processed background image and the unprocessed person image may be combined to generate a composite image.
  • the image synthesizing device can perform special effects synthesis on different features to generate a composite image.
• This greatly enriches the kinds of photographs that can be produced with multiple cameras, so that the same photo can be varied in many ways, thereby increasing the interest of composite photos and improving the user experience.
• The method further includes: making either of the two images together with the composite image into an animated form, such as a dynamic display in gif format.
• For example, the person images can be segmented from the two images, and the two background images together with either full image can be made into a gif, thereby improving the interest of the synthesized image and the user experience.
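Such an animated gif can be assembled, for instance, with the Pillow library (an assumed dependency for this sketch); the frame size, colors, and timing below are illustrative stand-ins for a captured image and a composite image.

```python
import os
import tempfile
from PIL import Image  # Pillow; assumed available for this sketch

def make_gif(frames, path, duration_ms=500):
    """Write a looping animated GIF alternating between the frames,
    e.g. one captured image and the composite image."""
    first, *rest = frames
    first.save(path, save_all=True, append_images=rest,
               duration=duration_ms, loop=0)

# Example: alternate a stand-in "captured" frame and "composite" frame.
captured = Image.new("RGB", (32, 32), (200, 40, 40))
composite = Image.new("RGB", (32, 32), (40, 40, 200))
out_path = os.path.join(tempfile.gettempdir(), "composite_demo.gif")
make_gif([captured, composite], out_path)
```

`loop=0` makes the animation repeat indefinitely, which matches the "dynamic display" behavior described above.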
• When the feature is a background image, the background image may be blurred. For any image captured independently by the dual camera, the person image is first segmented out, and Gaussian blurring is applied to the remaining portion, that is, the background image. For example, assume the dual camera comprises a left camera and a right camera, the left camera captures a first image, and the right camera captures a second image; the person image of the first image is retained, the background image of the second image is Gaussian-blurred, and the background image of the first image is replaced with that blurred background image. Since the dual camera can itself capture depth-of-field pictures comparable to those of an SLR camera, this method also allows the degree of blurring to be customized, offering a wider choice of blur modes than the original depth-of-field capture, with a better effect.
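The retain-person, blur-background compositing can be sketched as follows. The repeated box filter merely approximates the Gaussian blurring named above, and the segmentation mask is assumed to come from an earlier person-segmentation step not shown here.

```python
import numpy as np

def box_blur(img, passes=3):
    """Repeated cross-shaped box averaging, a crude stand-in for the
    Gaussian blurring named in the text."""
    out = img.astype(float)
    for _ in range(passes):
        out = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1) + out) / 5.0
    return out

def person_over_blurred_background(first, second, person_mask):
    """Keep the person from the first image and place it over the
    blurred background of the second image. `person_mask` (1 = person)
    is assumed to come from a prior segmentation step."""
    return person_mask * first + (1.0 - person_mask) * box_blur(second)

first = np.full((8, 8), 10.0)     # "person" image (constant, for brevity)
second = np.full((8, 8), 50.0)    # "background" image
mask = np.zeros((8, 8)); mask[2:5, 2:5] = 1.0
out = person_over_blurred_background(first, second, mask)
print(out[3, 3], out[0, 0])       # → 10.0 50.0
```

Inside the mask the person pixels of the first image survive; everywhere else the blurred second image supplies the background.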
• The captured images of the dual camera may be further synthesized from the video streams: two filter icons corresponding to the two video streams are set; while the preview video streams are captured, the required image data are intercepted and placed into memory areas in real time; image processing is then performed separately on the image data of the two cameras stored in the different memory areas to find the target and determine its image coordinates in the images obtained by the two cameras; finally, binocular-vision theory is used to compute the position of the target point in the world coordinate system, from which the position information for the composite image can be determined.
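For a rectified stereo pair, the final binocular-vision computation reduces to classical triangulation; the camera parameters below are hypothetical examples, not values from this disclosure.

```python
def triangulate(x_left, x_right, y, focal_px, baseline_m, cx, cy):
    """Textbook rectified-stereo triangulation: disparity d = xL - xR,
    depth Z = f*B/d, then back-projection to rig coordinates. A generic
    sketch, not the patent's specific computation."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("target must be in front of the rig (positive disparity)")
    z = focal_px * baseline_m / d
    return ((x_left - cx) * z / focal_px,  # X (meters)
            (y - cy) * z / focal_px,       # Y (meters)
            z)                             # Z, depth (meters)

# Hypothetical parameters: 700 px focal length, 6 cm baseline,
# principal point (320, 240); target seen at (355, 240) and (320, 240).
print(triangulate(355, 320, 240, 700.0, 0.06, 320, 240))  # → (0.06, 0.0, 1.2)
```

A 35-pixel disparity with this focal length and baseline places the target 1.2 m in front of the rig, slightly to the right of the optical axis.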
  • Embodiments of the present invention provide an image synthesizing method, which is applied to a mobile terminal, such as a smart phone, a notebook computer, a desktop computer, or the like.
  • two cameras are taken as an example.
  • the method includes:
  • Step 401 Acquire a picture taken by two cameras at the same time.
  • the objects captured by the two cameras should be objects of the same area.
  • Step 402 Determine a central luminance value of each pixel region of the two images.
• The central luminance value is calculated from the mean square error, saturation, and sharpness (whether the focus is clear) of the pixel region, together with the fusion weights of the corresponding pixel regions of the left and right images.
  • Step 403 Determine an average brightness value of the corresponding image according to a central brightness value of each pixel area of the image.
• The average luminance value of an image is obtained by summing the central luminance values of all its pixel regions and dividing by the number of pixel regions.
  • Step 404 Determine whether a difference between a central luminance value of each pixel region in each image and an average luminance value of the corresponding image is greater than a preset threshold. If yes, go to step 405; if no, go to step 407.
  • Step 405 The pixel area where the difference between the central brightness value and the average brightness value of the corresponding image is greater than a preset threshold is used as the overexposed area.
  • Step 406 Perform special effects synthesis on the two images according to the overexposed area to generate a composite image.
• Specifically, the overexposed areas are marked in the two images (the solid black circles in FIG. 6 indicate the marked overexposed areas); the overexposed areas of either image are subjected to brightness reduction processing; and, using the linear attenuation method, the overexposed regions in the two images are combined with the corresponding pixel regions of the other image to generate the composite image.
• The special-effects synthesis may include a series of special-effects and synthesis operations such as segmentation, blurring, bokeh, and hue modification.
  • Step 407 Perform special effects synthesis on the two images to generate a composite image.
  • Embodiments of the present invention provide an image synthesizing method, which is applied to a mobile terminal, such as a smart phone, a notebook computer, a desktop computer, or the like.
  • This embodiment introduces two cameras as an example. As shown in FIG. 7, the method includes:
  • Step 501 Acquire a picture taken by two cameras at the same time.
  • the objects captured by the two cameras should be objects of the same area, and the two images are images of the same scene captured by two cameras from different angles and positions.
  • Step 502 Acquire a character image from the first image.
  • the character image is a predetermined scene image in the first image.
  • Step 503 Obtain a background image from the second image.
• the background image is a predetermined scene image in the second image.
  • Step 504 Perform special effects processing on the background image in the second image.
• The special-effects processing may include a series of image-processing operations such as blurring, bokeh, and modifying the color tone.
• As shown in FIG. 8, the first picture is the picture taken by the first camera, the second picture is the picture taken by the second camera, and the third picture is the composite of the person image of the first picture and the special-effect-processed background image of the second picture.
• Step 505 Synthesize the special-effect-processed background image and the person image to obtain the composite image.
  • An embodiment of the present invention provides an image synthesizing device 60.
  • the image synthesizing device 60 may include:
• the obtaining unit 601 is configured to acquire images captured by the i cameras, wherein i is an integer greater than 1, and to acquire features of the i images.
• the synthesizing unit 602 is configured to perform special-effects synthesis on the i images according to the features to generate a composite image.
  • the image synthesizing device can perform special effects synthesis on different features to generate a composite image.
• This greatly enriches the kinds of photographs that can be produced with multiple cameras, so that the same photo can be varied in many ways, thereby increasing the interest of composite photos and improving the user experience.
  • the feature is an overexposed area.
• the obtaining unit 601 is further configured to: determine a central luminance value of each pixel region of the first image; determine an average luminance value of the first image according to those central luminance values; determine whether the difference between the central luminance value of each pixel region and the average luminance value is greater than a preset threshold; and, when the differences for N pixel regions are greater than the preset threshold, take the N pixel regions as N overexposed regions, where N is a positive integer.
  • the acquiring unit 601 is further configured to: calculate the central brightness value according to a mean square error, a saturation, a sharpness, and a fusion weight of the left and right image pixel regions.
• the synthesizing unit 602 is further configured to: mark the overexposed regions in the i images; perform brightness reduction processing on the overexposed regions of i-1 of the i images; and, using the linear attenuation method, combine the overexposed regions in the i images with the corresponding pixel regions of the other images to generate the composite image.
• the synthesizing unit 602 is further configured to: perform special-effects processing on one or more of the predetermined scene images of the i images, and combine the processed predetermined scene images with the unprocessed predetermined scene images to generate the composite image.
• the i images are images of the same scene captured by the i cameras from different angles and positions, and the feature is a predetermined scene image; the obtaining unit 601 is further configured to: segment predetermined scene images of different depths from the i images respectively, where the predetermined scene image differs from image to image; and the synthesizing unit 602 is further configured to: perform special-effects processing on one or more of the predetermined scene images and combine the processed and unprocessed predetermined scene images to generate the composite image.
• the obtaining unit 601 is further configured to: combine the depth information of each image with its original color information and luminance information as joint features to segment the predetermined scene images of different depths from the i images.
• when i is 2, the obtaining unit 601 is further configured to: segment a person image from one image and a background image from the other image; and the synthesizing unit 602 is further configured to: perform special-effects processing on the background image, and combine the processed background image with the person image to generate the composite image.
  • the synthesizing unit 602 is further configured to: when the feature is a background image, the background image may be blurred.
  • the synthesizing unit 602 is further configured to: after performing special effects processing on the background image, dynamically display any one of the two images and the composite image in an animated form.
• The obtaining unit 601 and the synthesizing unit 602 can both be located in the terminal and may be implemented by a central processing unit (CPU), a micro processor unit (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA) in the terminal.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention can take the form of a hardware embodiment, a software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
• These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
• These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Abstract

Disclosed is an image synthesis method comprising: obtaining images captured by i cameras, where i is an integer greater than 1; obtaining features of the i images; and performing special-effects synthesis on the i images according to the features to generate a composite image. Also disclosed is an image synthesis device.

Description

Image synthesis method and device

Technical field

The present invention relates to image processing technologies, and in particular, to an image synthesis method and apparatus.

Background

At present, the binocular cameras of mobile terminals adopt mutually coordinated shooting techniques to achieve better depth of field, 3D shooting, and other photographic effects. As shown in FIG. 1, 11 and 12 are two visible-light cameras of a mobile terminal device, and 13 is a connecting member of the two cameras. Cameras 11 and 12 are fixed to the connecting member 13 with their imaging planes kept as parallel as possible. The mobile terminal can obtain an image captured by camera 11 and an image captured by camera 12 at the same moment, and then combine the two images. For example, camera 11 mainly captures a dynamic person, camera 12 captures the background behind the person, and the mobile terminal finally combines the images captured by cameras 11 and 12. However, the composite images produced by this method are uniform and lack interest, resulting in a poor user experience.
Summary

To solve the above technical problem, embodiments of the present invention are expected to provide an image synthesis method and apparatus capable of making composite images more interesting and improving the user experience.

The technical solution of the present invention is implemented as follows:

In a first aspect, an image synthesis method is provided, the method comprising:

obtaining images captured by i cameras, where i is an integer greater than 1;

obtaining features of the i images; and

performing special-effects synthesis on the i images according to the features to generate a composite image.
In an embodiment of the present invention, the feature is an overexposed region, and for a first image, obtaining the feature of the first image includes:

determining a central luminance value of each pixel region of the first image;

determining an average luminance value of the first image according to the central luminance values of the pixel regions of the first image;

determining whether the difference between the central luminance value of each pixel region and the average luminance value is greater than a preset threshold; and

when the differences between the central luminance values of N pixel regions and the average luminance value are greater than the preset threshold, taking the N pixel regions as N overexposed regions, where N is a positive integer.

In an embodiment of the present invention, determining the central luminance value of each pixel region of the first image includes:

calculating the central luminance value according to the mean square error, saturation, and sharpness of each pixel region and the fusion weights of the corresponding pixel regions of the left and right images.

In an embodiment of the present invention, performing special-effects synthesis on the i images according to the features to generate a composite image includes:

marking the overexposed regions in the i images;

performing brightness reduction processing on the overexposed regions of i-1 of the i images; and

using a linear attenuation method, combining the overexposed regions in the i images with the corresponding pixel regions of the other images to generate the composite image.

In an embodiment of the present invention, performing special-effects synthesis on the i images to generate a composite image includes:

performing special-effects processing on one or more of the predetermined scene images of the i images, and combining the processed predetermined scene images with the unprocessed predetermined scene images to generate the composite image.

In an embodiment of the present invention, the i images are images of the same scene captured by the i cameras from different angles and positions, the feature is a predetermined scene image, and obtaining the features of the i images includes:

segmenting predetermined scene images of different depths from the i images respectively, where the predetermined scene image differs from image to image;

and performing special-effects synthesis on the i images according to the features to generate a composite image includes:

performing special-effects processing on one or more of the predetermined scene images of the i images, and combining the processed predetermined scene images with the unprocessed predetermined scene images to generate the composite image.

In an embodiment of the present invention, segmenting predetermined scene images of different depths from the i images respectively includes:

combining the depth information of each image with its original color information and luminance information as joint features to segment predetermined scene images of different depths from the i images.

In an embodiment of the present invention, i is 2, and segmenting predetermined scene images of different depths from the i images respectively includes:

segmenting a person image from one image; and

segmenting a background image from the other image;

and performing special-effects processing on one or more of the predetermined scene images of the i images and combining the processed and unprocessed predetermined scene images to obtain the composite image includes:

performing special-effects processing on the background image, and combining the processed background image with the person image to generate the composite image.

In an embodiment of the present invention, the method further includes: when the feature is a background image, blurring the background image.

In an embodiment of the present invention, the method further includes:

after performing special-effects processing on the background image, dynamically displaying either of the two images together with the composite image in an animated form.
In a second aspect, an image synthesis apparatus is provided, the apparatus comprising:

an obtaining unit configured to obtain images captured by i cameras, where i is an integer greater than 1, and to obtain features of the i images; and

a synthesizing unit configured to perform special-effects synthesis on the i images according to the features to generate a composite image.

In an embodiment of the present invention, the feature is an overexposed region, and for a first image, the obtaining unit is further configured to:

determine a central luminance value of each pixel region of the first image;

determine an average luminance value of the first image according to the central luminance values of the pixel regions of the first image;

determine whether the difference between the central luminance value of each pixel region and the average luminance value is greater than a preset threshold; and

when the differences between the central luminance values of N pixel regions and the average luminance value are greater than the preset threshold, take the N pixel regions as N overexposed regions, where N is a positive integer.

In an embodiment of the present invention, the obtaining unit is further configured to calculate the central luminance value according to the mean square error, saturation, and sharpness of each pixel region and the fusion weights of the corresponding pixel regions of the left and right images.

In an embodiment of the present invention, the synthesizing unit is further configured to:

mark the overexposed regions in the i images;

perform brightness reduction processing on the overexposed regions of i-1 of the i images; and

using a linear attenuation method, combine the overexposed regions in the i images with the corresponding pixel regions of the other images to generate the composite image.

In an embodiment of the present invention, the synthesizing unit is further configured to:

perform special-effects processing on one or more of the predetermined scene images of the i images, and combine the processed predetermined scene images with the unprocessed predetermined scene images to generate the composite image.

In an embodiment of the present invention, the i images are images of the same scene captured by the i cameras from different angles and positions, the feature is a predetermined scene image, and the obtaining unit is further configured to:

segment predetermined scene images of different depths from the i images respectively, where the predetermined scene image differs from image to image;

and the synthesizing unit is further configured to:

perform special-effects processing on one or more of the predetermined scene images of the i images, and combine the processed predetermined scene images with the unprocessed predetermined scene images to generate the composite image.

In an embodiment of the present invention, the obtaining unit is further configured to:

combine the depth information of each image with its original color information and luminance information as joint features to segment predetermined scene images of different depths from the i images.

In an embodiment of the present invention, i is 2, and the obtaining unit is further configured to:

segment a person image from one image; and

segment a background image from the other image;

and the synthesizing unit is further configured to:

perform special-effects processing on the background image, and combine the processed background image with the person image to generate the composite image.

In an embodiment of the present invention, the synthesizing unit is further configured to: when the feature is a background image, blur the background image.

In an embodiment of the present invention, the synthesizing unit is further configured to: after performing special-effects processing on the background image, dynamically display either of the two images together with the composite image in an animated form.
Embodiments of the present invention provide an image synthesis method and apparatus: images captured by i cameras and the features of the i images are first obtained, and the i images are then subjected to special-effects synthesis according to these features to generate a composite image. In this way, after obtaining the features of the captured images, the image synthesis apparatus can perform special-effects synthesis on different features to generate a composite image. This greatly enriches the kinds of photographs that can be produced with multiple cameras, so that the same photo can be varied in many ways, thereby increasing the interest of composite photos and improving the user experience.
Brief description of the drawings

FIG. 1 is a schematic structural diagram of a conventional binocular camera;

FIG. 2 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention;

FIG. 3 is a schematic diagram of a wireless communication system of the mobile terminal shown in FIG. 2;

FIG. 4 is a flowchart of an image synthesis method according to an embodiment of the present invention;

FIG. 5 is a flowchart of another image synthesis method according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of marked overexposed regions in an embodiment of the present invention;

FIG. 7 is a flowchart of still another image synthesis method according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of the images captured by two cameras and the composite image according to an embodiment of the present invention;

FIG. 9 is a schematic structural diagram of an image synthesis apparatus according to an embodiment of the present invention.
Detailed description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings.

It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.

Mobile terminals implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are provided merely to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module" and "component" may be used interchangeably.

Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will understand that, apart from elements intended specifically for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed-type terminals.
图2为实现本发明各个实施例的移动终端的硬件结构示意。FIG. 2 is a schematic diagram showing the hardware structure of a mobile terminal that implements various embodiments of the present invention.
移动终端100可以包括无线通信单元110、音频/视频(A/V)输入单元120、用户输入单元130、感测单元140、输出单元150、存储器160、接口单元170、控制器180和电源单元190等等。图2示出了具有各种组件的移动终端,但是应理解的是,并不要求实施所有示出的组件。可以替代地实施更多或更少的组件。将在下面详细描述移动终端的元件。The mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, and a power supply unit 190. and many more. Figure 2 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
无线通信单元110通常包括一个或多个组件，其允许移动终端100与无线通信系统或网络之间的无线电通信。例如，无线通信单元可以包括广播接收模块111、移动通信模块112、无线互联网模块113、短程通信模块114和位置信息模块115中的至少一个。Wireless communication unit 110 typically includes one or more components that permit radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
广播接收模块111经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播信道可以包括卫星信道和/或地面信道。广播管理服务器可以是生成并发送广播信号和/或广播相关信息的服务器或者接收之前生成的广播信号和/或广播相关信息并且将其发送给终端的服务器。广播信号可以包括TV广播信号、无线电广播信号、数据广播信号等等。而且,广播信号可以进一步包括与TV或无线电广播信号组合的广播信号。广播相关信息也可以经由移动通信网络提供,并且在该情况下,广播相关信息可以由移动通信模块112来接收。广播信号可以以各种形式存在,例如,其 可以以数字多媒体广播(DMB)的电子节目指南(EPG)、数字视频广播手持(DVB-H)的电子服务指南(ESG)等等的形式而存在。广播接收模块111可以通过使用各种属性的广播系统接收信号广播。特别地,广播接收模块111可以通过使用诸如多媒体广播-地面(DMB-T)、数字多媒体广播-卫星(DMB-S)、数字视频广播-手持(DVB-H),前向链路媒体(MediaFLO@)的数据广播系统、地面数字广播综合服务(ISDB-T)等等的数字广播系统接收数字广播。广播接收模块111可以被构造为适合提供广播信号的各种广播系统以及上述数字广播系统。经由广播接收模块111接收的广播信号和/或广播相关信息可以存储在存储器160(或者其它属性的存储介质)中。The broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel. The broadcast channel can include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like. Moreover, the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112. The broadcast signal can exist in various forms, for example, It may exist in the form of Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), Digital Video Broadcasting Handheld (DVB-H) Electronic Service Guide (ESG), and the like. The broadcast receiving module 111 can receive a signal broadcast through a broadcast system using various attributes. 
In particular, the broadcast receiving module 111 can use forward link media (MediaFLO) by using, for example, multimedia broadcast-terrestrial (DMB-T), digital multimedia broadcast-satellite (DMB-S), digital video broadcast-handheld (DVB-H) The digital broadcasting system of the @) data broadcasting system, the terrestrial digital broadcasting integrated service (ISDB-T), and the like receives digital broadcasting. The broadcast receiving module 111 can be constructed as various broadcast systems suitable for providing broadcast signals as well as the above-described digital broadcast system. The broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or a storage medium of other attributes).
移动通信模块112将无线电信号发送到基站(例如,接入点、节点B等等)、外部终端以及服务器中的至少一张和/或从其接收无线电信号。这样的无线电信号可以包括语音通话信号、视频通话信号、或者根据文本和/或多媒体消息发送和/或接收的各种属性的数据。The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or data of various attributes transmitted and/or received in accordance with text and/or multimedia messages.
无线互联网模块113支持移动终端的无线互联网接入。该模块可以内部或外部地耦接到终端。该模块所涉及的无线互联网接入技术可以包括无线局域网(WLAN)(Wi-Fi)、无线宽带(Wibro)、全球微波互联接入(Wimax)、高速下行链路分组接入(HSDPA)等等。The wireless internet module 113 supports wireless internet access of the mobile terminal. The module can be internally or externally coupled to the terminal. The wireless Internet access technologies involved in the module may include Wireless Local Area Network (WLAN) (Wi-Fi), Wireless Broadband (Wibro), Worldwide Interoperability for Microwave Access (Wimax), High Speed Downlink Packet Access (HSDPA), and the like. .
短程通信模块114是用于支持短程通信的模块。短程通信技术的一些示例包括蓝牙TM、射频识别(RFID)、红外数据协会(IrDA)、超宽带(UWB)、紫蜂TM等等。The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra-Wideband (UWB), ZigBee™, and the like.
位置信息模块115是用于检查或获取移动终端的位置信息的模块。位置信息模块的典型示例是GPS(全球定位系统)。根据当前的技术，GPS模块115计算来自三个或更多卫星的距离信息和准确的时间信息并且对于计算的信息应用三角测量法，从而根据经度、纬度和高度准确地计算三维当前位置信息。当前，用于计算位置和时间信息的方法使用三颗卫星并且通过使用另外的一颗卫星校正计算出的位置和时间信息的误差。此外，GPS模块115能够通过实时地连续计算当前位置信息来计算速度信息。The location information module 115 is a module for checking or acquiring location information of the mobile terminal. A typical example of a location information module is GPS (Global Positioning System). According to the current technology, the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current position information in terms of longitude, latitude, and altitude. Currently, the method for calculating position and time information uses three satellites and corrects errors in the calculated position and time information by using one additional satellite. Furthermore, the GPS module 115 can calculate speed information by continuously calculating the current position information in real time.
A/V输入单元120用于接收音频或视频信号。A/V输入单元120可以包括相机121和麦克风122,相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储器160(或其它存储介质)中或者经由无线通信单元110进行发送,可以根据移动终端的构造提供两个或更多相机121。麦克风122可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风接收声音(音频数据),并且能够将这样的声音处理为音频数据。处理后的音频(语音)数据可以在电话通话模式的情况下转换为可经由移动通信模块112发送到移动通信基站的格式输出。麦克风122可以实施各种属性的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。The A/V input unit 120 is for receiving an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122 that processes image data of still pictures or video obtained by the image capturing device in a video capturing mode or an image capturing mode. The processed image frame can be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal. The microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data. The processed audio (voice) data can be converted to a format output that can be transmitted to the mobile communication base station via the mobile communication module 112 in the case of a telephone call mode. The microphone 122 can implement noise canceling (or suppression) algorithms of various properties to eliminate (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的各种操作。用户输入单元130允许用户输入各种类型的信息，并且可以包括键盘、锅仔片、触摸板(例如，检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地，当触摸板以层的形式叠加在显示单元151上时，可以形成触摸屏。The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, and the like caused by contact), a scroll wheel, a joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
感测单元140检测移动终端100的当前状态,(例如,移动终端100的打开或关闭状态)、移动终端100的位置、用户对于移动终端100的接触(即,触摸输入)的有无、移动终端100的取向、移动终端100的加速或减速移动和方向等等,并且生成用于控制移动终端100的操作的命令或信号。例如,当移动终端100实施为滑动型移动电话时,感测单元140可以感测该滑动型电话是打开还是关闭。另外,感测单元140能够检测电源单元190 是否提供电力或者接口单元170是否与外部装置耦接。感测单元140可以包括接近传感器141将在下面结合触摸屏来对此进行描述。The sensing unit 140 detects the current state of the mobile terminal 100 (eg, the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of contact (ie, touch input) by the user with the mobile terminal 100, and the mobile terminal. The orientation of 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide type mobile phone, the sensing unit 140 can sense whether the slide type phone is turned on or off. In addition, the sensing unit 140 can detect the power supply unit 190 Whether power is supplied or whether the interface unit 170 is coupled to an external device. Sensing unit 140 may include proximity sensor 141 which will be described below in connection with a touch screen.
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如，外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别模块可以是存储用于验证用户使用移动终端100的各种信息的芯片，并且可以包括用户识别模块(UIM)、客户识别模块(SIM)、通用客户识别模块(USIM)等等。另外，具有识别模块的装置(下面称为"识别装置")可以采取智能卡的形式，因此，识别装置可以经由端口或其它连接装置与移动终端100连接。接口单元170可以用于接收来自外部装置的输入(例如，数据信息、电力等等)，并且将接收到的输入传输到移动终端100内的一个或多个元件，或者可以用于在移动终端和外部装置之间传输数据。The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may be a chip that stores various information for authenticating a user of the mobile terminal 100, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, the device having the identification module (hereinafter referred to as an "identification device") may take the form of a smart card; thus, the identification device may be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal and an external device.
另外,当移动终端100与外部底座连接时,接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端是否准确地安装在底座上的信号。输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、警报信号、振动信号等等)。输出单元150可以包括显示单元151、音频输出模块152、警报单元153等等。In addition, when the mobile terminal 100 is connected to the external base, the interface unit 170 may function as a path through which power is supplied from the base to the mobile terminal 100 or may be used as a transmission of various command signals allowing input from the base to the mobile terminal 100 The path to the terminal. Various command signals or power input from the base can be used as signals for identifying whether the mobile terminal is accurately mounted on the base. Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
显示单元151可以显示在移动终端100中处理的信息。例如，当移动终端100处于电话通话模式时，显示单元151可以显示与通话或其它通信(例如，文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时，显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display captured and/or received images, and a UI or GUI showing video or images and related functions, and so on.
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为透明有机发光二极管(TOLED)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可用于检测触摸输入压力以及触摸输入位置和触摸输入面积。Meanwhile, when the display unit 151 and the touch panel are superposed on each other in the form of a layer to form a touch screen, the display unit 151 can function as an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a transparent organic light emitting diode (TOLED) display or the like. According to a particular desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) . The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
音频输出模块152可以在移动终端处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将无线通信单元110接收的或者在存储器160中存储的音频数据转换音频信号并且输出为声音。而且,音频输出模块152可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出模块152可以包括扬声器、蜂鸣器等等。The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, and the like. The audio signal is output as sound. Moreover, the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100. The audio output module 152 can include a speaker, a buzzer, and the like.
警报单元153可以提供输出以将事件的发生通知给移动终端100。典型的事件可以包括呼叫接收、消息接收、键信号输入、触摸输入等等。除了音频或视频输出之外,警报单元153可以以不同的方式提供输出以通知事件的发生。例如,警报单元153可以以振动的形式提供输出,当接收到呼叫、消息或一些其它进入通信(incomingcommunication)时,警报单元153可以提供触觉输出(即,振动)以将其通知给用户。通过提供这样的触觉输出,即使在用户的移动电话处于用户的口袋中时,用户也能够识别出各种事件的发生。警报单元153也可以经由显示单元151或音频输出模块152 提供通知事件的发生的输出。The alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alert unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations, and when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (ie, vibration) to notify the user of it. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also be via the display unit 151 or the audio output module 152. Provides an output of the occurrence of a notification event.
存储器160可以存储由控制器180执行的处理和控制操作的软件程序等等,或者可以暂时地存储己经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储器160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。The memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
存储器160可以包括至少一种属性的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。The memory 160 may include at least one attribute of a storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a random access memory (RAM), a static random access memory ( SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like. Moreover, the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
控制器180通常控制移动终端的总体操作。例如,控制器180执行与语音通话、数据通信、视频通话等等相关的控制和处理。另外,控制器180可以包括用于再现(或回放)多媒体数据的多媒体模块181,多媒体模块181可以构造在控制器180内,或者可以构造为与控制器180分离。控制器180可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图片绘制输入识别为字符或图像。The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
电源单元190在控制器180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在控 制器180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器160中并且由控制器180执行。The various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof. For hardware implementations, the embodiments described herein may be through the use of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays ( FPGA, processor, controller, microcontroller, microprocessor, at least one of the electronic units designed to perform the functions described herein, in some cases, such an embodiment may be under control Implemented in the controller 180. For software implementations, implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation. The software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in memory 160 and executed by controller 180.
至此,己经按照其功能描述了移动终端。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端等等的各种属性的移动终端中的滑动型移动终端作为示例。因此,本发明能够应用于任何属性的移动终端,并且不限于滑动型移动终端。So far, the mobile terminal has been described in terms of its function. Hereinafter, for the sake of brevity, a slide type mobile terminal in a mobile terminal such as a folding type, a bar type, a swing type, a slide type mobile terminal or the like will be described as an example. Therefore, the present invention can be applied to a mobile terminal of any attribute, and is not limited to a slide type mobile terminal.
如图2中所示的移动终端100可以被构造为利用经由帧或分组发送数据的诸如有线和无线通信系统以及基于卫星的通信系统来操作。The mobile terminal 100 as shown in FIG. 2 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
现在将参考图3描述其中根据本发明的移动终端能够操作的通信系统。A communication system in which a mobile terminal according to the present invention can be operated will now be described with reference to FIG.
这样的通信系统可以使用不同的空中接口和/或物理层。例如,由通信系统使用的空中接口包括例如频分多址(FDMA)、时分多址(TDMA)、码分多址(CDMA)和通用移动通信系统(UMTS)(特别地,长期演进(LTE))、全球移动通信系统(GSM)等等。作为非限制性示例,下面的描述涉及CDMA通信系统,但是这样的教导同样适用于其它属性的系统。Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)). ), Global System for Mobile Communications (GSM), etc. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to systems of other attributes.
参考图3,CDMA无线通信系统可以包括多个移动终端100、多个基站(BS)270、基站控制器(BSC)275和移动交换中心(MSC)280。MSC280被构造为与公共电话交换网络(PSTN)290形成接口。MSC280还被构造为与可以经由回程线路耦接到基站270的BSC275形成接口。回程线路可以根据若干己知的接口中的任一种来构造,所述接口包括例如E1/T1、ATM,IP、PPP、帧中继、HDSL、ADSL或xDSL。将理解的是,如图3中所示的系统可以包括多个BSC275。Referring to FIG. 3, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line. The backhaul line can be constructed in accordance with any of a number of well known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 3 can include multiple BSCs 275.
每个BS270可以服务一个或多个分区(或区域)，由多向天线或指向特定方向的天线覆盖的每个分区放射状地远离BS270。或者，每个分区可以由用于分集接收的两个或更多天线覆盖。每个BS270可以被构造为支持多个频率分配，并且每个频率分配具有特定频谱(例如，1.25MHz，5MHz等等)。Each BS 270 may serve one or more sectors (or regions), each covered by an omni-directional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
分区与频率分配的交叉可以被称为CDMA信道。BS270也可以被称为基站收发器子系统(BTS)或者其它等效术语。在这样的情况下,术语"基站"可以用于笼统地表示单个BSC275和至少一张BS270。基站也可以被称为"蜂窝站"。或者,特定BS270的各分区可以被称为多个蜂窝站。The intersection of partitioning and frequency allocation can be referred to as a CDMA channel. BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology. In such a case, the term "base station" can be used to generally refer to a single BSC 275 and at least one BS 270. A base station can also be referred to as a "cell station." Alternatively, each partition of a particular BS 270 may be referred to as a plurality of cellular stations.
如图3中所示,广播发射器(BT)295将广播信号发送给在系统内操作的移动终端100。如图2中所示的广播接收模块111被设置在移动终端100处以接收由BT295发送的广播信号。在图3中,示出了几个全球定位系统(GPS)卫星500。卫星500帮助定位多个移动终端100中的至少一张。As shown in FIG. 3, a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system. A broadcast receiving module 111 as shown in FIG. 2 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295. In Figure 3, several Global Positioning System (GPS) satellites 500 are shown. The satellite 500 helps locate at least one of the plurality of mobile terminals 100.
在图3中,描绘了多个卫星500,但是理解的是,可以利用任何数目的卫星获得有用的定位信息。如图2中所示的GPS模块115通常被构造为与卫星500配合以获得想要的定位信息。替代GPS跟踪技术或者在GPS跟踪技术之外,可以使用可以跟踪移动终端的位置的其它技术。另外,至少一张GPS卫星500可以选择性地或者额外地处理卫星DMB传输。In Figure 3, a plurality of satellites 500 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites. The GPS module 115 as shown in Figure 2 is typically configured to cooperate with the satellite 500 to obtain desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 500 can selectively or additionally process satellite DMB transmissions.
作为无线通信系统的一个典型操作,BS270接收来自各种移动终端100的反向链路信号。移动终端100通常参与通话、消息收发和其它属性的通信。特定基站270接收的每个反向链路信号被在特定BS270内进行处理。获得的数据被转发给相关的BSC275。BSC提供通话资源分配和包括BS270之间的软切换过程的协调的移动管理功能。BSC275还将接收到的数据路由到MSC280,其提供用于与PSTN290形成接口的额外的路由服务。类似地,PSTN290与MSC280形成接口,MSC与BSC275形成接口,并且BSC275相应地控制BS270以将正向链路信号发送到移动终端100。As a typical operation of a wireless communication system, BS 270 receives reverse link signals from various mobile terminals 100. Mobile terminal 100 typically participates in the communication of calls, messaging, and other attributes. Each reverse link signal received by a particular base station 270 is processed within a particular BS 270. The obtained data is forwarded to the relevant BSC 275. The BSC provides call resource allocation and coordinated mobility management functions including a soft handoff procedure between the BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
基于上述移动终端硬件结构以及通信系统,提出本发明方法各个实施 例。Based on the above hardware structure of the mobile terminal and the communication system, various implementations of the method of the present invention are proposed example.
实施例一Embodiment 1
本发明实施例提供一种图像合成方法,应用于图像合成装置,该图像合成装置可以是一个独立的设备,也可以是移动终端的一个部分。如图4所示,该方法包括:An embodiment of the present invention provides an image synthesizing method, which is applied to an image synthesizing device, and the image synthesizing device may be an independent device or a part of the mobile terminal. As shown in FIG. 4, the method includes:
步骤301、获取i个摄像头拍摄的图像。Step 301: Acquire images captured by i cameras.
这里,i是大于1的整数。本实施例中的摄像头的种类并不作限制,在本发明一实施方式中摄像头是可见光摄像头,这i个摄像头固定在连接部件上,i个摄像头拍摄的照片可以在图像合成装置上显示。每个摄像头拍摄的图像可以是原始图像,也可以是经过简单处理的图像,本实施例对此不做限制。Here, i is an integer greater than one. The type of the camera in this embodiment is not limited. In one embodiment of the present invention, the camera is a visible light camera, and the i cameras are fixed to the connecting member, and the photos taken by the i cameras can be displayed on the image synthesizing device. The image captured by each camera may be an original image or a simple processed image, which is not limited in this embodiment.
在本发明一实施方式中,摄像头的个数是2个。In one embodiment of the invention, the number of cameras is two.
步骤302、获取i个图像的特征。Step 302: Acquire features of i images.
本实施例中的特征可以是过曝区域或预定景物影像等等，只要是可以用于特效处理或者合成的景物影像或者像素区域的参数值，均可作为特征。The feature in this embodiment may be an overexposed region, a predetermined scene image, or the like; any scene image, or any parameter value of a pixel region, that can be used for special-effects processing or synthesis may serve as a feature.
具体的，以特征为过曝区域为例，步骤302具体包括：确定第一图像的每个像素区域的中心亮度值；根据第一图像的每个像素区域的中心亮度值，确定出第一图像的平均亮度值；判断每个像素区域的中心亮度值与平均亮度值之差是否大于预设阈值；当N个像素区域的中心亮度值与所述平均亮度值之差大于预设阈值时，将这N个像素区域作为N个过曝区域，所述N是正整数。这里，图像的像素区域可以分为清晰区域和过曝区域。清晰区域是指像素点没有过曝光、没有欠曝光且对焦清晰的区域；相应的过曝区域是像素点过曝光、欠曝光或者对焦不清晰的区域；中心亮度值是根据该像素区域的均方差、饱和度、清晰度(对焦是否清晰)和左右图像像素区域的融合权值计算出的。Specifically, taking an overexposed region as the feature, step 302 includes: determining a center luminance value of each pixel region of the first image; determining an average luminance value of the first image from the center luminance values of its pixel regions; judging whether the difference between the center luminance value of each pixel region and the average luminance value is greater than a preset threshold; and, when the differences for N pixel regions exceed the preset threshold, taking those N pixel regions as N overexposed regions, N being a positive integer. Here, the pixel regions of an image can be divided into clear regions and overexposed regions. A clear region is one whose pixels are neither overexposed nor underexposed and are in sharp focus; an overexposed region, correspondingly, is one whose pixels are overexposed, underexposed, or out of focus. The center luminance value is computed from the region's mean square deviation, saturation, sharpness (whether the focus is clear), and the fusion weights of the corresponding pixel regions in the left and right images.
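As an illustration only, the per-region luminance test of step 302 can be sketched as follows. The sketch simplifies the "center luminance value" to the mean luminance of each fixed-size block (the embodiment also weighs mean square deviation, saturation, sharpness, and left/right fusion weights), and the block size and threshold are illustrative assumptions, not values from the embodiment.

```python
import numpy as np

def find_overexposed_regions(gray, block=32, threshold=40):
    """Mark pixel regions whose centre luminance exceeds the image
    average by more than `threshold` (simplified sketch of step 302)."""
    h, w = gray.shape
    centers = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            # Simplification: use the block's mean luminance as its
            # "centre luminance value".
            centers.append(((y, x), gray[y:y + block, x:x + block].mean()))
    avg = float(np.mean([c for _, c in centers]))
    regions = [(y, x) for (y, x), c in centers if c - avg > threshold]
    return regions, avg
```

The returned block coordinates identify the N overexposed regions to be handled in step 303.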
具体的,i个图像分别是i个摄像机从不同角度和位置拍摄的同一场景的图像,特征是预定景物影像,步骤302可以包括:分别从i个图像中分割出不同深度的预定景物影像,其中,不同图像中,预定景物影像不同。现有的分割方法有很多种。例如,分割方法可以是将深度信息与原有的颜色信息、亮度信息进行结合起来,作为联合特征来进行分割,这样,分割效果更加精确。在本发明一实施方式中,以i是2为例,步骤302可以包括:从一个图像中分割出人物影像,从另一个图像分割出背景影像。Specifically, the i images are images of the same scene captured by the i cameras from different angles and positions, and the feature is a predetermined scene image, and the step 302 may include: respectively dividing the predetermined scene images of different depths from the i images, wherein In different images, the predetermined scene image is different. There are many different methods of segmentation. For example, the segmentation method may combine the depth information with the original color information and the brightness information, and perform segmentation as a joint feature, so that the segmentation effect is more accurate. In an embodiment of the present invention, taking i as 2 as an example, step 302 may include: segmenting a person image from one image and segmenting a background image from another image.
步骤303、根据特征,将i个图像进行特效合成,生成合成图像。Step 303: Perform special effects synthesis on the i images according to the feature to generate a composite image.
当特征为过曝区域时,与步骤302相对应的步骤303包括:在i个图像中标记出过曝区域;将i个图像中的i-1个图像的过曝区域进行亮度减弱处理;利用线性减弱方式,将i个图像中的过曝区域与除自身图像之外的图像所对应像素区域进行合成处理,生成合成图片。When the feature is an overexposed area, the step 303 corresponding to the step 302 includes: marking the overexposed area in the i images; and performing the brightness reduction processing on the overexposed areas of the i-1 images in the i images; In the linear attenuation mode, the overexposed regions in the i images are combined with the pixel regions corresponding to the images other than the self images to generate a composite image.
这里,由于已经将过曝区域进行亮度减弱,因此,i个图像在合成时,亮度减弱的过曝区域就会被其他图像中对应的区域覆盖,这样,合成图像的过曝区域将会大大减少。Here, since the overexposure region has been subjected to brightness reduction, when the i images are synthesized, the overexposed regions whose brightness is weakened are covered by the corresponding regions in other images, so that the overexposed regions of the composite image are greatly reduced. .
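A minimal sketch of the compositing just described, assuming two images: the marked overexposed blocks of one image are brightness-attenuated and then linearly blended into the other image's corresponding blocks. The attenuation factor `atten` and blend weight `alpha` are illustrative parameters not specified in the embodiment.

```python
import numpy as np

def blend_out_overexposure(img_a, img_b, regions, block=32, atten=0.5, alpha=0.5):
    """For each marked overexposed block, reduce img_b's brightness and
    linearly blend it with img_a's corresponding block (step 303 sketch)."""
    out = img_a.astype(np.float32).copy()
    b = img_b.astype(np.float32)
    for (y, x) in regions:
        patch = b[y:y + block, x:x + block] * atten          # brightness reduction
        out[y:y + block, x:x + block] = (                    # linear blend
            alpha * patch + (1.0 - alpha) * out[y:y + block, x:x + block])
    return np.clip(out, 0, 255).astype(np.uint8)
```

Blocks not marked as overexposed are taken from `img_a` unchanged, so only the attenuated regions are mixed.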
值得说明的是,当没有像素区域与所述平均亮度值之差大于预设阈值时,可以将i个图像进行其他特效合成,本实施例不做限制。It is to be noted that, when the difference between the pixel area and the average brightness value is greater than a preset threshold, the i images may be combined with other special effects, which is not limited in this embodiment.
当i个图像分别是i个摄像头从不同角度和位置拍摄的同一场景的图像,特征是预定景物影像,与步骤302对应的步骤303可以包括:对i个图像的预定景物影像的一个或多个进行特效处理,将特效处理后的预定景物影像和未特效处理的预定景物影像进行合成,生成所述合成图片。这里,由于每个图像的深度不同,每个图像需要分割的景物影像也不尽相同,这种能够被分割的景物影像就是预定景物影像。例如,从一个图像中分割出人物影像,从另一个图像分割出背景影像之后,可以将背景影像进行特效处理,将处理后的背景影像和未处理的人物影像进行合成,生成合成图像。 When the i images are respectively images of the same scene captured by the i cameras from different angles and positions, the feature is a predetermined scene image, and the step 303 corresponding to step 302 may include: one or more of the predetermined scene images of the i images. The special effect processing is performed, and the predetermined scene image after the special effect processing and the predetermined scene image not processed by the special effect are combined to generate the composite picture. Here, since the depth of each image is different, each scene image to be divided is not the same, and the scene image that can be divided is a predetermined scene image. For example, after segmenting a person image from one image and segmenting the background image from another image, the background image may be subjected to special effects processing, and the processed background image and the unprocessed person image may be combined to generate a composite image.
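The scene-image variant of step 303 can be sketched as follows, under the assumption that segmentation has already produced a boolean person mask; the `effect` callable stands in for whichever special effect is chosen (darkening, in the usage example below).

```python
import numpy as np

def composite_with_effect(person_img, background_img, person_mask, effect):
    """Apply a special effect to the background image, then paste the
    unprocessed person pixels over it to form the composite image."""
    out = effect(background_img.astype(np.float32))
    out[person_mask] = person_img[person_mask].astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```

For example, `composite_with_effect(img1, img2, mask, lambda im: im * 0.5)` halves the background brightness while leaving the person untouched.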
In this way, after acquiring the features of the captured images, the image synthesis device can combine the different features with special effects to generate a composite image. This greatly enriches the kinds of photos that multiple cameras can produce, allowing the same photo to be varied in many ways, which makes composite photos more interesting and improves the user experience.
In an embodiment of the present invention, after the special effect is applied to the background image, the method may further include: turning either of the two images together with the composite image into an animation, for example a dynamic display in GIF format. Likewise, the person images of the two images can be segmented out and combined with the background image of either image into a GIF, making the composite picture more interesting and improving the user experience.
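The GIF idea above can be sketched with Pillow; this is only an assumed implementation, since the patent names no library, and the file names in the usage comment are hypothetical:

```python
from PIL import Image

def make_gif(frames, path, ms_per_frame=500):
    """Save a list of PIL Images as a looping GIF animation.

    For the patent's use case, `frames` would be e.g. one of the
    captured images followed by the composite image.
    """
    first, *rest = frames
    first.save(path, save_all=True, append_images=rest,
               duration=ms_per_frame, loop=0)

# Hypothetical usage: alternate an original photo with the composite:
# make_gif([Image.open("original.jpg"), Image.open("composite.jpg")], "out.gif")
```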
In an embodiment of the present invention, when the feature is a background image, the background image may be blurred. For either image captured independently by the dual cameras, the person image is first segmented out, and the remainder, i.e. the background image, is given a Gaussian blur. For example, suppose the dual cameras are divided into a left camera and a right camera, the left camera captures a first image, and the right camera captures a second image; of the two, the person image of the first image is retained, the background image of the second image is Gaussian-blurred, and the background image of the first image is entirely replaced with the background image of the second image. Since the dual cameras can themselves capture depth-of-field pictures that mimic a single-lens reflex camera, this method lets the degree and mode of blurring be customized, offering a wider choice of blurring styles and better results than the original depth-of-field capture.
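This person/background compositing can be sketched as follows; a minimal NumPy illustration that assumes a binary person mask is already available (how the mask is obtained, e.g. depth-based segmentation, is outside this sketch) and uses a simple box filter as a stand-in for the Gaussian blur:

```python
import numpy as np

def box_blur(img, k=5):
    """Cheap stand-in for a Gaussian blur: k x k box filter per channel."""
    pad = k // 2
    padded = np.pad(img.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(img.dtype)

def composite_blurred_background(first_img, second_img, person_mask, k=5):
    """Keep the person pixels from the first image; everywhere else use a
    blurred version of the second image's background.

    person_mask: array that is nonzero where the person is in first_img.
    """
    blurred_bg = box_blur(second_img, k)
    mask = (person_mask > 0)[..., None]   # H x W x 1, broadcasts over RGB
    return np.where(mask, first_img, blurred_bg)
```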
It is worth noting that combining the images captured by the dual cameras may include capturing the video streams. Specifically, two filter graphs are built, one for each video-stream capture. Then, while the preview video streams are being captured, the required image data is grabbed and placed into memory in real time, and the image data of the two cameras, stored in separate memory areas, is processed to find the target and determine its image coordinates in each camera's image. Finally, binocular-vision theory is applied to compute the position of the target point in the world coordinate system. The position information for the composite image can then be determined.
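The final binocular-vision computation is standard stereo triangulation. As a sketch under assumptions (the patent gives no formulas; this is the textbook pinhole model for rectified, parallel cameras sharing one set of intrinsics), the target's camera-frame position follows from its disparity between the left and right images:

```python
def triangulate(x_left, x_right, y, focal_px, baseline_m, cx, cy):
    """Recover camera-frame coordinates of a point seen by two
    rectified parallel cameras (textbook pinhole stereo model).

    x_left, x_right: column of the target in the left/right image.
    focal_px:        focal length in pixels.
    baseline_m:      distance between the two camera centers.
    cx, cy:          principal point of the shared camera intrinsics.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("target must have positive disparity")
    z = focal_px * baseline_m / disparity   # depth from disparity
    x = (x_left - cx) * z / focal_px        # lateral offset
    y3 = (y - cy) * z / focal_px            # vertical offset
    return x, y3, z
```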
Embodiment 2
An embodiment of the present invention provides an image synthesis method applied to a mobile terminal such as a smartphone, a laptop computer, or a desktop computer. This embodiment uses two cameras as an example. As shown in FIG. 5, the method includes:
Step 401: Acquire pictures taken simultaneously by the two cameras.
Here, the two cameras should be shooting subjects in the same area.
Step 402: Determine the central luminance value of each pixel region of the two images.
Here, the central luminance value is calculated from the mean square error, saturation, and sharpness (whether the focus is clear) of each pixel region, together with the fusion weights of the left and right image pixel regions.
Step 403: Determine the average luminance value of each image from the central luminance values of its pixel regions.
In general, the average luminance value of an image is the sum of the central luminance values of all its pixel regions divided by the number of pixel regions.
Step 404: Determine whether the difference between the central luminance value of each pixel region and the average luminance value of its image exceeds a preset threshold. If so, go to step 405; otherwise, go to step 407.
Step 405: Treat each pixel region whose central luminance value differs from the image's average luminance value by more than the preset threshold as an overexposed region.
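Steps 402 through 405 can be sketched as follows; a minimal NumPy illustration that assumes the image is tiled into fixed-size square pixel regions and uses the plain block mean as a stand-in for the patent's weighted central luminance value (the mean-square-error/saturation/sharpness weighting is not specified in enough detail to reproduce):

```python
import numpy as np

def find_overexposed_regions(gray, block=16, threshold=50):
    """Return (row, col) block indices whose central luminance exceeds
    the image average by more than `threshold`.

    gray: 2-D array of luminance values. Each `block` x `block` tile is
    one "pixel region"; its plain mean stands in for the weighted
    central luminance value of the patent.
    """
    h, w = gray.shape
    centers = {}
    for r in range(0, h - block + 1, block):        # step 402
        for c in range(0, w - block + 1, block):
            centers[(r // block, c // block)] = \
                gray[r:r + block, c:c + block].mean()
    avg = sum(centers.values()) / len(centers)      # step 403
    # steps 404/405: regions brighter than average by more than threshold
    return [idx for idx, lum in centers.items() if lum - avg > threshold]
```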
Step 406: According to the overexposed regions, combine the two images with special effects to generate a composite image.
Specifically, the overexposed regions are marked in the two images (the solid black circles in FIG. 6 indicate the marked overexposed regions); the overexposed regions of either one of the images are dimmed; and, using linear attenuation, the overexposed regions of each image are combined with the corresponding pixel regions of the other image to generate the composite picture.
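The compositing in step 406 can be illustrated with a simple linear-attenuation blend. This is a sketch under assumptions: the patent does not give the exact attenuation formula, so the dimming factor and blend weight below are illustrative parameters:

```python
import numpy as np

def blend_overexposed(img_a, img_b, over_mask, dim=0.5, alpha=0.8):
    """Dim img_a's overexposed pixels, then linearly blend them toward
    img_b's corresponding pixels; elsewhere keep img_a unchanged.

    over_mask: boolean array, True where img_a is overexposed.
    dim:       brightness-reduction factor for the overexposed pixels.
    alpha:     linear weight given to the other image in the blend.
    """
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    out = a.copy()
    blended = (1.0 - alpha) * (a * dim) + alpha * b
    out[over_mask] = blended[over_mask]
    return np.clip(out, 0, 255).astype(np.uint8)
```

With alpha close to 1, the dimmed overexposed regions are almost entirely covered by the other image's pixels, matching the behavior described above.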
Here, special-effect synthesis may include a series of effect and synthesis operations such as segmentation, bokeh, blurring, and tone adjustment.
Step 407: Combine the two images with special effects to generate a composite image.
Embodiment 3
An embodiment of the present invention provides an image synthesis method applied to a mobile terminal such as a smartphone, a laptop computer, or a desktop computer. This embodiment uses two cameras as an example. As shown in FIG. 7, the method includes:
Step 501: Acquire pictures taken simultaneously by the two cameras.
Here, the two cameras should be shooting subjects in the same area, and the two images are images of the same scene captured by the two cameras from different angles and positions.
Step 502: Acquire the person image from the first image.
Here, the person image is the predetermined scene image of the first image.
Step 503: Acquire the background image from the second image.
Here, the background image is the predetermined scene image of the second image.
Step 504: Apply a special effect to the background image of the second image.
Here, the special-effect processing may include a series of image-processing operations such as bokeh, blurring, and tone adjustment.
In FIG. 8, the first picture is the one taken by the first camera, the second picture is the one taken by the second camera, and the third picture is the composite of the person image of the first picture and the effect-processed background image of the second picture.
Step 505: Combine the effect-processed image with the person image to obtain the composite image.
Embodiment 4
An embodiment of the present invention provides an image synthesis device 60. As shown in FIG. 9, the image synthesis device 60 may include:
an acquisition unit 601 configured to acquire images captured by i cameras, where i is an integer greater than 1, and to acquire features of the i images; and
a synthesis unit 602 configured to combine the i images with special effects according to the features to generate a composite image.
In this way, after acquiring the features of the captured images, the image synthesis device can combine the different features with special effects to generate a composite image. This greatly enriches the kinds of photos that multiple cameras can produce, allowing the same photo to be varied in many ways, which makes composite photos more interesting and improves the user experience.
In an embodiment of the present invention, the feature is an overexposed region, and for a first image the acquisition unit 601 is further configured to:
determine the central luminance value of each pixel region of the first image;
determine the average luminance value of the first image from the central luminance values of its pixel regions;
determine whether the difference between the central luminance value of each pixel region and the average luminance value exceeds a preset threshold; and
when the central luminance values of N pixel regions differ from the average luminance value by more than the preset threshold, treat the N pixel regions as N overexposed regions, where N is a positive integer.
In an embodiment of the present invention, the acquisition unit 601 is further configured to calculate the central luminance value from the mean square error, saturation, and sharpness of each pixel region and the fusion weights of the left and right image pixel regions.
In an embodiment of the present invention, the synthesis unit 602 is further configured to:
mark the overexposed regions in the i images;
reduce the brightness of the overexposed regions in i-1 of the i images; and
using linear attenuation, combine the overexposed regions of the i images with the corresponding pixel regions of the other images to generate the composite picture.
In an embodiment of the present invention, the synthesis unit 602 is further configured to:
apply special effects to one or more of the predetermined scene images of the i images, and combine the effect-processed predetermined scene images with the unprocessed predetermined scene images to generate the composite picture.
In an embodiment of the present invention, the i images are images of the same scene captured by the i cameras from different angles and positions, the feature is a predetermined scene image, and the acquisition unit 601 is further configured to:
segment predetermined scene images of different depths from the i images, where the predetermined scene image differs from image to image;
and the synthesis unit 602 is further configured to:
apply special effects to one or more of the predetermined scene images of the i images, and combine the effect-processed predetermined scene images with the unprocessed predetermined scene images to generate the composite picture.
In an embodiment of the present invention, the acquisition unit 601 is further configured to:
combine the depth information of each image with its original color and luminance information as a joint feature to segment predetermined scene images of different depths from the i images.
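A minimal sketch of what such a joint feature might look like. The details are assumptions: a per-pixel depth map is taken as given, the "predetermined scene" is simply the nearest depth layer, and the depth channel alone decides the mask (a real implementation would cluster the full joint feature volume; the patent does not specify the segmentation algorithm):

```python
import numpy as np

def segment_foreground(rgb, depth, depth_quantile=0.3):
    """Return a boolean mask of the nearest depth layer plus the joint
    depth/color/luminance feature volume.

    rgb:   H x W x 3 image; depth: H x W depth map (smaller = nearer).
    """
    luminance = rgb.astype(np.float32).mean(axis=2)
    # stack depth, color, and luminance into one joint feature volume
    features = np.dstack([depth.astype(np.float32),
                          rgb.astype(np.float32),
                          luminance])
    # here the depth cue alone produces the mask; the feature stack only
    # shows the joint layout a fuller segmenter would cluster
    near = depth <= np.quantile(depth, depth_quantile)
    return near, features
```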
In an embodiment of the present invention, the acquisition unit 601 is further configured to:
segment a person image from one image;
segment a background image from the other image;
and the synthesis unit 602 is further configured to:
apply a special effect to the background image, and combine the processed background image with the person image to generate the composite picture.
In an embodiment of the present invention, the synthesis unit 602 is further configured to blur the background image when the feature is a background image.
In an embodiment of the present invention, the synthesis unit 602 is further configured to: after applying the special effect to the background image, turn either of the two images together with the composite image into an animation for dynamic display.
In practical applications, both the acquisition unit 601 and the synthesis unit 602 may be implemented by a Central Processing Unit (CPU), a Micro Processor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like located in the terminal.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction means that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of protection of the present invention.

Claims (20)

  1. An image synthesis method, the method comprising:
    acquiring images captured by i cameras, the i being an integer greater than 1;
    acquiring features of the i images; and
    combining the i images with special effects according to the features to generate a composite image.
  2. The method according to claim 1, wherein the feature is an overexposed region, and for a first image, acquiring the feature of the first image comprises:
    determining a central luminance value of each pixel region of the first image;
    determining an average luminance value of the first image according to the central luminance values of its pixel regions;
    determining whether the difference between the central luminance value of each pixel region and the average luminance value is greater than a preset threshold; and
    when the central luminance values of N pixel regions differ from the average luminance value by more than the preset threshold, treating the N pixel regions as N overexposed regions, the N being a positive integer.
  3. The method according to claim 2, wherein determining the central luminance value of each pixel region of the first image comprises:
    calculating the central luminance value according to the mean square error, saturation, and sharpness of each pixel region and the fusion weights of the left and right image pixel regions.
  4. The method according to claim 2, wherein combining the i images with special effects according to the feature to generate a composite image comprises:
    marking the overexposed regions in the i images;
    reducing the brightness of the overexposed regions in i-1 of the i images; and
    combining, by linear attenuation, the overexposed regions of the i images with the corresponding pixel regions of the other images to generate the composite picture.
  5. The method according to claim 2, wherein combining the i images with special effects to generate a composite image comprises:
    applying special effects to one or more of the predetermined scene images of the i images, and combining the effect-processed predetermined scene images with the unprocessed predetermined scene images to generate the composite picture.
  6. The method according to claim 1, wherein the i images are images of the same scene captured by the i cameras from different angles and positions, the feature is a predetermined scene image, and acquiring the features of the i images comprises:
    segmenting predetermined scene images of different depths from the i images, wherein the predetermined scene image differs from image to image;
    and wherein combining the i images with special effects according to the features to generate a composite image comprises:
    applying special effects to one or more of the predetermined scene images of the i images, and combining the effect-processed predetermined scene images with the unprocessed predetermined scene images to generate the composite picture.
  7. The method according to claim 6, wherein segmenting predetermined scene images of different depths from the i images comprises:
    combining the depth information of each image with its original color and luminance information, as a joint feature, to segment predetermined scene images of different depths from the i images.
  8. The method according to claim 6, wherein the i is 2, and segmenting predetermined scene images of different depths from the i images comprises:
    segmenting a person image from one image;
    segmenting a background image from the other image;
    and wherein applying special effects to one or more of the predetermined scene images of the i images and combining the effect-processed predetermined scene images with the unprocessed predetermined scene images to obtain the composite image comprises:
    applying a special effect to the background image, and combining the processed background image with the person image to generate the composite picture.
  9. The method according to any one of claims 1 to 8, further comprising: blurring the background image when the feature is a background image.
  10. The method according to claim 9, further comprising:
    after applying the special effect to the background image, turning either of the two images together with the composite image into an animation for dynamic display.
  11. An image synthesis device, the device comprising:
    an acquisition unit configured to acquire images captured by i cameras, the i being an integer greater than 1, and to acquire features of the i images; and
    a synthesis unit configured to combine the i images with special effects according to the features to generate a composite image.
  12. The device according to claim 11, wherein the feature is an overexposed region, and for a first image the acquisition unit is further configured to:
    determine a central luminance value of each pixel region of the first image;
    determine an average luminance value of the first image according to the central luminance values of its pixel regions;
    determine whether the difference between the central luminance value of each pixel region and the average luminance value is greater than a preset threshold; and
    when the central luminance values of N pixel regions differ from the average luminance value by more than the preset threshold, treat the N pixel regions as N overexposed regions, the N being a positive integer.
  13. The device according to claim 12, wherein the acquisition unit is further configured to calculate the central luminance value according to the mean square error, saturation, and sharpness of each pixel region and the fusion weights of the left and right image pixel regions.
  14. The device according to claim 12, wherein the synthesis unit is further configured to:
    mark the overexposed regions in the i images;
    reduce the brightness of the overexposed regions in i-1 of the i images; and
    combine, by linear attenuation, the overexposed regions of the i images with the corresponding pixel regions of the other images to generate the composite picture.
  15. The device according to claim 12, wherein the synthesis unit is further configured to:
    apply special effects to one or more of the predetermined scene images of the i images, and combine the effect-processed predetermined scene images with the unprocessed predetermined scene images to generate the composite picture.
  16. The device according to claim 11, wherein the i images are images of the same scene captured by the i cameras from different angles and positions, the feature is a predetermined scene image, and the acquisition unit is further configured to:
    segment predetermined scene images of different depths from the i images, wherein the predetermined scene image differs from image to image;
    and the synthesis unit is further configured to:
    apply special effects to one or more of the predetermined scene images of the i images, and combine the effect-processed predetermined scene images with the unprocessed predetermined scene images to generate the composite picture.
  17. The device according to claim 16, wherein the acquisition unit is further configured to:
    combine the depth information of each image with its original color and luminance information, as a joint feature, to segment predetermined scene images of different depths from the i images.
  18. The device according to claim 16, wherein the i is 2, and the acquisition unit is further configured to:
    segment a person image from one image;
    segment a background image from the other image;
    and the synthesis unit is further configured to:
    apply a special effect to the background image, and combine the processed background image with the person image to generate the composite picture.
  19. The device according to any one of claims 11 to 18, wherein the synthesis unit is further configured to blur the background image when the feature is a background image.
  20. The device according to claim 19, wherein the synthesis unit is further configured to: after applying the special effect to the background image, turn either of the two images together with the composite image into an animation for dynamic display.
PCT/CN2016/097937, "Image synthesis method", filed 2016-09-02, published as WO2017050115A1 on 2017-03-30.
Application claiming priority: CN201510618878.6 (CN105227837A, "An image synthesis method and device"), priority date 2015-09-24.

Family

ID=54996488

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/097937 WO2017050115A1 (en) 2015-09-24 2016-09-02 Image synthesis method

Country Status (2)

Country Link
CN (1) CN105227837A (en)
WO (1) WO2017050115A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114173059A (en) * 2021-12-09 2022-03-11 广州阿凡提电子科技有限公司 Video editing system, method and device

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105227837A (en) * 2015-09-24 2016-01-06 努比亚技术有限公司 A kind of image combining method and device
CN106973280B (en) * 2016-01-13 2019-04-16 深圳超多维科技有限公司 A kind for the treatment of method and apparatus of 3D rendering
CN106161980A (en) * 2016-07-29 2016-11-23 宇龙计算机通信科技(深圳)有限公司 Photographic method and system based on dual camera
CN106254724A (en) * 2016-07-29 2016-12-21 努比亚技术有限公司 A kind of realize the method for image noise reduction, device and terminal
CN106454123B (en) * 2016-11-25 2019-02-22 盐城丝凯文化传播有限公司 A kind of method and mobile terminal of focusing of taking pictures
CN107018325A (en) * 2017-03-29 2017-08-04 努比亚技术有限公司 A kind of image combining method and device
CN108924530A (en) * 2017-03-31 2018-11-30 深圳市易快来科技股份有限公司 A kind of 3D shoots method, apparatus and the mobile terminal of abnormal image correction
CN107240072B (en) * 2017-04-27 2020-06-05 南京秦淮紫云创益企业服务有限公司 Screen brightness adjusting method, terminal and computer readable storage medium
WO2019014842A1 (en) * 2017-07-18 2019-01-24 辛特科技有限公司 Light field acquisition method and acquisition device
CN107610078A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method and device
CN107707839A (en) * 2017-09-11 2018-02-16 广东欧珀移动通信有限公司 Image processing method and device
WO2019047985A1 (en) 2017-09-11 2019-03-14 Oppo广东移动通信有限公司 Image processing method and device, electronic device, and computer-readable storage medium
CN108111748B (en) * 2017-11-30 2021-01-08 维沃移动通信有限公司 Method and device for generating dynamic image
CN108154514B (en) * 2017-12-06 2021-08-13 Oppo广东移动通信有限公司 Image processing method, device and equipment
CN110166759B (en) * 2018-05-28 2021-10-15 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic device
CN108924435B (en) * 2018-07-12 2020-08-18 Oppo广东移动通信有限公司 Image processing method and device and electronic equipment
CN110139033B (en) * 2019-05-13 2020-09-22 Oppo广东移动通信有限公司 Photographing control method and related product
CN110248094B (en) * 2019-06-25 2020-05-05 珠海格力电器股份有限公司 Shooting method and shooting terminal
CN110929615B (en) * 2019-11-14 2022-10-18 RealMe重庆移动通信有限公司 Image processing method, image processing apparatus, storage medium, and terminal device
CN115150563A (en) * 2021-03-31 2022-10-04 华为技术有限公司 Video production method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040239670A1 (en) * 2003-05-29 2004-12-02 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
CN102780855A (en) * 2011-05-13 2012-11-14 晨星软件研发(深圳)有限公司 Image processing method and related device
CN104333708A (en) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 Photographing method, photographing device and terminal
CN105227837A (en) * 2015-09-24 2016-01-06 努比亚技术有限公司 Image combining method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009276956A (en) * 2008-05-14 2009-11-26 Fujifilm Corp Image processing apparatus and method, and program
JP5141733B2 (en) * 2010-08-18 2013-02-13 カシオ計算機株式会社 Imaging apparatus, imaging method, and program
CN104050651B (en) * 2014-06-19 2017-06-30 青岛海信电器股份有限公司 Scene image processing method and device
CN104580910B (en) * 2015-01-09 2018-07-24 宇龙计算机通信科技(深圳)有限公司 Image combining method and system based on front and rear cameras
CN104796625A (en) * 2015-04-21 2015-07-22 努比亚技术有限公司 Picture synthesizing method and device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114173059A (en) * 2021-12-09 2022-03-11 广州阿凡提电子科技有限公司 Video editing system, method and device
CN114173059B (en) * 2021-12-09 2023-04-07 广州阿凡提电子科技有限公司 Video editing system, method and device

Also Published As

Publication number Publication date
CN105227837A (en) 2016-01-06

Similar Documents

Publication Publication Date Title
WO2017050115A1 (en) Image synthesis method
CN106454121B (en) Double-camera shooting method and device
WO2017045650A1 (en) Picture processing method and terminal
WO2018019124A1 (en) Image processing method and electronic device and storage medium
WO2017067526A1 (en) Image enhancement method and mobile terminal
US8780258B2 (en) Mobile terminal and method for generating an out-of-focus image
CN106909274B (en) Image display method and device
WO2017020836A1 (en) Device and method for processing depth image by blurring
WO2017016511A1 (en) Image processing method and device, and terminal
WO2017071476A1 (en) Image synthesis method and device, and storage medium
WO2017071475A1 (en) Image processing method, and terminal and storage medium
CN106713716B (en) Shooting control method and device for double cameras
WO2018019128A1 (en) Method for processing night scene image and mobile terminal
WO2018045945A1 (en) Focusing method, terminal, and storage medium
WO2017071469A1 (en) Mobile terminal, image capture method and computer storage medium
WO2017071542A1 (en) Image processing method and apparatus
WO2018076938A1 (en) Method and device for processing image, and computer storage medium
WO2017067523A1 (en) Image processing method, device and mobile terminal
WO2017041714A1 (en) Method and device for acquiring rgb data
CN106911881B (en) Dynamic photo shooting device and method based on double cameras and terminal
WO2017045647A1 (en) Method and mobile terminal for processing image
CN106851125B (en) Mobile terminal and multiple exposure shooting method
WO2017071532A1 (en) Group selfie photography method and apparatus
CN106657782B (en) Picture processing method and terminal
WO2018045961A1 (en) Image processing method, and terminal and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16847997

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16847997

Country of ref document: EP

Kind code of ref document: A1