WO2017088678A1 - Long-exposure panoramic image shooting apparatus and method - Google Patents


Info

Publication number
WO2017088678A1
Authority
WIPO (PCT)
Prior art keywords
image, projection, subsequent, projection image, mobile terminal
Application number
PCT/CN2016/105705
Other languages
French (fr), Chinese (zh)
Inventor
Li Song (李嵩)
Original Assignee
Nubia Technology Co., Ltd. (努比亚技术有限公司)
Application filed by Nubia Technology Co., Ltd. (努比亚技术有限公司)
Publication of WO2017088678A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time

Definitions

  • the present application relates to, but is not limited to, the field of image processing technologies, and in particular, to a long exposure panoramic image capturing apparatus and method.
  • the so-called long exposure shooting function opens the shutter of the mobile terminal's camera and keeps the camera exposing for a long time using long exposure techniques.
  • it is often used to shoot flowing water, star trails in the night sky, light trails of roads at night, and the like, and can capture stunning images of time and space.
  • the long-exposure photographing method in the related art fixes the mobile terminal having the camera with a fixing device such as a tripod, and then performs exposure shooting. Since this method requires the photographer to carry the fixing device so that the mobile terminal is kept still for a long time, it is very inconvenient, when shooting outdoors, to carry a large and heavy fixing device, making the long exposure shooting function of the mobile terminal inconvenient to use.
  • the main purpose of the present application is to provide a long exposure panoramic image capturing apparatus and method, aiming at solving the technical problem of inconvenient use of the long exposure shooting function of the mobile terminal.
  • An embodiment of the present invention provides a long exposure panoramic image capturing apparatus, where the long exposure panoramic image capturing apparatus includes:
  • the image acquisition module is configured to, upon receiving a long exposure shooting instruction, acquire images periodically collected by the mobile terminal viewfinder;
  • An image projection module is configured to project the acquired image to a preset projection space to form a corresponding projection image, wherein the acquired projection image corresponding to the first frame image is used as a reference projection image, The projected image corresponding to the acquired subsequent image is used as a subsequent projected image;
  • the registration fusion module is configured to perform position registration and pixel fusion of the subsequent projection images of the plurality of viewing angles satisfying the preset projection condition, one by one, with the reference projection image to form a new reference projection image, until the acquired subsequent projection images are traversed, to obtain an output projection image;
  • a back projection module is configured to backproject the output projected image to a coordinate space of the image acquired by the mobile terminal viewfinder to generate a long exposure panoramic image.
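Taken together, the four modules above form a simple loop. The sketch below shows that loop; the helper callables (`project`, `register`, `fuse`, `backproject`) are hypothetical placeholders standing in for the module behaviours, not APIs from the disclosure:

```python
def long_exposure_panorama(frames, project, register, fuse, backproject):
    """Sketch of the four-module pipeline: the first frame's projection
    seeds the reference; each subsequent projection is registered against
    the reference and fused into it; the final reference is back-projected
    into the viewfinder's coordinate space."""
    reference = project(frames[0])            # reference projection image
    for frame in frames[1:]:                  # subsequent frames
        subsequent = project(frame)           # subsequent projection image
        aligned = register(reference, subsequent)
        reference = fuse(reference, aligned)  # new reference projection image
    return backproject(reference)             # long exposure panoramic image
```

With identity stubs for the helpers, the loop reduces to repeated fusion of the acquired frames, which is exactly the traversal the registration fusion module describes.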
  • the image projection module includes:
  • a parameter obtaining unit configured to acquire a motion parameter of the mobile terminal, where the motion parameter includes a motion direction and a tilt angle;
  • a space determining unit configured to determine a projection space for image projection according to the acquired motion parameter
  • a projection unit configured to project the acquired image to the determined projection space to form a corresponding projection image.
  • the parameter obtaining unit is configured to acquire a motion parameter of the mobile terminal by:
  • the parameter acquisition unit controls an accelerometer and a gyroscope installed inside the mobile terminal to sense the moving direction and tilt angle of the user holding the mobile terminal, or uses an image registration method to extract features from the current image and find an alignment position, thereby determining the moving direction and tilt angle of the user holding the mobile terminal.
  • the projection space includes a cylindrical projection space, a spherical projection space, and a cube projection space.
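As an illustration of one of these spaces, the standard forward cylindrical mapping (a common construction assumed here for illustration; the disclosure does not give the formula) warps a flat viewfinder pixel onto a cylinder whose radius is the focal length `f` in pixels, with `(cx, cy)` the image centre:

```python
import numpy as np

def cylindrical_project(x, y, f, cx, cy):
    """Map a flat-image pixel (x, y) onto a cylindrical projection surface.

    f is the focal length in pixels (the cylinder radius); (cx, cy) is the
    image centre. Returns the pixel's coordinates in the cylindrical image.
    """
    theta = np.arctan2(x - cx, f)        # azimuth around the cylinder axis
    h = (y - cy) / np.hypot(x - cx, f)   # normalized height on the cylinder
    return f * theta + cx, f * h + cy
```

The centre pixel is a fixed point of the mapping, and a pixel `f` pixels to the right of centre lands at azimuth π/4, which is what makes panoramic stitching of rotated views consistent on the cylinder.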
  • the registration fusion module includes:
  • a feature extraction unit configured to extract a reference feature from the reference projection image, and extract matching features corresponding to the reference feature from the subsequent projection images of the plurality of perspectives satisfying the preset projection condition one by one;
  • the registration unit is configured to combine the subsequent projection image that meets the preset projection condition with the reference projection image according to the positional relationship between the matching feature and the reference feature;
  • the merging unit is configured to perform pixel fusion on the overlapping area of the subsequent projection image that satisfies the preset projection condition and the reference projection image, and to merge the non-overlapping area, to form a new reference projection image, until the acquired subsequent projection images are traversed, and the resulting reference projection image is taken as an output projection image.
  • the feature extraction unit is configured to extract features in one of the following ways: feature point matching, optical flow method, cross-correlation.
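Of the three listed techniques, cross-correlation is the simplest to sketch. The snippet below is an illustrative assumption, not the disclosure's implementation: it recovers an integer translation between two projection images by locating the peak of their FFT-based circular cross-correlation.

```python
import numpy as np

def translation_by_xcorr(ref, img):
    """Estimate the integer (dy, dx) shift of `img` relative to `ref`
    by locating the peak of their circular cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # fold shifts beyond half the image size into negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Feature point matching or optical flow would instead yield per-feature correspondences, from which a richer registration parameter (e.g. an affine transform) can be fitted.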
  • the registration unit is configured to obtain a registration parameter according to a positional relationship between the matching feature and the reference feature, and the subsequent projection image and the reference projection satisfying the preset projection condition according to the registration parameter Image registration combined.
  • the image acquiring module is further configured to acquire a motion parameter of the mobile terminal when acquiring an image acquired by the mobile terminal viewfinder timing, where the motion parameter includes a moving direction and a tilting angle;
  • the registration unit is further configured to:
  • the fusion unit is further configured to adjust the color value of each pixel in the overlapping area according to an input color value, a reference color value, and a number of times of fusion, where the input color value is the color value of the pixel position of the subsequent projection image in the overlapping area, the reference color value is the color value of the pixel position of the reference projection image in the overlapping area, and the number of times of fusion is the number of times the pixel of the reference projection image has been pixel-fused in the overlapping area.
  • the fusion unit is configured to calculate a color value of each pixel in the overlapping region according to a weighted average method.
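One plausible reading of this weighted average (the exact weights are not spelled out in this summary, so treat the following as an assumption) is a running mean: a reference pixel that has already been fused `n` times carries weight `n` against the incoming pixel's weight 1, so the result remains the mean of all frames seen so far.

```python
import numpy as np

def fuse_pixel(ref_color, input_color, fuse_count):
    """Running weighted average for one overlap pixel.

    ref_color   - reference color value (already the mean of fuse_count frames)
    input_color - input color value from the subsequent projection image
    fuse_count  - number of times this reference pixel has been fused
    """
    return (ref_color * fuse_count + input_color) / (fuse_count + 1)
```

Because the result is always a mean, the long-exposure accumulation never overflows the pixel range, and the same function applies elementwise to whole numpy arrays covering the overlap region.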
  • An embodiment of the present invention further provides a long exposure panoramic image capturing method, the long exposure panoramic image capturing method including:
  • acquiring images periodically collected by the mobile terminal viewfinder when a long exposure shooting instruction is received;
  • projecting the acquired images to a preset projection space to form corresponding projection images, wherein the projection image corresponding to the acquired first frame image is used as a reference projection image, and the projection image corresponding to each subsequently acquired image is used as a subsequent projection image;
  • the subsequent projection images of the plurality of viewing angles satisfying the preset projection condition are position-aligned and pixel-fused with the reference projection image one by one to form a new reference projection image, until the acquired subsequent projection image is traversed to obtain an output projection image;
  • the output projection image is back projected to a coordinate space of the mobile terminal viewfinder acquisition image to generate a long exposure panoramic image.
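For a cylindrical projection space, back projection is the closed-form inverse of the forward warp (again a standard construction assumed for illustration): the azimuth and height encoded in the projected coordinates are mapped back to flat viewfinder coordinates through the tangent.

```python
import numpy as np

def cylindrical_backproject(xp, yp, f, cx, cy):
    """Inverse cylindrical mapping: take a pixel (xp, yp) of the output
    projection image back to the flat coordinate space of the viewfinder
    image. f is the focal length in pixels; (cx, cy) the image centre."""
    theta = (xp - cx) / f    # azimuth encoded in the projected x coordinate
    h = (yp - cy) / f        # normalized cylinder height
    x = cx + f * np.tan(theta)
    y = cy + h * f / np.cos(theta)
    return x, y
```

In practice this inverse mapping is evaluated at every output pixel and the source image is sampled with interpolation, which avoids holes that a forward scatter would leave.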
  • the step of projecting the acquired image into a preset projection space to form a corresponding projection image includes:
  • acquiring a motion parameter of the mobile terminal, where the motion parameter includes a moving direction and a tilting angle;
  • determining, according to the acquired motion parameter, a projection space for image projection;
  • projecting the acquired image onto the determined projection space to form a corresponding projection image.
  • the acquiring the motion parameter of the mobile terminal includes:
  • an accelerometer and a gyroscope installed inside the mobile terminal are controlled to sense the moving direction and tilting angle of the user holding the mobile terminal; or an image registration method is used to extract features from the current image and find an alignment position, thereby determining the moving direction and tilting angle of the user holding the mobile terminal.
  • the projection space includes a cylindrical projection space, a spherical projection space, and a cube projection space.
  • the step of performing position registration and pixel fusion of the subsequent projection images of the plurality of viewing angles satisfying the preset projection condition, one by one, with the reference projection image to form a new reference projection image, until the acquired subsequent projection images are traversed to obtain an output projection image, includes:
  • extracting a reference feature from the reference projection image, and extracting, one by one, matching features corresponding to the reference feature from the subsequent projection images of the plurality of viewing angles satisfying the preset projection condition;
  • registering, according to the positional relationship between the matching feature and the reference feature, the subsequent projection image that meets the preset projection condition with the reference projection image;
  • performing pixel fusion on the overlapping area of the subsequent projection image that meets the preset projection condition and the reference projection image to form a new reference projection image, until the acquired subsequent projection images are traversed;
  • the resulting reference projection image is taken as an output projection image.
  • the extracting of a reference feature from the reference projection image, and of matching features corresponding to the reference feature from the subsequent projection images of the plurality of viewing angles satisfying the preset projection condition one by one, includes extracting features in one of the following ways: feature point matching, optical flow, or cross-correlation.
  • the registering of the subsequent projection image that meets the preset projection condition with the reference projection image according to the positional relationship between the matching feature and the reference feature comprises: obtaining a registration parameter according to the positional relationship between the matching feature and the reference feature, and registering the subsequent projection image that satisfies the preset projection condition with the reference projection image according to the registration parameter.
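As a sketch of how such a registration parameter might be computed from the matched and reference feature positions (the disclosure does not fix the transform model; an affine model is assumed here), a least-squares fit yields the 2×3 transform mapping subsequent-image features onto reference features:

```python
import numpy as np

def affine_registration(ref_pts, match_pts):
    """Least-squares 2x3 affine registration parameter A such that
    ref ≈ A @ [x, y, 1] for each matching feature position (x, y)."""
    match = np.asarray(match_pts, dtype=float)
    ref = np.asarray(ref_pts, dtype=float)
    X = np.hstack([match, np.ones((len(match), 1))])  # homogeneous coords
    A, *_ = np.linalg.lstsq(X, ref, rcond=None)       # solves X @ A = ref
    return A.T                                        # 2x3 parameter matrix
```

With at least three non-collinear correspondences the fit is exact for a true affine motion; extra correspondences are averaged in the least-squares sense, which damps feature-localization noise.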
  • the long-exposure panoramic image capturing method further includes: acquiring a motion parameter of the mobile terminal when acquiring an image acquired by the mobile terminal viewfinder timing, where the motion parameter includes a moving direction and a tilting angle;
  • the step of combining the subsequent projection image satisfying the preset projection condition with the reference projection image according to the positional relationship between the matching feature and the reference feature comprises:
  • the step of performing pixel fusion on the overlapping area of the subsequent projection image that meets the preset projection condition and the reference projection image includes: adjusting the color value of each pixel in the overlapping region according to an input color value, a reference color value, and a number of times of fusion, where the input color value is the color value of the pixel position of the subsequent projection image in the overlapping region, the reference color value is the color value of the pixel position of the reference projection image in the overlapping region, and the number of times of fusion is the number of times the pixel of the reference projection image has been pixel-fused in the overlapping region.
  • the adjusting of the color value of each pixel in the overlapping area according to the input color value, the reference color value, and the number of times of fusion to complete pixel fusion includes: calculating the color value of each pixel in the overlapping area according to a weighted average method.
  • the embodiment of the invention further provides a computer readable storage medium storing computer executable instructions which, when executed, implement the long exposure panoramic image capturing method described above.
  • the embodiment, when receiving a long exposure shooting instruction, acquires images periodically collected by the mobile terminal viewfinder; the acquired images are then projected to a preset projection space to form corresponding projection images, where the projection image corresponding to the acquired first frame image is used as a reference projection image and the projection images corresponding to subsequently acquired images are used as subsequent projection images; the subsequent projection images are then position-registered and pixel-fused with the reference projection image one by one to form a new reference projection image, until the acquired subsequent projection images are traversed to obtain an output projection image; finally, the output projection image is back-projected to the coordinate space of the viewfinder image to generate a long exposure panoramic image, so that long exposure shooting does not require a fixing device.
  • FIG. 1 is a schematic structural diagram of hardware of an optional mobile terminal that implements various embodiments of the present application
  • Figure 2 is a block diagram showing the electrical structure of the camera of Figure 1;
  • FIG. 3 is a schematic diagram of functional modules of a first embodiment of a long exposure panoramic image capturing apparatus of the present application
  • FIG. 4 is a schematic diagram of a refinement function module of an image projection module in a second embodiment of the long exposure panoramic image capturing apparatus of the present application;
  • FIG. 5 is a schematic diagram of a refinement function module of a registration fusion module in a third embodiment of the long exposure panoramic image capturing apparatus of the present application;
  • FIG. 6 is a schematic flow chart of a first embodiment of a long exposure panoramic image capturing method according to the present application.
  • FIG. 7 is a refinement flow diagram of a step of projecting an acquired image into a preset projection space to form a corresponding projection image in a second embodiment of the long exposure panoramic image capturing method of the present application;
  • FIG. 8 is a refinement flow diagram, in a third embodiment of the long exposure panoramic image capturing method of the present application, of the step of performing position registration and pixel fusion of the subsequent projection images of the plurality of viewing angles satisfying the preset projection condition one by one with the reference projection image to form a new reference projection image.
  • the long exposure panoramic image capturing apparatus of the embodiment of the present invention can be applied to a terminal.
  • a terminal implementing various embodiments of the present application will now be described with reference to the accompanying drawings.
  • suffixes such as "module", "component", or "unit" used to denote elements are intended merely to facilitate the description of the present application and have no specific meaning in themselves; therefore, "module" and "component" may be used interchangeably.
  • the mobile terminal can be implemented in various forms.
  • the terminals described in this application may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (tablets), PMPs (Portable Multimedia Players), navigation devices, and the like, as well as fixed terminals such as digital TVs, desktop computers, and the like.
  • the following description assumes that the terminal is a mobile terminal; however, configurations in accordance with embodiments of the present application can also be applied to fixed-type terminals, except for elements that are specific to mobile use.
  • FIG. 1 is a schematic structural diagram of hardware of an optional mobile terminal implementing various embodiments of the present application.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, a long exposure panoramic image capturing device 200, and the like.
  • Figure 1 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication device or network.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 that processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a roller, a rocker, and the like.
  • in particular, when the touch pad is layered over the display unit 151, a touch screen can be formed.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of contact (i.e., touch input) by the user with the mobile terminal 100, and the like.
  • the sensing unit 140 includes an accelerometer 141 for detecting the real-time acceleration of the mobile terminal 100 to derive its moving direction, and a gyroscope 142 for detecting the tilt angle of the mobile terminal 100 relative to the plane in which it is located.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, and an audio input/output. (I/O) port, video I/O port, headphone port, and more.
  • the identification module may store various kinds of information for verifying the authority of the user of the mobile terminal 100, and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 can be configured to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be used to detect touch input pressure values as well as touch input positions and touch input areas.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110, or stored in the memory 160, into an audio signal and output it as sound when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, and the like.
  • the audio output module 152 can provide audio output (e.g., call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100, and may include a speaker, a buzzer, and the like.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a random access memory (RAM), a static random access memory ( SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; for example, the multimedia module 181 may be used to display, in real time, a long-exposure panoramic image generated in real time. The multimedia module 181 may be constructed within the controller 180, or may be configured separately from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented through the use of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or at least one of the electronic units designed to perform the functions described herein; in some cases, such an embodiment may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 180.
  • the long exposure panoramic image capturing device 200 includes an image acquiring module 10, an image projecting module 20, a registration fusion module 30, and a back projection module 40, wherein
  • the image obtaining module 10 is configured to acquire an image periodically collected by the viewfinder of the mobile terminal when receiving the long exposure shooting instruction;
  • the image projection module 20 is configured to project the acquired image to a preset projection space to form a corresponding projection image, wherein the projection image corresponding to the acquired first frame image is used as a reference projection image, and the projection image corresponding to each subsequently acquired image is used as a subsequent projection image;
  • the registration fusion module 30 is configured to perform position registration and pixel fusion of the subsequent projection images one by one with the reference projection image to form a new reference projection image until the acquired subsequent projection image is traversed to obtain an output projection image;
  • the back projection module 40 is configured to backproject the output projected image to a coordinate space of the mobile terminal viewfinder captured image to generate a long exposure panoramic image.
  • FIG. 2 is a block diagram of the electrical structure of the camera of FIG. 1.
  • the photographic lens 1211 is composed of a plurality of optical lenses for forming a subject image, and is a single focus lens or a zoom lens.
  • the photographic lens 1211 is movable in the optical axis direction under the control of the lens driver 1221, and the lens driver 1221 controls the focus position of the photographic lens 1211 in accordance with a control signal from the lens driving control circuit 1222; in the case of a zoom lens, the focal distance can also be controlled.
  • the lens drive control circuit 1222 performs drive control of the lens driver 1221 in accordance with a control command from the microcomputer 1217.
  • An imaging element 1212 is disposed on the optical axis of the photographic lens 1211 near the position of the subject image formed by the photographic lens 1211.
  • the imaging element 1212 is for capturing an image of a subject and acquiring captured image data.
  • Photodiodes constituting each pixel are arranged two-dimensionally and in a matrix on the imaging element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, which is subjected to charge accumulation by a capacitor connected to each photodiode.
  • the front surface of each pixel is provided with a Bayer array of RGB color filters.
  • the imaging element 1212 is connected to the imaging circuit 1213.
  • the imaging circuit 1213 performs charge accumulation control and image signal readout control in the imaging element 1212, and performs waveform shaping after reducing the reset noise of the read image signal (analog image signal). Further, gain improvement or the like is performed to obtain an appropriate signal level.
  • the imaging circuit 1213 is connected to an A/D converter 1214 that performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to the bus 1227.
  • the bus 1227 is a transmission path for transmitting various data read or generated inside the camera.
  • the A/D converter 1214 is connected to the bus 1227; also connected to the bus 1227 are an image processor 1215, a JPEG processor 1216, a microcomputer 1217, an SDRAM (Synchronous Dynamic Random Access Memory) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and an LCD (Liquid Crystal Display) driver 1220.
  • the image processor 1215 performs various kinds of image processing on the image data based on the output of the imaging element 1212, such as OB subtraction, white balance adjustment, color matrix calculation, gamma conversion, color-difference signal processing, noise removal, simultaneous (demosaicing) processing, edge processing, and the like.
  • the JPEG processor 1216 compresses the image data read out from the SDRAM 1218 in accordance with the JPEG compression method when the image data is recorded on the recording medium 1225. Further, the JPEG processor 1216 performs decompression of JPEG image data for image reproduction display.
  • for image reproduction display, the file recorded on the recording medium 1225 is read, decompression is performed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on the LCD 1226.
  • the JPEG method is adopted as the image compression/decompression method.
  • the compression/decompression method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may be used.
  • the microcomputer 1217 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera.
  • the microcomputer 1217 is connected to the operation unit 1223 and the flash memory 1224.
• the operation unit 1223 includes, but is not limited to, physical or virtual buttons such as a power button, a camera button, an edit button, a dynamic image button, a reproduction button, a menu button, a cross button, an OK button, a delete button, and an enlarge button, as well as various other input buttons and input keys; the operation unit 1223 detects the operation state of these operation controls and outputs the detection result to the microcomputer 1217.
  • a touch panel is provided on the front surface of the LCD 1226 as a display, and the touch position of the user is detected, and the touch position is output to the microcomputer 1217.
• the microcomputer 1217 performs the various processing sequences corresponding to the user's operation based on the detection result from the operation unit 1223 and the detected touch position.
  • the flash memory 1224 stores programs for executing various processing sequences of the microcomputer 1217.
  • the microcomputer 1217 performs overall control of the camera in accordance with the program. Further, the flash memory 1224 stores various adjustment values of the camera, and the microcomputer 1217 reads out the adjustment value, and performs control of the camera in accordance with the adjustment value.
  • the SDRAM 1218 is an electrically rewritable volatile memory for temporarily storing image data or the like.
  • the SDRAM 1218 temporarily stores image data output from the A/D converter 1214 and image data processed in the image processor 1215, the JPEG processor 1216, and the like.
  • the memory interface 1219 is connected to the recording medium 1225, and performs control for writing image data and a file header attached to the image data to the recording medium 1225 and reading out from the recording medium 1225.
  • the recording medium 1225 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body.
  • the recording medium 1225 is not limited thereto, and may be a hard disk or the like built in the camera body.
• the LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218, then read out and displayed on the LCD 1226; alternatively, compressed image data stored in the SDRAM 1218 is read out by the JPEG processor 1216, decompressed, and the decompressed image data is displayed through the LCD 1226.
• the LCD 1226 is disposed on the back of the camera body and displays images. However, the display is not limited thereto, and various display panels such as an organic EL panel may be used instead of the LCD 1226.
• based on the above hardware structure of the mobile terminal and the electrical structure of the camera, an embodiment of the long exposure panoramic image capturing apparatus of the present application is proposed; the long exposure panoramic image capturing apparatus is a part of the mobile terminal.
  • the present application provides a long exposure panoramic image capturing apparatus.
  • the apparatus includes:
  • the image obtaining module 10 is configured to acquire an image periodically collected by the viewfinder of the mobile terminal when receiving the long exposure shooting instruction;
• after the long exposure shooting instruction is received, the camera shutter of the mobile terminal is opened, and the camera viewfinder captures changes in light and shadow in the scene over a long period of time.
• the viewfinder of the mobile terminal starts to collect images at a fixed timing, for example, one frame every 0.5 s, and the image acquisition module 10 acquires each image collected by the mobile terminal viewfinder.
• the image projection module 20 is configured to project each acquired image to a preset projection space to form a corresponding projection image, wherein the projection image corresponding to the first acquired frame is used as a reference projection image, and the projection images corresponding to subsequently acquired frames are used as subsequent projection images;
• the image projection module 20 projects the images acquired from the mobile terminal viewfinder to a preset projection space to form corresponding projection images. The projection may follow several execution orders: each frame may be projected to the preset projection space as soon as it is collected; or a preset number of frames (for example, 5 frames) may be collected and then projected one by one; alternatively, projection of all collected frames, one by one, may be deferred until the viewfinder has finished capturing images.
  • the execution sequence of the projection operation on the acquired image can be flexibly set according to requirements.
• in this embodiment, since the image processing capability of the mobile terminal is strong, all the collected frames are projected one by one to the preset projection space after the viewfinder finishes capturing images.
• the projection image formed by projecting the first frame captured by the mobile terminal viewfinder into the preset projection space is used as the reference projection image, and the projection images formed by projecting the subsequently captured frames into the preset projection space are used as subsequent projection images.
  • the projection space includes a cylindrical projection space, a spherical projection space, a cube projection space, and the like.
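As an illustration of the cylindrical case, a common forward mapping for cylindrical panoramas takes an image point (x, y) to (f·atan(x/f), f·y/√(x²+f²)) about the optical center, where f is the focal length in pixels. The following numpy sketch implements this via the inverse mapping; the function name, the nearest-neighbor sampling, and the focal-length parameter are illustrative assumptions, not specified by this application:

```python
import numpy as np

def cylindrical_project(img, f):
    """Warp a grayscale image onto a cylinder of focal length f (pixels).

    Sketch only: for each output pixel, invert the cylindrical mapping to
    find the source pixel and sample nearest-neighbor. Assumes the field
    of view is narrow enough that |x - cx| / f stays well below pi/2.
    """
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    theta = (xs - cx) / f              # angle around the cylinder axis
    hh = (ys - cy) / f                 # normalized height on the cylinder
    x_src = np.tan(theta) * f + cx     # inverse of x' = f * atan(x / f)
    y_src = hh * f / np.cos(theta) + cy
    valid = (x_src >= 0) & (x_src < w) & (y_src >= 0) & (y_src < h)
    out[valid] = img[y_src[valid].astype(int), x_src[valid].astype(int)]
    return out
```

The mapping leaves the image center fixed and compresses the left and right edges, which is what lets horizontally rotated frames be stitched side by side on the cylinder.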
• the registration fusion module 30 is configured to perform position registration and pixel fusion of the subsequent projection images of multiple viewing angles that meet the preset projection conditions, one by one, with the reference projection image to form a new reference projection image each time, until the acquired subsequent projection images have been traversed, to obtain an output projection image;
• the acquired subsequent projection images cover multiple viewing angles. For example, when the user moves the mobile terminal horizontally for long exposure panoramic shooting, the collected images cover multiple viewing angles in the same horizontal direction, so the resulting subsequent projection images likewise represent multiple viewing angles relative to the reference projection image.
• after acquiring a subsequent projection image, the registration fusion module 30 first determines whether it satisfies the preset projection conditions.
• the preset projection conditions include, for example, that the tilt angle of the subsequent projection image's translation or rotation relative to the reference projection image may not exceed a preset angle, and that its vertical shift relative to the reference projection image may not exceed a preset value. The reference projection image is then used as the reference, and the acquired subsequent projection images are position-registered and pixel-fused with it one by one, in the order in which they were projected to the preset projection space, until the mobile terminal viewfinder stops capturing images, that is, until the subsequent projection images of all frames captured by the viewfinder have been registered and fused. The reference projection image after all subsequent projection images have been registered and fused into it serves as the output projection image; the subsequent projection images of multiple viewing angles and the reference projection image are thereby synthesized into one large image.
• each subsequent projection image is first position-registered with the reference projection image and then pixel-fused with it to form a new reference projection image; the next subsequent projection image is then registered and fused with this new reference projection image.
  • Position registration is based on image registration technology, and pixel fusion is based on fusion technology.
  • the above image registration technology refers to a process of matching and superimposing two or more images acquired at different times, different sensors (imaging devices) or under different conditions (weather, illumination, imaging position, angle, etc.). It is widely used in remote sensing data analysis, computer vision, image processing and other fields.
• the general flow of image registration is as follows: first, feature extraction is performed on the two images to obtain feature points; matching feature point pairs are then found using a similarity measure; image space coordinate transformation parameters are computed from the matched feature point pairs; finally, image registration is performed according to these transformation parameters.
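The four steps can be illustrated with a deliberately tiny sketch: a crude "brighter than its 8 neighbours" detector stands in for real feature extraction, and voting over displacements stands in for both the matching step and the transform-parameter solve (all names are illustrative; a production pipeline would use proper detectors, descriptors, and robust estimation):

```python
import numpy as np

def detect_features(img, thresh):
    # step 1: feature extraction - pixels strictly brighter than
    # their threshold and unique maxima of their 3x3 neighborhood
    pts = []
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            patch = img[y-1:y+2, x-1:x+2]
            if img[y, x] > thresh and img[y, x] == patch.max() \
                    and (patch == patch.max()).sum() == 1:
                pts.append((x, y))
    return pts

def register_translation(ref, tgt, thresh=0.5):
    # steps 2-4: pair every feature with every feature; the displacement
    # most pairs agree on is taken as the transform parameter
    votes = {}
    for (x1, y1) in detect_features(ref, thresh):
        for (x2, y2) in detect_features(tgt, thresh):
            d = (x2 - x1, y2 - y1)
            votes[d] = votes.get(d, 0) + 1
    return max(votes, key=votes.get)   # (dx, dy) of tgt relative to ref
```

The voting step is essentially a one-parameter-family version of consensus matching: correct feature pairs all agree on the true displacement, while mismatched pairs scatter.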
• the above image fusion technology refers to the following: image data of the same target collected through multiple source channels is processed with image processing and computer techniques so as to extract the favorable information of each channel to the maximum extent, finally integrating it into a single high-quality image and thereby improving the utilization of the image information.
• for example, suppose the image acquisition module 10 acquires three frames (image 1, image 2, and image 3) captured by the mobile terminal viewfinder:
• first, the image projection module 20 projects the acquired image 1 to the preset projection space to form the corresponding projection image 1, which serves as the reference projection image;
• next, the image projection module 20 projects the acquired image 2 to the preset projection space to form the corresponding projection image 2, which serves as a subsequent projection image;
• the registration fusion module 30 performs position registration and pixel fusion of projection image 2 (the subsequent projection image) with projection image 1 (the reference projection image) to form a new reference projection image (denoted projection image 1a);
• the image projection module 20 then projects the acquired image 3 to the preset projection space to form the corresponding projection image 3, which serves as the new subsequent projection image;
• finally, the registration fusion module 30 performs position registration and pixel fusion of projection image 3 with projection image 1a; as image 3 is the last acquired frame, the fused result is taken as the output projection image.
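The three-frame walk-through above follows a simple fold: project the first frame to seed the reference, then project, register, and fuse each later frame into it. A schematic sketch, where the three callables are placeholders for the module operations described here rather than concrete implementations:

```python
def long_exposure_panorama(frames, project, register, fuse):
    """Fold frames into one output projection image.

    The first projected frame seeds the reference; every later frame is
    projected, registered against the current reference, then fused into
    a new reference. The final reference is the output projection image.
    """
    reference = project(frames[0])
    for frame in frames[1:]:
        aligned = register(reference, project(frame))
        reference = fuse(reference, aligned)
    return reference
```

With identity projection/registration and additive fusion this degenerates to a running sum, which mirrors how each new frame's light contribution accumulates into the exposure.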
  • the back projection module 40 is configured to backproject the output projected image to a coordinate space of the mobile terminal viewfinder captured image to generate a long exposure panoramic image.
• the back projection module 40 back-projects the synthesized large-viewing-angle output projection image from the preset projection space to the coordinate space of the images captured by the mobile terminal viewfinder, generating a long exposure panoramic image whose viewing angle is larger than that of any single image captured while the user operated the mobile terminal camera.
• in this embodiment, the image projection module 20 projects each acquired image to the preset projection space to form a corresponding projection image, with the projection image of the first acquired frame serving as the reference projection image and the projection images of subsequent frames serving as subsequent projection images; the registration fusion module 30 then performs position registration and pixel fusion of the subsequent projection images of multiple viewing angles that satisfy the preset projection conditions, one by one, with the reference projection image to form a new reference projection image, until the acquired subsequent projection images have been traversed, yielding the output projection image; finally, the back projection module 40 back-projects the output projection image to the coordinate space of the images captured by the mobile terminal viewfinder to generate the long exposure panoramic image. Because each subsequent projection image is position-registered and pixel-fused with the reference projection image, images captured while the mobile terminal is slightly shaking (for example, while the user holds the mobile terminal by hand) can still be synthesized into a long exposure image. This enhances the mobile terminal's tolerance to jitter in the long exposure shooting mode, so the user need not carry a fixing device for outdoor long exposure shooting, making the long exposure shooting function of the mobile terminal more convenient to use.
  • the image projection module 20 includes:
  • the parameter obtaining unit 21 is configured to acquire a motion parameter of the mobile terminal, where the motion parameter includes a motion direction and a tilt angle;
• the parameter acquisition unit 21 uses the accelerometer and gyroscope installed inside the mobile terminal to sense the moving direction and tilt angle of the mobile terminal as held by the user; alternatively, it extracts features from the current image using the image registration method and finds the alignment position to determine the moving direction and tilt angle.
  • the space determining unit 22 is configured to determine a projection space for image projection according to the acquired motion parameters
• the space determining unit 22 determines an applicable projection space according to the moving direction and tilt angle of the mobile terminal. For example, when the tilt angle is smaller than the preset angle and the movement is only a rotation of up to 360 degrees in the horizontal direction, the cylindrical projection space is determined as the preset projection space for image projection; when the tilt angle of the mobile terminal is greater than the preset angle, or the vertical shift is greater than the preset value, the user is prompted or the overly tilted image is discarded. When the mobile terminal camera moves freely in three dimensions, the spherical projection space is determined as the projection space, since the spherical projection space can synthesize scenes in which the mobile terminal camera moves three-dimensionally.
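The selection rule just described can be sketched as a small decision function. The 15-degree threshold is an assumed example value (the application only speaks of "a preset angle"), and the function and parameter names are illustrative:

```python
def choose_projection_space(tilt_deg, moves_in_3d, preset_angle=15.0):
    """Pick a projection space from the motion parameters.

    Assumed-example rule: excessive tilt -> reject the frame (the caller
    may instead prompt the user); free 3-D camera movement -> spherical
    space; small tilt with purely horizontal rotation -> cylindrical space.
    """
    if tilt_deg > preset_angle:
        return "discard"
    return "sphere" if moves_in_3d else "cylinder"
```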
  • the projection unit 23 is arranged to project the acquired image to the determined projection space to form a corresponding projection image.
  • the subsequent registration process and pixel fusion process can be performed in the same projection space.
  • the projection image corresponding to the acquired first frame image is used as a reference projection image
  • the acquired projection image corresponding to the subsequent image is used as a subsequent projection image.
• by acquiring the motion parameters of the mobile terminal through the parameter acquisition unit 21, the motion tendency of the user holding the mobile terminal for long exposure shooting is determined, so that the space determining unit 22 can select an appropriate projection space. This improves the effect of projecting images to the determined projection space and the integrity of the projected images, and facilitates the registration and fusion of subsequent projection images, thereby improving the light and shadow effect of the long exposure panoramic image obtained by the mobile terminal.
• the registration fusion module 30 includes:
  • the feature extraction unit 31 is configured to extract reference features from the reference projection image, and extract matching features corresponding to the reference features from the subsequent projection images of the plurality of perspectives satisfying the preset projection conditions one by one;
• the global transformation of an image may be a geometric, similarity, affine, or projective transformation; for a local transformation, the image can be divided into different parts and the transformation computed for each part.
• the feature extraction unit 31 extracts the registration features; methods for extracting registration features include feature point matching, the optical flow method, cross-correlation, and the like, as follows:
• feature point matching means extracting a number of feature points from the reference frame (i.e., the reference projection image), extracting or searching for the corresponding feature points in the frame to be registered (i.e., the subsequent projection image), and solving for the registration parameters of the frame to be registered relative to the reference frame using the feature point positions as data.
• the optical flow method estimates the instantaneous velocity of a moving object on the imaging plane from the temporal variation of pixels in the image sequence and the correlation between adjacent frames, finding the correspondence between the previous frame and the current frame and thereby calculating the motion of objects between adjacent frames.
• cross-correlation means transforming the images into the frequency domain by Fourier transform, computing the correlation of the frame to be registered at each position in the spatial domain using the cross-correlation formula, and taking the position of maximum correlation as the registration parameter of the frame to be registered relative to the reference frame.
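For the cross-correlation variant, a minimal frequency-domain sketch: correlating via the FFT and taking the argmax recovers an integer, wrap-around translation (names are illustrative; real pipelines typically add windowing and spectrum normalization, i.e. phase correlation):

```python
import numpy as np

def cross_correlation_shift(ref, tgt):
    """Peak of IFFT(conj(FFT(ref)) * FFT(tgt)) gives the translation of
    tgt relative to ref. Toy sketch: integer shifts, periodic boundaries.
    """
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(tgt)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dx, dy
```

By the Fourier shift theorem, a translation of the target multiplies its spectrum by a linear phase ramp, so the inverse transform of the cross-spectrum is the (circular) correlation surface whose peak sits at the displacement.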
  • the registration unit 32 is configured to combine the subsequent projection image that meets the preset projection condition with the reference projection image according to the positional relationship between the matching feature and the reference feature;
• the registration unit 32 obtains registration parameters from the positional relationship between the matching features and the reference features (for example, a two-dimensional or three-dimensional motion vector), and according to these registration parameters combines (i.e., aligns) the subsequent projection image that satisfies the preset projection conditions with the reference projection image.
• specifically, the moving direction and moving distance from the matching features to the reference features are obtained, and every pixel of the subsequent projection image is moved according to this direction and distance, so that the matching features of the subsequent projection image are aligned with the reference features of the reference projection image and the subsequent projection image coincides with the same image region of the reference projection image.
• for example, when an affine transformation is used, the transformation has 6 parameters and is represented by a 2×3 matrix A, which can be found by a registration algorithm. For a point (x, y) of the current input image frame (i.e., the subsequent projection image), its coordinate position on the projection cylinder is (x', y')ᵀ = A·(x, y, 1)ᵀ. The coordinate positions of all pixels of the current input image after registration are obtained in the same way, so that the current input image is aligned with the reference projection image according to these registered coordinate positions.
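Solving for the six parameters of such a 2×3 matrix from matched point pairs reduces to linear least squares. A minimal numpy sketch (function and variable names are illustrative, not from this application):

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares fit of the 2x3 affine matrix A with
    (x', y')^T = A @ (x, y, 1)^T mapping src points onto dst points."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    design = np.hstack([src, np.ones((len(src), 1))])   # rows: (x, y, 1)
    At, *_ = np.linalg.lstsq(design, dst, rcond=None)   # design @ At ~ dst
    return At.T                                         # 2 x 3 matrix A

def apply_affine(A, pts):
    """Map points through the affine transform (the alignment step)."""
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ np.asarray(A).T
```

Three non-collinear matched pairs determine A exactly; with more pairs the least-squares solve averages out small matching errors.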
• the fusion unit 33 is configured to perform pixel fusion on the regions where the subsequent projection image overlaps the reference projection image and to directly splice the non-overlapping regions, forming a new reference projection image, until the acquired subsequent projection images have been traversed; the finally obtained reference projection image is taken as the output projection image.
• whether each pixel of the input frame image (i.e., the subsequent projection image) falls into an overlapping or non-overlapping region is judged as follows: when the panorama is initialized, the color of every pixel position is 0; for each pixel of the newly input frame, if the color of the panorama pixel at the composite position is still 0, the pixel belongs to a non-overlapping region; if the color is not 0, it belongs to an overlapping region.
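That zero-initialisation test can be written as a boolean mask over the panorama. A grayscale sketch (illustrative names; real code would test all colour channels and usually keep an explicit "written" mask, since a genuine black pixel also has colour 0):

```python
import numpy as np

def split_regions(panorama, frame_mask):
    """Split a new frame's footprint into overlap / non-overlap regions.

    A panorama pixel still holding its initial 0 has never been written,
    so a new frame pixel landing there is non-overlap; a nonzero pixel was
    written by an earlier frame, so the new pixel overlaps it.
    """
    written = panorama != 0
    overlap = frame_mask & written        # fuse these pixels
    non_overlap = frame_mask & ~written   # splice these pixels directly
    return overlap, non_overlap
```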
• after the subsequent projection image that meets the preset projection conditions is aligned with the reference projection image, the fusion unit 33 performs pixel fusion on the region where the subsequent projection image overlaps the reference projection image; that is, based on the image fusion technique, the color values of the subsequent projection image's pixels and of the reference projection image's pixels in the overlapping region are fused to obtain the fused pixel color values. The fusion unit 33 directly splices the non-overlapping regions of the subsequent projection image onto the reference projection image; that is, where the subsequent projection image covers a region absent from the reference projection image, the mobile terminal has acquired new content due to its movement, and the newly acquired subsequent projection image is spliced directly at the corresponding position of the reference projection image. The image formed by this pixel fusion and image splicing is used as the new reference projection image, and the same process is performed on the other subsequent projection images with the new reference projection image, looping until all subsequent projection images have been traversed.
• during this process, the image formed after each subsequent projection image is registered and fused with the reference projection image may be synchronously displayed in the display area of the mobile terminal for the user to view. After all subsequent projection images have been processed, the fusion unit 33 uses the latest reference projection image as the output projection image.
• in this embodiment, the positional relationship between the reference projection image and the subsequent projection image is obtained, that is, the registration parameters of the subsequent projection image relative to the reference projection image; the subsequent projection image is then aligned with the reference projection image according to this positional relationship, its overlapping regions are pixel-fused with the reference projection image, and its non-overlapping regions are directly spliced, forming a new reference projection image, until the acquired subsequent projection images have been traversed; finally, the last-formed reference projection image is used as the output projection image. By registering and fusing each subsequent projection image with the updated reference projection image, the captured images are combined into a long exposure panoramic output projection image of large viewing angle, the long exposure shooting function can be realized even when the mobile terminal shakes within a certain amplitude, and the range of long exposure shooting with the mobile terminal is expanded.
  • the image obtaining module 10 is further configured to acquire a motion parameter of the mobile terminal when acquiring an image captured by the mobile terminal viewfinder, and the motion parameter includes a moving direction and a tilting angle;
• specifically, the accelerometer and gyroscope installed inside the mobile terminal are used to sense the moving direction and tilt angle of the mobile terminal as held by the user; alternatively, features are extracted from the current image using the image registration method and the alignment position is found to determine the moving direction and tilt angle.
• the registration unit 32 is further configured to: register and combine the subsequent projection images that satisfy the preset projection conditions with the reference projection image according to the moving direction of the mobile terminal.
• the registration unit 32 can roughly determine, from the motion parameters of the mobile terminal, the direction in which a subsequent projection image meeting the preset projection conditions combines with the reference projection image. For example, if the mobile terminal moves horizontally to the right, the subsequent projection image should be combined on the right side of the reference projection image, that is, the combination direction is rightward. Having determined the combination direction, the registration unit 32 can quickly align the subsequent projection image with the reference projection image and perform the registration combination according to the positional relationship between the matching features and the reference features.
• in this embodiment, the motion parameters of the mobile terminal are acquired, and the combination direction of the subsequent projection image meeting the preset projection conditions with the reference projection image is determined from the moving direction of the mobile terminal, so that the subsequent projection image can be quickly aligned with the reference projection image and the registration combination performed, improving the efficiency of position registration between the subsequent projection image and the reference projection image.
  • the fusion unit 33 is also configured to:
• here, the input color value is the color value at a pixel position of the subsequent projection image within the overlapping region, the reference color value is the color value at that pixel position of the reference projection image within the overlapping region, and the fusion count is the number of times the reference projection image's pixel at that position has undergone pixel fusion.
• the fusion unit 33 acquires the input color value of each pixel in the region of the subsequent projection image (satisfying the preset projection conditions) that overlaps the reference projection image, together with the reference color value and fusion count of each pixel in the corresponding region of the reference projection image; it then adjusts the color value of each pixel in the overlapping region according to the input color value, reference color value, and fusion count at each pixel position to complete the pixel fusion.
• the color value of each pixel in the overlapping region is calculated as follows: if the fusion count of a pixel in the overlapping region is F, its reference color value is C_F, and its input color value is C_P, then the synthesized color value is C = (F × C_F + C_P) / (F + 1), and the fusion count after synthesis is recorded as F incremented by 1.
  • the color value in this embodiment is the value in the hexadecimal color code.
• in this embodiment, the input color value, reference color value, and fusion count of each pixel position in the region where the subsequent projection image overlaps the reference projection image are obtained, and the color value of each pixel in the overlapping region is then adjusted using a weighted-average algorithm over these values to complete the pixel fusion. In this way, the fused color value of each pixel in the overlapping region is correlated with the color values of the corresponding pixels in all subsequent projection images, giving a better pixel fusion effect.
  • the present application also provides a long exposure panoramic image capturing method.
  • the method includes:
• Step S10: when receiving the long exposure shooting instruction, acquiring images periodically collected by the mobile terminal viewfinder;
• when the user inputs a long exposure shooting instruction to the mobile terminal, the mobile terminal enters the long exposure mode, the camera shutter is opened, and the camera viewfinder captures the changes of light and shadow in the scene over a long period of time.
• the viewfinder of the mobile terminal starts to collect images at a fixed timing, for example, one frame every 0.5 s, and the long exposure panoramic image capturing device acquires each image collected by the mobile terminal viewfinder.
• Step S20: projecting the acquired image to a preset projection space to form a corresponding projection image, wherein the projection image corresponding to the first acquired frame is used as a reference projection image, and the projection images corresponding to subsequently acquired frames are used as subsequent projection images;
• the images acquired from the mobile terminal viewfinder are projected to a preset projection space to form corresponding projection images.
• the projection may follow several execution orders: each frame may be projected to the preset projection space as soon as it is collected; or a preset number of frames (for example, 5 frames) may be collected and then projected one by one; alternatively, projection of all collected frames, one by one, may be deferred until the viewfinder has finished capturing images. The execution order of the projection operation on the acquired images can be flexibly set according to requirements.
• in this embodiment, since the image processing capability of the mobile terminal is strong, all the collected frames are projected one by one to the preset projection space after the viewfinder finishes capturing images.
• the projection image formed by projecting the first frame captured by the mobile terminal viewfinder into the preset projection space is used as the reference projection image, and the projection images formed by projecting the subsequently captured frames into the preset projection space are used as subsequent projection images.
  • the projection space includes a cylindrical projection space, a spherical projection space, a cube projection space, and the like.
• Step S30: performing position registration and pixel fusion of the subsequent projection images of multiple viewing angles that meet the preset projection conditions, one by one, with the reference projection image to form a new reference projection image, until the acquired subsequent projection images are traversed, to obtain the output projection image.
• the acquired subsequent projection images cover multiple viewing angles. For example, when the user moves the mobile terminal horizontally for long exposure panoramic shooting, the collected images cover multiple viewing angles in the same horizontal direction, so the resulting subsequent projection images likewise represent multiple viewing angles relative to the reference projection image.
• The preset projection condition may include that the tilt angle of the subsequent projection image, from translation or rotation relative to the reference projection image, must not exceed a preset angle, and that the vertical displacement of the subsequent projection image relative to the reference projection image must not exceed a preset value. The reference projection image is then used as the reference: the acquired subsequent projection images are position-registered and pixel-fused with it one by one, in the order in which they were projected to the preset projection space, until the mobile terminal viewfinder stops acquiring images. That is, after all captured images have been projected to the preset projection space and registered and fused with the reference projection image, the reference projection image that has been registered and merged with all subsequent projection images is used as the output projection image. In this way the subsequent projection images of multiple viewing angles and the reference projection image are synthesized into a single wide-angle projection image, which can then be back-projected to obtain a wide-angle long-exposure panoramic image.
• Each subsequent projection image is first position-registered with the reference projection image and then pixel-fused with it to form a new reference projection image; the next subsequent projection image is then registered and fused with this new reference projection image, and so on.
• Position registration is based on image registration techniques, and pixel fusion is based on image fusion techniques.
• The image registration technique above refers to matching and superimposing two or more images acquired at different times, by different sensors (imaging devices), or under different conditions (weather, illumination, imaging position, angle, and so on). It has been widely used in remote sensing data analysis, computer vision, image processing, and other fields.
• The general flow of image registration is as follows: first, feature extraction is performed on the two images to obtain feature points; matching feature point pairs are then found via a similarity measure; image space coordinate transformation parameters are computed from the matched feature point pairs; finally, the images are registered using those transformation parameters.
• The image fusion technique above refers to: image data of the same target collected through multiple source channels is processed with image processing and computer techniques to extract the useful information of each channel to the maximum extent, and is finally synthesized into a single high-quality image, improving the utilization of the image information.
• The present application is explained below with a specific example. Suppose that after receiving the long-exposure shooting instruction, three frames captured by the mobile terminal viewfinder are acquired, in order image 1, image 2, and image 3. Image 1 is first projected to the preset projection space to form projection image 1, which serves as the reference projection image. Image 2 is then projected to the preset projection space to form projection image 2, which serves as a subsequent projection image; projection image 2 is position-registered and pixel-fused with projection image 1 (the reference projection image) to form a new reference projection image (denoted projection image 1a). Image 3 is then projected to form projection image 3, the new subsequent projection image, which is in turn position-registered and pixel-fused with projection image 1a to form the latest reference projection image. Since all acquired images have now been projected, registered, and fused, this latest reference projection image is taken as the output projection image.
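The loop in this walkthrough can be sketched as follows. This is a toy illustration, not the patent's implementation: registration is reduced to known horizontal pixel shifts (standing in for the output of a real registration step), and fusion is a per-pixel running average.

```python
import numpy as np

def register_and_fuse(frames, shifts):
    """Sketch of the image-1/2/3 walkthrough: the first projected frame
    is the reference; each subsequent frame is registered (here reduced
    to a known horizontal shift, in pixels, relative to frame 0) and
    fused into the reference by a running average, while newly exposed
    regions are spliced directly onto a widening canvas."""
    h, w = frames[0].shape
    canvas = np.zeros((h, w + max(shifts)), dtype=float)  # reference projection image
    count = np.zeros_like(canvas, dtype=int)              # per-pixel fusion count F
    for frame, dx in zip(frames, shifts):
        region = (slice(None), slice(dx, dx + w))
        # Overlap: weighted average; new area: direct splice (count is 0 there).
        canvas[region] = (count[region] * canvas[region] + frame) / (count[region] + 1)
        count[region] += 1
    return canvas  # final reference projection image = output projection image

frames = [np.full((2, 4), v, dtype=float) for v in (10.0, 20.0, 30.0)]
out = register_and_fuse(frames, shifts=[0, 2, 4])
# Columns 0-1 hold only frame 0 (10.0); columns 2-3 average frames 0 and 1 (15.0);
# columns 6-7 hold only frame 2 (30.0).
```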
• Step S40: back-project the output projection image to the coordinate space of the images acquired by the mobile terminal viewfinder to generate the long-exposure panoramic image.
• Back projection is performed on the synthesized wide-angle output image; that is, the output projection image is back-projected from the preset projection space to the coordinate space of the images captured by the mobile terminal viewfinder, generating a long-exposure panoramic image whose viewing angle is wider than what the camera can capture from a single position.
• In this embodiment, the images captured by the mobile terminal viewfinder are first acquired; the acquired images are then projected to the preset projection space to form corresponding projection images, with the projection image of the acquired first frame serving as the reference projection image and the projection images of subsequently acquired images serving as subsequent projection images. The subsequent projection images of multiple viewing angles that satisfy the preset projection condition are position-registered and pixel-fused with the reference projection image one by one to form a new reference projection image, until all acquired subsequent projection images have been traversed and the output projection image is obtained. Finally, the output projection image is back-projected to the coordinate space of the images acquired by the viewfinder to generate the long-exposure panoramic image. Through the position registration and pixel fusion of the subsequent projection images with the reference projection image, images captured while the mobile terminal is in a slight jitter state (for example, when the user holds the terminal by hand during long-exposure shooting) can still be combined into a long exposure, which enhances the terminal's tolerance to jitter and makes the long-exposure shooting function more convenient to use. Moreover, by combining the subsequent projection images of multiple viewing angles with the reference projection image into one wide-angle output projection image and then back-projecting that output projection image, a wide-angle long-exposure panoramic image is obtained. That is, by combining long-exposure shooting with the projection techniques of panoramic shooting, the user can capture a wide-angle long-exposure panoramic image beyond the camera's field of view, overcoming the drawback of related-art methods in which a fixed mobile terminal camera can only capture a scene at a fixed viewing angle with a limited shooting range.
  • step S20 includes:
• Step S21: acquire motion parameters of the mobile terminal, the motion parameters including a motion direction and a tilt angle;
• The accelerometer and gyroscope installed inside the mobile terminal are used to sense the moving direction and tilt angle of the terminal; alternatively, an image registration method is used to extract features from the current image and find the alignment position, thereby determining the direction and tilt angle at which the user holds the mobile terminal.
• Step S22: determine the projection space for image projection according to the acquired motion parameters.
• When the tilt angle of the mobile terminal is greater than the preset angle, or the vertical displacement is greater than the preset value, the user is prompted or the over-tilted image is discarded.
• When the mobile terminal camera moves in three dimensions, the spherical projection space is determined as the projection space for image projection, since a spherical projection space can synthesize scenes captured while the camera moves in three dimensions.
• Step S23: project the acquired images to the determined projection space to form corresponding projection images.
• In this way, the subsequent registration and pixel fusion can be performed in the same projection space. The projection image corresponding to the acquired first frame is used as the reference projection image, and the projection images corresponding to subsequently acquired images are used as subsequent projection images.
  • step S30 includes:
• Step S31: extract reference features from the reference projection image, and extract, one by one, matching features corresponding to the reference features from the subsequent projection images of multiple viewing angles satisfying the preset projection condition;
• For the global transformation of an image, a geometric transformation, similarity transformation, affine transformation, or projective transformation may be selected; for a local transformation, the image may be divided into parts and the transformation computed for each part.
• Registration features are then extracted. Methods for extracting registration features include feature point matching, the optical flow method, cross-correlation, and the like. Among them:
• Feature point matching means extracting a number of feature points from the reference frame (i.e., the reference projection image), then extracting or searching for the corresponding feature points in the frame to be registered (i.e., the subsequent projection image), and solving for the registration parameters of the frame to be registered relative to the reference frame, using the feature point positions as data.
• The optical flow method estimates the instantaneous velocity of a moving object's motion on the imaging plane from the temporal variation of pixels in the image sequence and the correlation between adjacent frames, finding the correspondence between the previous frame and the current frame and thereby computing the motion of objects between adjacent frames.
• Cross-correlation means transforming the images into the frequency domain with the Fourier transform, computing with the cross-correlation formula the correlation at each position of the frame to be registered in the spatial domain, and taking the position of maximum correlation as the registration parameter of the frame to be registered relative to the reference frame.
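A minimal sketch of this frequency-domain cross-correlation for a pure translation follows; the function name and the wrap-around handling are illustrative assumptions, not taken from the source.

```python
import numpy as np

def cross_correlate_shift(reference, to_register):
    """Estimate the translation of `to_register` relative to `reference`
    by cross-correlation computed in the frequency domain: multiply the
    conjugate Fourier transform of the reference by the transform of the
    frame to be registered, return to the spatial domain, and take the
    position of the correlation peak as the registration parameter."""
    F_ref = np.fft.fft2(reference)
    F_new = np.fft.fft2(to_register)
    corr = np.fft.ifft2(np.conj(F_ref) * F_new).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative (wrapped) shifts.
    return tuple(int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shifted = np.roll(ref, shift=(5, -3), axis=(0, 1))  # scene moved by (5, -3)
print(cross_correlate_shift(ref, shifted))          # recovers (5, -3)
```

Note this plain cross-correlation recovers only a translation; rotation or scale requires the feature-based or transformation-model methods described above.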
• Step S32: combine the subsequent projection image satisfying the preset projection condition with the reference projection image according to the positional relationship between the matching features and the reference features;
• From the positional relationship between the matching features and the reference features, registration parameters are obtained (for example, a two-dimensional or three-dimensional motion vector), and the subsequent projection image satisfying the preset projection condition is registered and combined (i.e., aligned) with the reference projection image according to those parameters. For example, suppose there are three matching features. From the positional relationship between the three matching features and the three corresponding reference features, the moving direction and moving distance from the matching features to the reference features are obtained, and the position of every pixel of the subsequent projection image is shifted by that direction and distance, so that the matching features of the subsequent projection image align with the reference features of the reference projection image and the regions showing the same scene coincide.
• Taking the affine transformation as an example, the transformation has 6 parameters and can be represented by a 2×3 matrix A, which can be found by a registration algorithm. A point p = (x, y) of the current input image frame (i.e., the subsequent projection image) on the projection cylinder is then mapped to the registered coordinate position p' = A·(x, y, 1)^T. In the same way, the registered coordinate positions of all pixels of the current input image are obtained, and the current input image is aligned with the reference projection image according to those registered coordinates.
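The mapping p' = A·(x, y, 1)^T can be sketched as follows; the function name and the sample matrix are illustrative assumptions.

```python
import numpy as np

def apply_affine(A, points):
    """Apply a 2x3 affine registration matrix A to an array of (x, y)
    points: each point p = (x, y) maps to p' = A @ [x, y, 1]^T.
    Applying this to every pixel coordinate of the current input frame
    aligns it with the reference projection image."""
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # append the 1 in (x, y, 1)
    return homog @ np.asarray(A, dtype=float).T

# A pure translation by (tx, ty) = (12, -7), one of the simplest affines:
A = np.array([[1.0, 0.0, 12.0],
              [0.0, 1.0, -7.0]])
print(apply_affine(A, [(0, 0), (100, 50)]))  # (0,0) -> (12,-7); (100,50) -> (112,43)
```

The same 6 parameters can also encode rotation, scale, and shear by changing the left 2×2 block of A.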
• Step S33: perform pixel fusion on the overlapping area of the subsequent projection image satisfying the preset projection condition and the reference projection image, directly splice the non-overlapping areas to form a new reference projection image, and continue until all acquired subsequent projection images have been traversed;
• One way to judge overlap is as follows: when the panorama is initialized, the color at every pixel position is set to 0. For each pixel of a newly input frame, the composite position is checked: if the color components of the pixel already there are all 0, there is no overlap; if they are not all 0, the regions overlap.
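This all-zeros test can be sketched as a mask computation; the names and the RGB layout are illustrative assumptions.

```python
import numpy as np

def overlap_mask(canvas, region):
    """Implements the judgment described above: the panorama canvas is
    initialized to all zeros, so a composite position overlaps existing
    content exactly when the colors already there are not all zero."""
    patch = canvas[region]
    # For an RGB canvas, a pixel overlaps if ANY of its channels is nonzero.
    return np.any(patch != 0, axis=-1) if patch.ndim == 3 else (patch != 0)

canvas = np.zeros((4, 6, 3))            # empty panorama, every color 0
canvas[:, :3] = [255, 128, 0]           # reference content in the left half
mask = overlap_mask(canvas, (slice(None), slice(2, 5)))
# Column 2 overlaps the reference (True); columns 3-4 are still empty (False).
print(mask[0])
```

Pixels where the mask is True go through the weighted fusion below; pixels where it is False are spliced directly.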
• The overlapping region of the subsequent projection image and the reference projection image is pixel-fused; that is, based on image fusion techniques, the color values of the pixels of the subsequent projection image and of the reference projection image in the overlapping region are fused to obtain the fused pixel color values. The non-overlapping regions of the subsequent projection image and the reference projection image are directly spliced; that is, if the subsequent projection image contains a region absent from the reference projection image, the mobile terminal has captured new content by moving, and the newly acquired part of the subsequent projection image is spliced directly at the corresponding position of the reference projection image. The image formed by this pixel fusion and splicing is used as the new reference projection image, and the remaining subsequent projection images are processed in the same way with the new reference projection image, looping until all acquired subsequent projection images have been traversed.
• During this process, the image formed after each subsequent projection image is registered and fused with the reference projection image may be displayed synchronously in the display area of the mobile terminal for the user to view.
• Step S34: take the finally formed reference projection image as the output projection image.
• After the mobile terminal viewfinder stops acquiring images and all acquired subsequent projection images have been registered and fused with the reference projection image, the finally formed reference projection image is used as the output projection image.
• In this embodiment, the positional relationship between the reference projection image and the subsequent projection image is obtained, that is, the registration parameters of the subsequent projection image relative to the reference projection image; the subsequent projection image is aligned with the reference projection image according to this positional relationship; the overlapping regions of the subsequent projection image and the reference projection image are pixel-fused and the non-overlapping regions are directly spliced to form a new reference projection image, until all acquired subsequent projection images have been traversed; finally, the finally formed reference projection image is used as the output projection image. By registering and fusing each subsequent projection image with the updated reference projection image, the captured images can be combined into a wide-angle long-exposure panoramic output projection image, the long-exposure shooting function can be realized even while the mobile terminal shakes with a certain amplitude, and the long-exposure shooting range of the mobile terminal is expanded.
• As an embodiment, the long-exposure panoramic image capturing method further includes:
• Step S35: acquire motion parameters of the mobile terminal when acquiring the images captured by the mobile terminal viewfinder, the motion parameters including a moving direction and a tilt angle;
• The accelerometer and gyroscope installed inside the mobile terminal sense the user's moving direction and the tilt angle of the terminal; alternatively, an image registration method extracts features from the current image and finds the alignment position to determine the direction and tilt angle at which the user holds the mobile terminal.
  • Step S32 includes:
• Step S321: determine, according to the moving direction of the mobile terminal, the combination direction of the subsequent projection image satisfying the preset projection condition with the reference projection image;
• From the moving direction of the mobile terminal, the combination direction of the subsequent projection image satisfying the preset projection condition with the reference projection image can be roughly determined. For example, if the mobile terminal moves horizontally to the right, the subsequent projection image should be combined on the right side of the reference projection image; that is, the combination direction of the subsequent projection image with the reference projection image is rightward.
• Step S322: combine the subsequent projection image with the reference projection image according to the combination direction and the positional relationship between the matching features and the reference features.
• After the combination direction of the subsequent projection image and the reference projection image has been determined, the subsequent projection image can be quickly aligned, registered, and combined with the reference projection image according to the positional relationship between the matching features and the reference features.
• In this embodiment, the motion parameters of the mobile terminal are acquired, and the combination direction of the subsequent projection image satisfying the preset projection condition with the reference projection image is determined from the moving direction of the terminal, so that the subsequent projection image can be quickly aligned, registered, and combined with the reference projection image, improving the efficiency of position registration between the subsequent projection image and the reference projection image.
• Based on the foregoing embodiments, a fifth embodiment of the long-exposure panoramic image capturing method is proposed.
• In this embodiment, the step of performing pixel fusion on the overlapping area of the subsequent projection image and the reference projection image includes:
• Step S331: acquire the input color value, reference color value, and fusion count of each pixel position in the overlapping area of the subsequent projection image satisfying the preset projection condition and the reference projection image;
• Step S332: adjust the color value of each pixel in the overlapping area according to the input color value, the reference color value, and the fusion count to complete the pixel fusion, where the input color value is the color value of the pixel position of the subsequent projection image in the overlapping area, the reference color value is the color value of the pixel position of the reference projection image in the overlapping area, and the fusion count is the number of times the pixel position of the reference projection image in the overlapping area has undergone pixel fusion.
• Based on the weighted average method, the color value of each pixel in the overlapping region is calculated. If the fusion count of a pixel in the overlapping region is F, its reference color value is C_F, and its input color value is C_P, then the fused color value is C = (F × C_F + C_P) / (F + 1). After synthesis, the fusion count is recorded as F incremented by 1.
• The color values in this embodiment are the values in a hexadecimal color code.
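The running weighted average can be sketched as follows; the function name is an assumption. Note that with this update rule each of the frames fused so far ends up with equal weight, so repeated fusion reproduces a plain mean:

```python
def fuse_pixel(reference_value, input_value, fusion_count):
    """Weighted-average fusion of one pixel in the overlapping area:
    the reference color C_F already averages `fusion_count` frames, so
    the new input color C_P enters with weight 1/(F + 1). Returns the
    fused color and the incremented fusion count."""
    F, C_F, C_P = fusion_count, reference_value, input_value
    fused = (F * C_F + C_P) / (F + 1)
    return fused, F + 1

# Fusing the same pixel across three frames reproduces their plain mean:
c, f = 0.0, 0
for sample in (90.0, 120.0, 150.0):
    c, f = fuse_pixel(c, sample, f)
print(c, f)  # 120.0 3
```

Starting from a zero-initialized canvas with F = 0, the first fusion simply copies the input color, matching the direct-splice behavior for non-overlapping regions.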
• In this embodiment, the input color value, the reference color value, and the fusion count of each pixel position in the overlapping area of the subsequent projection image and the reference projection image are obtained, and the color value of each pixel in the overlapping area is then adjusted based on the weighted average algorithm, according to the input color value, the reference color value, and the fusion count, to complete the pixel fusion. In this way the fused color value of each pixel in the overlapping area is correlated with the color values of the corresponding pixels of all subsequent projection images, giving a better fusion effect.
• Through the description of the above embodiments, a person skilled in the art can clearly understand that the methods of the foregoing embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such understanding, the part of the technical solution of the present application that is essential or that contributes over the related art may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or CD-ROM), including a number of instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to perform the methods described in the embodiments of the present application.
• The embodiments of the present application provide a long-exposure panoramic image capturing apparatus and method, which enhance the adaptability of the mobile terminal to jitter in long-exposure shooting mode, so that the user does not need to carry a fixing device when performing long-exposure shooting outdoors, making the long-exposure shooting function of the mobile terminal more convenient to use.

Abstract

Disclosed is a long-exposure panoramic image shooting apparatus. The apparatus comprises: an image acquisition module configured to acquire an image regularly collected by a mobile terminal viewfinder when receiving a long-exposure shooting instruction; an image projection module configured to project the acquired image to a pre-set projection space to form a corresponding projection image; a registration and fusion module configured to perform position registration and pixel fusion on subsequent projection images at multiple perspectives satisfying a pre-set projection condition one by one with a reference projection image, so as to form a new reference projection image until all the acquired subsequent projection images are traversed; and a back projection module configured to back-project an output projection image to a coordinate space for the mobile terminal viewfinder to capture the image, so as to generate a long-exposure panoramic image. The solution enhances the ability of a mobile terminal to adapt to a jitter in a long-exposure shooting mode, thereby making the use of a long-exposure shooting function of the mobile terminal more convenient.

Description

Long-exposure panoramic image capturing apparatus and method

Technical field

The present application relates to, but is not limited to, the field of image processing technologies, and in particular to a long-exposure panoramic image capturing apparatus and method.

Background art

With the development of electronic technology, mobile terminals on the market such as smartphones and digital cameras generally have a long-exposure shooting function: the shutter of the mobile terminal is held open so that the camera captures the scene over an extended period. It is often used to shoot flowing water, nebulae and star trails, the light trails of roads at night, and so on, and can produce stunning images of time and space.

The long-exposure shooting method in the related art keeps the mobile terminal with its camera fixed using a device such as a tripod, and then performs the exposure. This method requires the photographer to carry a fixing device so that the mobile terminal does not shake over a long period; however, when shooting outdoors, carrying a bulky and heavy fixing device is very inconvenient for the user, which makes the long-exposure shooting function of the mobile terminal inconvenient to use.
Summary of the invention

The following is an overview of the subject matter described in detail in this document. This summary is not intended to limit the scope of the claims.

The main purpose of the present application is to provide a long-exposure panoramic image capturing apparatus and method, aiming to solve the technical problem that the long-exposure shooting function of a mobile terminal is inconvenient to use.
An embodiment of the present invention provides a long-exposure panoramic image capturing apparatus, the apparatus including:

an image acquisition module, configured to acquire images periodically captured by a mobile terminal viewfinder when a long-exposure shooting instruction is received;

an image projection module, configured to project the acquired images to a preset projection space to form corresponding projection images, wherein the projection image corresponding to the acquired first frame image is used as a reference projection image, and the projection images corresponding to subsequently acquired images are used as subsequent projection images;

a registration and fusion module, configured to perform position registration and pixel fusion of the subsequent projection images of multiple viewing angles satisfying a preset projection condition with the reference projection image one by one, forming a new reference projection image, until all acquired subsequent projection images have been traversed, to obtain an output projection image;

a back projection module, configured to back-project the output projection image to the coordinate space of the images acquired by the mobile terminal viewfinder to generate a long-exposure panoramic image.
As an embodiment, the image projection module includes:

a parameter acquisition unit, configured to acquire motion parameters of the mobile terminal, the motion parameters including a motion direction and a tilt angle;

a space determination unit, configured to determine a projection space for image projection according to the acquired motion parameters;

a projection unit, configured to project the acquired images to the determined projection space to form corresponding projection images.
As an embodiment, the parameter acquisition unit is configured to acquire the motion parameters of the mobile terminal as follows:

the parameter acquisition unit controls an accelerometer and a gyroscope installed inside the mobile terminal to sense the motion direction and tilt angle at which the user holds the mobile terminal, or uses an image registration method to extract features from the current image and find the alignment position, so as to determine the motion direction and tilt angle at which the user holds the mobile terminal.

As an embodiment, the projection space includes a cylindrical projection space, a spherical projection space, and a cube projection space.
As an embodiment, the registration and fusion module includes:

a feature extraction unit, configured to extract reference features from the reference projection image and, one by one, extract matching features corresponding to the reference features from the subsequent projection images of multiple viewing angles satisfying the preset projection condition;

a registration unit, configured to register and combine the subsequent projection image satisfying the preset projection condition with the reference projection image according to the positional relationship between the matching features and the reference features;

a fusion unit, configured to perform pixel fusion on the overlapping area of the subsequent projection image satisfying the preset projection condition and the reference projection image and to directly splice the non-overlapping areas, forming a new reference projection image, until all acquired subsequent projection images have been traversed, and to use the finally formed reference projection image as the output projection image.

As an embodiment, the feature extraction unit is configured to extract features in one of the following ways: feature point matching, the optical flow method, or cross-correlation.

As an embodiment, the registration unit is configured to obtain registration parameters according to the positional relationship between the matching features and the reference features, and to register and combine the subsequent projection image satisfying the preset projection condition with the reference projection image according to the registration parameters.
As an embodiment, the image acquisition module is further configured to acquire motion parameters of the mobile terminal when acquiring the images periodically captured by the mobile terminal viewfinder, the motion parameters including a moving direction and a tilt angle;

the registration unit is further configured to:

determine, according to the motion parameters of the mobile terminal, the combination direction of the subsequent projection image satisfying the preset projection condition with the reference projection image;

register and combine the subsequent projection image satisfying the preset projection condition with the reference projection image according to the combination direction and the positional relationship between the matching features and the reference features.
In an embodiment, the fusion unit is further configured to:
obtain, for each pixel position in the overlap region between a subsequent projection image satisfying the preset projection condition and the reference projection image, an input color value, a reference color value, and a fusion count; and
adjust the color value of each pixel in the overlap region according to the input color value, the reference color value, and the fusion count to complete the pixel fusion, where the input color value is the color value of the subsequent projection image at the pixel position in the overlap region, the reference color value is the color value of the reference projection image at that pixel position, and the fusion count is the number of times pixel fusion has been performed at that pixel position of the reference projection image.
In an embodiment, the fusion unit is configured to compute the color value of each pixel in the overlap region by a weighted average.
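The weighted average named here can be illustrated as a per-pixel blend of the aligned overlap region. The following is a minimal sketch only; the array layout, function name, and fixed weight are assumptions of the illustration and are not part of the application:

```python
import numpy as np

def fuse_overlap(reference, incoming, weight=0.5):
    # Blend the aligned overlap regions of the reference and subsequent
    # projection images; `weight` is the share given to the incoming frame.
    reference = np.asarray(reference, dtype=np.float64)
    incoming = np.asarray(incoming, dtype=np.float64)
    return (1.0 - weight) * reference + weight * incoming

blended = fuse_overlap([[100.0, 200.0]], [[200.0, 100.0]])
```

With equal weights each pixel of the overlap converges toward the mean of the two frames, which is the behaviour a long-exposure composite needs.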
An embodiment of the present invention further provides a long-exposure panoramic image shooting method, including:
when a long-exposure shooting instruction is received, acquiring the images periodically captured by the viewfinder of the mobile terminal;
projecting the acquired images into a preset projection space to form corresponding projection images, where the projection image corresponding to the first acquired frame serves as the reference projection image and the projection images corresponding to subsequently acquired frames serve as subsequent projection images;
registering and fusing, one by one, the subsequent projection images of multiple viewing angles that satisfy the preset projection condition with the reference projection image to form a new reference projection image, until all acquired subsequent projection images have been traversed, so as to obtain an output projection image; and
back-projecting the output projection image into the coordinate space of the images captured by the viewfinder of the mobile terminal to generate the long-exposure panoramic image.
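The four steps just listed form a simple loop: project the first frame to seed the reference, fold each later frame in by registration and fusion, then invert the projection. A schematic sketch in which the projection, registration, and fusion operations are caller-supplied placeholders (none of the names below come from the application):

```python
def long_exposure_panorama(frames, project, back_project, register, fuse):
    frames = iter(frames)
    reference = project(next(frames))       # first frame -> reference projection
    for frame in frames:                    # traverse the subsequent frames
        incoming = project(frame)
        params = register(reference, incoming)         # position registration
        reference = fuse(reference, incoming, params)  # new reference projection
    return back_project(reference)          # back to viewfinder coordinates
```

With identity projections, zero registration offsets, and an averaging fuse, three frames 0, 2, 6 reduce to ((0 + 2) / 2 + 6) / 2 = 3.5, mirroring the one-by-one fusion order of the claim.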
In an embodiment, projecting the acquired images into the preset projection space to form the corresponding projection images includes:
acquiring motion parameters of the mobile terminal, the motion parameters including a movement direction and a tilt angle;
determining, according to the acquired motion parameters, the projection space to be used for image projection; and
projecting the acquired images into the determined projection space to form the corresponding projection images.
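For the cylindrical case, one standard mapping takes each pixel of the pinhole image onto a cylinder whose radius equals the focal length. The application names the cylindrical space without fixing the formula, so the sketch below is one common choice, with a focal length in pixels and a principal point as assumed parameters:

```python
import math

def cylindrical_project(x, y, f, cx, cy):
    # Angle around the cylinder axis and normalized height of pixel (x, y)
    # for a pinhole image with focal length f and principal point (cx, cy).
    theta = math.atan2(x - cx, f)
    h = (y - cy) / math.hypot(x - cx, f)
    # Unroll the cylinder back into pixel units.
    return f * theta + cx, f * h + cy
```

The principal point is a fixed point of the mapping, and pixels far from it are pulled inward, which is what lets views taken at neighbouring angles line up after projection.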
In an embodiment, acquiring the motion parameters of the mobile terminal includes:
controlling an accelerometer and a gyroscope installed inside the mobile terminal to sense the movement direction and tilt angle at which the user holds the mobile terminal; or extracting features from the current image by an image registration method and finding the alignment position, to determine the movement direction and tilt angle at which the user holds the mobile terminal.
In an embodiment, the projection space includes a cylindrical projection space, a spherical projection space, or a cubic projection space.
In an embodiment, registering and fusing, one by one, the subsequent projection images of multiple viewing angles that satisfy the preset projection condition with the reference projection image to form a new reference projection image, and traversing all acquired subsequent projection images to obtain the output projection image, includes:
extracting reference features from the reference projection image, and extracting, one by one from the subsequent projection images of multiple viewing angles that satisfy the preset projection condition, matching features corresponding to the reference features;
registering and joining the subsequent projection image satisfying the preset projection condition with the reference projection image according to the positional relationship between the matching features and the reference features;
performing pixel fusion on the overlap region between the subsequent projection image satisfying the preset projection condition and the reference projection image, and directly stitching the non-overlapping regions, to form a new reference projection image, until all acquired subsequent projection images have been traversed; and
taking the finally formed reference projection image as the output projection image.
In an embodiment, extracting the reference features from the reference projection image and extracting, one by one from the subsequent projection images of multiple viewing angles that satisfy the preset projection condition, the matching features corresponding to the reference features includes performing feature extraction by one of the following: feature point matching, optical flow, or cross-correlation.
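Of the three options named, cross-correlation is the simplest to spell out: slide a reference patch over the subsequent projection image and keep the position with the highest zero-mean normalized score. A brute-force sketch for illustration only (the patch and array conventions are assumptions, and a feature-point or optical-flow matcher would slot into the same place):

```python
import numpy as np

def ncc_best_match(patch, image):
    # Zero-mean normalized cross-correlation of `patch` against every
    # window of `image`; returns the (row, col) of the best match.
    ph, pw = patch.shape
    p = patch - patch.mean()
    p_norm = np.linalg.norm(p)
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - ph + 1):
        for c in range(image.shape[1] - pw + 1):
            w = image[r:r + ph, c:c + pw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * p_norm
            score = float((wz * p).sum() / denom) if denom else -np.inf
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

The normalization makes the score insensitive to brightness and contrast changes between frames, which matters when exposure drifts during a long capture.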
In an embodiment, registering and joining the subsequent projection image satisfying the preset projection condition with the reference projection image according to the positional relationship between the matching features and the reference features includes: deriving registration parameters from the positional relationship between the matching features and the reference features, and registering and joining the subsequent projection image satisfying the preset projection condition with the reference projection image according to the registration parameters.
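The simplest registration parameter consistent with this derivation is a pure translation: average the displacement between each matching feature and its reference feature. A sketch under that assumption (the point format is illustrative, and an affine or homography fit would replace the single estimating line):

```python
import numpy as np

def estimate_translation(ref_pts, match_pts):
    # Least-squares (dx, dy) that carries the matching features of the
    # subsequent projection image onto the reference features.
    ref_pts = np.asarray(ref_pts, dtype=np.float64)
    match_pts = np.asarray(match_pts, dtype=np.float64)
    return (ref_pts - match_pts).mean(axis=0)
```

Averaging over all correspondences damps the effect of any single noisy feature match.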
In an embodiment, the long-exposure panoramic image shooting method further includes: acquiring motion parameters of the mobile terminal while acquiring the images periodically captured by the viewfinder of the mobile terminal, the motion parameters including a movement direction and a tilt angle;
and registering and joining the subsequent projection image satisfying the preset projection condition with the reference projection image according to the positional relationship between the matching features and the reference features includes:
determining, according to the motion parameters of the mobile terminal, the direction in which the subsequent projection image satisfying the preset projection condition is to be joined to the reference projection image; and
registering and joining the subsequent projection image satisfying the preset projection condition with the reference projection image according to the joining direction and the positional relationship between the matching features and the reference features.
In an embodiment, performing pixel fusion on the overlap region between the subsequent projection image satisfying the preset projection condition and the reference projection image includes:
obtaining, for each pixel position in the overlap region between the subsequent projection image satisfying the preset projection condition and the reference projection image, an input color value, a reference color value, and a fusion count; and
adjusting the color value of each pixel in the overlap region according to the input color value, the reference color value, and the fusion count to complete the pixel fusion, where the input color value is the color value of the subsequent projection image at the pixel position in the overlap region, the reference color value is the color value of the reference projection image at that pixel position, and the fusion count is the number of times pixel fusion has been performed at that pixel position of the reference projection image.
In an embodiment, adjusting the color value of each pixel in the overlap region according to the input color value, the reference color value, and the fusion count to complete the pixel fusion includes computing the color value of each pixel in the overlap region by a weighted average.
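With a per-pixel fusion count n, the weighted average can be kept as a running mean: the reference pixel has already absorbed n frames, so the new frame enters with weight 1/(n + 1). This is one plausible reading of the claim's weighting, not the only one, and the function name is an assumption:

```python
def fuse_pixel(ref_color, input_color, n):
    # `ref_color` is the mean of n previously fused frames at this pixel;
    # the incremental update keeps the result a mean of n + 1 frames.
    return (ref_color * n + input_color) / (n + 1)
```

Fusing frame values 3, 6, 9 in order gives fuse_pixel(fuse_pixel(3, 6, 1), 9, 2) = 6, the mean of the three, so the long-exposure trail accumulates without any single frame dominating.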
An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when executed, implement the long-exposure panoramic image shooting method described above.
In the embodiments of the present invention, when a long-exposure shooting instruction is received, the images periodically captured by the viewfinder of the mobile terminal are acquired; the acquired images are projected into a preset projection space to form corresponding projection images, the projection image of the first acquired frame serving as the reference projection image and those of subsequent frames as subsequent projection images; the subsequent projection images are then registered and fused one by one with the reference projection image to form a new reference projection image, until all acquired subsequent projection images have been traversed, yielding an output projection image; finally, the output projection image is back-projected into the coordinate space of the viewfinder images to generate the long-exposure panoramic image. Through this position registration and pixel fusion of the subsequent projection images with the reference projection image, images captured while the mobile terminal shakes slightly (for example, while the user holds it by hand for a long-exposure shot) can still be composited into a long exposure, strengthening the mobile terminal's tolerance of shake in long-exposure shooting mode. The user therefore need not carry a fixing device for outdoor long-exposure shooting, making the long-exposure shooting function of the mobile terminal more convenient to use.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of the hardware structure of an optional mobile terminal implementing embodiments of the present application;
FIG. 2 is a block diagram of the electrical structure of the camera in FIG. 1;
FIG. 3 is a schematic diagram of the functional modules of a first embodiment of the long-exposure panoramic image shooting apparatus of the present application;
FIG. 4 is a schematic diagram of the refined functional modules of the image projection module in a second embodiment of the long-exposure panoramic image shooting apparatus of the present application;
FIG. 5 is a schematic diagram of the refined functional modules of the registration and fusion module in a third embodiment of the long-exposure panoramic image shooting apparatus of the present application;
FIG. 6 is a schematic flowchart of a first embodiment of the long-exposure panoramic image shooting method of the present application;
FIG. 7 is a refined flowchart of the step of projecting the acquired images into a preset projection space to form corresponding projection images, in a second embodiment of the long-exposure panoramic image shooting method of the present application;
FIG. 8 is a refined flowchart of the step of registering and fusing, one by one, the subsequent projection images of multiple viewing angles satisfying the preset projection condition with the reference projection image to form a new reference projection image until all acquired subsequent projection images have been traversed, in a third embodiment of the long-exposure panoramic image shooting method of the present application.
The implementation, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
EMBODIMENTS OF THE PRESENT INVENTION
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit it.
The long-exposure panoramic image shooting apparatus of the embodiments of the present invention can be applied in a terminal. Terminals implementing embodiments of the present application will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted merely to facilitate the description and have no specific meaning of their own; "module" and "component" may therefore be used interchangeably.
Mobile terminals can be implemented in various forms. For example, the terminals described in this application may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal; however, those skilled in the art will appreciate that, apart from elements specific to mobile use, the configurations according to the embodiments of the present application can also be applied to fixed terminals.
FIG. 1 is a schematic diagram of the hardware structure of an optional mobile terminal implementing embodiments of the present application.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, a long-exposure panoramic image shooting apparatus 200, and the like. FIG. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication device or network.
The A/V input unit 120 is configured to receive audio or video signals and may include a camera 121, which processes image data of still pictures or video obtained by an image capture device in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 151, stored in the memory 160 (or another storage medium), or transmitted via the wireless communication unit 110; two or more cameras 121 may be provided depending on the configuration of the mobile terminal.
The user input unit 130 may generate key input data from commands input by the user to control various operations of the mobile terminal. It allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (for example, a touch-sensitive component that detects changes in resistance, pressure, capacitance, and so on caused by contact), a scroll wheel, a joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 as a layer, a touch screen is formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (for example, its open or closed state), its position, the presence or absence of user contact (i.e., touch input), its orientation, and its movement direction and tilt angle, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, the sensing unit 140 includes an accelerometer 141 for detecting the real-time acceleration of the mobile terminal 100 so as to derive its movement direction, and a gyroscope 142 for detecting the tilt angle of the mobile terminal 100 relative to the plane in which it lies. In addition, the sensing unit 140 can detect whether the power supply unit 190 is supplying power and whether the interface unit 170 is coupled to an external device.
The interface unit 170 serves as an interface through which at least one external device can connect to the mobile terminal 100. External devices may include, for example, wired or wireless headset ports, external power (or battery charger) ports, wired or wireless data ports, memory card ports, ports for connecting a device having an identification module, audio input/output (I/O) ports, video I/O ports, and earphone ports. The identification module stores various information for authenticating the user of the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and so on. A device having an identification module (hereinafter, an "identification device") may take the form of a smart card and may thus be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 may receive input (for example, data or power) from an external device and transmit it to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal and an external device.
The output unit 150 is configured to provide output signals (for example, audio, video, or vibration signals) in a visual, audible, and/or tactile manner, and may include a display unit 151, an audio output module 152, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (for example, text messaging or multimedia file downloads). When the mobile terminal 100 is in video call mode or image capture mode, the display unit 151 may display captured and/or received images, and a UI or GUI showing the video or images and related functions.
Meanwhile, when the display unit 151 and the touch pad are layered on each other to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent so that the user can see through them from outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the desired embodiment, the mobile terminal 100 may include two or more display units (or other display means); for example, it may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect the touch input pressure as well as the touch input position and area.
The audio output module 152 may, when the mobile terminal is in call signal reception mode, call mode, recording mode, speech recognition mode, broadcast reception mode, or the like, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. The audio output module 152 may also provide audio output related to specific functions performed by the mobile terminal 100 (for example, a call signal reception tone or a message reception tone), and may include a speaker, a buzzer, and the like.
The memory 160 may store software programs for the processing and control operations performed by the controller 180, and may temporarily store data that has been or is to be output (for example, a phone book, messages, still images, or video). It may also store data on the vibrations and audio signals of the various patterns output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical discs, and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 typically controls the overall operation of the mobile terminal; for example, it performs the control and processing related to voice calls, data communication, video calls, and so on. The controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; for example, the multimedia module 181 may display the long-exposure panoramic image as it is generated in real time. The multimedia module 181 may be built into the controller 180 or configured separately from it. The controller 180 may also perform pattern recognition to recognize handwriting or drawing input on the touch screen as characters or images.
The power supply unit 190 receives external or internal power under the control of the controller 180 and supplies the appropriate power required to operate the elements and components.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases such an implementation may reside in the controller 180. For a software implementation, embodiments such as procedures or functions may be implemented as separate software modules each of which performs at least one function or operation. The software code may be implemented as a software application (or program) written in any suitable programming language, stored in the memory 160, and executed by the controller 180.
The long-exposure panoramic image shooting apparatus 200 includes an image acquisition module 10, an image projection module 20, a registration and fusion module 30, and a back-projection module 40, where:
the image acquisition module 10 is configured to acquire, when a long-exposure shooting instruction is received, the images periodically captured by the viewfinder of the mobile terminal;
the image projection module 20 is configured to project the acquired images into a preset projection space to form corresponding projection images, the projection image of the first acquired frame serving as the reference projection image and the projection images of subsequently acquired frames serving as subsequent projection images;
the registration and fusion module 30 is configured to register and fuse the subsequent projection images one by one with the reference projection image to form a new reference projection image, until all acquired subsequent projection images have been traversed, so as to obtain an output projection image; and
the back-projection module 40 is configured to back-project the output projection image into the coordinate space of the images captured by the viewfinder of the mobile terminal to generate the long-exposure panoramic image.
此外,由于本案主要涉及一种长曝光全景图像拍摄装置和方法,所以对图1中移动终端100硬件结构中相机121的电气结构进行详细介绍,参照图2,图2为图1中相机的电气结构框图。In addition, since the present invention mainly relates to a long-exposure panoramic image capturing apparatus and method, the electrical structure of the camera 121 in the hardware structure of the mobile terminal 100 in FIG. 1 is described in detail. Referring to FIG. 2, FIG. 2 is an electrical structure of the camera of FIG. block diagram.
摄影镜头1211由用于形成被摄体像的多个光学镜头构成,为单焦点镜头或变焦镜头。摄影镜头1211在镜头驱动器1221的控制下能够在光轴方向上移动,镜头驱动器1221根据来自镜头驱动控制电路1222的控制信号,控制摄影镜头1211的焦点位置,在变焦镜头的情况下,也可控制焦点距离。镜头驱动控制电路1222按照来自微型计算机1217的控制命令进行镜头驱动器1221的驱动控制。The photographic lens 1211 is composed of a plurality of optical lenses for forming a subject image, and is a single focus lens or a zoom lens. The photographic lens 1211 is movable in the optical axis direction under the control of the lens driver 1221, and the lens driver 1221 controls the focus position of the photographic lens 1211 in accordance with a control signal from the lens driving control circuit 1222, and can also be controlled in the case of the zoom lens. Focus distance. The lens drive control circuit 1222 performs drive control of the lens driver 1221 in accordance with a control command from the microcomputer 1217.
在摄影镜头1211的光轴上、由摄影镜头1211形成的被摄体像的位置附近配置有摄像元件1212。摄像元件1212用于对被摄体像摄像并取得摄像图像数据。在摄像元件1212上二维且呈矩阵状配置有构成每个像素的光电二极管。每个光电二极管产生与受光量对应的光电转换电流,该光电转换电流由与每个光电二极管连接的电容器进行电荷蓄积。每个像素的前表面配置有拜耳排列的RGB滤色器。 An imaging element 1212 is disposed on the optical axis of the photographic lens 1211 near the position of the subject image formed by the photographic lens 1211. The imaging element 1212 is for capturing an image of a subject and acquiring captured image data. Photodiodes constituting each pixel are arranged two-dimensionally and in a matrix on the imaging element 1212. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, which is subjected to charge accumulation by a capacitor connected to each photodiode. The front surface of each pixel is provided with a Bayer array of RGB color filters.
摄像元件1212与摄像电路1213连接,该摄像电路1213在摄像元件1212中进行电荷蓄积控制和图像信号读出控制,对该读出的图像信号(模拟图像信号)降低重置噪声后进行波形整形,进而进行增益提高等以成为适当的信号电平。The imaging element 1212 is connected to the imaging circuit 1213. The imaging circuit 1213 performs charge accumulation control and image signal readout control in the imaging element 1212, and performs waveform shaping after reducing the reset noise of the read image signal (analog image signal). Further, gain improvement or the like is performed to obtain an appropriate signal level.
The imaging circuit 1213 is connected to an A/D converter 1214, which performs analog-to-digital conversion on the analog image signal and outputs the resulting digital image signal (hereinafter referred to as image data) to the bus 1227.
The bus 1227 is a transmission path for transferring various data read out or generated inside the camera. The A/D converter 1214 is connected to the bus 1227, as are an image processor 1215, a JPEG processor 1216, a microcomputer 1217, an SDRAM (Synchronous Dynamic Random Access Memory) 1218, a memory interface (hereinafter referred to as memory I/F) 1219, and an LCD (Liquid Crystal Display) driver 1220.
The image processor 1215 performs various kinds of image processing, such as OB subtraction, white balance adjustment, color matrix calculation, gamma conversion, color difference signal processing, noise removal, simultaneous (demosaicing) processing, and edge processing, on the image data based on the output of the imaging element 1212. The JPEG processor 1216 compresses the image data read out from the SDRAM 1218 according to the JPEG compression method when the image data is recorded on the recording medium 1225. The JPEG processor 1216 also decompresses JPEG image data for image reproduction and display. For decompression, the file recorded on the recording medium 1225 is read out, decompression processing is performed in the JPEG processor 1216, and the decompressed image data is temporarily stored in the SDRAM 1218 and displayed on the LCD 1226. In the present embodiment, the JPEG method is adopted as the image compression/decompression method; however, the compression/decompression method is not limited thereto, and other methods such as MPEG, TIFF, and H.264 may of course be used.
The microcomputer 1217 functions as the control unit of the entire camera and collectively controls its various processing sequences. The operation unit 1223 and the flash memory 1224 are connected to the microcomputer 1217.
The operation unit 1223 includes, but is not limited to, physical or virtual buttons; these may be operation controls such as a power button, a shutter button, an edit button, a moving-image button, a reproduction button, a menu button, a cross key, an OK button, a delete button, an enlarge button, and various other input buttons and keys. The operation unit 1223 detects the operation states of these controls and outputs the detection results to the microcomputer 1217. In addition, a touch panel is provided on the front surface of the LCD 1226 serving as the display; it detects the position touched by the user and outputs that position to the microcomputer 1217. Based on the detection result of the operation position from the operation unit 1223, the microcomputer 1217 executes the various processing sequences corresponding to the user's operation.
The flash memory 1224 stores the programs for executing the various processing sequences of the microcomputer 1217, and the microcomputer 1217 controls the entire camera according to these programs. The flash memory 1224 also stores various adjustment values of the camera; the microcomputer 1217 reads out an adjustment value and controls the camera in accordance with it.
The SDRAM 1218 is an electrically rewritable volatile memory for temporarily storing image data and the like. It temporarily stores the image data output from the A/D converter 1214 and the image data processed by the image processor 1215, the JPEG processor 1216, and so on.
The memory interface 1219 is connected to the recording medium 1225 and controls the writing of image data, and of file headers attached to the image data, to the recording medium 1225, as well as reading from it. The recording medium 1225 is, for example, a memory card that can be freely attached to and detached from the camera body; however, it is not limited thereto and may be a hard disk or the like built into the camera body.
The LCD driver 1220 is connected to the LCD 1226. Image data processed by the image processor 1215 is stored in the SDRAM 1218; when display is required, the image data stored in the SDRAM 1218 is read out and displayed on the LCD 1226. Alternatively, image data compressed by the JPEG processor 1216 is stored in the SDRAM 1218; when display is required, the JPEG processor 1216 reads out the compressed image data from the SDRAM 1218, decompresses it, and the decompressed image data is displayed on the LCD 1226.
The LCD 1226 is arranged on the back of the camera body and displays images. However, the display is not limited thereto, and various display panels such as organic EL panels may be used instead of the LCD 1226.
Based on the hardware structure of the mobile terminal and the electrical structure of the camera described above, embodiments of the long-exposure panoramic image shooting apparatus of the present application are proposed; the long-exposure panoramic image shooting apparatus is a part of the mobile terminal.
Referring to FIG. 3, the present application provides a long-exposure panoramic image shooting apparatus. In a first embodiment, the apparatus includes:
an image acquisition module 10, configured to acquire, when a long-exposure shooting instruction is received, images periodically captured by the viewfinder of the mobile terminal;
When the user wants to take a long-exposure photograph with the mobile terminal, a long-exposure shooting instruction is input to the mobile terminal to switch it into long-exposure mode: the camera shutter of the mobile terminal opens, and the viewfinder of the camera captures the changes of light and shadow in the scene over an extended period. When the mobile terminal camera receives the long-exposure shooting instruction, the viewfinder of the mobile terminal starts to capture images at a fixed interval, for example one frame every 0.5 s, and the image acquisition module 10 collects the images periodically captured by the viewfinder.
an image projection module 20, configured to project the acquired images onto a preset projection space to form corresponding projection images, wherein the projection image corresponding to the first acquired frame serves as the reference projection image and the projection images corresponding to subsequently acquired frames serve as subsequent projection images;
The image projection module 20 projects the images captured by the viewfinder of the mobile terminal onto a preset projection space to form the corresponding projection images. The projection of the acquired images can be performed in several orders. For example, each frame captured by the viewfinder may be projected onto the preset projection space as soon as it is acquired; alternatively, a preset number of frames (for example, 5 frames) may be acquired first and then projected onto the preset projection space one by one; or all frames may be projected one by one only after the viewfinder has finished capturing. The execution order of the projection operation can be set flexibly as needed; for example, if the image processing capability of the mobile terminal is strong, all acquired frames may be projected one by one onto the preset projection space after the viewfinder has finished capturing. The projection image formed by projecting the first frame captured by the viewfinder onto the preset projection space serves as the reference projection image, and the projection images formed by projecting the subsequent frames serve as the subsequent projection images. The projection space may be a cylindrical projection space, a spherical projection space, a cube projection space, or the like.
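As a concrete illustration of projection onto a cylindrical space, the following sketch maps a pixel of a planar viewfinder image onto a cylinder whose radius equals the focal length. The function name, the focal length `f` in pixels, and the image center `(cx, cy)` are illustrative assumptions, not part of the original disclosure.

```python
import math

def cylindrical_project(x, y, f, cx, cy):
    """Map pixel (x, y) of a planar viewfinder image onto a cylindrical
    projection surface of radius f (focal length in pixels); (cx, cy)
    is the image center. Returns coordinates on the unrolled cylinder."""
    theta = math.atan2(x - cx, f)            # horizontal angle around the cylinder
    h = (y - cy) / math.hypot(x - cx, f)     # normalized vertical coordinate
    return f * theta + cx, f * h + cy
```

The image center maps to itself, while off-center pixels are compressed horizontally, which is what lets neighboring views line up on the shared cylinder.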
a registration and fusion module 30, configured to perform, one by one, position registration and pixel fusion between the subsequent projection images of multiple viewing angles that satisfy a preset projection condition and the reference projection image, forming a new reference projection image each time, until all acquired subsequent projection images have been traversed, so as to obtain an output projection image;
The acquired subsequent projection images cover multiple viewing angles. For example, when the user moves the mobile terminal horizontally for long-exposure panoramic shooting, the acquired images correspond to multiple viewing angles along the same horizontal direction, so the resulting subsequent projection images also represent multiple viewing angles relative to the reference projection image. After a subsequent projection image is acquired, the registration and fusion module 30 first determines whether it satisfies the preset projection condition; for example, the preset projection condition may require that the tilt angle by which the subsequent projection image is translated or rotated relative to the reference projection image not exceed a preset angle, that the vertical translation of the subsequent projection image relative to the reference projection image not exceed a preset value, and so on. The reference projection image is then taken as the base image, and the acquired subsequent projection images are registered and pixel-fused with it one by one, in the order in which they were projected onto the preset projection space, until the viewfinder of the mobile terminal stops capturing images. That is, after the subsequent projection images of all images captured by the viewfinder have been position-registered and pixel-fused with the reference projection image, the image obtained by registering and fusing all subsequent projection images into the reference projection image serves as the output projection image. The subsequent projection images of multiple viewing angles and the reference projection image are thus combined into a single wide-angle projection image, which can be back-projected to obtain a wide-angle long-exposure panoramic image.
A subsequent projection image is first position-registered with the reference projection image and then pixel-fused with it to form a new reference projection image; the next subsequent projection image is then registered and fused with this new reference projection image. Position registration is based on image registration techniques, and pixel fusion is based on image fusion techniques.
Image registration refers to the process of matching and superimposing two or more images acquired at different times, by different sensors (imaging devices), or under different conditions (weather, illumination, imaging position, angle, etc.). It is widely used in remote sensing data analysis, computer vision, image processing, and other fields. The general flow of image registration is as follows: first, feature extraction is performed on both images to obtain feature points; matching feature point pairs are found through a similarity measure; spatial coordinate transformation parameters are then computed from the matched feature point pairs; finally, the images are registered using these transformation parameters.
Image fusion refers to processing the image data of the same target collected through multiple source channels with image processing and computer techniques, extracting as much of the useful information from each channel as possible, and finally combining it into a high-quality image, so as to improve the utilization of image information, the accuracy and reliability of computer interpretation, and the spatial and spectral resolution of the original images, which is beneficial for monitoring. The images to be fused are already registered, and their pixel bit widths are consistent.
To better understand the long-exposure panoramic image shooting method of the present application, a specific example is given. Suppose that after receiving the long-exposure shooting instruction, the image acquisition module 10 acquires three frames from the viewfinder of the mobile terminal: image 1, image 2, and image 3, in that order. First, the image projection module 20 projects image 1 onto the preset projection space to form the corresponding projection image 1, which serves as the reference projection image. Next, the image projection module 20 projects the acquired image 2 onto the preset projection space to form the corresponding projection image 2, which serves as a subsequent projection image. The registration and fusion module 30 then performs position registration and pixel fusion between projection image 2 (the subsequent projection image) and projection image 1 (the reference projection image), forming a new reference projection image (denoted projection image 1a). The image projection module 20 then projects the acquired image 3 onto the preset projection space to form the corresponding projection image 3, which serves as the new subsequent projection image, and the registration and fusion module 30 performs position registration and pixel fusion between projection image 3 and projection image 1a, forming the latest reference projection image. Since all acquired images have now been projected, registered, and fused, this latest reference projection image is set as the output projection image.
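The incremental procedure in this example (project each frame, register it against the current reference, fuse, repeat) can be sketched as a simple fold; `project`, `register`, and `fuse` are placeholders for the operations described in the text, passed in as functions.

```python
def long_exposure_panorama(frames, project, register, fuse):
    """Fold captured frames into one output projection image: the first
    frame's projection seeds the reference, and every subsequent
    projection is registered to the current reference and fused in."""
    it = iter(frames)
    reference = project(next(it))              # projection image 1 = reference
    for frame in it:                           # images 2, 3, ...
        subsequent = project(frame)
        aligned = register(reference, subsequent)
        reference = fuse(reference, aligned)   # new reference projection image
    return reference                           # final reference = output projection
```

With three frames this performs exactly the image 1 / image 2 / image 3 sequence of the example: the reference is rebuilt twice and the last reference becomes the output.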
a back projection module 40, configured to back-project the output projection image onto the coordinate space of the images captured by the viewfinder of the mobile terminal, so as to generate a long-exposure panoramic image.
The back projection module 40 performs back projection on the synthesized wide-angle output projection image, that is, it projects the output projection image from the preset projection space back onto the coordinate space of the images captured by the viewfinder of the mobile terminal, generating a long-exposure panoramic image whose viewing angle is wider than that of any single frame captured as the user operates the camera.
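As a sketch of the back projection step, assuming the preset projection space was a cylinder of radius f (the focal length in pixels) centered on the image center (cx, cy), the inverse mapping from the cylinder back to the planar viewfinder coordinate space could look as follows; the function name and parameters are illustrative assumptions, not taken from the disclosure.

```python
import math

def cylinder_to_image(xc, yc, f, cx, cy):
    """Invert a cylindrical projection: map a point (xc, yc) on the
    unrolled cylinder of radius f back to the planar coordinate space
    of the viewfinder image with center (cx, cy)."""
    theta = (xc - cx) / f                 # angle around the cylinder axis
    x = cx + f * math.tan(theta)          # planar horizontal coordinate
    y = cy + (yc - cy) / math.cos(theta)  # planar vertical coordinate
    return x, y
```

Iterating this over every pixel position of the output image, and sampling the output projection image at the computed cylinder position, yields the final planar long-exposure panorama.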
In this embodiment, when a long-exposure shooting instruction is received, the image acquisition module 10 acquires the images periodically captured by the viewfinder of the mobile terminal; the image projection module 20 then projects the acquired images onto the preset projection space to form the corresponding projection images, where the projection image of the first frame serves as the reference projection image and the projection images of subsequent frames serve as subsequent projection images; the registration and fusion module 30 performs position registration and pixel fusion between the subsequent projection images of multiple viewing angles that satisfy the preset projection condition and the reference projection image one by one, forming a new reference projection image each time, until all acquired subsequent projection images have been traversed, so as to obtain the output projection image; finally, the back projection module 40 back-projects the output projection image onto the coordinate space of the viewfinder images to generate the long-exposure panoramic image. Thanks to the position registration and pixel fusion between the subsequent projection images and the reference projection image, images captured while the mobile terminal shakes slightly (for example, when the user holds the mobile terminal by hand for long-exposure shooting) can still be combined into a long exposure. This strengthens the mobile terminal's tolerance to shake in long-exposure mode, so the user does not need to carry a fixing device for outdoor long-exposure shooting, making the long-exposure function of the mobile terminal more convenient to use. Moreover, the subsequent projection images of multiple viewing angles and the reference projection image are combined into a single wide-angle output projection image, which is then back-projected to obtain a wide-angle long-exposure panoramic image. By combining long-exposure shooting with the projection techniques of panoramic shooting, the user can capture wide-angle long-exposure panoramic images beyond the camera's field of view, overcoming the limitation of related-art methods in which a fixed mobile terminal camera can only shoot scenes at a fixed, limited viewing angle.
Further, based on the first embodiment of the long-exposure panoramic image shooting apparatus of the present application, a second embodiment is proposed. In the second embodiment, referring to FIG. 4, the image projection module 20 includes:
a parameter acquisition unit 21, configured to acquire motion parameters of the mobile terminal, the motion parameters including a motion direction and a tilt angle;
When acquisition of viewfinder images begins, the parameter acquisition unit 21 uses the accelerometer and gyroscope installed inside the mobile terminal to sense the direction of movement and the tilt angle with which the user holds the mobile terminal; alternatively, an image registration method is used to extract features from the current image, and the parameter acquisition unit 21 finds the alignment position to determine the direction of movement and the tilt angle.
a space determination unit 22, configured to determine the projection space used for image projection according to the acquired motion parameters;
The space determination unit 22 determines the applicable projection space according to the motion direction and tilt angle of the mobile terminal. For example, when the tilt angle is smaller than a preset angle and the motion is a purely horizontal 360-degree rotation, the cylindrical projection space is chosen as the preset projection space for image projection. When the tilt angle of the mobile terminal exceeds the preset angle, or the vertical translation exceeds a preset value, the user is prompted or the overly tilted image is discarded. When the translations in both the horizontal and vertical directions exceed the preset value, the spherical projection space is chosen as the projection space; the spherical projection space can synthesize scenes in which the mobile terminal camera moves spherically in three dimensions.
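The decision rule of the space determination unit 22 can be sketched as below; the threshold values and the return labels are illustrative assumptions, since the actual preset angle and preset value are not specified in the text.

```python
def choose_projection_space(tilt_deg, horiz_shift, vert_shift,
                            preset_angle=15.0, preset_shift=0.1):
    """Pick a projection space from the motion parameters: small tilt
    with a mainly horizontal sweep -> cylinder; translation in both
    directions -> sphere; excessive tilt -> prompt the user / discard."""
    if tilt_deg > preset_angle:
        return "discard"                 # overly tilted frame
    if horiz_shift > preset_shift and vert_shift > preset_shift:
        return "spherical"               # free 3-D (spherical) motion
    return "cylindrical"                 # horizontal sweep only
```

A real implementation would feed this with the accelerometer/gyroscope readings gathered by the parameter acquisition unit 21.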
a projection unit 23, configured to project the acquired images onto the determined projection space to form the corresponding projection images.
After the images captured while the user holds and moves the mobile terminal have been projected onto the determined projection space, the subsequent registration and pixel fusion processes can be performed within that same projection space. The projection image corresponding to the first acquired frame serves as the reference projection image, and the projection images corresponding to subsequently acquired frames serve as the subsequent projection images.
In this embodiment, the parameter acquisition unit 21 acquires the motion parameters of the mobile terminal and determines the motion trend of the user holding the mobile terminal for long-exposure shooting, so that the space determination unit 22 can select a suitable projection space. This improves the quality of the projection onto the determined projection space and the completeness of the projection images, and facilitates the subsequent registration and fusion between projection images, thereby improving the light-and-shadow effect of images obtained by long-exposure panoramic shooting on the mobile terminal.
Further, based on the first embodiment of the long-exposure panoramic image shooting apparatus of the present application, a third embodiment is proposed. In the third embodiment, referring to FIG. 5, the registration and fusion module 30 includes:
a feature extraction unit 31, configured to extract reference features from the reference projection image, and to extract, one by one, the matching features corresponding to the reference features from the subsequent projection images of multiple viewing angles that satisfy the preset projection condition;
In the image registration process, a transformation model is first selected as a hypothesis. For a global image transformation, a geometric, similarity, affine, or projective transformation may be chosen; for a local transformation, the image may be divided into parts and separate registration parameters computed for each part. The registration parameters are used to align the reference projection image with the subsequent projection image. The feature extraction unit 31 then extracts registration features; methods for extracting registration features include feature point matching, the optical flow method, cross-correlation, and the like. Specifically:
Feature point matching: a number of feature points are extracted from the reference frame (the reference projection image); the corresponding feature points are then extracted from, or searched for in, the frame to be registered (the subsequent projection image); and the positions of these feature points are used as data to solve for the registration parameters of the frame to be registered relative to the reference frame.
The optical flow method: the instantaneous velocity of the pixel motion of a spatially moving object on the observed imaging plane is estimated. It uses the change of pixels in the image sequence over time and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and thereby calculates the motion information of objects between adjacent frames.
Cross-correlation: the images are transformed into the frequency domain using the Fourier transform; the cross-correlation formula is then used to compute the correlation of the frame to be registered at every position in the spatial domain, and the position of maximum correlation is taken as the registration parameter of the frame to be registered relative to the reference frame.
a registration unit 32, configured to register and combine the subsequent projection images that satisfy the preset projection condition with the reference projection image according to the positional relationship between the matching features and the reference features;
The registration unit 32 derives the registration parameters from the positional relationship between the matching features and the reference features (for example, the registration parameter may be a two- or three-dimensional displacement vector), and registers and combines (that is, aligns and joins) the subsequent projection image that satisfies the preset projection condition with the reference projection image according to these parameters. For example, suppose there are three matching features: from the positional relationship between the three matching features and the three corresponding reference features, the direction and distance by which the matching features must move toward the reference features are derived, and every pixel of the subsequent projection image is moved by this direction and distance, yielding the moved position of every pixel of the subsequent projection image. The matching features of the subsequent projection image are thus aligned with the reference features of the reference projection image, and the identical image regions of the subsequent and reference projection images coincide.
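For the simplest registration parameter, a pure two-dimensional translation, the displacement that carries the matching features onto the reference features is just their mean offset. The following sketch assumes three (or more) matched point pairs, as in the example above; the function names are illustrative.

```python
def estimate_translation(ref_pts, match_pts):
    """Mean displacement that moves each matching feature of the
    subsequent projection image onto its reference feature."""
    n = len(ref_pts)
    dx = sum(r[0] - m[0] for r, m in zip(ref_pts, match_pts)) / n
    dy = sum(r[1] - m[1] for r, m in zip(ref_pts, match_pts)) / n
    return dx, dy

def apply_translation(pixels, dx, dy):
    """Move every pixel of the subsequent projection image by (dx, dy)."""
    return [(x + dx, y + dy) for x, y in pixels]
```

Averaging over the pairs makes the estimate robust to small feature-localization noise; richer models (affine, projective) replace the mean offset with a least-squares solve.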
Taking the affine transformation model as an example, the transformation has 6 parameters and is represented by a 2×3 matrix:

A = | a11 a12 a13 |
    | a21 a22 a23 |

A can be obtained by the registration algorithm. Then, for a point p = (x, y, 1)^T of the current input image frame (the subsequent projection image) on the projection cylinder, the registered coordinate position is

p' = (x', y')^T = A · (x, y, 1)^T

In the same way, the registered coordinate positions of all pixels of the current input image are obtained, and the current input image is aligned with the reference projection image according to these registered coordinates.
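The per-pixel application of the affine registration matrix A described above can be written directly; the 2×3 nested-tuple representation of A is an illustrative choice.

```python
def apply_affine(A, x, y):
    """Apply the 2x3 affine registration matrix A to the homogeneous
    point (x, y, 1), returning the registered coordinate position."""
    (a11, a12, a13), (a21, a22, a23) = A
    return a11 * x + a12 * y + a13, a21 * x + a22 * y + a23
```

Running this over every pixel coordinate of the subsequent projection image yields the aligned positions used in the fusion step.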
a fusion unit 33, configured to perform pixel fusion in the region where a subsequent projection image that satisfies the preset projection condition overlaps the reference projection image, and direct stitching in the non-overlapping region, so as to form a new reference projection image, until all acquired subsequent projection images have been traversed, and to take the finally formed reference projection image as the output projection image.
For each pixel of the current input frame image (the subsequent projection image) that satisfies the preset projection condition, it is first determined whether that pixel overlaps a pixel of the existing panoramic image (the reference projection image). The determination works as follows: when the panorama is initialized, the color of every pixel position is set to 0. For each pixel of the newly input frame, at its composited position, if the color of the existing pixel is all zeros there is no overlap; if the color is not all zeros, there is overlap.
After the subsequent projection image satisfying the preset projection condition is aligned with the reference projection image, the fusion unit 33 performs pixel fusion on the overlapping region of the two images: based on the image fusion technique, the color value of each subsequent-projection-image pixel in the overlapping region is fused with the color value of the corresponding reference-projection-image pixel to obtain the fused pixel color value. For the region where the subsequent projection image does not overlap the reference projection image, the fusion unit 33 stitches the pixels directly: if the subsequent projection image covers an area outside the reference projection image, this indicates that the mobile terminal has captured a new image due to its movement, and the newly captured portion is stitched directly onto the corresponding position of the reference projection image. The image formed by this pixel fusion and stitching serves as the new reference projection image. In the same way, the other subsequent projection images are processed against the updated reference projection image, and the loop continues until all acquired subsequent projection images have been traversed. In addition, the image formed after each subsequent projection image is registered and fused with the reference projection image may be displayed synchronously in the display area of the mobile terminal for the user to view.
After the mobile terminal viewfinder stops acquiring images and all acquired subsequent projection images have been registered and fused with the reference projection image, the fusion unit 33 takes the finally formed reference projection image as the output projection image.
In this embodiment, by comparing the reference features in the reference projection image with the matching features in the subsequent projection image, the positional relationship between the two images is obtained, i.e., the registration parameters of the subsequent projection image relative to the reference projection image. The subsequent projection image is then aligned with the reference projection image according to this positional relationship, after which pixel fusion is performed on their overlapping region and direct stitching on the non-overlapping region to form a new reference projection image, until all acquired subsequent projection images have been traversed. Finally, the resulting reference projection image is taken as the output projection image. Each subsequent projection image is thus registered and fused in turn with the continually updated reference projection image, forming the panoramic output projection image corresponding to the long-exposure shot. Images from multiple viewing angles can be composited into a single wide-angle panoramic output projection image, and the long-exposure shooting function still works when the mobile terminal shakes within a certain amplitude, which extends the applicable range of long-exposure shooting on mobile terminals.
Further, on the basis of the third embodiment of the long-exposure panoramic image shooting apparatus of the present application, a fourth embodiment of the long-exposure panoramic image shooting apparatus is proposed. In the fourth embodiment:
The image acquisition module 10 is further configured to acquire motion parameters of the mobile terminal when acquiring the images periodically captured by the mobile terminal viewfinder, the motion parameters including a moving direction and a tilt angle.
When acquisition of the images captured by the mobile terminal viewfinder begins, the moving direction and tilt angle with which the user holds the mobile terminal are sensed by the accelerometer and gyroscope installed inside the mobile terminal; alternatively, features are extracted from the current image using an image registration method and the alignment position is found, so as to determine the moving direction and tilt angle with which the user holds the mobile terminal.
The registration unit 32 is further configured to:
determine, according to the motion parameters of the mobile terminal, the combination direction of the subsequent projection image that satisfies the preset projection condition with the reference projection image; and
register and combine the subsequent projection image that satisfies the preset projection condition with the reference projection image according to the combination direction and the positional relationship between the matching features and the reference features.
According to the motion parameters of the mobile terminal, the registration unit 32 can roughly determine the direction in which the subsequent projection image satisfying the preset projection condition should be combined with the reference projection image. For example, if the mobile terminal moves horizontally to the right, the subsequent projection image should be joined to the right side of the reference projection image, i.e., the combination direction is rightward. After the combination direction is determined, the registration unit 32 can, according to the positional relationship between the matching features and the reference features, quickly align the subsequent projection image with the reference projection image and register and combine them.
In this embodiment, the motion parameters of the mobile terminal are acquired while acquiring the images periodically captured by the viewfinder, and the combination direction of the subsequent projection image satisfying the preset projection condition with the reference projection image is determined from the moving direction of the mobile terminal. After the combination direction is determined, the subsequent projection image can be quickly aligned, registered, and combined with the reference projection image according to the positional relationship between the matching features and the reference features, which improves the efficiency of position registration between the subsequent projection image and the reference projection image.
Further, on the basis of the third embodiment of the long-exposure panoramic image shooting apparatus of the present application, a fifth embodiment of the long-exposure panoramic image shooting apparatus is proposed. In the fifth embodiment:
The fusion unit 33 is further configured to:
acquire the input color value, reference color value, and fusion count of each pixel position in the overlapping region between the subsequent projection image satisfying the preset projection condition and the reference projection image; and
adjust the color value of each pixel in the overlapping region according to the input color value, reference color value, and fusion count to complete pixel fusion, where the input color value is the color value of the subsequent projection image at that pixel position in the overlapping region, the reference color value is the color value of the reference projection image at that pixel position in the overlapping region, and the fusion count is the number of times pixel fusion has been performed at that pixel position of the reference projection image.
The fusion unit 33 acquires the input color value of each pixel of the subsequent projection image (satisfying the preset projection condition) in the region overlapping the reference projection image, together with the reference color value and fusion count of each pixel of the reference projection image in that overlapping region; it then adjusts the color value of each pixel in the overlapping region according to the input color value, reference color value, and fusion count to complete pixel fusion. For example, the color value of each pixel in the overlapping region can be computed by the weighted average method: let the fusion count at a given pixel position in the overlapping region be $F$, the reference color value be $C_F$, and the input color value be $C_P$; the composite color value is then

$C = \dfrac{F \cdot C_F + C_P}{F + 1}$

After synthesis, the recorded fusion count F is incremented by 1. The color values in this embodiment are values of hexadecimal color codes.
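The weighted-average update above can be sketched as a running-average function (a minimal illustration; function and variable names are not from the patent):

```python
def fuse_pixel(ref_color, input_color, fuse_count):
    """Running-average pixel fusion at one position.

    ref_color: color already in the panorama (C_F); input_color: color from
    the new frame (C_P); fuse_count: how many times this position has been
    fused so far (F). Returns (new_color, new_count), where
    new_color = (F * C_F + C_P) / (F + 1) and the count is incremented.
    """
    new_color = (fuse_count * ref_color + input_color) / (fuse_count + 1)
    return new_color, fuse_count + 1

# Fusing a second frame into a once-written pixel averages the two colors;
# a third frame then contributes with weight 1/3, and so on.
color, count = fuse_pixel(100.0, 200.0, 1)   # -> (150.0, 2)
color, count = fuse_pixel(color, 90.0, count)  # -> (130.0, 3)
```

This incremental form gives every contributing frame equal weight in the final color without storing all frames, which is what allows the long-exposure effect to accumulate frame by frame.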
In this embodiment, the input color value, reference color value, and fusion count of each pixel position in the overlapping region between the subsequent projection image and the reference projection image are acquired, and the color value of each pixel in the overlapping region is adjusted based on a weighted-average algorithm according to these values to complete pixel fusion. The fused color value of each overlapping pixel is thereby correlated with the color values of the corresponding pixels of all subsequent projection images, giving a better pixel fusion effect.
The present application further provides a long-exposure panoramic image shooting method. In the first embodiment of the long-exposure panoramic image shooting method, referring to FIG. 6, the method includes:
Step S10: when a long-exposure shooting instruction is received, acquiring images periodically captured by the mobile terminal viewfinder.
When the user needs to take a long-exposure photograph with the mobile terminal, a long-exposure shooting instruction is input to the mobile terminal to control it to enter the long-exposure mode; the camera shutter opens, and the camera viewfinder captures the changes of light and shadow in the scene over a long period. When the camera of the mobile terminal receives the long-exposure shooting instruction, the viewfinder begins to capture images at fixed intervals, for example one frame every 0.5 s, and the long-exposure panoramic image shooting apparatus collects the images periodically captured by the viewfinder.
Step S20: projecting the acquired images into a preset projection space to form corresponding projection images, where the projection image corresponding to the acquired first frame image serves as the reference projection image and the projection images corresponding to the acquired subsequent images serve as the subsequent projection images.
The images captured by the mobile terminal viewfinder are projected into the preset projection space to form the corresponding projection images. The projection can be scheduled in several ways: each frame can be projected into the preset projection space as soon as it is acquired; a preset number of frames (for example 5) can be acquired and then projected one by one in a batch; or all frames can be projected one by one after the viewfinder has finished capturing. The schedule can be set flexibly as needed; for example, if the mobile terminal has strong image processing capability, all acquired frames can be projected one by one after capture is complete. The projection image formed by projecting the first frame captured by the viewfinder into the preset projection space serves as the reference projection image, and the projection images formed by projecting the subsequent frames into the preset projection space serve as the subsequent projection images. The projection space includes a cylindrical projection space, a spherical projection space, a cube projection space, and the like.
Step S30: performing position registration and pixel fusion of the subsequent projection images of multiple viewing angles that satisfy the preset projection condition, one by one, with the reference projection image to form a new reference projection image, until the acquired subsequent projection images have been traversed, so as to obtain the output projection image.
The acquired subsequent projection images cover multiple viewing angles. For example, when the user moves the mobile terminal horizontally for long-exposure panoramic shooting, the acquired images show multiple viewing angles along the same horizontal direction, so the resulting subsequent projection images also represent multiple viewing angles relative to the reference projection image. After a subsequent projection image is obtained, it is first determined whether it satisfies the preset projection condition; for example, the preset projection condition may require that the tilt angle of the subsequent projection image relative to the reference projection image (by translation or rotation) not exceed a preset angle, and that its vertical translation relative to the reference projection image not exceed a preset value. The reference projection image is then taken as the base image, and the acquired subsequent projection images are position-registered and pixel-fused with it one by one, in the order in which they were projected into the preset projection space, until the mobile terminal viewfinder stops capturing images. That is, after the subsequent projection images of all images captured by the viewfinder have been registered and fused with the reference projection image in the preset projection space, the image obtained by registering and fusing all subsequent projection images into the reference projection image serves as the output projection image. The subsequent projection images of multiple viewing angles and the reference projection image are thereby composited into a single wide-angle projection image, from which a wide-angle long-exposure panoramic image can be obtained by back projection.
A subsequent projection image is first position-registered with the reference projection image and then pixel-fused with it to form a new reference projection image; the next subsequent projection image is then position-registered and pixel-fused with that new reference projection image, and so on. Position registration is based on image registration techniques, and pixel fusion is based on image fusion techniques.
The above image registration technique refers to the process of matching and superimposing two or more images acquired at different times, by different sensors (imaging devices), or under different conditions (weather, illumination, camera position and angle, etc.); it is widely used in remote sensing data analysis, computer vision, image processing, and other fields. The general flow of image registration is as follows: first, features are extracted from the two images to obtain feature points; matching feature point pairs are found through a similarity measure; the image-space coordinate transformation parameters are then obtained from the matched feature point pairs; finally, image registration is performed using these transformation parameters.
The above image fusion technique refers to processing the image data of the same target collected from multiple source channels, using image processing and computer techniques, to extract as much of the useful information in each channel as possible and finally combine it into a high-quality image. This improves the utilization of image information, the accuracy and reliability of computer interpretation, and the spatial and spectral resolution of the original images, which is beneficial for monitoring. The images to be fused are already registered and have a consistent pixel bit width.
To better understand the long-exposure panoramic image shooting method of the present application, a concrete example follows. Suppose that after the long-exposure shooting instruction is received, three frames are captured by the mobile terminal viewfinder, in order: image 1, image 2, and image 3. First, image 1 is projected into the preset projection space to form the corresponding projection image 1, which serves as the reference projection image. The acquired image 2 is then projected into the preset projection space to form the corresponding projection image 2, which serves as the subsequent projection image; projection image 2 (the subsequent projection image) is position-registered and pixel-fused with projection image 1 (the reference projection image) to form a new reference projection image (call it projection image 1a). Next, the acquired image 3 is projected into the preset projection space to form the corresponding projection image 3, the new subsequent projection image; projection image 3 is position-registered and pixel-fused with projection image 1a to form the latest reference projection image. Since all acquired images have now been projected, registered, and fused, this latest reference projection image is set as the output projection image.
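The three-frame example above follows a simple fold over the captured frames, which can be sketched as below. The stage names `project`, `register`, and `fuse` are illustrative placeholders for the projection, position-registration, and pixel-fusion steps, not names used by the patent:

```python
def build_output_projection(frames, project, register, fuse):
    """Fold captured frames into one output projection image.

    project(frame) maps a frame into the projection space; register(ref, img)
    aligns img to ref; fuse(ref, aligned) blends the overlap and stitches the
    non-overlapping region, returning the new reference projection image.
    """
    reference = project(frames[0])      # first frame -> reference projection
    for frame in frames[1:]:            # each subsequent frame, in capture order
        subsequent = project(frame)
        aligned = register(reference, subsequent)
        reference = fuse(reference, aligned)
    return reference                    # final reference = output projection

# Toy stand-in stages: identity projection/registration, set-union "fusion".
result = build_output_projection(
    [{1}, {2}, {3}],
    project=lambda f: set(f),
    register=lambda ref, img: img,
    fuse=lambda ref, img: ref | img,
)
# result == {1, 2, 3}
```

The key structural point the sketch captures is that the reference projection image is updated after every fusion, so each subsequent projection image is registered against the accumulated panorama rather than against the original first frame.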
Step S40: back-projecting the output projection image into the coordinate space of the images captured by the mobile terminal viewfinder to generate the long-exposure panoramic image.
Back projection is performed on the composited wide-angle output projection image: the output projection image is projected back from the preset projection space into the coordinate space of the images captured by the mobile terminal viewfinder, so as to generate a wide-angle long-exposure panoramic image covering the viewing angles swept by the user while operating the mobile terminal camera.
In this embodiment, when the long-exposure shooting instruction is received, the images periodically captured by the mobile terminal viewfinder are acquired; the acquired images are projected into the preset projection space to form the corresponding projection images, where the projection image corresponding to the acquired first frame image serves as the reference projection image and the projection images corresponding to the acquired subsequent images serve as the subsequent projection images; the subsequent projection images of multiple viewing angles that satisfy the preset projection condition are position-registered and pixel-fused with the reference projection image one by one to form a new reference projection image, until the acquired subsequent projection images have been traversed, so as to obtain the output projection image; finally, the output projection image is back-projected into the coordinate space of the images captured by the viewfinder to generate the long-exposure panoramic image. Through the position registration and pixel fusion of the subsequent projection images with the reference projection image, images captured while the mobile terminal shakes slightly (for example, when the user holds the terminal by hand for long-exposure shooting) can still be composited into a long exposure. This strengthens the terminal's tolerance to shake in long-exposure mode, so the user need not carry a mount for outdoor long-exposure shooting, making the long-exposure function more convenient to use. Moreover, the subsequent projection images of multiple viewing angles and the reference projection image are composited into a single wide-angle output projection image, which is then back-projected to obtain a wide-angle long-exposure panoramic image. By combining long-exposure shooting with panoramic projection techniques, the user can capture wide-angle long-exposure panoramic images beyond the camera's field of view, overcoming the defect of related-art methods in which a fixed mobile terminal camera can only shoot a scene at a fixed, limited viewing angle.
Further, on the basis of the first embodiment of the long-exposure panoramic image shooting method of the present application, a second embodiment of the long-exposure panoramic image shooting method is proposed. In the second embodiment, referring to FIG. 7, step S20 includes:
Step S21: acquiring motion parameters of the mobile terminal, the motion parameters including a moving direction and a tilt angle.
When acquisition of the images captured by the mobile terminal viewfinder begins, the moving direction and tilt angle with which the user holds the mobile terminal are sensed by the accelerometer and gyroscope installed inside the terminal; alternatively, features are extracted from the current image using an image registration method and the alignment position is found, so as to determine the moving direction and tilt angle with which the user holds the mobile terminal.
Step S22: determining, according to the acquired motion parameters, the projection space to be used for image projection.
The applicable projection space is determined from the moving direction and tilt angle of the mobile terminal. For example, when the tilt angle is smaller than the preset angle and the movement consists only of a 360-degree horizontal rotation, the cylindrical projection space is chosen as the preset projection space for image projection. When the tilt angle of the mobile terminal exceeds the preset angle, or the vertical translation exceeds the preset value, the user is prompted or the excessively tilted image is discarded. When both the horizontal and vertical translations exceed the preset value, the spherical projection space is chosen as the projection space for image projection; the spherical projection space can composite scenes in which the mobile terminal camera moves spherically in three dimensions.
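One plausible reading of this selection logic, sketched with hypothetical threshold values (the patent specifies no concrete numbers, and the ordering of the checks is an assumption):

```python
def choose_projection_space(tilt_deg, horiz_shift, vert_shift,
                            max_tilt=15.0, max_shift=50.0):
    """Pick a projection space from motion parameters.

    Thresholds (max_tilt degrees, max_shift pixels) are illustrative only.
    Returns 'discard' when the frame is tilted too much to use, 'spherical'
    when both horizontal and vertical translation are large (free 3D sweep),
    and 'cylindrical' otherwise (mostly horizontal rotation).
    """
    if tilt_deg > max_tilt:
        return "discard"        # prompt the user / drop the tilted frame
    if horiz_shift > max_shift and vert_shift > max_shift:
        return "spherical"      # camera sweeping in 3D -> spherical space
    return "cylindrical"        # horizontal 360-degree sweep -> cylinder
```

For example, a level horizontal pan yields `"cylindrical"`, while large motion in both axes yields `"spherical"`.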
Step S23: projecting the acquired images into the determined projection space to form the corresponding projection images.
After the images captured while the user holds and moves the mobile terminal are projected into the determined projection space, the subsequent registration and pixel fusion processes can be carried out within that same projection space. The projection image corresponding to the acquired first frame image serves as the reference projection image, and the projection images corresponding to the acquired subsequent images serve as the subsequent projection images.
In this embodiment, the motion parameters of the mobile terminal are acquired to determine the motion tendency of the user holding the terminal during long-exposure shooting, so that a suitable projection space can be selected. This improves the projection quality of the images in the determined projection space and the completeness of the projection images, facilitates the subsequent registration and fusion between projection images, and thus improves the light-and-shadow effect of the images obtained by the mobile terminal in long-exposure panoramic shooting.
Further, on the basis of the first embodiment of the long-exposure panoramic image shooting method of the present application, a third embodiment of the long-exposure panoramic image shooting method is proposed. In the third embodiment, referring to FIG. 8, step S30 includes:
Step S31: extracting reference features from the reference projection image, and extracting, one by one from the subsequent projection images of the multiple viewing angles that satisfy the preset projection condition, the matching features corresponding to the reference features.
In the image registration process, a transformation model is first chosen as a hypothesis. For a global image transformation, a geometric, similarity, affine, or projective transformation may be chosen; for a local transformation, the image may be divided into parts, with separate registration parameters computed for each part. The registration parameters are used to align the reference projection image with the subsequent projection image. Registration features are then extracted; methods for extracting them include feature point matching, the optical flow method, and cross-correlation. Specifically:
Feature point matching refers to extracting a number of feature points from the reference frame (i.e., the reference projection image), then extracting or searching for the corresponding feature points in the frame to be registered (i.e., the subsequent projection image), and solving for the registration parameters of the frame to be registered relative to the reference frame using the positions of the feature points as data.
The optical flow method refers to estimating the instantaneous velocity of pixel motion of moving objects on the imaging plane: the variation of pixels in the image sequence over time and the correlation between adjacent frames are used to find the correspondence between the previous frame and the current frame, and thereby compute the motion information of objects between adjacent frames.
互相关是指：利用傅里叶变换将图像变换到频域，然后用互相关公式计算待配准帧在空间域中每个位置的相关性，取最大的位置作为待配准帧相对参照帧的配准参数。Cross-correlation means transforming the images into the frequency domain using the Fourier transform, then using the cross-correlation formula to compute the correlation of the frame to be registered at each position in the spatial domain, and taking the position of maximum correlation as the registration parameter of the frame to be registered relative to the reference frame.
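The frequency-domain correlation described above can be sketched with phase correlation, a common FFT-based variant of the idea (an illustration with an assumed function name, not code from the disclosure):

```python
import numpy as np

def phase_correlation(ref, target):
    """Find the translation of `target` relative to `ref`: form the normalized
    cross-power spectrum in the frequency domain, transform back, and take the
    position of the correlation peak in the spatial domain."""
    F0 = np.fft.fft2(ref)
    F1 = np.fft.fft2(target)
    cross = np.conj(F0) * F1
    cross /= np.abs(cross) + 1e-12        # keep phase only (whitening)
    corr = np.fft.ifft2(cross).real       # correlation surface; peak = shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # interpret peaks past the midpoint as negative (wrapped) shifts
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dx, dy

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
target = np.roll(ref, shift=(5, 3), axis=(0, 1))  # circularly shifted copy
dx, dy = phase_correlation(ref, target)
```

The peak location directly gives the integer registration parameter; sub-pixel accuracy would require interpolating around the peak.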
步骤S32，根据匹配特征与参考特征之间的位置关系，将满足预设投影条件的后续投影图像与参考投影图像配准结合；Step S32, registering and combining, according to the positional relationship between the matching features and the reference features, the subsequent projection image that satisfies the preset projection condition with the reference projection image;
根据匹配特征与参考特征之间的位置关系，得出配准参数(例如配准参数为一个二维或三维的移动向量)，根据配准参数将满足预设投影条件的后续投影图像与参考投影图像配准结合(即对齐连接)。例如匹配特征有三个，根据三个匹配特征与三个对应参考特征之间的位置关系，得出匹配特征向参考特征的移动方向和移动距离，并根据该移动方向和移动距离移动后续投影图像的每一个像素点，得出后续投影图像每一个像素点移动后的位置，从而让后续投影图像的匹配特征与参考投影图像的参考特征对齐，也让后续投影图像与参考投影图像的相同图像区域重合。According to the positional relationship between the matching features and the reference features, a registration parameter is obtained (for example, a two- or three-dimensional displacement vector), and according to the registration parameter, the subsequent projection image satisfying the preset projection condition is registered and combined with the reference projection image (i.e., aligned and joined). For example, suppose there are three matching features: from the positional relationship between the three matching features and the three corresponding reference features, the direction and distance by which the matching features must move to reach the reference features are obtained, and every pixel of the subsequent projection image is moved by this direction and distance, giving the post-movement position of every pixel of the subsequent projection image. The matching features of the subsequent projection image are thereby aligned with the reference features of the reference projection image, and the identical image regions of the subsequent and reference projection images coincide.
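The translation-only case described above (a common displacement vector estimated from matched features) can be sketched as follows. The function name and point values are hypothetical illustrations, not from the disclosure:

```python
import numpy as np

def register_by_translation(ref_pts, match_pts):
    """Given matched feature positions (N x 2 arrays) in the reference and
    subsequent projection images, estimate the registration parameter as the
    mean displacement vector moving the subsequent image onto the reference."""
    return (np.asarray(ref_pts) - np.asarray(match_pts)).mean(axis=0)

# Three matching features, all displaced by (+4, -2) relative to the reference.
ref_pts   = np.array([[10.0, 20.0], [40.0, 25.0], [30.0, 60.0]])
match_pts = ref_pts - np.array([4.0, -2.0])   # positions in the subsequent image
shift = register_by_translation(ref_pts, match_pts)
# Every pixel of the subsequent projection image is then moved by `shift`,
# so its matching features land on the reference features.
```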
以仿射变换模型为例，变换有6个参数，用2*3矩阵表示Taking the affine transformation model as an example, the transformation has 6 parameters, represented by a 2*3 matrix:

\[ A = \begin{pmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \end{pmatrix} \]

A可由配准算法求出。那么对于投影柱面上当前输入图像帧(即后续投影图像)的一点A can be found by the registration algorithm. Then, for a point of the current input image frame (i.e., the subsequent projection image) on the projection cylinder, written in homogeneous coordinates as

\[ p = \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}, \]

经过配准后坐标位置its registered coordinate position is

\[ p' = \begin{pmatrix} x' \\ y' \end{pmatrix} = A\,p . \]

同理求出当前输入图像所有像素点配准后坐标位置，从而根据所有像素点配准后的坐标位置将当前输入图像与参考投影图像进行对齐。Similarly, the registered coordinate positions of all pixels of the current input image are obtained, and the current input image is aligned with the reference projection image according to these registered positions.
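A minimal numeric check of the 2×3 affine mapping described above. The parameter values below are made up for illustration; in the method, A is produced by the registration algorithm:

```python
import numpy as np

# Illustrative 2x3 affine matrix with 6 parameters: a pure translation here.
A = np.array([[1.0, 0.0, 5.0],    # x' = x + 5
              [0.0, 1.0, -2.0]])  # y' = y - 2

def apply_affine(A, pts):
    """Map N x 2 points through the model p' = A @ (x, y, 1)^T."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    return homo @ A.T

def solve_affine(src, dst):
    """Recover the 2x3 matrix A from >= 3 point correspondences (least squares)."""
    src = np.asarray(src, dtype=float)
    homo = np.hstack([src, np.ones((len(src), 1))])
    At, *_ = np.linalg.lstsq(homo, np.asarray(dst, dtype=float), rcond=None)
    return At.T

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # non-collinear points
out = apply_affine(A, [[10.0, 20.0], [0.0, 0.0]])
A_est = solve_affine(src, apply_affine(A, src))       # round-trips exactly
```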
步骤S33,将满足预设投影条件的后续投影图像与参考投影图像的重叠区域进行像素融合、非重叠区域直接拼接,以形成新的参考投影图像,直至遍历完获取的后续投影图像;Step S33, performing pixel fusion on the overlapping area of the subsequent projection image satisfying the preset projection condition and the reference projection image, and directly splicing the non-overlapping area to form a new reference projection image until the acquired subsequent projection image is traversed;
对当前满足预设投影条件的输入帧图像(即后续投影图像)上每一个像素点，先判断该像素点是否和先前全景图像(即参考投影图像)的像素有重叠。判断方法是：在对全景图进行初始化时，每个像素位置的颜色初始为0；新输入帧图像的每个像素在合成位置判断，如果原有像素的颜色为全0，则不重叠；如果颜色不全为0，则重叠。For each pixel of the input frame image (i.e., the subsequent projection image) that currently satisfies the preset projection condition, it is first determined whether that pixel overlaps a pixel of the previous panoramic image (i.e., the reference projection image). The test is as follows: when the panorama is initialized, the color at every pixel position is set to 0; for each pixel of the new input frame, the color already present at its composite position is examined, and if that color is all zeros there is no overlap, while if it is not all zeros there is overlap.
在满足预设投影条件的后续投影图像与参考投影图像对齐之后，将后续投影图像与参考投影图像的重叠区域进行像素融合，即基于图像融合技术，根据重叠区域后续投影图像像素点的颜色值和参考投影图像像素点的颜色值进行像素融合，得到融合后的像素点颜色值；将后续投影图像与参考投影图像不重合区域进行像素直接拼接，即若参考投影图像之外的区域有后续投影图像，表明移动终端因移动采集了新的图像，将新采集的后续投影图像与参考投影图像对应位置直接拼接；从而经过像素融合和图像拼接，形成的图像作为新的参考投影图像。同理，利用新的参考投影图像对其它后续投影图像进行相同的处理过程，如此循环，直至遍历完或处理完获取的后续投影图像。此外，后续投影图像与参考投影图像配准融合后的图像可同步出现在移动终端显示区域，供用户观看。After the subsequent projection image satisfying the preset projection condition is aligned with the reference projection image, pixel fusion is performed on the overlapping area of the two images: based on image fusion techniques, the color values of the subsequent projection image's pixels in the overlapping area are fused with the color values of the reference projection image's pixels to obtain the fused pixel color values. The non-overlapping areas of the subsequent and reference projection images are spliced directly: if an area outside the reference projection image contains part of the subsequent projection image, the mobile terminal has captured a new image by moving, and that newly captured part is spliced directly onto the corresponding position of the reference projection image. The image formed through this pixel fusion and image splicing serves as the new reference projection image. In the same way, the new reference projection image is used to process the other subsequent projection images, and the cycle repeats until all acquired subsequent projection images have been traversed and processed. In addition, the image obtained after registering and fusing each subsequent projection image with the reference projection image may appear synchronously in the display area of the mobile terminal for the user to view.
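The overlap test and merge step above can be sketched on an initialized-to-zero canvas. The function name is hypothetical, and the fusion rule here assumes the weighted running average of this description's fifth embodiment:

```python
import numpy as np

def merge_into_panorama(canvas, count, frame, mask):
    """Merge an already-aligned frame into the panorama canvas.
    canvas : H x W x 3 float array, initialized to all zeros
    count  : H x W fusion counters (how many frames touched each pixel)
    frame  : H x W x 3 float array, zero outside the frame's footprint
    mask   : H x W bool, True where the aligned frame has pixels
    """
    occupied = np.any(canvas != 0, axis=-1)  # pixel already has color
    overlap = mask & occupied
    fresh = mask & ~occupied
    # overlapping area: weighted running average C = (F*C_F + C_P) / (F + 1)
    F = count[overlap][:, None]
    canvas[overlap] = (F * canvas[overlap] + frame[overlap]) / (F + 1)
    count[overlap] += 1
    # non-overlapping area: splice the new pixels in directly
    canvas[fresh] = frame[fresh]
    count[fresh] = 1
    return canvas, count

canvas = np.zeros((2, 4, 3)); count = np.zeros((2, 4), dtype=int)
f1 = np.zeros_like(canvas); f1[:, :3] = 100.0          # first frame: cols 0-2
m1 = np.zeros((2, 4), bool); m1[:, :3] = True
canvas, count = merge_into_panorama(canvas, count, f1, m1)
f2 = np.zeros_like(canvas); f2[:, 2:] = 50.0           # second frame: cols 2-3
m2 = np.zeros((2, 4), bool); m2[:, 2:] = True
canvas, count = merge_into_panorama(canvas, count, f2, m2)
```

Column 2 overlaps both frames and is averaged, column 3 is spliced directly, matching the overlap test described in the text.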
步骤S34,将最终形成的参考投影图像作为输出投影图像。In step S34, the finally formed reference projection image is taken as an output projection image.
在移动终端取景器停止采集图像、且所有获取的后续投影图像均与参考投影图像配准融合之后，将最终形成的新的参考投影图像作为输出投影图像。After the mobile terminal viewfinder stops capturing images and all acquired subsequent projection images have been registered and fused with the reference projection image, the finally formed new reference projection image is taken as the output projection image.
在本实施例中，通过比较参考投影图像中的参考特征与后续投影图像中的匹配特征，得出参考投影图像与后续投影图像的位置关系，即得出后续投影图像相对参考投影图像的配准参数，然后根据参考投影图像与后续投影图像的位置关系，将后续投影图像与参考投影图像对齐，然后将后续投影图像与参考投影图像的重叠区域进行像素融合、非重叠区域直接拼接以形成新的参考投影图像，直至遍历完获取的后续投影图像；最后将最终形成的参考投影图像作为输出投影图像，从而将每个后续投影图像与更新中的参考投影图像循环进行配准和融合，形成长曝光拍摄图像对应的全景输出投影图像，能够将多个视角的图像合成为一张大视角图像的全景的输出投影图像，并使移动终端处于一定幅度晃动时也能实现长曝光拍摄功能，扩展了移动终端长曝光拍摄适用范围。In this embodiment, by comparing the reference features in the reference projection image with the matching features in the subsequent projection image, the positional relationship between the two images is obtained, i.e., the registration parameters of the subsequent projection image relative to the reference projection image. The subsequent projection image is then aligned with the reference projection image according to this positional relationship, after which pixel fusion is performed on their overlapping area and the non-overlapping areas are spliced directly, forming a new reference projection image, until all acquired subsequent projection images have been traversed; finally, the finally formed reference projection image is taken as the output projection image. Each subsequent projection image is thus cyclically registered and fused with the updating reference projection image, forming the panoramic output projection image corresponding to the long exposure shot. Images from multiple viewing angles can thereby be composed into a single wide-angle panoramic output projection image, and the long exposure shooting function remains usable even when the mobile terminal shakes within a certain amplitude, extending the applicable range of long exposure shooting on mobile terminals.
进一步地，在本申请的长曝光全景图像拍摄方法第三实施例的基础上，提出长曝光全景图像拍摄方法第四实施例，在第四实施例中，长曝光全景图像拍摄方法还包括：Further, on the basis of the third embodiment of the long exposure panoramic image capturing method of the present application, a fourth embodiment of the long exposure panoramic image capturing method is proposed. In the fourth embodiment, the long exposure panoramic image capturing method further includes:
步骤S35，在获取移动终端取景器定时采集的图像时，获取移动终端的运动参数，运动参数包括移动方向和倾斜角度；Step S35, when acquiring the images captured at regular intervals by the mobile terminal viewfinder, acquiring motion parameters of the mobile terminal, the motion parameters including a movement direction and a tilt angle;
在开始获取移动终端取景器采集的图像时，通过安装在移动终端内部的加速度计和陀螺仪感应用户握持移动终端的移动方向和倾斜角度；或者使用图像配准的方法对当前图像提取特征，找到对齐位置，以确定用户握持移动终端的移动方向和倾斜角度。When acquisition of the images captured by the mobile terminal viewfinder begins, the movement direction and tilt angle of the user holding the mobile terminal are sensed by an accelerometer and a gyroscope installed inside the mobile terminal; alternatively, an image registration method is used to extract features from the current image and find an alignment position, so as to determine the movement direction and tilt angle of the user holding the mobile terminal.
步骤S32包括:Step S32 includes:
步骤S321,根据移动终端的运动方向确定满足预设投影条件的后续投影图像与参考投影图像的结合方向;Step S321, determining a combination direction of the subsequent projection image that meets the preset projection condition and the reference projection image according to the moving direction of the mobile terminal;
根据移动终端的运动参数可以粗略确定满足预设投影条件的后续投影图像与参考投影图像的结合方向，例如移动终端水平向右移动，则满足预设投影条件的后续投影图像应该与参考投影图像的右侧进行结合，即后续投影图像与参考投影图像的结合方向是右向结合。From the motion parameters of the mobile terminal, the combination direction of the subsequent projection image satisfying the preset projection condition with the reference projection image can be roughly determined. For example, if the mobile terminal moves horizontally to the right, the subsequent projection image satisfying the preset projection condition should be combined with the right side of the reference projection image, i.e., the combination direction of the subsequent projection image with the reference projection image is rightward.
步骤S322,根据结合方向,以及匹配特征与参考特征之间的位置关系,将后续投影图像与参考投影图像配准结合。Step S322, combining the subsequent projection image with the reference projection image according to the combination direction and the positional relationship between the matching feature and the reference feature.
在确定后续投影图像与参考投影图像的结合方向之后,再根据匹配特征与参考特征之间的位置关系,可快速将后续投影图像与参考投影图像对齐并进行配准结合。After determining the combination direction of the subsequent projection image and the reference projection image, according to the positional relationship between the matching feature and the reference feature, the subsequent projection image can be quickly aligned with the reference projection image and the registration is combined.
在本实施例中，通过在获取移动终端取景器定时采集的图像时，获取移动终端的运动参数，然后根据移动终端的运动方向确定满足预设投影条件的后续投影图像与参考投影图像的结合方向；在确定满足预设投影条件的后续投影图像与参考投影图像的结合方向之后，再根据匹配特征与参考特征之间的位置关系，可快速将后续投影图像与参考投影图像对齐并进行配准结合，提高了后续投影图像与参考投影图像位置配准的效率。In this embodiment, the motion parameters of the mobile terminal are acquired while acquiring the images captured at regular intervals by the mobile terminal viewfinder, and the combination direction of the subsequent projection image satisfying the preset projection condition with the reference projection image is then determined from the movement direction of the mobile terminal. Once this combination direction is determined, the subsequent projection image can, using the positional relationship between the matching features and the reference features, be quickly aligned, registered and combined with the reference projection image, improving the efficiency of position registration between the subsequent and reference projection images.
进一步地，在本申请的长曝光全景图像拍摄方法第三实施例的基础上，提出长曝光全景图像拍摄方法第五实施例，在第五实施例中，将后续投影图像与参考投影图像的重叠区域进行像素融合的步骤包括：Further, on the basis of the third embodiment of the long exposure panoramic image capturing method of the present application, a fifth embodiment of the long exposure panoramic image capturing method is proposed. In the fifth embodiment, the step of performing pixel fusion on the overlapping area of the subsequent projection image and the reference projection image includes:
步骤S331,获取满足预设投影条件的后续投影图像与参考投影图像的重叠区域中每个像素点位置的输入颜色值、参考颜色值和融合次数;Step S331, acquiring an input color value, a reference color value, and a fusion number of each pixel position in an overlapping area of the subsequent projection image and the reference projection image that meet the preset projection condition;
步骤S332，根据输入颜色值、参考颜色值和融合次数，调整重叠区域中每个像素点的颜色值以完成像素融合，其中，输入颜色值为后续投影图像在重叠区域中像素点位置的颜色值、参考颜色值为参考投影图像在重叠区域中像素点位置的颜色值、融合次数为参考投影图像在重叠区域中像素点位置进行像素融合的次数。Step S332, adjusting the color value of each pixel in the overlapping area according to the input color value, the reference color value and the fusion count to complete the pixel fusion, where the input color value is the color value of the subsequent projection image at the pixel position in the overlapping area, the reference color value is the color value of the reference projection image at the pixel position in the overlapping area, and the fusion count is the number of times pixel fusion has been performed at that pixel position of the reference projection image in the overlapping area.
获取满足预设投影条件的后续投影图像中与参考投影图像的重叠区域每个像素点的输入颜色值，获取参考投影图像中与后续投影图像的重叠区域中每个像素点的参考颜色值和像素融合的融合次数；根据重叠区域每个像素点位置的输入颜色值、参考颜色值和融合次数，调整重叠区域中每个像素点的颜色值以完成像素融合。例如，根据加权平均法计算得到重叠区域中每个像素点的颜色值，设重叠区域内某一像素点位置融合次数为F，参考颜色值为C_F，输入颜色值为C_P，则合成后的颜色值为：The input color value of each pixel in the area of the subsequent projection image (satisfying the preset projection condition) that overlaps the reference projection image is acquired, together with the reference color value and pixel-fusion count of each pixel in the overlapping area of the reference projection image; according to the input color value, reference color value and fusion count at each pixel position in the overlapping area, the color value of each pixel in the overlapping area is adjusted to complete the pixel fusion. For example, the color value of each pixel in the overlapping area is computed by a weighted average: let the fusion count at a pixel position in the overlapping area be F, the reference color value be C_F, and the input color value be C_P; the composite color value is then

\[ C = \frac{F \cdot C_F + C_P}{F + 1}. \]
合成后融合次数记录F自增1。该实施例中的颜色值为十六进制颜色码中的值。After the composition, the recorded fusion count F is incremented by 1. The color values in this embodiment are values of hexadecimal color codes.
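A quick numeric check of the running-average rule above: folding frames in one at a time with weight F gives the same result as averaging all overlapping frames equally. The values are illustrative:

```python
def fuse(c_ref, c_in, F):
    """One fusion step: weighted average of the accumulated color c_ref
    (seen F times) with the new input color c_in, per C = (F*C_F + C_P)/(F+1);
    returns the new color and the incremented count."""
    return (F * c_ref + c_in) / (F + 1), F + 1

c, F = 100.0, 1            # first frame contributed color 100 at this pixel
for c_in in (50.0, 90.0):  # two more frames overlap the same pixel
    c, F = fuse(c, c_in, F)
# c is now the plain mean of 100, 50 and 90
```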
在本实施例中，通过获取后续投影图像与参考投影图像的重叠区域中每个像素点位置的输入颜色值、参考颜色值和融合次数，然后基于加权平均算法，并根据输入颜色值、参考颜色值和融合次数，调整重叠区域中每个像素点的颜色值以完成像素融合，使重叠区域像素点颜色值融合与所有后续投影图像对应像素点颜色值相关，使像素融合效果更佳。In this embodiment, the input color value, reference color value and fusion count at each pixel position in the overlapping area of the subsequent projection image and the reference projection image are acquired; then, based on a weighted average algorithm and according to the input color value, reference color value and fusion count, the color value of each pixel in the overlapping area is adjusted to complete the pixel fusion. The fused color value of each pixel in the overlapping area thus depends on the color values of the corresponding pixels in all subsequent projection images, giving a better pixel fusion result.
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。 It is to be understood that the term "comprises", "comprising", or any other variants thereof, is intended to encompass a non-exclusive inclusion, such that a process, method, article, or device comprising a series of elements includes those elements. It also includes other elements that are not explicitly listed, or elements that are inherent to such a process, method, article, or device. An element that is defined by the phrase "comprising a ..." does not exclude the presence of additional equivalent elements in the process, method, item, or device that comprises the element.
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。The serial numbers of the embodiments of the present invention are merely for the description, and do not represent the advantages and disadvantages of the embodiments.
通过以上的实施方式的描述，本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现，当然也可以通过硬件，但很多情况下前者是更佳的实施方式。基于这样的理解，本申请的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中，包括若干指令用以使得一台终端设备(可以是手机，计算机，服务器，空调器，或者网络设备等)执行本申请各个实施例所述的方法。Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to perform the methods described in the various embodiments of the present application.
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。The above is only a preferred embodiment of the present application, and is not intended to limit the scope of the patent application, and the equivalent structure or equivalent process transformations made by the specification and the drawings of the present application, or directly or indirectly applied to other related technical fields. The same is included in the scope of patent protection of this application.
工业实用性Industrial applicability
本申请实施例提供一种长曝光全景图像拍摄装置和方法，加强了移动终端在长曝光拍摄模式下对抖动的适应能力，从而用户在户外进行长曝光拍摄时，无需携带固定装置，使移动终端长曝光拍摄功能使用更方便。The embodiments of the present application provide a long exposure panoramic image capturing apparatus and method, which strengthen the mobile terminal's tolerance to shake in the long exposure shooting mode, so that a user performing long exposure shooting outdoors does not need to carry a fixing device, making the long exposure shooting function of the mobile terminal more convenient to use.

Claims (20)

  1. 一种长曝光全景图像拍摄装置,包括:A long exposure panoramic image capturing device comprising:
    图像获取模块，设置为当接收到长曝光拍摄指令时，获取移动终端取景器定时采集的图像；The image acquisition module is configured to acquire, upon receiving a long exposure shooting instruction, the images captured at regular intervals by the mobile terminal viewfinder;
    图像投影模块，设置为将获取的图像投影至预设投影空间，形成对应的投影图像，其中，将获取的第一帧图像对应的投影图像作为参考投影图像，将获取的后续图像对应的投影图像作为后续投影图像；The image projection module is configured to project the acquired images to a preset projection space to form corresponding projection images, where the projection image corresponding to the acquired first frame image serves as the reference projection image, and the projection images corresponding to the acquired subsequent images serve as subsequent projection images;
    配准融合模块，设置为将满足预设投影条件的多个视角的所述后续投影图像逐个与所述参考投影图像进行位置配准和像素融合，形成新的参考投影图像，直至遍历完获取的后续投影图像，以获取输出投影图像；The registration and fusion module is configured to perform position registration and pixel fusion of the subsequent projection images of multiple viewing angles satisfying the preset projection condition, one by one, with the reference projection image to form a new reference projection image, until the acquired subsequent projection images have been traversed, so as to obtain an output projection image;
    反向投影模块,设置为将所述输出投影图像反向投影至移动终端取景器采集图像的坐标空间以生成长曝光全景图像。A back projection module is configured to backproject the output projected image to a coordinate space of the image acquired by the mobile terminal viewfinder to generate a long exposure panoramic image.
  2. 如权利要求1所述的长曝光全景图像拍摄装置,其中,所述图像投影模块包括:The long exposure panoramic image capturing apparatus according to claim 1, wherein the image projection module comprises:
    参数获取单元,设置为获取移动终端的运动参数,该运动参数包括运动方向和倾斜角度;a parameter obtaining unit, configured to acquire a motion parameter of the mobile terminal, where the motion parameter includes a motion direction and a tilt angle;
    空间确定单元,设置为根据获取的运动参数确定用于图像投影的投影空间;a space determining unit configured to determine a projection space for image projection according to the acquired motion parameter;
    投影单元,设置为将获取的图像投影至确定的投影空间形成对应的投影图像。And a projection unit configured to project the acquired image to the determined projection space to form a corresponding projection image.
  3. 如权利要求1或2所述的长曝光全景图像拍摄装置,其中,所述配准融合模块包括:The long exposure panoramic image capturing apparatus according to claim 1 or 2, wherein the registration fusion module comprises:
    特征提取单元,设置为从所述参考投影图像中提取参考特征,并逐个从满足预设投影条件的多个视角的所述后续投影图像中提取与所述参考特征对应的匹配特征;a feature extraction unit configured to extract a reference feature from the reference projection image, and extract matching features corresponding to the reference feature from the subsequent projection images of the plurality of perspectives satisfying the preset projection condition one by one;
    配准单元,设置为根据所述匹配特征与参考特征之间的位置关系,将满足预设投影条件的所述后续投影图像与参考投影图像配准结合; The registration unit is configured to combine the subsequent projection image that meets the preset projection condition with the reference projection image according to the positional relationship between the matching feature and the reference feature;
    融合单元，设置为将满足预设投影条件的所述后续投影图像与参考投影图像的重叠区域进行像素融合、非重叠区域直接拼接，以形成新的参考投影图像，直至遍历完获取的后续投影图像，并将最终形成的参考投影图像作为输出投影图像。The fusion unit is configured to perform pixel fusion on the overlapping area of the subsequent projection image satisfying the preset projection condition and the reference projection image, and to splice the non-overlapping areas directly, so as to form a new reference projection image until the acquired subsequent projection images have been traversed, and to take the finally formed reference projection image as the output projection image.
  4. 如权利要求3所述的长曝光全景图像拍摄装置,其中,The long exposure panoramic image capturing apparatus according to claim 3, wherein
    所述图像获取模块，还设置为在获取移动终端取景器定时采集的图像时，获取移动终端的运动参数，运动参数包括移动方向和倾斜角度；The image acquisition module is further configured to acquire, when acquiring the images captured at regular intervals by the mobile terminal viewfinder, motion parameters of the mobile terminal, the motion parameters including a movement direction and a tilt angle;
    所述配准单元还设置为:The registration unit is further configured to:
    根据移动终端的运动参数确定满足预设投影条件的所述后续投影图像与参考投影图像的结合方向;Determining, according to a motion parameter of the mobile terminal, a combination direction of the subsequent projected image that satisfies a preset projection condition and the reference projected image;
    根据所述结合方向，以及匹配特征与参考特征之间的位置关系，将满足预设投影条件的所述后续投影图像与参考投影图像配准结合。Registering and combining, according to the combination direction and the positional relationship between the matching features and the reference features, the subsequent projection image satisfying the preset projection condition with the reference projection image.
  5. 如权利要求3所述的长曝光全景图像拍摄装置,其中,The long exposure panoramic image capturing apparatus according to claim 3, wherein
    所述融合单元还设置为:The fusion unit is further configured to:
    获取满足预设投影条件的所述后续投影图像与参考投影图像的重叠区域中每个像素点位置的输入颜色值、参考颜色值和融合次数;Obtaining an input color value, a reference color value, and a number of fusion times of each pixel position in an overlapping area of the subsequent projection image and the reference projection image that satisfy a preset projection condition;
    根据所述输入颜色值、参考颜色值和融合次数，调整所述重叠区域中每个像素点的颜色值以完成像素融合，其中，输入颜色值为后续投影图像在重叠区域中像素点位置的颜色值、参考颜色值为参考投影图像在重叠区域中像素点位置的颜色值、融合次数为参考投影图像在重叠区域中像素点位置进行像素融合的次数。Adjusting, according to the input color value, the reference color value and the fusion count, the color value of each pixel in the overlapping area to complete the pixel fusion, where the input color value is the color value of the subsequent projection image at the pixel position in the overlapping area, the reference color value is the color value of the reference projection image at the pixel position in the overlapping area, and the fusion count is the number of times pixel fusion has been performed at that pixel position of the reference projection image in the overlapping area.
  6. 如权利要求2所述的长曝光全景图像拍摄装置,其中,所述参数获取单元设置为通过以下方式获取移动终端的运动参数:The long exposure panoramic image capturing apparatus according to claim 2, wherein said parameter acquisition unit is configured to acquire a motion parameter of the mobile terminal by:
    所述参数获取单元控制安装在移动终端内部的加速度计和陀螺仪感应用户握持移动终端的运动方向和倾斜角度，或者，使用图像配准方法对当前图像提取特征，找到对齐位置，以确定用户握持移动终端的运动方向和倾斜角度。The parameter acquisition unit controls an accelerometer and a gyroscope installed inside the mobile terminal to sense the movement direction and tilt angle of the user holding the mobile terminal; alternatively, an image registration method is used to extract features from the current image and find an alignment position, so as to determine the movement direction and tilt angle of the user holding the mobile terminal.
  7. 如权利要求2所述的长曝光全景图像拍摄装置，其中，所述投影空间包括柱面投影空间、球面投影空间、正方体投影空间。The long exposure panoramic image capturing apparatus according to claim 2, wherein the projection space includes a cylindrical projection space, a spherical projection space, and a cube projection space.
  8. 如权利要求3所述的长曝光全景图像拍摄装置,其中,所述特征提取单元设置为通过以下方式之一提取特征:特征点匹配、光流法、互相关。The long exposure panoramic image capturing apparatus according to claim 3, wherein the feature extraction unit is configured to extract features by one of the following methods: feature point matching, optical flow method, and cross correlation.
  9. 如权利要求3所述的长曝光全景图像拍摄装置，其中，所述配准单元设置为根据匹配特征与参考特征之间的位置关系，得出配准参数，根据所述配准参数将满足预设投影条件的后续投影图像与参考投影图像配准结合。The long exposure panoramic image capturing apparatus according to claim 3, wherein the registration unit is configured to obtain registration parameters according to the positional relationship between the matching features and the reference features, and to register and combine, according to the registration parameters, the subsequent projection image satisfying the preset projection condition with the reference projection image.
  10. 如权利要求5所述的长曝光全景图像拍摄装置,其中,所述融合单元设置为根据加权平均法计算得到重叠区域中每个像素点的颜色值。The long exposure panoramic image capturing apparatus according to claim 5, wherein said merging unit is configured to calculate a color value of each pixel point in the overlap region according to a weighted averaging method.
  11. 一种长曝光全景图像拍摄方法,包括:A long exposure panoramic image capturing method, comprising:
    当接收到长曝光拍摄指令时，获取移动终端取景器定时采集的图像；Upon receiving a long exposure shooting instruction, acquiring the images captured at regular intervals by the mobile terminal viewfinder;
    将获取的图像投影至预设投影空间形成对应的投影图像,其中,将获取的第一帧图像对应的投影图像作为参考投影图像,将获取的后续图像对应的投影图像作为后续投影图像;Projecting the acquired image to a preset projection space to form a corresponding projection image, wherein the acquired projection image corresponding to the first frame image is used as a reference projection image, and the acquired projection image corresponding to the subsequent image is used as a subsequent projection image;
    将满足预设投影条件的多个视角的所述后续投影图像逐个与所述参考投影图像进行位置配准和像素融合形成新的参考投影图像,直至遍历完获取的后续投影图像,以获取输出投影图像;The subsequent projection images of the plurality of viewing angles satisfying the preset projection condition are position-aligned and pixel-fused with the reference projection image one by one to form a new reference projection image, until the acquired subsequent projection image is traversed to obtain an output projection image;
    将所述输出投影图像反向投影至移动终端取景器采集图像的坐标空间以生成长曝光全景图像。The output projection image is back projected to a coordinate space of the mobile terminal viewfinder acquisition image to generate a long exposure panoramic image.
  12. 如权利要求11所述的长曝光全景图像拍摄方法,其中,所述将获取的图像投影至预设投影空间形成对应的投影图像的步骤包括:The method of claim 11 , wherein the step of projecting the acquired image to a preset projection space to form a corresponding projection image comprises:
    获取移动终端的运动参数,该运动参数包括运动方向和倾斜角度;Obtaining a motion parameter of the mobile terminal, where the motion parameter includes a motion direction and a tilt angle;
    根据获取的运动参数确定用于图像投影的投影空间;Determining a projection space for image projection based on the acquired motion parameters;
    将获取的图像投影至确定的投影空间形成对应的投影图像。The acquired image is projected onto the determined projection space to form a corresponding projected image.
  13. 如权利要求11或12所述的长曝光全景图像拍摄方法，其中，所述将满足预设投影条件的多个视角的所述后续投影图像逐个与所述参考投影图像进行位置配准和像素融合形成新的参考投影图像，直至遍历完获取的后续投影图像，以获取输出投影图像的步骤包括：The long exposure panoramic image capturing method according to claim 11 or 12, wherein the step of performing position registration and pixel fusion of the subsequent projection images of multiple viewing angles satisfying the preset projection condition, one by one, with the reference projection image to form a new reference projection image, until the acquired subsequent projection images have been traversed, so as to obtain an output projection image, includes:
    从所述参考投影图像中提取参考特征,并逐个从满足预设投影条件的多个视角的所述后续投影图像中提取与所述参考特征对应的匹配特征;Extracting reference features from the reference projection image, and extracting matching features corresponding to the reference features from the subsequent projection images of the plurality of perspectives satisfying the preset projection conditions one by one;
    根据所述匹配特征与参考特征之间的位置关系,将满足预设投影条件的所述后续投影图像与参考投影图像配准结合;And matching the subsequent projection image that meets the preset projection condition with the reference projection image according to the positional relationship between the matching feature and the reference feature;
    将满足预设投影条件的所述后续投影图像与参考投影图像的重叠区域进行像素融合、非重叠区域直接拼接，以形成新的参考投影图像，直至遍历完获取的后续投影图像；Performing pixel fusion on the overlapping area of the subsequent projection image satisfying the preset projection condition and the reference projection image, and splicing the non-overlapping areas directly, so as to form a new reference projection image until the acquired subsequent projection images have been traversed;
    将最终形成的参考投影图像作为输出投影图像。The resulting reference projection image is taken as an output projection image.
  14. 如权利要求13所述的长曝光全景图像拍摄方法，其中，所述长曝光全景图像拍摄方法还包括：在获取移动终端取景器定时采集的图像时，获取移动终端的运动参数，运动参数包括移动方向和倾斜角度；The long exposure panoramic image capturing method according to claim 13, wherein the long exposure panoramic image capturing method further comprises: when acquiring the images captured at regular intervals by the mobile terminal viewfinder, acquiring motion parameters of the mobile terminal, the motion parameters including a movement direction and a tilt angle;
    所述根据所述匹配特征与参考特征之间的位置关系,将满足预设投影条件的所述后续投影图像与参考投影图像配准结合的步骤包括:The step of combining the subsequent projection image satisfying the preset projection condition with the reference projection image according to the positional relationship between the matching feature and the reference feature comprises:
    根据移动终端的运动参数确定满足预设投影条件的所述后续投影图像与参考投影图像的结合方向;Determining, according to a motion parameter of the mobile terminal, a combination direction of the subsequent projected image that satisfies a preset projection condition and the reference projected image;
    根据所述结合方向，以及匹配特征与参考特征之间的位置关系，将满足预设投影条件的所述后续投影图像与参考投影图像配准结合。Registering and combining, according to the combination direction and the positional relationship between the matching features and the reference features, the subsequent projection image satisfying the preset projection condition with the reference projection image.
  15. 如权利要求13所述的长曝光全景图像拍摄方法，其中，所述将满足预设投影条件的所述后续投影图像与参考投影图像的重叠区域进行像素融合的步骤包括：The long exposure panoramic image capturing method according to claim 13, wherein the step of performing pixel fusion on the overlapping area of the subsequent projection image satisfying the preset projection condition and the reference projection image comprises:
    获取满足预设投影条件的所述后续投影图像与参考投影图像的重叠区域中每个像素点位置的输入颜色值、参考颜色值和融合次数;Obtaining an input color value, a reference color value, and a number of fusion times of each pixel position in an overlapping area of the subsequent projection image and the reference projection image that satisfy a preset projection condition;
    根据所述输入颜色值、参考颜色值和融合次数,调整所述重叠区域中每个像素点的颜色值以完成像素融合,其中,输入颜色值为后续投影图像在重叠区域中像素点位置的颜色值、参考颜色值为参考投影图像在重叠区域中像素点位置的颜色值、融合次数为参考投影图像在重叠区域中像素点位置进行像素融合的次数。And adjusting a color value of each pixel in the overlapping area to complete pixel fusion according to the input color value, the reference color value, and the number of fusions, wherein the input color value is a color of a pixel position of the subsequent projected image in the overlapping area. The value and the reference color value are color values of the pixel position of the reference projection image in the overlapping area, and the number of times of fusion is the number of times the pixel of the reference projection image is pixel-fused in the overlapping area.
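The fusion rule of claim 15 amounts to a running average: keeping a per-pixel fusion count lets each new overlap be blended into the reference image without storing all source frames. A minimal sketch of that idea (the helper name is illustrative, not the patented implementation):

```python
def fuse_pixel(ref_color, in_color, fuse_count):
    """Blend one overlapping pixel position as a running average.

    ref_color  -- reference color value already in the reference projection image
    in_color   -- input color value from the subsequent projection image
    fuse_count -- number of fusions already performed at this pixel position
    Returns the adjusted color value and the updated fusion count.
    """
    new_color = (ref_color * fuse_count + in_color) / (fuse_count + 1)
    return new_color, fuse_count + 1

# Blending values 130 and then 160 into a reference value of 100:
c, n = fuse_pixel(100.0, 130.0, 1)   # -> (115.0, 2)
c, n = fuse_pixel(c, 160.0, n)       # -> (130.0, 3), the mean of all three samples
```

Because the count is carried along, the result is independent of how many subsequent images contribute to a given pixel, which is what makes long-exposure trails accumulate smoothly in the overlap.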
  16. The long-exposure panoramic image shooting method according to claim 12, wherein the acquiring of the motion parameters of the mobile terminal comprises:
    controlling an accelerometer and a gyroscope installed inside the mobile terminal to sense the moving direction and tilt angle with which the user holds the mobile terminal; or extracting features from the current image by an image registration method and finding the alignment position, so as to determine the moving direction and tilt angle with which the user holds the mobile terminal.
  17. The long-exposure panoramic image shooting method according to claim 12, wherein the projection space comprises a cylindrical projection space, a spherical projection space, or a cube projection space.
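For the cylindrical case in claim 17, the conventional forward warp maps an image-plane point onto a cylinder whose radius equals the focal length. The patent does not spell out the exact mapping, so the sketch below uses the standard textbook convention:

```python
import math

def to_cylinder(x, y, f):
    """Map an image-plane point (x, y), measured from the optical center,
    to cylindrical coordinates (theta, h) for a camera with focal length f
    in pixels (standard cylindrical warp, shown for illustration only)."""
    theta = math.atan2(x, f)      # angle around the cylinder axis
    h = y / math.hypot(x, f)      # normalized height on the cylinder
    return theta, h
```

A point on the optical axis maps to the cylinder origin, and a point at x = f lands at theta = pi/4; projecting every frame this way is what lets images taken at different viewing angles be stitched along a common surface.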
  18. The long-exposure panoramic image shooting method according to claim 13, wherein extracting the reference features from the reference projection image and extracting, one by one, the matching features corresponding to the reference features from the subsequent projection images of the plurality of viewing angles that satisfy the preset projection condition comprises: performing the feature extraction by one of the following methods: feature point matching, optical flow, or cross-correlation.
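Of the three options named in claim 18, cross-correlation is the simplest to illustrate: slide one signal over the other and keep the offset with the highest correlation score. A 1-D toy version follows (real registration operates on 2-D image patches; the function name is illustrative):

```python
def estimate_shift(ref, cur, max_shift):
    """Estimate how far `cur` is shifted to the right relative to `ref`
    by maximizing the mean cross-correlation over candidate integer
    shifts in [-max_shift, max_shift]."""
    best_s, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        # products of samples that overlap at this candidate shift
        overlap = [ref[i] * cur[i + s]
                   for i in range(len(ref)) if 0 <= i + s < len(cur)]
        if not overlap:
            continue
        score = sum(overlap) / len(overlap)  # mean, so partial overlaps compare fairly
        if score > best_score:
            best_s, best_score = s, score
    return best_s

# The profile [0, 0, 1, 2, 3, 0, 0] moved two samples right:
ref = [0, 0, 1, 2, 3, 0, 0]
cur = [0, 0, 0, 0, 1, 2, 3]
estimate_shift(ref, cur, 3)   # -> 2
```

The recovered offset plays the role of the "positional relationship between the matching feature and the reference feature" used for registration in the claims above.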
  19. The long-exposure panoramic image shooting method according to claim 13, wherein registering and combining the subsequent projection image that satisfies the preset projection condition with the reference projection image according to the positional relationship between the matching features and the reference features comprises: deriving registration parameters from the positional relationship between the matching features and the reference features, and registering and combining the subsequent projection image that satisfies the preset projection condition with the reference projection image according to the registration parameters.
  20. The long-exposure panoramic image shooting method according to claim 15, wherein adjusting the color value of each pixel in the overlapping area according to the input color value, the reference color value, and the fusion count to complete the pixel fusion comprises: calculating the color value of each pixel in the overlapping area by a weighted average method.
PCT/CN2016/105705 2015-11-24 2016-11-14 Long-exposure panoramic image shooting apparatus and method WO2017088678A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510827869.8A CN105430263A (en) 2015-11-24 2015-11-24 Long-exposure panoramic image photographing device and method
CN201510827869.8 2015-11-24

Publications (1)

Publication Number Publication Date
WO2017088678A1 true WO2017088678A1 (en) 2017-06-01

Family

ID=55508166

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/105705 WO2017088678A1 (en) 2015-11-24 2016-11-14 Long-exposure panoramic image shooting apparatus and method

Country Status (2)

Country Link
CN (1) CN105430263A (en)
WO (1) WO2017088678A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242772A * 2018-08-23 2019-01-18 上海圭目机器人有限公司 Airport pavement surface image stitching method based on intelligent-platform area-array camera acquisition
CN109523468A * 2018-11-15 2019-03-26 深圳市道通智能航空技术有限公司 Image stitching method, device, equipment, and unmanned aerial vehicle
CN111080519A (en) * 2019-11-28 2020-04-28 常州新途软件有限公司 Automobile panoramic all-around view image fusion method
WO2020113534A1 (en) * 2018-12-06 2020-06-11 华为技术有限公司 Method for photographing long-exposure image and electronic device
CN112312034A (en) * 2020-10-29 2021-02-02 北京小米移动软件有限公司 Exposure method and device of image acquisition module, terminal equipment and storage medium
CN112508831A (en) * 2020-12-02 2021-03-16 深圳开立生物医疗科技股份有限公司 Ultrasonic wide-scene image generation method, device, equipment and storage medium
CN112634334A (en) * 2020-12-24 2021-04-09 长春理工大学 Ultrahigh dynamic projection display method and system based on fusion pixel modulation
CN112702497A (en) * 2020-12-28 2021-04-23 维沃移动通信有限公司 Shooting method and device
CN113111810A (en) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Target identification method and system
CN113781533A (en) * 2021-09-10 2021-12-10 北京方正印捷数码技术有限公司 Image registration method, image registration device, printer and storage medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105430263A (en) * 2015-11-24 2016-03-23 Nubia Technology Co., Ltd. Long-exposure panoramic image photographing device and method
CN105898159B (en) * 2016-05-31 2019-10-29 Nubia Technology Co., Ltd. Image processing method and terminal
CN106740474A (en) * 2016-12-23 2017-05-31 深圳市豪恩汽车电子装备有限公司 Panoramic reversing-image processing method and device
CN107071277B (en) * 2017-03-31 2020-04-03 Nubia Technology Co., Ltd. Light-painting shooting device and method, and mobile terminal
CN107564084B (en) * 2017-08-24 2022-07-01 Tencent Technology (Shenzhen) Co., Ltd. Method and device for synthesizing motion pictures, and storage device
CN108076293A (en) * 2018-01-02 2018-05-25 Nubia Technology Co., Ltd. Light-painting shooting method, mobile terminal, and computer-readable medium
CN110189285B (en) * 2019-05-28 2021-07-09 Beijing Megvii Technology Co., Ltd. Multi-frame image fusion method and device
CN113284206A (en) * 2021-05-19 2021-08-20 Oppo广东移动通信有限公司 Information acquisition method and device, computer-readable storage medium, and electronic device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867720A * 2009-04-17 2010-10-20 Sony Corporation In-camera generation of high-quality synthetic panoramic images
US20120300020A1 (en) * 2011-05-27 2012-11-29 Qualcomm Incorporated Real-time self-localization from panoramic images
CN102917167A * 2011-08-02 2013-02-06 Sony Corporation Image processing device, control method, and computer-readable medium
CN102999891A (en) * 2011-09-09 2013-03-27 中国航天科工集团第三研究院第八三五八研究所 Binding parameter based panoramic image mosaic method
CN104321803A * 2012-06-06 2015-01-28 Sony Corporation Image processing device, image processing method, and program
CN104574339A (en) * 2015-02-09 2015-04-29 上海安威士科技股份有限公司 Multi-scale cylindrical projection panorama image generating method for video monitoring
CN105430263A * 2015-11-24 2016-03-23 Nubia Technology Co., Ltd. Long-exposure panoramic image photographing device and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102013110B (en) * 2010-11-23 2013-01-02 李建成 Three-dimensional panoramic image generation method and system
US20140340427A1 (en) * 2012-01-18 2014-11-20 Logos Technologies Llc Method, device, and system for computing a spherical projection image based on two-dimensional images

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242772B (en) * 2018-08-23 2023-01-31 上海圭目机器人有限公司 Airport pavement surface image splicing method based on intelligent platform area-array camera acquisition
CN109242772A * 2018-08-23 2019-01-18 上海圭目机器人有限公司 Airport pavement surface image stitching method based on intelligent-platform area-array camera acquisition
CN109523468A * 2018-11-15 2019-03-26 深圳市道通智能航空技术有限公司 Image stitching method, device, equipment, and unmanned aerial vehicle
CN109523468B (en) * 2018-11-15 2023-10-20 深圳市道通智能航空技术股份有限公司 Image stitching method, device, equipment and unmanned aerial vehicle
WO2020113534A1 (en) * 2018-12-06 2020-06-11 华为技术有限公司 Method for photographing long-exposure image and electronic device
CN111080519A (en) * 2019-11-28 2020-04-28 常州新途软件有限公司 Automobile panoramic all-around view image fusion method
CN112312034A (en) * 2020-10-29 2021-02-02 北京小米移动软件有限公司 Exposure method and device of image acquisition module, terminal equipment and storage medium
CN112508831A (en) * 2020-12-02 2021-03-16 深圳开立生物医疗科技股份有限公司 Ultrasonic wide-scene image generation method, device, equipment and storage medium
CN112634334A (en) * 2020-12-24 2021-04-09 长春理工大学 Ultrahigh dynamic projection display method and system based on fusion pixel modulation
CN112634334B (en) * 2020-12-24 2023-09-01 长春理工大学 Ultrahigh dynamic projection display method and system based on fused pixel modulation
CN112702497A (en) * 2020-12-28 2021-04-23 维沃移动通信有限公司 Shooting method and device
CN113111810A (en) * 2021-04-20 2021-07-13 北京嘀嘀无限科技发展有限公司 Target identification method and system
CN113111810B (en) * 2021-04-20 2023-12-08 北京嘀嘀无限科技发展有限公司 Target identification method and system
CN113781533A (en) * 2021-09-10 2021-12-10 北京方正印捷数码技术有限公司 Image registration method, image registration device, printer and storage medium

Also Published As

Publication number Publication date
CN105430263A (en) 2016-03-23

Similar Documents

Publication Publication Date Title
WO2017088678A1 (en) Long-exposure panoramic image shooting apparatus and method
JP5659305B2 (en) Image generating apparatus and image generating method
JP5659304B2 (en) Image generating apparatus and image generating method
KR100866230B1 (en) Method for photographing panorama picture
JP5769813B2 (en) Image generating apparatus and image generating method
JP5865388B2 (en) Image generating apparatus and image generating method
US20070081081A1 (en) Automated multi-frame image capture for panorama stitching using motion sensor
EP2993894B1 (en) Image capturing method and electronic apparatus
WO2014023231A1 (en) Wide-view-field ultrahigh-resolution optical imaging system and method
JP2002503893A (en) Virtual reality camera
JP2013046270A (en) Image connecting device, photographing device, image connecting method, and image processing program
KR20120012201A (en) Method for photographing panorama picture
CN110278366B (en) Panoramic image blurring method, terminal and computer readable storage medium
JP6741498B2 (en) Imaging device, display device, and imaging display system
WO2018196854A1 (en) Photographing method, photographing apparatus and mobile terminal
JP5248951B2 (en) CAMERA DEVICE, IMAGE SHOOTING SUPPORT DEVICE, IMAGE SHOOTING SUPPORT METHOD, AND IMAGE SHOOTING SUPPORT PROGRAM
CN110365910B (en) Self-photographing method and device and electronic equipment
JP2011217275A (en) Electronic device
JP2015046051A (en) Image processing apparatus, image processing method, program, and imaging system
WO2017071560A1 (en) Picture processing method and device
JP5021370B2 (en) Imaging apparatus, display method, and program
JP5914714B2 (en) Imaging equipment and method
TWI704408B (en) Omnidirectional camera apparatus and image mapping/combining method thereof
CN110875998A (en) Panoramic photographic device and image mapping combination method thereof
CN104811602A (en) Self-shooting method and self-shooting device for mobile terminals

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16867902

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16867902

Country of ref document: EP

Kind code of ref document: A1