CN102780855A - Image processing method and related device

Info

Publication number
CN102780855A
CN102780855A (application CN2011101341156A / CN201110134115.6A; granted publication CN102780855B)
Authority
CN
China
Prior art keywords
image
superposition
depth value
three-dimensional depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011101341156A
Other languages
Chinese (zh)
Other versions
CN102780855B (en)
Inventor
郑昆楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MStar Software R&D Shenzhen Ltd
MStar Semiconductor Inc Taiwan
Original Assignee
MStar Software R&D Shenzhen Ltd
MStar Semiconductor Inc Taiwan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MStar Software R&D Shenzhen Ltd, MStar Semiconductor Inc Taiwan filed Critical MStar Software R&D Shenzhen Ltd
Priority to CN201110134115.6A
Publication of CN102780855A
Application granted
Publication of CN102780855B
Expired - Fee Related


Abstract

The invention relates to an image processing method and a related device. A corresponding three-dimensional depth value is obtained for each part of an image, and a part of the image is captured as an image object according to the three-dimensional depth values corresponding to the parts, so that the image object can be composited with other image objects.

Description

Image processing method and related apparatus
Technical field
The invention relates to an image processing method and related apparatus, and in particular to a method and related apparatus that separate an image object from an image according to three-dimensional depth values of the image and make use of the separated object.
Background art
To improve the quality, information content, and entertainment value of static and/or dynamic images, image processing and compositing are widely used. For example, background removal is a common image-processing operation. Its purpose is to capture the important foreground of an image as an independent image object, so that the foreground can be separated from the less important background.
In the prior art, background removal is often performed with chroma keying. The main object forming the foreground is placed in front of a monochromatic (blue or green) background, and the foreground and the colored background are captured together into an image. In post-production, the background of the known color is removed, and the foreground is captured as a foreground image object. The foreground image object can then be composited with a separately produced background image to form new image data. Another known background-removal technique separates the foreground image object from the background according to the edges and/or color differences between foreground and background.
However, the aforementioned known techniques still have shortcomings in application. If some part of the foreground image has a color similar or identical to that of the background, it will also be removed, leaving the foreground image incomplete. In addition, if the foreground image is complex and/or its edges against the background are indistinct, the known techniques have difficulty correctly separating the foreground image object from the image.
Summary of the invention
The present invention provides a technique for performing image processing according to three-dimensional depth values. Using the three-dimensional depth values, the foreground of a dynamic or static image can be captured as an image object, facilitating subsequent image processing such as image compositing.
One object of the invention is to provide an image processing method comprising: receiving an image; obtaining a corresponding three-dimensional depth value for each part of the image; and capturing a part of the image as a first image object according to the three-dimensional depth values corresponding to the parts. After a second image object is obtained, a superposition process is performed to composite the first image object and the second image object into a resultant image.
In one embodiment, the first image object and/or the second image object can be pre-processed, and the pre-processed object(s) are then composited into the resultant image during the superposition process. The pre-processing may scale an image object; adjust its color, brightness, contrast, and/or sharpness; and/or adjust its corresponding three-dimensional depth value and/or distance value.
The superposition process comprises a superposition priority process, a superposition coverage process, and a superposition post-process. In the superposition priority process, a corresponding superposition order is assigned according to the three-dimensional depth values and/or distance values of the first image object and the second image object. In the superposition coverage process, the first image object and the second image object are superposed into the resultant image according to their superposition orders. For example, if the distance value of the first image object places it in front of the second image object, the first image object is kept intact and the part of the second image object that overlaps the first image object is removed, so that the first image object covers the second image object.
In the superposition post-process, the superposition result of the first image object and the second image object is refined; for example, color blending, anti-aliasing, and/or feathering are applied to the overlapping edges of the first and second image objects so that the composited image looks more natural.
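By way of illustration, the feathering step can be sketched as follows in Python. This is a minimal sketch under assumptions, not the patent's prescribed implementation: the function name feather_composite, the use of a Gaussian blur, and the sigma parameter are illustrative choices.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def feather_composite(fg, bg, mask, sigma=2.0):
        """Blend a foreground image object over a background with a feathered edge.

        fg, bg: HxWx3 float arrays in [0, 1]; mask: HxW binary foreground mask.
        """
        # Soften the hard object mask so the overlapping edge fades gradually.
        alpha = gaussian_filter(mask.astype(np.float32), sigma=sigma)
        alpha = np.clip(alpha, 0.0, 1.0)[..., None]  # HxWx1 for broadcasting
        # Alpha-blend: feathered pixels mix foreground and background colors.
        return alpha * fg + (1.0 - alpha) * bg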
The first and/or the second image object can be obtained by image capture. The image can be a three-dimensional image in which each part corresponds to a parallax shift amount; according to the parallax shift amounts of the parts, a corresponding three-dimensional depth value can be obtained for each part of the image.
Another object of the invention is to provide an image processing apparatus comprising a depth value module, a separation module, an optionally provided pre-processing module, and a superposition processing module. The depth value module obtains a corresponding three-dimensional depth value for each part of an image; the separation module captures a part of the image as an image object according to the three-dimensional depth values corresponding to the parts. The pre-processing module pre-processes the image object. The superposition processing module performs the superposition process on the image objects; it comprises a superposition priority processing module, a superposition coverage processing module, and a superposition post-processing module, which perform the superposition priority process, the superposition coverage process, and the superposition post-process respectively.
For a better understanding of the above and other aspects of the invention, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Description of drawings
Fig. 1 is a schematic diagram of capturing a three-dimensional image with a video camera according to an embodiment of the invention.
Fig. 2 is an image processing apparatus according to an embodiment of the invention.
Fig. 3 is a schematic diagram of image processing according to an embodiment of the invention.
Description of main element symbols
10: device
12, 26: depth value modules
14: separation module
16, 28: pre-processing modules
18: superposition processing module
20: superposition priority processing module
22: superposition coverage processing module
24: superposition post-processing module
30: ranging device
Pi_A, Pi_B: image data
MS: video camera
CL, CR: camera lenses
OB1, OB2: objects
IR1, IR2, IL1, IL2, I1, I2: imagings
PR, PL, PRo, PLo, Po: images
Y1, Y2, Ys1, Ys2: distance values
Yob1, Yob2: three-dimensional depth values
X1, X2: parallax shift amounts
Iob1, Iob2: image objects
Iovlp: overlapping part
Embodiment
When the left eye and the right eye look at the same object, the images perceived by the two eyes differ slightly, and the human brain builds a three-dimensional (3D) image from the images seen by the two eyes. Please refer to Figure 1A, a schematic diagram of capturing a three-dimensional image with a video camera MS according to an embodiment of the invention. The video camera MS is provided with a left lens CL and a right lens CR. For an object OB1 at a distance value Y1 from the video camera MS, the left lens CL captures object OB1 as imaging IL1 in a left image PL, and the right lens CR captures object OB1 as imaging IR1 in a right image PR. A three-dimensional image can be formed from the left image PL and the right image PR, wherein there is a parallax shift amount X1 between imaging IL1 of the left image PL and imaging IR1 of the right image PR. When the three-dimensional image is played, the left image PL and the right image PR are presented to the viewer's left and right eyes respectively; through the parallax shift amount X1 between imagings IL1 and IR1, the viewer perceives a three-dimensional image of object OB1.
Similarly, for an object OB2 at a distance value Y2 from the video camera MS, the left lens CL captures object OB2 as imaging IL2 in the left image PL, and the right lens CR captures object OB2 as imaging IR2 in the right image PR; there is a parallax shift amount X2 between imaging IL2 of the left image PL and imaging IR2 of the right image PR, as shown in Figure 1B. It should be particularly noted that, compared with object OB1, object OB2 is farther from the video camera MS (that is, distance value Y2 is greater than Y1), and the parallax shift amount X2 between imagings IL2 and IR2 is therefore smaller than the parallax shift amount X1 between imagings IL1 and IR1.
Based on the above characteristic, the concept of a three-dimensional depth value (3D depth) has been developed. As can be seen by comparing Figures 1A and 1B, by comparing the parallax shift amounts of corresponding parts of the left and right images forming the three-dimensional image, the distance between an object and the video camera MS can be derived. Suppose the left image PL of Figure 1A is defined as a reference image; then the positive value of the parallax shift amount X1 between imagings IL1 and IR1 of object OB1 is the three-dimensional depth value of imagings IL1 and IR1. Likewise, in Figure 1B, when the left image PL is again defined as the reference image, the positive value of the parallax shift amount X2 between imagings IL2 and IR2 of object OB2 is the three-dimensional depth value of imagings IL2 and IR2. Conversely, in Figures 1A and 1B, if the right image PR is defined as the reference image, the three-dimensional depth value of imagings IL1 and IR1 is the negative value of the parallax shift amount X1, and the three-dimensional depth value of imagings IL2 and IR2 is the negative value of the parallax shift amount X2. As shown in Figure 1C, by comparing imagings IL1 and IR1 between the left and right images to obtain their three-dimensional depth value, the distance value Y1 between object OB1 and the video camera MS can be derived. Likewise, by comparing imagings IL2 and IR2 in the left and right images, the distance value Y2 between object OB2 and the video camera MS can be derived from their three-dimensional depth value. In other words, according to the three-dimensional depth values, it can be determined that object OB1 is close to the video camera MS while object OB2 is farther away.
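To make the relation between parallax shift and distance concrete, the following Python sketch estimates a disparity map from a left/right image pair and converts it to distance with the standard pinhole-stereo relation Z = f * B / d. This is an illustration under assumptions: OpenCV's block matcher stands in for whatever disparity estimator is used, and focal_px and baseline_m are assumed calibration parameters.

    import cv2
    import numpy as np

    def depth_from_stereo(left_gray, right_gray, focal_px, baseline_m):
        """left_gray, right_gray: 8-bit single-channel left/right images."""
        # Estimate the per-pixel parallax shift (disparity) between the views.
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
        disparity[disparity <= 0] = np.nan  # mark invalid / occluded pixels
        # Pinhole-stereo relation: larger disparity means a nearer object,
        # matching the observation above that X1 > X2 when Y1 < Y2.
        distance = focal_px * baseline_m / disparity
        return disparity, distance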
According to the principle of three-dimensional depth values shown in Figure 1C, the imaging regions of objects OB1 and OB2 in images PL and PR can be divided into foreground and background. In detail, according to the three-dimensional depth values, it can be determined that object OB1 is close to the video camera MS and object OB2 is far from it, so imagings IL1 and IR1 of object OB1 can be judged to be foreground, and imagings IL2 and IR2 of object OB2 to be background. Accordingly, imagings IL1 and IR1 of object OB1 can be captured from images PL and PR as a foreground image object, achieving the function of background removal.
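A minimal sketch of this depth-based background removal, assuming a disparity map has already been obtained as above; the threshold value is an assumed tuning parameter, not specified by the patent.

    import numpy as np

    def extract_foreground(image, disparity, disparity_threshold):
        """Capture near parts of the image (large disparity, like OB1) as an image object."""
        mask = disparity > disparity_threshold  # near objects have larger parallax shifts
        image_object = np.zeros_like(image)
        image_object[mask] = image[mask]        # the captured foreground image object
        return image_object, mask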
Please refer to Fig. 2 and Fig. 3; Fig. 2 is a functional block diagram of a device 10 according to an embodiment of the invention, and Fig. 3 is a schematic diagram of the operation of the device 10 according to an embodiment of the invention. The device 10 is an image processing apparatus provided with a depth value module 12, a separation module 14, pre-processing modules 16 and 28, and a superposition processing module 18. The superposition processing module 18 in turn comprises a superposition priority processing module 20, a superposition coverage processing module 22, and a superposition post-processing module 24.
In the device 10, the depth value module 12 receives image data Pi_A as input; the image data Pi_A can comprise a dynamic or static, two-dimensional or three-dimensional image. The depth value module 12 obtains corresponding three-dimensional depth values and distance values for the parts of the image, so that the separation module 14 can capture a part of the image as an image object according to the three-dimensional depth values and distance values corresponding to the parts. Taking Fig. 3 as an example, the image data Pi_A can comprise a three-dimensional image formed from a left image PL and a right image PR; the depth value module 12 obtains corresponding three-dimensional depth values for the parts of this three-dimensional image to provide distance values for the parts, and the separation module 14 can thereby separate the foreground according to the principle of Fig. 1C, for example capturing the foreground imagings IL1 and IR1 as an image object Iob1. The image object Iob1 corresponds to a distance value Yob1, which is associated with the distance value Y1.
Besides obtaining the three-dimensional depth values of objects OB1 and OB2 from the left image PL and the right image PR captured by the video camera MS according to the principle of Fig. 1C, in another embodiment of the invention, if the image data Pi_A itself is associated with distance values detected by a ranging device, the depth value module 12 can use the detected distance values to assist in producing the three-dimensional depth values, so that the invention can separate the foreground in the image data Pi_A accordingly. Illustrating with Fig. 3: in one embodiment, when the distribution of objects OB1 and OB2 on the xy plane is captured as imagings I1 and I2 in the image data Pi_A, a ranging device 30 also measures the distance values Ys1 and Ys2 of objects OB1 and OB2 along the normal of that plane (the z axis), so that imagings I1 and I2 are associated with distance values Ys1 and Ys2 respectively. The depth value module 12 can then assign the three-dimensional depth values of imagings I1 and I2 according to the distance values Ys1 and Ys2, and the separation module 14 can accordingly separate the foreground imaging I1 as the image object Iob1. The ranging device 30 can be a laser, infrared, acoustic, ultrasonic, and/or electromagnetic ranging device.
In another embodiment, the image data Pi_A can be obtained by rendering a virtual three-dimensional model with computer graphics; the computer graphics can also provide the distance values of the three-dimensional model (for example, a depth map) or parameters associated with the distance values, and the depth value module 12 can use the distance values, or the parameters associated with them, to assist in producing the three-dimensional depth values, so that the separation module 14 can capture the foreground in the image data Pi_A as an independent image object.
After the separation module 14 captures the image object, the embodiment of the invention can pre-process the image object through the pre-processing module 16. The pre-processing may scale the image object; adjust its color, brightness, contrast, and/or sharpness; and/or adjust its corresponding three-dimensional depth value and/or distance value.
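A sketch of such pre-processing in Python follows. The scale factor, the brightness gain, and the assumption that enlarging an object goes along with reducing its distance value are illustrative choices, not mandated by the patent.

    import cv2
    import numpy as np

    def preprocess_object(obj, mask, scale=1.2, gain=1.1, distance=None):
        """Scale an image object, brighten it, and adjust its distance value."""
        h, w = obj.shape[:2]
        new_size = (int(w * scale), int(h * scale))
        obj = cv2.resize(obj, new_size, interpolation=cv2.INTER_LINEAR)
        mask = cv2.resize(mask.astype(np.uint8), new_size,
                          interpolation=cv2.INTER_NEAREST).astype(bool)
        # Brightness adjustment on 8-bit pixels, clipped to the valid range.
        obj = np.clip(obj.astype(np.float32) * gain, 0, 255).astype(np.uint8)
        if distance is not None:
            distance = distance / scale  # enlarged object treated as nearer (assumed)
        return obj, mask, distance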
Similar to the operation of the depth value module 12, the embodiment of the invention can also comprise a depth value module 26 that provides a corresponding distance value for another set of image data Pi_B. The image data Pi_B can be an image object to be composited; illustrating with Fig. 3, the image data Pi_B can be an image object Iob2 corresponding to a distance value Yob2, and the pre-processing module 28 can pre-process it, similar to the pre-processing that the pre-processing module 16 performs on the image object Iob1.
After the image objects of image data Pi_A and Pi_B and their corresponding distance values are obtained, the superposition processing module 18 can perform superposition processing on the image objects under the control of superposition parameters, compositing the image objects of image data Pi_A and Pi_B to produce a resultant image; this resultant image can be a two-dimensional or three-dimensional, static or dynamic image. During the superposition processing, the superposition priority processing module 20 in the superposition processing module 18 performs the superposition priority process, providing a corresponding superposition order according to the three-dimensional depth value and distance value of each image object. The superposition coverage processing module 22 performs the superposition coverage process, superposing each image object into the resultant image according to its superposition order. The superposition post-processing module 24 performs the superposition post-process, refining the superposition result of the superposition coverage processing module 22, for example applying color blending, anti-aliasing, and/or feathering to the overlapping edges of the image objects so that the resultant image looks more natural.
Taking Fig. 3 as an example of the superposition processing: after the three-dimensional depth values of image objects Iob1 and Iob2 and the corresponding distance values Yob1 and Yob2 are obtained, the superposition priority processing module 20 can set the priority and order of superposition according to those three-dimensional depth values and distance values. For example, comparing the three-dimensional depth value and distance value Yob1 of image object Iob1 with the three-dimensional depth value and distance value Yob2 of image object Iob2 shows that image object Iob1 lies in front of image object Iob2, so the superposition order of image object Iob1 takes precedence over that of image object Iob2. When the superposition coverage processing module 22 superposes image objects Iob1 and Iob2 into a two-dimensional resultant image Po, the image object Iob1 with the higher superposition order is kept intact, while in the image object Iob2 with the lower superposition order, any part that overlaps the superposed image object Iob1 (such as the overlapping part Iovlp) is removed, so that image object Iob1 is superposed over and covers image object Iob2.
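The priority and coverage steps together amount to depth-ordered compositing (a painter's algorithm). A minimal sketch under assumed data structures: each object is given as canvas-sized pixels, a mask, and a distance value.

    import numpy as np

    def composite_by_depth(canvas, objects):
        """objects: list of (pixels HxWx3, mask HxW bool, distance) tuples,
        all sized like canvas. Nearer objects (smaller distance) are painted
        last, so they cover overlapping parts such as Iovlp of farther objects."""
        for pixels, mask, _ in sorted(objects, key=lambda o: o[2], reverse=True):
            canvas[mask] = pixels[mask]
        return canvas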
Likewise, the output of the compositing can also be a three-dimensional image, for example a three-dimensional image composed of the left and right images PRo and PLo. The superposition processing module 18 can perform the superposition coverage process and the superposition post-process for the left and right images PRo and PLo respectively. In an embodiment of the invention, the image object Iob1 in the left image PRo comes from imaging IL1 in the left image PL, and the image object Iob1 in the right image PLo is formed from imaging IR1 in the right image PR. Likewise, if the other image data Pi_B also comprises the left and right images of a three-dimensional image, the image object Iob2 in the left image PRo can be formed from the left image of image data Pi_B, and the image object Iob2 in the right image PLo from the right image of image data Pi_B.
In other embodiments, the image data Pi_B can be a two-dimensional image. In that case, if the desired output is a two-dimensional image, image object Iob1 can simply be superposed over the two-dimensional image object Iob2 without considering its three-dimensional depth value or the distance value Yob2; the depth value module 26 is therefore optional in this example and can be omitted. In another embodiment, the image data Pi_B is also a two-dimensional image but the desired output is a three-dimensional image; in that case, the image object Iob2 can be given a preset three-dimensional depth value, and the image data Pi_B can serve as the left and right images of the three-dimensional image, to be composited with other image objects into a three-dimensional image output. This three-dimensional depth value can be assigned with reference to the three-dimensional depth values of image data Pi_A, or entered by the user. In detail, suppose in this example that a larger three-dimensional depth value means an image object is placed farther to the front; then when image data Pi_B is used as a background, its image object Iob2 is given, as its three-dimensional depth value, a value smaller than the three-dimensional depth values of all other image objects. Alternatively, the user can manually enter a three-dimensional depth value for image data Pi_B to adjust the relative distance between image objects Iob2 and Iob1 in the output three-dimensional image. In yet another embodiment, image data Pi_A and/or Pi_B can be dynamic images. A dynamic image is formed from a plurality of picture frames, and the device 10 can perform the capture and superposition of image objects on the image of each picture frame.
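A sketch of giving a flat image object a preset depth for stereo output: paste the object into each eye's image, offset horizontally by half the assigned parallax shift each way. The sign convention, the symmetric half-shift, and the assumption that the object fits within both image bounds are illustrative choices, not taken from the patent.

    import numpy as np

    def place_in_stereo(left_img, right_img, obj, mask, disparity_px, x, y):
        """Paste obj into both eye images at (x, y); a larger disparity_px
        makes the object appear nearer in the played-back 3D image."""
        h, w = obj.shape[:2]
        half = disparity_px // 2
        for img, dx in ((left_img, +half), (right_img, -half)):
            region = img[y:y + h, x + dx:x + dx + w]
            region[mask] = obj[mask]  # in-place paste into the eye-image view
        return left_img, right_img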
It should be particularly noted that the compositing of the two sets of image data Pi_A and Pi_B is merely an example; the invention is not limited to processing two sets of image data, and can simultaneously process and composite many different types of image data to output various types of image data.
In the device 10, the pre-processing modules 16 and 28 can adjust the image objects according to the requirements of the compositing. For example, the pre-processing module 16 can reduce the distance value Yob1 and enlarge image object Iob1 accordingly, moving the position of image object Iob1 forward in the composited image. And/or, the pre-processing module 16 can increase the brightness of image object Iob1 to highlight it. Conversely, the pre-processing module 28 can reduce the brightness of image object Iob2 and reduce its sharpness to blur it, so that the resultant image shows a shallow depth-of-field effect. Note that the pre-processing modules 16 and 28 are optional; the pre-processing module 16 and/or 28 can be omitted.
Each module in the device 10 can be realized in hardware, firmware, and/or software. The depth value modules 12 and 26 can be the same module, and the pre-processing modules 16 and 28 can also be the same module.
Applications of the invention can be illustrated as follows. The invention can be applied to film production: image data Pi_A and Pi_B can be shot separately and then composited according to the operating principle of the device 10. The invention can also be applied to image processing and editing of photographs. Moreover, the invention can be applied to video telephony, video conferencing, and/or network broadcasting; for example, a 3D video camera MS (Fig. 1) can shoot a video-call participant to form image data Pi_A, the technique of the invention can separate the participant's image from the background, and the result can be composited with the background of another set of image data Pi_B. The invention can also be applied to tourism, multimedia applications, sports, education, entertainment, and/or games; for example, a user is shot with the 3D video camera MS, the user's image is separated from the background, and it is composited with the virtual background of a game.
In summary, compared with the known techniques, the invention separates foreground from background using three-dimensional depth values, so that the foreground can be used independently, image processing becomes more accurate and convenient, and the presented content becomes richer and more varied.
While the invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. Those having ordinary knowledge in the technical field of the invention may make various changes and modifications without departing from the spirit and scope of the invention. Accordingly, the scope of protection of the invention is defined by the appended claims.

Claims (26)

1. the method for an image processing comprises:
Receive at least one first image and at least one second image,
Obtain at least one three dimensional depth value of correspondence at least a portion of each this first image;
According to this three dimensional depth value, be one first image object with this part acquisition of each first image, to produce one first superposition object;
At least a portion of this at least one second image obtains one second image object according to each, to produce one second superposition object; And
Should carry out superposition processing to produce a resultant image with this at least one second superposition object by at least one first superposition object.
2. the method for claim 1 is characterized in that, each first image is produced by many image frames, and these many image frames respectively comprise identical many objects; And
Obtain the step of this corresponding at least one three dimensional depth value for this at least a portion of this each first image respectively and calculate each these many objects how at least one displacement between these many image frames respectively, to obtain the many three dimensional depths value that is relevant to these many objects; Wherein, this each part correlation of this each first image is in one of these many objects.
3. the method for claim 1 is characterized in that, also comprises the detection-sensitive distance of reception corresponding to this at least a portion of this each first image; And
The step that obtains this corresponding three dimensional depth value for this at least a portion of this each first image is according to obtaining this corresponding three dimensional depth value respectively corresponding to this detection-sensitive distance of this at least a portion.
4. the method for claim 1 is characterized in that, this each first image is a computer graphics, and it provides at least one parameter that is associated with distance value corresponding to this at least a portion; And
The step that obtains this corresponding three dimensional depth value for this at least a portion of this each first image is according to obtaining this three dimensional depth value of correspondence respectively corresponding to this at least one parameter that is associated with distance value of this at least a portion.
5. the method for claim 1 is characterized in that, this part acquisition of each first image is this first image object with this, comprises with the step that produces this first superposition object:
Each first image object carries out a pre-treatment to this, to produce this first superposition object respectively; Wherein, this pre-treatment comprises size, color, brightness, sharpness or the three dimensional depth value of adjusting these many first image objects.
6. the method for claim 1 is characterized in that, this at least a portion of each second image obtains this second image object according to this, comprises with the step that produces this second superposition object:
Each second image object carries out a pre-treatment to this, to produce this second superposition object respectively; Wherein, this pre-treatment comprises size, color, brightness, sharpness or the three dimensional depth value of adjusting these many second image objects.
7. the method for claim 1 is characterized in that, this at least a portion of each this at least one second image obtains this second image object according to this, comprises with the step that produces this second superposition object:
For this at least a portion of this each second image obtains corresponding at least one three dimensional depth value, and this at least a portion acquisition of each second image is one second image object with this.
8. The method of claim 7, wherein the superposition process comprises:
according to a plurality of three-dimensional depth values corresponding to the at least one first image object and the at least one second image object, providing corresponding priorities, relating to a superposition order, to the at least one first superposition object and the at least one second superposition object respectively; and
performing a superposition coverage process, superposing the at least one first superposition object and the at least one second superposition object into the resultant image according to the superposition order.
9. The method of claim 1, further comprising assigning at least one corresponding three-dimensional depth value to each of the at least one second image object.
10. The method of claim 9, wherein the step of assigning the at least one corresponding three-dimensional depth value to the at least one second image object comprises: assigning the corresponding three-dimensional depth value of the second image object with reference to the corresponding three-dimensional depth value of the first image object.
11. The method of claim 9, wherein the step of assigning the at least one corresponding three-dimensional depth value to the at least one second image object comprises: assigning the corresponding three-dimensional depth value of the second image object with reference to manually entered setting information.
12. The method of claim 9, wherein the superposition process comprises:
according to a plurality of three-dimensional depth values corresponding to the first image object and the second image object, providing corresponding priorities, relating to a superposition order, to the first image object and the second image object respectively; and
performing a superposition coverage process, superposing the at least one first superposition object and the at least one second superposition object into the resultant image according to the superposition order.
13. The method of claim 1, wherein the superposition process further comprises performing a superposition post-process on the resultant image, comprising performing color blending, anti-aliasing, and feathering on the overlapping edges of the at least one first superposition object and the at least one second superposition object.
14. An image processing apparatus, comprising:
a depth value module for obtaining at least one corresponding three-dimensional depth value for at least a portion of a first image;
a first image processing module for capturing the portion of the first image as a first image object according to the three-dimensional depth value, to produce a first superposition object;
a second image processing module for obtaining a second image object according to at least one second image, to produce a second superposition object; and
a superposition processing module for performing a superposition process on the at least one first superposition object and the at least one second superposition object to produce a resultant image.
15. The apparatus of claim 14, wherein each first image is produced from a plurality of picture frames, each of the picture frames including the same plurality of objects; and
the first image processing module calculates at least one displacement of each of the plurality of objects between the plurality of picture frames, to obtain a plurality of three-dimensional depth values relating to the plurality of objects; wherein each portion of each first image is associated with one of the plurality of objects.
16. The apparatus of claim 14, wherein the apparatus further receives a detected distance corresponding to the at least a portion of each first image; and
the first image processing module obtains the corresponding three-dimensional depth value according to the detected distance corresponding to the at least a portion.
17. The apparatus of claim 14, wherein each first image is a computer graphic providing at least one parameter associated with a distance value and corresponding to the at least a portion; and
the first image processing module obtains the corresponding three-dimensional depth value according to the at least one parameter associated with a distance value and corresponding to the at least a portion.
18. The apparatus of claim 14, wherein the first image processing module comprises:
a pre-processing module for performing a pre-process on each first image object to produce the respective first superposition object; wherein the pre-process comprises adjusting the size, color, brightness, sharpness, or three-dimensional depth value of the first image objects.
19. The apparatus of claim 14, wherein the second image processing module comprises:
a pre-processing module for performing a pre-process on each second image object to produce the respective second superposition object; wherein the pre-process comprises adjusting the size, color, brightness, sharpness, or three-dimensional depth value of the second image objects.
20. The apparatus of claim 14, wherein the second image processing module comprises a capture module for obtaining at least one corresponding three-dimensional depth value for at least a portion of the second image, and capturing the at least a portion of the second image as a second image object.
21. The apparatus of claim 20, wherein the superposition processing module comprises:
a superposition priority processing module for providing, according to a plurality of three-dimensional depth values corresponding to the at least one first image object and the at least one second image object, corresponding priorities, relating to a superposition order, to the at least one first superposition object and the at least one second superposition object respectively; and
a superposition coverage processing module for superposing the at least one first superposition object and the at least one second superposition object into the resultant image according to the superposition order.
22. The apparatus of claim 14, further comprising a depth value designation module for assigning at least one corresponding three-dimensional depth value to each of the at least one second image object.
23. The apparatus of claim 22, wherein the depth value designation module assigns the corresponding three-dimensional depth value of the second image object with reference to the corresponding three-dimensional depth value of the first image object.
24. The apparatus of claim 22, wherein the depth value designation module assigns the corresponding three-dimensional depth value of the second image object with reference to manually entered setting information.
25. The apparatus of claim 22, wherein the superposition processing module comprises:
a superposition priority processing module for providing, according to a plurality of three-dimensional depth values corresponding to the at least one first image object and the at least one second image object, corresponding priorities, relating to a superposition order, to the at least one first superposition object and the at least one second superposition object respectively; and
a superposition coverage processing module for superposing the at least one first superposition object and the at least one second superposition object into the resultant image according to the superposition order.
26. The apparatus of claim 14, wherein the superposition processing module further comprises:
a superposition post-processing module for performing a superposition post-process on the resultant image, comprising performing color blending, anti-aliasing, and feathering on the overlapping edges of the at least one first superposition object and the at least one second superposition object.
CN201110134115.6A 2011-05-13 2011-05-13 Image processing method and related apparatus Expired - Fee Related CN102780855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110134115.6A CN102780855B (en) 2011-05-13 2011-05-13 Image processing method and related apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110134115.6A CN102780855B (en) 2011-05-13 2011-05-13 Image processing method and related apparatus

Publications (2)

Publication Number Publication Date
CN102780855A true CN102780855A (en) 2012-11-14
CN102780855B CN102780855B (en) 2016-03-16

Family

ID=47125602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110134115.6A Expired - Fee Related CN102780855B (en) 2011-05-13 2011-05-13 Image processing method and related apparatus

Country Status (1)

Country Link
CN (1) CN102780855B (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040239670A1 (en) * 2003-05-29 2004-12-02 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106162141A (en) * 2015-03-13 2016-11-23 钰立微电子股份有限公司 Image processing apparatus and image processing method
US10148934B2 (en) 2015-03-13 2018-12-04 Eys3D Microelectronics, Co. Image process apparatus and image process method
CN106162141B (en) * 2015-03-13 2020-03-10 钰立微电子股份有限公司 Image processing apparatus and image processing method
WO2017050115A1 (en) * 2015-09-24 2017-03-30 努比亚技术有限公司 Image synthesis method
CN105430295A (en) * 2015-10-30 2016-03-23 努比亚技术有限公司 Device and method for image processing
WO2017071559A1 (en) * 2015-10-30 2017-05-04 努比亚技术有限公司 Image processing apparatus and method
CN105430295B (en) * 2015-10-30 2019-07-12 努比亚技术有限公司 Image processing apparatus and method
CN107527381A (en) * 2017-09-11 2017-12-29 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN107527381B (en) * 2017-09-11 2023-05-12 Oppo广东移动通信有限公司 Image processing method and device, electronic device and computer readable storage medium
US10475197B2 (en) 2017-10-02 2019-11-12 Wistron Corporation Image processing method, image processing device and computer readable storage medium

Also Published As

Publication number Publication date
CN102780855B (en) 2016-03-16


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20160316
Termination date: 20190513