US5134480A - Time-recursive deinterlace processing for television-type signals - Google Patents
- Publication number
- US5134480A (application number US07/576,676)
- Authority
- US
- United States
- Prior art keywords
- data
- signals
- frame
- coupled
- input
- Prior art date
- Legal status
- Expired - Lifetime
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0117—Conversion of standards involving conversion of the spatial resolution of the incoming video signal
- H04N7/012—Conversion between an interlaced and a progressive signal
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Using hierarchical techniques, e.g. scalability
- H04N19/39—Using hierarchical techniques involving multiple description coding [MDC], i.e. with separate layers being structured as independently decodable descriptions of input picture data
- H04N19/50—Using predictive coding
- H04N19/503—Using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/90—Using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/94—Vector quantisation
- H04N19/96—Tree coding, e.g. quad-tree coding
Definitions
- This invention relates to methods and systems capable of processing interlaced television fields to convert to progressively scanned format with effective increase in vertical resolution and reduction of artifacts in image sequences.
- Interlaced scanning is an efficient method of bandwidth compression for television transmission.
- Interlacing, however, results in many well known artifacts.
- IDTV: improved definition television
- NTSC: National Television Systems Committee
- Proper conversion from interlaced to progressive format reduces line flicker and improves the vertical resolution of the displayed images.
- HDTV: high definition television
- FIG. 1 shows the three-dimensional (3-D) domain of a sampled image sequence x (m,n,t), in which the missing lines are in dotted form.
- the vertical-temporal grid (for a constant value of n) is shown, with circles on the scan lines of an interlaced sequence and crosses on the missing lines.
- Interlaced scanning introduces aliasing, unless the moving image is properly prefiltered with a low-pass filter.
- the 3-D frequency responses of three possible such filters are shown in FIG. 3, as a direct consequence of the shape of the quincunx sampling grid (A, B, C, D and X) in FIG. 2.
- Different reconstruction methods are appropriate for each form of prefiltering.
- Temporal filtering is due to camera time integration and is performed independently of spatial filtering, resulting in separable prefiltering. If vertical filtering is not strong enough to fully cover the area between two successive field lines, there is some information missing between the field lines, which can be estimated using appropriate models for the image sequence source.
- the source model used for this purpose assumes that the video scene contains a number of rigid objects moving in a translational manner.
- the same concepts can be used to achieve purely spatial or temporal resolution enhancement. Nonlinear interpolation always reduces the energy of the enhancement signal, defined as the difference between the deinterlaced and the actual signals.
- Some simple temporal models assume that the image scene contains rigid moving objects, and that the components of their displacement vectors from frame to frame are not necessarily integer multiples of the spatial sampling period. This implies that the same object is sampled in many frames. Considering the overlap of all these sampling grids on a specific object, it is obvious that the "virtual" spatial sampling rate for the object is much denser than the one in any individual frame.
- A, B, C and D are the pixels above, below, behind and in front of pixel X.
- the nonlinear median filter X = Med(A, B, C) usually results in acceptable visual appearance, and is used with success in digital receivers. Even though artifacts still appear, there is some overall obvious improvement over interlaced displays.
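A minimal software sketch of this median deinterlacing (function names are illustrative, and the assignment of A and B to the lines above and below in the current field, with C taken from the co-sited line of the adjacent field, is an assumption of this sketch):

```python
def median3(a, b, c):
    """Median of three values without sorting a list."""
    return max(min(a, b), min(max(a, b), c))

def deinterlace_median(field, prev_field):
    """Fill the missing lines of an interlaced field using X = Med(A, B, C),
    where A/B are the lines above/below in the current field and C is the
    co-sited line of the adjacent (opposite-parity) field."""
    height = len(field) * 2
    frame = [None] * height
    for i, line in enumerate(field):
        frame[2 * i] = list(line)              # keep the transmitted lines
    for y in range(1, height, 2):              # estimate the missing lines
        above = frame[y - 1]
        below = frame[y + 1] if y + 1 < height else frame[y - 1]
        temporal = prev_field[y // 2]          # co-sited line, adjacent field
        frame[y] = [median3(a, b, c) for a, b, c in zip(above, below, temporal)]
    return frame
```

The median rejects the temporal sample when it disagrees strongly with the two spatial neighbors, which is what suppresses motion artifacts relative to plain temporal averaging.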
- a system for processing interlaced data signals to provide data omitted from individual fields, includes input means for coupling current and later fields of interlaced data (each such field omitting pixel data for line positions at which pixel data is included in prior and later fields), interpolation means for providing pixel data at omitted line positions in a field by interpolation of pixel data at corresponding positions in prior and later lines of that field, and delayed coupling means for providing prior frame output data.
- the system also includes block matching system means for comparing current frame data with interpolated later frame data and previous frame output data to select overall best matched data for blocks of pixels for use in frame output data.
- the processing system may also include a prediction circuit for combining interpolated current frame data with overall best matched data on a pixel by pixel basis in proportions responsive to a selectable constant and for supplying processed frame output data to an output point.
- a block matching circuit for comparing current frame data with data in a reference frame.
- the matching circuit includes input means for supplying current frame data and reference frame data, horizontal matching means for making best matched block comparisons in the horizontal direction as between current and reference frame data and developing corresponding least different value signals and motion vector signals, and vertical comparator means for comparing difference values related to vertical direction vectors to develop least difference value signals and corresponding motion vector signals.
- the matching circuit also includes quadtree decision means for determining block size to be processed based on predetermined threshold values, first multiplexer means for selecting motion vector values corresponding to the best matched block in response to horizontal and vertical motion vector signals, buffer means responsive to motion vector values from the first multiplexer means for selecting best matched block data, and second multiplexer means for selecting least block difference values in response to horizontal and vertical least difference value signals.
- a block matching system for comparing current frame data to data of prior and later reference frames.
- the matching system includes input means for supplying current frame data, interpolated next frame data, previous frame output data and second previous frame output data, and first, second and third block matching circuit means (each coupled to each receive current frame data and to respectively receive next, previous and second previous frame data inputs) for comparing current frame data and data in a reference frame to derive least difference values and best matched block data.
- the matching system also includes comparator means for generating a control signal indicative of the least value among said three difference values and multiplexer means responsive to such control signal for selecting the overall best matched block data.
- FIG. 1 shows the three-dimensional domain of a sampled image sequence.
- FIG. 2 shows a vertical-temporal grid of the image sequence with n held constant.
- FIG. 3 shows three-dimensional frequency responses for three low-pass filters.
- FIG. 4 is a block diagram of an interlaced data processing system in accordance with the invention.
- FIG. 5 is a block diagram of a linear interpolation unit useful in the FIG. 4 system.
- FIG. 6 is a block diagram of a block matching system in accordance with the invention.
- FIG. 7 is a block diagram of a block matching circuit useful in the FIG. 6 system.
- FIG. 8 is a block diagram of vertical direction comparator useful in the FIG. 7 block matching circuit.
- FIG. 9 is a block diagram of a quadtree decision unit useful in the FIG. 7 block matching circuit.
- FIG. 10 illustrates a systolic array useful in the FIG. 7 block matching circuit and FIG. 10A further illustrates a processing element used in FIG. 10.
- FIG. 11 illustrates an additional portion of the FIG. 10 array and FIG. 11A further illustrates a processing element used in FIG. 11.
- FIG. 12 is a block diagram of a prediction circuit useful in the FIG. 4 system.
- FIG. 13 illustrates a multiplier utilizing systolic arrays, which is useful in the FIG. 12 prediction circuit and FIG. 13A further illustrates a processing element used in FIG. 13.
- FIG. 14 shows a frame buffer of the type useful in the FIG. 4 system.
- FIG. 15 is a block diagram of an interlaced data processing system adapted for processing color signals in accordance with the invention.
- FIG. 16 is a block diagram of an interlaced data processing system permitting reduced data rate transmission in accordance with the invention.
- FIG. 17 is a block diagram of a downsampling circuit useful in the FIG. 16 system.
- FIG. 18 is a chart comparing error measurements for deinterlacing using the present invention with other approaches, for a sequence of images.
- processing system 10 includes an input terminal 12 for coupling successive fields of interlaced data, wherein each field omits pixel data at line positions at which pixel data is included in preceding and succeeding fields.
- each field omits alternate lines of data and the prior and next fields include picture data only on such alternate lines.
- System 10 also includes an output terminal 14 at which processed frame output data is provided, after filling in the omitted pixel data in each field interval in accordance with the invention, to provide full frames of data at the field rate.
- field refers to an incomplete frame of data, for example, the alternate fields of an NTSC television signal
- frame refers to a complete frame of data, for example, the composite of two fields of NTSC data, or a field of an NTSC television signal with missing lines of data completed by insertion of estimated values for pixel positions in such lines;
- Previous frame refers to the frame immediately preceding a field or frame in point
- second previous frame refers to the frame immediately preceding the previous frame
- next field refers to the field immediately following a field or frame in point
- system 10 includes input means, shown as terminal 12 and field delay unit 16, arranged so that when a field of data denoted as X(t+1) for reference timing purposes is supplied to terminal 12, the field X(t) will be supplied to block matching system 18. Thus, by supplying the "next" field, X(t+1), to terminal 12, the "current" field, X(t), is supplied to unit 18.
- the input means also supplies the input signal to linear interpolation means 20.
- Linear interpolation means 20 is effective to fill in an approximation of the pixel data at missing lines of each individual field by interpolation of pixel data at corresponding positions in preceding and succeeding lines of the same individual field, so as to provide full frames of alternating lines of actual input pixel data and interleaved lines of interpolated data, at the frame rate.
- section 20a of unit 20 receives the next field input data X(t+1) directly from input terminal 12 and its output is an interpolated full frame version Xi(t+1).
- section 20b receives current field input data X(t) from delay unit 16 and its output is an interpolated full frame version Xi(t).
- In FIG. 5 there is illustrated an embodiment of a suitable linear interpolation circuit.
- linear interpolations are performed between two scanning lines of one field to generate the missing intermediate line. This is done by full adder means 28 and a shift register 30 for each missing pixel.
- the input field containing only even or odd lines of a frame is input to field buffer 32.
- two pixels are selected which are at the same horizontal position on two consecutive lines (i.e., the lines before and after the missing line position) and stored separately in the two registers 36 and 38.
- the values of the two pixels are then added in means 28 and shifted to the right by one bit in shift register 30, which is equivalent to dividing the sum of the two pixel values by a factor of two.
- the output signal of unit 20a (and of unit 20b) is a linearly interpolated frame. As indicated, inputting X(t+1) to section 20a of unit 20 results in the interpolated output Xi(t+1) and, similarly, interpolating of input X(t) in section 20b results in the interpolated output Xi(t).
- linear interpolation means 20 is made up of two identical sections 20a and 20b operating independently.
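The add-and-shift interpolation performed by full adder 28 and shift register 30 can be sketched as follows (a simplified software model; the list-of-lists frame layout and the repetition of the last line at the bottom border are assumptions of the sketch):

```python
def interpolate_field(field):
    """Linearly interpolate the missing lines of a field: each missing pixel
    is the average of the pixels directly above and below, computed as an
    add followed by a one-bit right shift (i.e. integer division by two),
    mirroring the adder/shift-register pair of FIG. 5."""
    height = len(field) * 2
    frame = [None] * height
    for i, line in enumerate(field):
        frame[2 * i] = list(line)              # transmitted lines pass through
    for y in range(1, height, 2):
        above = frame[y - 1]
        below = frame[y + 1] if y + 1 < height else frame[y - 1]  # border: repeat
        frame[y] = [(a + b) >> 1 for a, b in zip(above, below)]   # add, shift right
    return frame
```

The shift implements the divide-by-two so that no real divider hardware is needed.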
- the FIG. 4 processing system also includes delayed coupling means, shown as frame delay unit 22, for providing previous frame output data and second previous frame output data.
- output frame data Y(t) which is provided to output terminal 14 is also coupled to frame delay section 22a and sequentially to section 22b.
- timing is such that as a series of output frames are coupled to terminal 14, at the time the current frame output data Y(t) appears at terminal 14, the previous frame output Y(t-1) will appear at the output of section 22a and the second previous frame output Y(t-2) will appear at the output of section 22b.
- the frame output data for these purposes is provided at the output of prediction unit 24, which will be described below.
- Field and frame delay units 16 and 22 are effective to provide total delays at their outputs equal to one-half cycle and one cycle, respectively, one cycle being equal to the reciprocal of the frame rate.
- the form of buffer units 16 and 22 will be discussed with reference to FIG. 14; in any event, appropriate structure and operation will be apparent to those skilled in the art.
- block matching system means 18 receives as inputs the Y(t-1) and Y(t-2) outputs from frame delay unit 22, the current field data X(t) from field delay unit 16 and the interpolated next frame data Xi(t+1) from interpolation unit 20, via input means shown as input terminals 40, 42, 44 and 46, respectively.
- System 18 functions to compare current frame data to interpolated next frame data, previous frame output data and second previous frame output data to select overall best matched data for blocks of pixels for the current frame. Such overall best matched data Xp(t) is then supplied to prediction unit 24, which will be discussed below.
- FIG. 6 provides additional detail on block matching system means 18.
- the inputs to unit 18 via terminals 40, 42 and 46 serve as reference frames for processing by three block matching circuit means, shown as quadtree segmented block matching subsystems 50, 52 and 54, which are substantially identical with differences to be discussed.
- Block matching circuits 50, 52 and 54 operate to compare current frame data and particular reference frame data to each generate two separate outputs: least difference and best matched block.
- the first output represents the least difference between two input frames with respect to a block of pixel data.
- the best matched block is the block in the reference frame corresponding to such least difference.
- the matching system 18 includes comparator means 56, which receives the difference output from each of units 50, 52 and 54, and multiplexer means 58, which receives each of the three best-matched block outputs from those units.
- System 18, as illustrated, also includes frame buffer 60 coupled to the output of multiplexer 58 and address counter 62, which provides address control signals to units 50, 52, 54 and 60.
- the output from buffer 60 is the current frame data Xp(t) appearing at output terminal 64.
- comparator 56 receives the three least difference inputs, it chooses the least value from among the three possibilities and generates a control signal which is coupled to multiplexer 58 to determine which one of the three best-matched block inputs is coupled to frame buffer 60 for each block of pixels.
- the best-matched block coupled to buffer 60 thus represents the overall best-matched data.
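The selection performed by comparator 56 and multiplexer 58 can be modeled in a few lines (the candidate labels in the usage example are illustrative):

```python
def select_overall_best(candidates):
    """Comparator 56 / multiplexer 58 in software: each candidate is a
    (least_difference, best_matched_block) pair from one of the three block
    matching circuits; the block with the smallest difference is selected."""
    least_difference, best_block = min(candidates, key=lambda c: c[0])
    return best_block
```

For example, `select_overall_best([(5, "prev"), (3, "next"), (9, "prev2")])` selects the block whose difference is 3.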
- the structure and operation of units 56, 58, 60 and 62 as appropriate to implement the invention will be apparent to those skilled in the art once the concepts of the present invention are understood.
- the comparator 56 may utilize a TTL7485 integrated circuit for comparing two 4-bit binary numbers, with two such circuits used to compare two 8-bit numbers. In total, four such integrated circuits can be used in order to accomplish comparison of the three input difference values.
- Each of the three block matching circuit means 50, 52 and 54 in FIG. 6 performs a 4 × 4 block matching from the current frame to a particular previous reference frame. Actually, it is a 4 × 2 matching because two lines of the current block of pixel data are missing.
- circuit 50 performs this matching between current and previous frames
- circuit 52 between current and second previous frames
- circuit 54 between current and next (future) frames.
- circuit 52 provides a motion adaptation function as will be further discussed, and as such differs from circuits 50 and 54 in that it need only perform block matching at the zero vector block.
- the motion adaptation circuit 52 will be active only when the data received by unit 72 of circuit 52, as will be further discussed, comes from the same address of both buffer units 74 and 76 of unit 72. Except for this simplification, block matching circuits 50, 52 and 54 may be of the same design as further shown in FIG. 7.
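A plain software sketch of the matching each circuit performs (the patent implements this with quadtree-segmented block sizes and systolic arrays; the exhaustive search and the function signature below are illustrative). Only the lines actually present in the current field contribute to the difference, so a 4 × 4 block is effectively matched as 4 × 2:

```python
def block_match(cur, ref, bx, by, size=4, d_max=2):
    """Find the best matched size x size block in `ref` for the block of
    `cur` whose top-left corner is (bx, by). Only the even (present) rows
    of the current block are compared, since the odd rows are missing from
    the interlaced field. Returns (least_difference, (dm, dn))."""
    best = (float("inf"), (0, 0))
    for dn in range(-d_max, d_max + 1):          # vertical displacement
        for dm in range(-d_max, d_max + 1):      # horizontal displacement
            diff = 0
            for r in range(0, size, 2):          # present rows only (4x2 match)
                for c in range(size):
                    y, x = by + r + dn, bx + c + dm
                    if not (0 <= y < len(ref) and 0 <= x < len(ref[0])):
                        diff = float("inf")      # candidate falls off the frame
                        break
                    diff += abs(cur[by + r][bx + c] - ref[y][x])
                if diff == float("inf"):
                    break
            if diff < best[0]:
                best = (diff, (dm, dn))
    return best
```

The returned pair corresponds to the "diff" and "vectors" outputs of horizontal matching unit 72 combined with the vertical comparison of unit 78.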
- circuit 50 includes input means for supplying current frame data X(t), reference frame data Y(t-1), and address control signals (from counter 62 in FIG. 6), shown as input terminals 40, 44 and 70, respectively. Also included are horizontal matching means, shown as the block matching in horizontal direction unit 72, for determining the best matched block horizontally as between current frame data coupled via buffer means shown as frame buffer 74 and reference frame data coupled via buffer means 76, which operate under the control of address signals received at terminal 70 from counter 62 in FIG. 6.
- block matching circuit 50 also includes vertical comparator means, shown as comparator in vertical direction unit 78, for comparing difference values related to vertical direction vectors to select least difference values and corresponding motion vectors.
- Quadtree decision means shown as quadtree decision unit 80, operates to determine block size to be processed based on a predetermined threshold value.
- first and second multiplexer means indicated as mux 82 and mux 84, respectively, receive vector and difference outputs from vertical comparator 78 and decision unit 80.
- Second mux 84 selects the least block difference values in response to horizontal and vertical difference values and first mux 82 selects motion vector values corresponding to the best matched block based on horizontal and vertical motion vector values, in response to control signals provided by decision unit 80.
- the output least block difference values from mux 84 are coupled to output terminal 86.
- Buffer means 76 couples to output terminal 88 best matched block data selected from the input reference frame data on the basis of the motion vector values from mux 82.
- the relevant input reference frame data is Y(t-1), whereas for units 52 and 54 it is Y(t-2) and Xi(t+1), respectively.
- horizontal matching unit 72 receives two inputs, the current block of current frame data X(t) and the reference data from the reference frame Y(t-1). It selects the block which is best matched in the horizontal direction, the corresponding least difference ("diff") and motion vectors ("vectors").
- the quadtree decision unit 80 and vertical comparator unit 78 determine whether to use 8 × 8 blocks or 4 × 4 blocks, and the best matched block and least difference are then determined.
- the vertical comparator unit 78 takes six difference values and six pairs of motion vectors which come from segmented block matching parts. Each difference value and motion vector represents a vertical direction. The difference values are compared and the least value and the corresponding vector are selected.
- the vertical comparator means 78 compares the six possible difference values and chooses the smallest one through operation of comparator 90. At the same time, it couples a select signal to the multiplexer 92 to select the corresponding vector.
- the quadtree decision means 80 determines whether a block should be divided, or not.
- the difference values of 4 × 4 blocks are first stored in memory unit 94 operating under the control of address generator unit 96, which also supplies control signals to mux 84 and mux 82 in FIG. 7. Then the difference values of the same 8 × 8 block are added in adder unit 98. The sum is compared in comparator 100 with a threshold value which is stored in a PROM threshold unit 102. If the sum is less than the reference threshold, the 8 × 8 block is used; otherwise, 4 × 4 blocks are used.
- the decision output of this unit 80 is coupled to mux 84 and mux 82 in FIG. 7, which generate the proper difference value and motion vector.
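The quadtree decision of FIG. 9 reduces to a sum and a threshold comparison; a sketch (the string return values are illustrative):

```python
def quadtree_block_size(diffs_4x4, threshold):
    """Decide between one 8x8 block and four 4x4 blocks: the four 4x4 least
    differences of an 8x8 region are summed (adder unit 98) and compared
    (comparator 100) with a stored threshold (PROM unit 102).
    Returns "8x8" if the sum is below the threshold, else "4x4"."""
    total = sum(diffs_4x4)                       # adder unit
    return "8x8" if total < threshold else "4x4" # comparator vs PROM value
```

A low sum means the large block is already well matched, so no further subdivision is needed.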
- In FIG. 10 there is shown a systolic array able to carry out a 4 × 2 motion block matching as previously referred to with reference to block matching circuits 50, 52 and 54 in FIGS. 6 and 7.
- the X at upper left represents the current input block with a 4 × 2 size.
- the Y at upper right refers to the reference data with a size equal to (4 + 2·d_max) × 4, where d_max is the maximum searching distance of the block matching.
- the two digit numbers around the array in FIG. 10 are the coordinates of the input and reference data (note that these coordinate values do not correspond to the reference numerals connected to various physical elements elsewhere in the drawings).
- the coordinates 21, 22, 23, 24 of the input block are the pixels at lines at which data is omitted in the interlaced fields and whose values are to be derived, so their input values are equal to zero.
- the circular elements as illustrated are commonly referenced as processing element 1.
- FIG. 10 also includes delay elements shown as squares, commonly referenced as delay element 4. Processing element 1 is shown in more detail in FIG. 10A.
- processing elements 2 are adders, as shown in FIG. 11A, which add the two inputs to the element.
- the output of each processing element 2 is the total difference value between the current block and one possible block of reference frame data.
- the processing elements 1 and 2 perform the function of equation (4) to be described.
- Those difference values are sent to processing elements 3, which are comparators.
- the minimum value of the difference values is chosen to provide the best match to the current block of pixel data in the horizontal direction.
- Processing element 3 can be implemented through application of the TTL7485 integrated circuit previously referred to.
- the processing element 3 performs the function of equation (5) to be described.
- the same systolic arrays can be employed in parallel to cover the vertical search range.
- the number of systolic arrays required is determined by d_max through the relationship (2·d_max + 1).
- for each candidate vertical displacement, a systolic array system as described above is required.
- the outputs of the vertical arrays are coupled to another array of the processing element 3 to find a minimum value which is the function of equation (7) to be described. After all the blocks are processed, the best results are passed to the decision and mux units in FIG. 7 for further processing.
- the makeup and operation of block matching system 18 of FIG. 4 have now been described with reference to FIGS. 6 through 11.
- the output of system 18 is the current frame data Xp(t) which is coupled to prediction means 24 in FIG. 4.
- prediction unit 24 is shown in greater detail.
- the prediction means 24 includes input means, shown as input terminals 104 and 106, for supplying interpolated current frame data Xi(t) and selected overall best matched block data Xp(t) as derived in block matching system 18 in FIG. 4.
- first multiplier means shown as multiplier unit 108, which receives the best matched block data via frame buffer 110, coupled to terminal 106.
- Multiplier unit 108 acts to multiply the best matched data by a selectable constant C, which has a value less than unity.
- second multiplier means, shown as multiplier unit 112, receives the current frame data via frame buffer 114 coupled to terminal 104 and is effective to multiply that current data by the difference between unity and constant C (i.e., by 1 − C).
- the value of constant C as selected is set into multiplier 108 to control its output level and is also provided to unit 118 which causes the difference value 1-C to be set into multiplier 112.
- the FIG. 12 prediction unit 24 further includes adder means, shown as full adder unit 120, which receives and adds the outputs of multipliers 108 and 112 and provides the resulting sum on a pixel-by-pixel basis to frame buffer 122. Also included is address counter unit 124 which provides address signals for processing control to buffers 110, 114 and 122. The output of frame buffer 122 at output terminal 126 is the final processed frame output data Y(t) which as shown in FIG. 4, is coupled to output terminal 14 of the processing system 10. The output data Y(t) from prediction unit 24 is also coupled to frame delay unit 22 to provide a succession of output frames which are delayed to provide reference frame inputs to block matching system 18 as indicated in FIG. 4.
- the prediction unit 24 functions to determine the final processed frame output data values of the missing pixels in each field to form the complete frame output data Y(t). This is accomplished by implementation of the following formula: Y(t) = C·Xp(t) + (1 − C)·Xi(t), in which:
- Xi(t) is the linearly interpolated input signal from interpolation unit 20
- Xp(t) is the best matched result from block matching system 18
- C is the selected constant, which functions as a noise reduction parameter.
- Two multipliers, such as units 108 and 112, and a full adder, such as unit 120, are needed for each pixel prediction as indicated from the above equation.
- address counter 124 is utilized to keep track of the order of the pixels. For each pixel, the data supplied to multipliers 108 and 112 is first multiplied by C or 1 − C, respectively.
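The per-pixel combination carried out by multipliers 108 and 112 and adder 120 can be sketched as follows (assuming Y(t) = C·Xp(t) + (1 − C)·Xi(t), consistent with the multiplier settings described above):

```python
def predict_frame(xi, xp, c):
    """Combine the interpolated current frame Xi(t) with the overall best
    matched data Xp(t) pixel by pixel: Y(t) = C*Xp(t) + (1 - C)*Xi(t),
    where 0 <= C < 1 acts as a noise-reduction parameter."""
    assert 0 <= c < 1
    return [[c * p + (1 - c) * i for i, p in zip(row_i, row_p)]
            for row_i, row_p in zip(xi, xp)]
```

Larger C trusts the motion-compensated data more; C near zero falls back toward plain intra-field interpolation.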
- In FIG. 13 there is illustrated one example of a multiplier, such as units 108 and 112 used in FIG. 12.
- the FIG. 13 multiplier utilizes systolic arrays. It is a 4 bit by 4 bit multiplier so that the maximum output is 8 bits.
- Each processing element 5 shown as a square in FIG. 13 is illustrated in greater detail in FIG. 13A.
- Each element 5 includes an AND gate and a full adder (FA) as shown in FIG. 13A.
- FA full adder
- As data is coupled to processing element 5, X and Y bits first undergo a logical AND operation, and are then added to the resultant of the preceding processing element.
- the requirement for an 8 bit multiplier is accomplished by an 8 × 8 systolic array using the same processing elements.
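The bit-level behavior of the FIG. 13 array (AND gates forming partial products, full adders accumulating them) can be simulated with an equivalent shift-and-add loop; this models the arithmetic only, not the systolic timing:

```python
def systolic_multiply(x, y, bits=4):
    """Bit-level shift-and-add multiply mirroring the FIG. 13 array: each
    processing element ANDs one bit of X with one bit of Y (a partial
    product) and a full adder accumulates it into the running sum."""
    assert 0 <= x < 2 ** bits and 0 <= y < 2 ** bits
    product = 0
    for i in range(bits):            # one row of the array per Y bit
        y_bit = (y >> i) & 1
        for j in range(bits):        # one PE per X bit: AND, then add
            x_bit = (x >> j) & 1
            product += (x_bit & y_bit) << (i + j)
    return product                   # at most 2*bits wide
```

With bits=4 the result fits in 8 bits, matching the 4-bit-by-4-bit multiplier described above.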
- FIG. 14 illustrates basic form of a frame or field buffer as used in different areas of the system (such as 16 or 22a or 22b in FIG. 4, for example).
- M × N pixels are typically needed in a random access memory (RAM), as shown in FIG. 14, in order to store a frame.
- the read/write (R/W) signal controls the entry of data into and recovery of data from the RAM. After data is stored in the buffer unit, it is kept there for a cycle which is equal to the reciprocal of the frame rate.
- the row and column addresses determine where data is to be stored and read, and the appropriate address selectors can typically be implemented by demultiplexer arrangements.
- In the recursion of equation (1):
- x(t) is the input field at time t;
- y(t) is the output frame at time t;
- F is a function to be defined.
- xi(t) is defined as the frame at time t, coming from linear spatial interpolation of x(t). Note that the field x(t) is included in half of the lines of the frame y(t), and of the frame xi(t) as well.
- the initial condition y(0) is chosen equal to xi(0).
- the frame y(t) in equation (1) is found by motion compensation. This can be done in various ways.
- the method used is motion estimation by quadtree-based segmented block matching of blocks of the known field x(t) on the frame y(t-1), allowing displacement vectors with half pixel accuracy.
- the initial size of blocks is 16 × 8, which corresponds to a 16 × 16 square block with half the lines missing. In this way, if a missing line happens to also be missing from the previous field, it will still be taken from the higher resolution signal y(t-1), which contains information from the previous fields.
- each block from the field x(t) can be matched to either the previous interpolated frame y(t-1) or the next linearly interpolated frame xi(t+1), depending on which of the two results in smaller error.
- Providing for look-ahead results in minimal delay, equal to the field-refresh period. This is important whenever there are sudden changes of scene, in which case there can be no proper match between x(t) and y(t-1).
- Motion adaptation is also used to maintain the sharpness and quality of the motionless part of each frame.
- Corresponding blocks in x(t) and x(t-2) are compared; if they are well matched, y(t) is generated from y(t-1) only, i.e., y(m,n,t) = y(m,n,t-1) for the pixels of that block. This modification keeps the background sharp and noiseless. Otherwise, flickering temporal patterns with a period of two frames can appear even in totally motionless areas.
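The motion-adaptation rule can be sketched as follows; the block-difference measure and the threshold value here are illustrative assumptions, not values fixed by the patent:

```python
# Sketch of motion adaptation: if a block of the current field matches the
# same block two fields back, the output block is taken from y(t-1) alone,
# which keeps still areas sharp and avoids two-frame flicker.

def block_diff(a, b):
    """Maximum absolute pixel difference between two blocks."""
    return max(abs(p - q) for p, q in zip(a, b))

def adapt_block(x_t, x_t2, y_prev, y_moving, threshold=3):
    """Return y_prev if the area is judged motionless, otherwise the
    motion-compensated result y_moving (threshold is an assumption)."""
    if block_diff(x_t, x_t2) <= threshold:
        return y_prev        # motionless: y(t) = y(t-1)
    return y_moving          # moving: use motion-compensated prediction

assert adapt_block([5, 5], [5, 6], [1, 2], [9, 9]) == [1, 2]
assert adapt_block([5, 5], [50, 60], [1, 2], [9, 9]) == [9, 9]
```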
- Some of the blocks may contain a piece of a moving object and a piece of the background, or may contain pieces of more than one moving object.
- in that case, the resulting interpolated values would be incorrect for part of the block.
- the maximum absolute deviation over all pixels of the 16×8 block corresponding to the displacement vector (d_m, d_n)
- each pixel-difference evaluation requires four parameters, resulting in a four-dimensional (m, n, d_m, d_n) problem.
- in order to map to a dependence graph easily, the four-dimensional problem is decomposed into a three-dimensional problem and a one-dimensional problem by fixing d_n first: for each fixed d_n, a partial minimum v_{d_n} over d_m is computed; after v_{d_n} is found, u is determined as the minimum over d_n. Therefore, the motion vector of the current block is given by equation (8).
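The decomposition can be illustrated in software. The sketch below is our own toy implementation (exhaustive search on tiny arrays, no boundary handling): an outer one-dimensional loop fixes d_n, the best d_m is found for each d_n, and the overall minimum yields the motion vector:

```python
# Sketch of decomposing the 4-D matching problem: fix d_n (1-D outer
# search), solve the remaining problem over (m, n, d_m), keep the best.

def match_error(block, ref, d_m, d_n):
    """Maximum absolute difference for a candidate displacement."""
    return max(
        abs(block[m][n] - ref[m + d_m][n + d_n])
        for m in range(len(block))
        for n in range(len(block[0]))
    )

def best_vector(block, ref, d_range):
    best = None
    for d_n in d_range:                    # outer 1-D search over d_n
        v_dn = min(d_range, key=lambda d_m: match_error(block, ref, d_m, d_n))
        err = match_error(block, ref, v_dn, d_n)
        if best is None or err < best[0]:
            best = (err, (v_dn, d_n))      # keep u and its (d_m, d_n)
    return best[1]

blk = [[1, 2], [3, 4]]
ref = [[9, 9, 9, 9],
       [9, 1, 2, 9],     # blk sits at offset (1, 1) inside ref
       [9, 3, 4, 9],
       [9, 9, 9, 9]]
assert best_vector(blk, ref, [0, 1]) == (1, 1)
```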
- the block size is adaptive based on values of the minimum absolute difference between the current and the reference block
- the data of the current block comes from a field instead of a frame, i.e., half of the lines of the block are missing, and provision is made to process data in this form.
- motion adaptation based on difference calculation is an important component in the algorithm and has been accommodated in the preceding description of the implementation and operation of processing systems in accordance with the invention.
- systems for deinterlacing and their operation can be enhanced through use of the signals carrying the color information.
- performance can be enhanced through use of both luminance data signals and chrominance data signals.
- d is a positive value, and the least T_err represents the best match for the block.
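A minimal numeric sketch of the combined measure; the weight d below is an arbitrary example value, not one specified by the patent:

```python
# Sketch of T_err = Y_err + d * (U_err + V_err): d weights the
# chrominance contribution, and the least T_err wins.

def total_error(y_err, u_err, v_err, d=0.5):
    return y_err + d * (u_err + v_err)

# Two candidate blocks as (Y_err, U_err, V_err) triples: the second has a
# slightly larger luminance error but much smaller chrominance errors.
candidates = [(10, 4, 4), (12, 1, 1)]
best = min(candidates, key=lambda e: total_error(*e))
assert best == (12, 1, 1)   # 12 + 0.5*2 = 13 beats 10 + 0.5*8 = 14
```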
- in FIG. 15, a parallel form of implementation of the invention for use with color signals is illustrated.
- processing system 10 of FIG. 4 is placed in parallel with two additional processing systems 10a and 10b of substantially similar form.
- a color television signal, coupled to input means shown as input terminal 130, is processed by signal separation means 132, which couples the Y, U and V components of the input signal to input terminals 12, 12a and 12b of parallel systems 10, 10a and 10b, respectively.
- least-difference signals from comparator units 56, 56a and 56b (see comparator 56 in FIG. 6) and overall best-matched block signals from multiplexer units 58, 58a and 58b (see multiplexer 58 in FIG. 6) are derived in the three processing systems 10, 10a and 10b, as previously discussed with respect to system 10.
- FIG. 15 also includes comparator means 134 which receives the Y, U and V least difference signals and provides a control signal indicative of the smallest value least difference signal.
- comparator 134 may be similar to comparator 56 in FIG. 6, as previously discussed.
- the control signal from comparator 134 is coupled to multiplexer means 136, which also receives the Y, U and V overall best matched block inputs from systems 10, 10a and 10b.
- multiplexer 136 may be similar to multiplexer 58 in FIG. 6, as previously discussed. Multiplexer 136 is effective to select the overall best matched block for color operation. While the color system shown and described with reference to FIG. 15 utilizes three parallel systems and has not been structurally optimized for color operation, it will be understood by those skilled in the art that changes may be made to avoid unnecessary duplication of elements and otherwise provide an effective color system for operation in accordance with the invention.
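The comparator/multiplexer selection of FIG. 15 reduces to an argmin over the three channels. The sketch below uses made-up numbers to show how the channel with the smallest least-difference signal determines which best-matched block is passed on:

```python
# Sketch of the FIG. 15 selection: comparator 134 finds which of the Y, U
# and V least-difference signals is smallest, and multiplexer 136 passes
# the corresponding best-matched block (toy data, illustrative names).

least_diff = {"Y": 7, "U": 3, "V": 9}                 # from units 56/56a/56b
best_block = {"Y": "blkY", "U": "blkU", "V": "blkV"}  # from units 58/58a/58b

channel = min(least_diff, key=least_diff.get)  # comparator 134
selected = best_block[channel]                 # multiplexer 136
assert (channel, selected) == ("U", "blkU")
```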
- Pyramid coding has been found to be a useful tool in many image processing applications, because it supplies multi-resolution signals (see for example, "The Laplacian Pyramid as a Compact Image Code", IEEE Transactions on Communications, Vol. COM-31, No. 4, April 1983).
- the traditional way to generate pyramids is to use linear interpolation.
- the enhancement signal generated in such approaches has high energy, which makes a high compression rate hard to achieve.
- applicants have developed a different and improved result through the use of non-linear interpolation.
- Interlacing is a form of subsampling or downsampling by a factor of two and the deinterlacing process described earlier is one form of nonlinear interpolation.
- the energy of an enhancement signal is reduced significantly.
- downsampling is performed in the temporal-vertical domain (interlacing), and no prefilter is used, for the reasons mentioned above, thus achieving critical sampling. Therefore, the number of samples in the enhancement signal of FIG. 16 is only half that of such previous approaches. Because of both critical sampling and the reduced energy of the enhancement signal (compared with linear deinterlacing), it can be coded very efficiently at a low bit rate, achieving reduced-data-rate coding.
- in FIG. 16 there is shown a system for processing data to achieve reduced data rate coding in accordance with the invention.
- the FIG. 16 system includes a deinterlacing system 10 such as shown and described with reference to FIG. 4.
- units 140, 142 and 144 are arranged with system 10 and bypass coupling 146 to provide an enhancement signal at output terminal 148 and an interlaced signal output at terminal 150, in response to an input progressive-format signal provided to input means shown as terminal 152, in the form of input frames of progressively scanned image sequence data.
- the FIG. 16 circuit also includes first and second sampling means, shown as downsampling units 140 and 144, respectively, and comparison means, shown as adder 142 which acts to subtract pixel values to determine difference values.
- the signal at terminal 152 is progressive (non-interlaced) format image data, each frame of which is downsampled by a factor of two in the vertical direction by sampling unit 140.
- the output signal at terminal 150 is thus in the form of fields of interlaced data which also represent the input to system 10.
- the output signal coupled from system 10 to unit 142 is a progressive signal again, the missing lines having been predicted and filled in as described above with reference to system 10. Because the signals coupled to comparison unit 142 from system 10 and via connection 146 from input 152 have the same dimension, variations or differences between them can be calculated. The difference calculation is carried out by full adder unit 142.
- one field (the field represented by the lines not omitted by action of unit 140) will be identical and the differences between these identical fields are all zeros. Since there is no need to retain those zeros during transmission, downsampling is performed again in sampling unit 144 to omit those zero entries from the variation or difference signal provided by unit 142. As a result, the enhancement signal provided at output terminal 148 has efficient low bit rate coding.
- the enhancement signal and interlaced signal at output terminals 148 and 150 are effective in combination to provide accurate representation of the input data with reduced data rate coding.
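The FIG. 16 signal path can be sketched end to end on a one-dimensional toy signal. The simple neighbour-averaging deinterlacer below merely stands in for system 10 (which uses the motion-compensated prediction described earlier), so the numbers are illustrative only:

```python
# End-to-end sketch of FIG. 16: downsample the progressive input (140),
# deinterlace it back (stand-in for system 10), subtract (142), and keep
# only the rows that can be non-zero (144) -- the enhancement signal.

def downsample(rows):                  # units 140 / 144: keep even rows
    return rows[0::2]

def simple_deinterlace(field):         # crude stand-in for system 10
    frame = []
    for i, row in enumerate(field):
        frame.append(row)
        nxt = field[i + 1] if i + 1 < len(field) else row
        frame.append((row + nxt) / 2)  # predicted missing line
    return frame

progressive = [10.0, 12.0, 30.0, 31.0]          # input at terminal 152
field = downsample(progressive)                 # interlaced output (150)
rebuilt = simple_deinterlace(field)             # system 10 output
diff = [p - r for p, r in zip(progressive, rebuilt)]  # adder 142
enhancement = diff[1::2]                        # unit 144 drops the zeros
assert diff[0::2] == [0.0, 0.0]   # retained lines match exactly
```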
- Unit 142 is a full adder with a two's-complement input and is effective to perform the desired subtraction.
- Units 140 and 144, which may be identical, are shown in greater detail in FIG. 17.
- an input frame is first stored in a frame buffer 154.
- the address counter 156 generates the line addresses of the frame.
- the count provided by counter 156 is increased by two each time, so every other line is skipped.
- the results are stored in a field buffer 158 which provides the field signal output.
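The FIG. 17 behaviour amounts to reading every other line of the frame buffer into the field buffer; a minimal sketch:

```python
# Sketch of FIG. 17: an address counter stepping by two reads every other
# line of frame buffer 154 into field buffer 158.

frame_buffer = ["line0", "line1", "line2", "line3", "line4", "line5"]
field_buffer = [frame_buffer[addr]
                for addr in range(0, len(frame_buffer), 2)]  # counter 156
assert field_buffer == ["line0", "line2", "line4"]
```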
- Adaptive arithmetic coding is one method that is suitable for coding low-energy signals, like the enhancement signal at terminal 148 in FIG. 16.
- Adaptive arithmetic coding separates a coding problem into two parts: first, finding a statistical model for the entity to be modeled; and second, performing the coding based on the model with optimum performance.
- the symbols in the enhancement signal are coded sequentially based on an estimate of probability which is calculated by using a particular template.
- the pixel values of the enhancement signal are linearly quantized with adjustable quantization steps. The bit rate is controlled by choosing the size of the zero zone and the quantization steps; the bit rate is very sensitive to changes in the zero-zone size.
- the pixels around the current pixel X determine the state of pixel X. Based on previous statistics and the state, the probability and entropy of pixel X are evaluated, and the minimum cost (in bits) for coding X is equal to -log2(prob), where prob is the probability of X under the state.
- the total cost of the whole image is the bit rate that can be achieved. According to simulation testing, when the average energy of the enhancement signal is around 20 to 30, it can be coded with 0.3 bits/pixel using arithmetic coding.
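The cost computation can be sketched directly; the probabilities below are invented for illustration and do not come from any real context model:

```python
# Sketch of the bit-cost computation: under a context ("state") model, the
# cost of coding pixel X is -log2(prob), and the whole-image bit rate is
# the mean cost per pixel.

import math

def bits(prob):
    return -math.log2(prob)

# Probabilities assigned by a hypothetical context model to six pixels of
# a low-energy enhancement signal (mostly near-certain symbols):
probs = [0.9, 0.9, 0.8, 0.5, 0.95, 0.9]
total_bits = sum(bits(p) for p in probs)
rate = total_bits / len(probs)   # bits per pixel
assert bits(0.5) == 1.0          # a 50/50 symbol costs exactly one bit
assert rate < 1.0                # skewed statistics code below 1 bpp
```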
- sequences of progressive video images were interlaced by dropping half the lines.
- the outputs were compared with the original sequence in terms of their MSEs, evaluated by comparing the interpolated values of all the pixels in the missing lines with their known actual values. This sequence was used for demonstration purposes and for the generation of tables with error measurements.
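The evaluation described above, restricted to the missing lines, can be sketched as follows (hypothetical helper name, toy data):

```python
# Sketch of the MSE evaluation: the error is computed only over the
# pixels of the missing (interpolated) lines against their known
# original values.

def mse_missing_lines(original, interpolated, parity=1):
    """original, interpolated: frames as lists of rows; only rows of the
    given parity (the lines that were dropped) enter the error."""
    errs = [
        (o - p) ** 2
        for r in range(parity, len(original), 2)
        for o, p in zip(original[r], interpolated[r])
    ]
    return sum(errs) / len(errs)

orig = [[1, 2], [3, 4], [5, 6], [7, 8]]
interp = [[1, 2], [3, 6], [5, 6], [7, 8]]   # one error of 2 on row 1
assert mse_missing_lines(orig, interp) == 1.0
```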
- FIG. 18 shows typical MSE curves from a number of successive frames belonging to that sequence. Shown are the MSEs obtained when the interpolated value for each pixel is computed by each of the methods compared (see FIG. 2).
- the MSE is reduced as the index of the algorithm used increases.
- the algorithm as described has reasonably low complexity for real-time VLSI implementation, since it involves simple, straightforward operations and block-matching motion estimation. Furthermore, as a result of the use of time-recursive prediction, the frame memory requirements are not severe, because one past frame conveys information from all previously displayed history. Applications include Improved Definition TV receivers and interpolation for multiresolution, multiple-channel video coding.
Abstract
Description
Y(t) = C*X_p(t) + (1-C)*X_i(t)
y(t) = F[y(t-1), x(t)] (1)
y(t) = F[y(t-1), x(t+1)] (2)
y(t) = c*x(t) + (1-c)*F[y(t-1), x(t), x(t+1)] (3)
max {|x(m,n,t) - y(m-d_m, n-d_n, t-1)|}
max {|x(m,n,t) - x(m-d_m, n-d_n, t+1)|}
u = min_{(d_m,d_n)} S(d_m, d_n)
v = (d_m, d_n)|_u
v = (v_{d_n}, d_n)|_u (8)
T_err = Y_err + d*(U_err + V_err)
Claims (39)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US07/576,676 US5134480A (en) | 1990-08-31 | 1990-08-31 | Time-recursive deinterlace processing for television-type signals |
Publications (1)
Publication Number | Publication Date |
---|---|
US5134480A true US5134480A (en) | 1992-07-28 |
Family
ID=24305486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US07/576,676 Expired - Lifetime US5134480A (en) | 1990-08-31 | 1990-08-31 | Time-recursive deinterlace processing for television-type signals |
Country Status (1)
Country | Link |
---|---|
US (1) | US5134480A (en) |
Cited By (69)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5193004A (en) * | 1990-12-03 | 1993-03-09 | The Trustees Of Columbia University In The City Of New York | Systems and methods for coding even fields of interlaced video sequences |
WO1994004000A1 (en) * | 1992-07-27 | 1994-02-17 | The Trustees Of Columbia University In The City Of New York | Digitally assisted motion compensated deinterlacing for enhanced definition television |
US5327240A (en) * | 1991-12-24 | 1994-07-05 | Texas Instruments Incorporated | Methods, systems and apparatus for providing improved definition video |
EP0613293A2 (en) * | 1993-02-22 | 1994-08-31 | Industrial Technology Research Institute | Multiple module block matching architecture |
US5384599A (en) * | 1992-02-21 | 1995-01-24 | General Electric Company | Television image format conversion system including noise reduction apparatus |
USRE35093E (en) * | 1990-12-03 | 1995-11-21 | The Trustees Of Columbia University In The City Of New York | Systems and methods for coding even fields of interlaced video sequences |
EP0697788A2 (en) | 1994-08-19 | 1996-02-21 | Eastman Kodak Company | Adaptive, global-motion compensated deinterlacing of sequential video fields with post processing |
US5512956A (en) * | 1994-02-04 | 1996-04-30 | At&T Corp. | Adaptive spatial-temporal postprocessing for low bit-rate coded image sequences |
US5546130A (en) * | 1993-10-11 | 1996-08-13 | Thomson Consumer Electronics S.A. | Method and apparatus for forming a video signal using motion estimation and signal paths with different interpolation processing |
EP0739129A2 (en) * | 1995-04-21 | 1996-10-23 | Eastman Kodak Company | A system and method for creating high-quality stills from interlaced video images |
US5598226A (en) * | 1993-08-04 | 1997-01-28 | Avt Communications Ltd. | Reparing corrupted data in a frame of an image sequence |
US5608656A (en) * | 1993-08-09 | 1997-03-04 | C-Cube Microsystems, Inc. | Motion vector encoding circuit and method thereof |
US5633956A (en) * | 1992-02-26 | 1997-05-27 | British Broadcasting Corporation | Video image processing |
WO1997039422A1 (en) * | 1996-04-17 | 1997-10-23 | Sarnoff Corporation | Pipelined pyramid processor for image processing systems |
US5703966A (en) * | 1995-06-27 | 1997-12-30 | Intel Corporation | Block selection using motion estimation error |
US5892546A (en) * | 1991-06-25 | 1999-04-06 | Canon Kabushiki Kaisha | Encoding apparatus and method for encoding a quantized difference between an input signal and a prediction value |
US5910909A (en) * | 1995-08-28 | 1999-06-08 | C-Cube Microsystems, Inc. | Non-linear digital filters for interlaced video signals and method thereof |
US6014182A (en) * | 1997-10-10 | 2000-01-11 | Faroudja Laboratories, Inc. | Film source video detection |
US6023294A (en) * | 1993-07-20 | 2000-02-08 | Thomson Multimedia S.A. | Bit budget estimation method and device for variable word length encoders |
US6104755A (en) * | 1996-09-13 | 2000-08-15 | Texas Instruments Incorporated | Motion detection using field-difference measurements |
US6108041A (en) * | 1997-10-10 | 2000-08-22 | Faroudja Laboratories, Inc. | High-definition television signal processing for transmitting and receiving a television signal in a manner compatible with the present system |
US6166773A (en) * | 1995-11-08 | 2000-12-26 | Genesis Microchip Inc. | Method and apparatus for de-interlacing video fields to progressive scan video frames |
US6188437B1 (en) | 1998-12-23 | 2001-02-13 | Ati International Srl | Deinterlacing technique |
US6219103B1 (en) * | 1998-02-25 | 2001-04-17 | Victor Company Of Japan, Ltd. | Motion-compensated predictive coding with video format conversion |
US6229925B1 (en) * | 1997-05-27 | 2001-05-08 | Thomas Broadcast Systems | Pre-processing device for MPEG 2 coding |
US20020097795A1 (en) * | 2000-11-13 | 2002-07-25 | Jingsong Xia | Equalizer for time domain signal processing |
US6452972B1 (en) * | 1997-09-12 | 2002-09-17 | Texas Instruments Incorporated | Motion detection using field-difference measurements |
US6456329B1 (en) | 1999-04-19 | 2002-09-24 | Sarnoff Corporation | De-interlacing of video signals |
US6456328B1 (en) | 1996-12-18 | 2002-09-24 | Lucent Technologies Inc. | Object-oriented adaptive prefilter for low bit-rate video systems |
US20020171759A1 (en) * | 2001-02-08 | 2002-11-21 | Handjojo Benitius M. | Adaptive interlace-to-progressive scan conversion algorithm |
US20020191689A1 (en) * | 2001-06-19 | 2002-12-19 | Jingsong Xia | Combined trellis decoder and decision feedback equalizer |
US20020191716A1 (en) * | 2001-06-07 | 2002-12-19 | Jingsong Xia | Error generation for adaptive equalizer |
US6567564B1 (en) * | 1996-04-17 | 2003-05-20 | Sarnoff Corporation | Pipelined pyramid processor for image processing systems |
US20030122961A1 (en) * | 2001-12-28 | 2003-07-03 | Motorola, Inc. | Method for de-interlacing video information |
US20030160895A1 (en) * | 2002-02-25 | 2003-08-28 | Yiwei Wang | Adaptive median filters for de-interlacing |
US20030206053A1 (en) * | 2002-04-04 | 2003-11-06 | Jingsong Xia | Carrier recovery for DTV receivers |
US20030214976A1 (en) * | 2002-04-05 | 2003-11-20 | Shidong Chen | Synchronization symbol re-insertion for a decision feedback equalizer combined with a trellis decoder |
US20030214350A1 (en) * | 2002-04-05 | 2003-11-20 | Citta Richard W. | Data-directed frequency acquisition loop |
US20030215044A1 (en) * | 2002-04-05 | 2003-11-20 | Citta Richard W. | Data-directed frequency-and-phase lock loop |
US20030235259A1 (en) * | 2002-04-04 | 2003-12-25 | Jingsong Xia | System and method for symbol clock recovery |
US6680752B1 (en) | 2000-03-31 | 2004-01-20 | Ati International Srl | Method and apparatus for deinterlacing video |
US20040013191A1 (en) * | 2002-04-05 | 2004-01-22 | Shidong Chen | Transposed structure for a decision feedback equalizer combined with a trellis decoder |
US6714593B1 (en) * | 1997-10-21 | 2004-03-30 | Robert Bosch Gmbh | Motion compensating prediction of moving image sequences |
US20040091039A1 (en) * | 2001-06-06 | 2004-05-13 | Jingsong Xia | Adaptive equalizer having a variable step size influenced by output from a trellis decoder |
US20050074082A1 (en) * | 2002-04-05 | 2005-04-07 | Citta Richard W. | Data-directed frequency-and-phase lock loop for decoding an offset-QAM modulated signal having a pilot |
US20050206785A1 (en) * | 2000-04-20 | 2005-09-22 | Swan Philip L | Method for deinterlacing interlaced video by a graphics processor |
US6950143B2 (en) | 2000-12-11 | 2005-09-27 | Koninklijke Philips Electronics N.V. | Motion compensated de-interlacing in video signal processing |
US20050231635A1 (en) * | 2004-04-16 | 2005-10-20 | Lin Ken K | Automated inverse telecine process |
US20060222267A1 (en) * | 2005-04-01 | 2006-10-05 | Po-Wei Chao | Method and apparatus for pixel interpolation |
US20060227242A1 (en) * | 2005-04-12 | 2006-10-12 | Po-Wei Chao | Method and apparatus of deinterlacing |
US20060228022A1 (en) * | 2005-04-12 | 2006-10-12 | Po-Wei Chao | Method and apparatus of false color suppression |
US20070104382A1 (en) * | 2003-11-24 | 2007-05-10 | Koninklijke Philips Electronics N.V. | Detection of local visual space-time details in a video signal |
US20080049977A1 (en) * | 2006-08-24 | 2008-02-28 | Po-Wei Chao | Method for edge detection, method for motion detection, method for pixel interpolation utilizing up-sampling, and apparatuses thereof |
WO2008038152A2 (en) * | 2006-09-29 | 2008-04-03 | Crystal Signal Inc. | Digital scaling |
US20080159391A1 (en) * | 2007-01-03 | 2008-07-03 | International Business Machines Corporation | Method and apparatus of temporal filtering for side information interpolation and extrapolation in wyner-ziv video compression systems |
US20080231616A1 (en) * | 2007-03-20 | 2008-09-25 | Seong Gyun Kim | Liquid crystal display and method for driving the same |
US20080279479A1 (en) * | 2007-05-07 | 2008-11-13 | Mstar Semiconductor, Inc | Pixel interpolation apparatus and method thereof |
US20090002559A1 (en) * | 2007-06-29 | 2009-01-01 | Eunice Poon | Phase Shift Insertion Method For Reducing Motion Artifacts On Hold-Type Displays |
US20090059065A1 (en) * | 2007-08-31 | 2009-03-05 | Kabushiki Kaisha Toshiba | Interpolative frame generating apparatus and method |
US20090087120A1 (en) * | 2007-09-28 | 2009-04-02 | Ati Technologies Ulc | Apparatus and method for generating a detail-enhanced upscaled image |
US20090147133A1 (en) * | 2007-12-10 | 2009-06-11 | Ati Technologies Ulc | Method and apparatus for high quality video motion adaptive edge-directional deinterlacing |
US20090167778A1 (en) * | 2007-12-28 | 2009-07-02 | Ati Technologies Ulc | Apparatus and method for single-pass, gradient-based motion compensated image rate conversion |
US20100091862A1 (en) * | 2008-10-14 | 2010-04-15 | Sy-Yen Kuo | High-Performance Block-Matching VLSI Architecture With Low Memory Bandwidth For Power-Efficient Multimedia Devices |
US20110022418A1 (en) * | 2007-12-28 | 2011-01-27 | Haiyan He | Arrangement And Approach For Motion-Based Image Data Processing |
US8718448B2 (en) | 2011-05-04 | 2014-05-06 | Apple Inc. | Video pictures pattern detection |
US8964117B2 (en) | 2007-09-28 | 2015-02-24 | Ati Technologies Ulc | Single-pass motion adaptive deinterlacer and method therefore |
DE102017200015A1 (en) * | 2017-01-02 | 2018-07-05 | Siemens Aktiengesellschaft | Determine at least one subsequent record for a real-time application |
US10264212B1 (en) | 2018-06-27 | 2019-04-16 | The United States Of America As Represented By Secretary Of The Navy | Low-complexity deinterlacing with motion detection and overlay compensation |
US11240465B2 (en) | 2020-02-21 | 2022-02-01 | Alibaba Group Holding Limited | System and method to use decoder information in video super resolution |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4901145A (en) * | 1987-06-09 | 1990-02-13 | Sony Corporation | Motion vector estimation in television images |
US4937667A (en) * | 1987-11-09 | 1990-06-26 | Etat Francais Represente Par Le Ministre Delegue Des Postes Et Telecommunications (Centre Nationale D'etudes Des Telecommunications) | Method and apparatus for processing picture signals having interlaced field scanning |
US4982285A (en) * | 1989-04-27 | 1991-01-01 | Victor Company Of Japan, Ltd. | Apparatus for adaptive inter-frame predictive encoding of video signal |
US4985768A (en) * | 1989-01-20 | 1991-01-15 | Victor Company Of Japan, Ltd. | Inter-frame predictive encoding system with encoded and transmitted prediction error |
US5010401A (en) * | 1988-08-11 | 1991-04-23 | Mitsubishi Denki Kabushiki Kaisha | Picture coding and decoding apparatus using vector quantization |
Non-Patent Citations (9)
Cited By (124)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
USRE35093E (en) * | 1990-12-03 | 1995-11-21 | The Trustees Of Columbia University In The City Of New York | Systems and methods for coding even fields of interlaced video sequences |
US5193004A (en) * | 1990-12-03 | 1993-03-09 | The Trustees Of Columbia University In The City Of New York | Systems and methods for coding even fields of interlaced video sequences |
US5892546A (en) * | 1991-06-25 | 1999-04-06 | Canon Kabushiki Kaisha | Encoding apparatus and method for encoding a quantized difference between an input signal and a prediction value |
US5900910A (en) * | 1991-06-25 | 1999-05-04 | Canon Kabushiki Kaisha | Block matching method and apparatus which can use luminance/chrominance data |
US5327240A (en) * | 1991-12-24 | 1994-07-05 | Texas Instruments Incorporated | Methods, systems and apparatus for providing improved definition video |
US5384599A (en) * | 1992-02-21 | 1995-01-24 | General Electric Company | Television image format conversion system including noise reduction apparatus |
US5633956A (en) * | 1992-02-26 | 1997-05-27 | British Broadcasting Corporation | Video image processing |
US5305104A (en) * | 1992-07-27 | 1994-04-19 | The Trustees Of Columbia University In The City Of New York | Digitally assisted motion compensated deinterlacing for enhanced definition television |
WO1994004000A1 (en) * | 1992-07-27 | 1994-02-17 | The Trustees Of Columbia University In The City Of New York | Digitally assisted motion compensated deinterlacing for enhanced definition television |
EP0613293A3 (en) * | 1993-02-22 | 1995-01-18 | Ind Tech Res Inst | Multiple module block matching architecture. |
EP0613293A2 (en) * | 1993-02-22 | 1994-08-31 | Industrial Technology Research Institute | Multiple module block matching architecture |
US6023294A (en) * | 1993-07-20 | 2000-02-08 | Thomson Multimedia S.A. | Bit budget estimation method and device for variable word length encoders |
US5598226A (en) * | 1993-08-04 | 1997-01-28 | Avt Communications Ltd. | Reparing corrupted data in a frame of an image sequence |
US6071004A (en) * | 1993-08-09 | 2000-06-06 | C-Cube Microsystems, Inc. | Non-linear digital filters for interlaced video signals and method thereof |
US5608656A (en) * | 1993-08-09 | 1997-03-04 | C-Cube Microsystems, Inc. | Motion vector encoding circuit and method thereof |
US6122442A (en) * | 1993-08-09 | 2000-09-19 | C-Cube Microsystems, Inc. | Structure and method for motion estimation of a digital image by matching derived scores |
US5740340A (en) * | 1993-08-09 | 1998-04-14 | C-Cube Microsystems, Inc. | 2-dimensional memory allowing access both as rows of data words and columns of data words |
US5546130A (en) * | 1993-10-11 | 1996-08-13 | Thomson Consumer Electronics S.A. | Method and apparatus for forming a video signal using motion estimation and signal paths with different interpolation processing |
US5512956A (en) * | 1994-02-04 | 1996-04-30 | At&T Corp. | Adaptive spatial-temporal postprocessing for low bit-rate coded image sequences |
US5682205A (en) * | 1994-08-19 | 1997-10-28 | Eastman Kodak Company | Adaptive, global-motion compensated deinterlacing of sequential video fields with post processing |
EP0697788A2 (en) | 1994-08-19 | 1996-02-21 | Eastman Kodak Company | Adaptive, global-motion compensated deinterlacing of sequential video fields with post processing |
EP0739129A2 (en) * | 1995-04-21 | 1996-10-23 | Eastman Kodak Company | A system and method for creating high-quality stills from interlaced video images |
US5579054A (en) * | 1995-04-21 | 1996-11-26 | Eastman Kodak Company | System and method for creating high-quality stills from interlaced video |
EP0739129A3 (en) * | 1995-04-21 | 1998-05-13 | Eastman Kodak Company | A system and method for creating high-quality stills from interlaced video images |
US5703966A (en) * | 1995-06-27 | 1997-12-30 | Intel Corporation | Block selection using motion estimation error |
US5910909A (en) * | 1995-08-28 | 1999-06-08 | C-Cube Microsystems, Inc. | Non-linear digital filters for interlaced video signals and method thereof |
US6166773A (en) * | 1995-11-08 | 2000-12-26 | Genesis Microchip Inc. | Method and apparatus for de-interlacing video fields to progressive scan video frames |
WO1997039422A1 (en) * | 1996-04-17 | 1997-10-23 | Sarnoff Corporation | Pipelined pyramid processor for image processing systems |
US5963675A (en) * | 1996-04-17 | 1999-10-05 | Sarnoff Corporation | Pipelined pyramid processor for image processing systems |
US6567564B1 (en) * | 1996-04-17 | 2003-05-20 | Sarnoff Corporation | Pipelined pyramid processor for image processing systems |
US6104755A (en) * | 1996-09-13 | 2000-08-15 | Texas Instruments Incorporated | Motion detection using field-difference measurements |
US6456328B1 (en) | 1996-12-18 | 2002-09-24 | Lucent Technologies Inc. | Object-oriented adaptive prefilter for low bit-rate video systems |
US6647150B2 (en) | 1997-04-15 | 2003-11-11 | Sarnoff Corporation | Parallel pipeline processing system |
US6229925B1 (en) * | 1997-05-27 | 2001-05-08 | Thomas Broadcast Systems | Pre-processing device for MPEG 2 coding |
US6452972B1 (en) * | 1997-09-12 | 2002-09-17 | Texas Instruments Incorporated | Motion detection using field-difference measurements |
US6580463B2 (en) | 1997-10-10 | 2003-06-17 | Faroudja Laboratories, Inc. | Film source video detection |
US6108041A (en) * | 1997-10-10 | 2000-08-22 | Faroudja Laboratories, Inc. | High-definition television signal processing for transmitting and receiving a television signal in a manner compatible with the present system |
US20040008777A1 (en) * | 1997-10-10 | 2004-01-15 | Swartz Peter D. | Film source video detection |
US6859237B2 (en) | 1997-10-10 | 2005-02-22 | Genesis Microchip Inc. | Film source video detection |
US6201577B1 (en) | 1997-10-10 | 2001-03-13 | Faroudja Laboratories, Inc. | Film source video detection |
US8120710B2 (en) | 1997-10-10 | 2012-02-21 | Tamiras Per Pte. Ltd., Llc | Interlaced video field motion detection |
US20090161021A1 (en) * | 1997-10-10 | 2009-06-25 | Genesis Microchip Inc. | Interlaced video field motion detection |
US7522221B2 (en) | 1997-10-10 | 2009-04-21 | Genesis Microchip Inc. | Interlaced video field motion detection |
US20050078215A1 (en) * | 1997-10-10 | 2005-04-14 | Swartz Peter D. | Interlaced video field motion detection |
US6014182A (en) * | 1997-10-10 | 2000-01-11 | Faroudja Laboratories, Inc. | Film source video detection |
US6714593B1 (en) * | 1997-10-21 | 2004-03-30 | Robert Bosch Gmbh | Motion compensating prediction of moving image sequences |
US6219103B1 (en) * | 1998-02-25 | 2001-04-17 | Victor Company Of Japan, Ltd. | Motion-compensated predictive coding with video format conversion |
US6188437B1 (en) | 1998-12-23 | 2001-02-13 | Ati International Srl | Deinterlacing technique |
US6456329B1 (en) | 1999-04-19 | 2002-09-24 | Sarnoff Corporation | De-interlacing of video signals |
US6680752B1 (en) | 2000-03-31 | 2004-01-20 | Ati International Srl | Method and apparatus for deinterlacing video |
US7271841B2 (en) | 2000-04-20 | 2007-09-18 | Atl International Srl | Method for deinterlacing interlaced video by a graphics processor |
US6970206B1 (en) | 2000-04-20 | 2005-11-29 | Ati International Srl | Method for deinterlacing interlaced video by a graphics processor |
US20050206785A1 (en) * | 2000-04-20 | 2005-09-22 | Swan Philip L | Method for deinterlacing interlaced video by a graphics processor |
US7072392B2 (en) | 2000-11-13 | 2006-07-04 | Micronas Semiconductors, Inc. | Equalizer for time domain signal processing |
US20020097795A1 (en) * | 2000-11-13 | 2002-07-25 | Jingsong Xia | Equalizer for time domain signal processing |
US6950143B2 (en) | 2000-12-11 | 2005-09-27 | Koninklijke Philips Electronics N.V. | Motion compensated de-interlacing in video signal processing |
US6940557B2 (en) * | 2001-02-08 | 2005-09-06 | Micronas Semiconductors, Inc. | Adaptive interlace-to-progressive scan conversion algorithm |
US20060146187A1 (en) * | 2001-02-08 | 2006-07-06 | Handjojo Benitius M | Adaptive interlace-to-progressive scan conversion algorithm |
US20020171759A1 (en) * | 2001-02-08 | 2002-11-21 | Handjojo Benitius M. | Adaptive interlace-to-progressive scan conversion algorithm |
US20040091039A1 (en) * | 2001-06-06 | 2004-05-13 | Jingsong Xia | Adaptive equalizer having a variable step size influenced by output from a trellis decoder |
US7130344B2 (en) | 2001-06-06 | 2006-10-31 | Micronas Semiconductors, Inc. | Adaptive equalizer having a variable step size influenced by output from a trellis decoder |
US20020191716A1 (en) * | 2001-06-07 | 2002-12-19 | Jingsong Xia | Error generation for adaptive equalizer |
US7190744B2 (en) | 2001-06-07 | 2007-03-13 | Micronas Semiconductors, Inc. | Error generation for adaptive equalizer |
US7418034B2 (en) | 2001-06-19 | 2008-08-26 | Micronas Semiconductors, Inc. | Combined trellis decoder and decision feedback equalizer |
US20020191689A1 (en) * | 2001-06-19 | 2002-12-19 | Jingsong Xia | Combined trellis decoder and decision feedback equalizer |
US20030122961A1 (en) * | 2001-12-28 | 2003-07-03 | Motorola, Inc. | Method for de-interlacing video information |
US7129988B2 (en) | 2002-02-25 | 2006-10-31 | Chrontel, Inc. | Adaptive median filters for de-interlacing |
US20030160895A1 (en) * | 2002-02-25 | 2003-08-28 | Yiwei Wang | Adaptive median filters for de-interlacing |
US20030206053A1 (en) * | 2002-04-04 | 2003-11-06 | Jingsong Xia | Carrier recovery for DTV receivers |
US20030235259A1 (en) * | 2002-04-04 | 2003-12-25 | Jingsong Xia | System and method for symbol clock recovery |
US6995617B2 (en) | 2002-04-05 | 2006-02-07 | Micronas Semiconductors, Inc. | Data-directed frequency-and-phase lock loop |
US20030215044A1 (en) * | 2002-04-05 | 2003-11-20 | Citta Richard W. | Data-directed frequency-and-phase lock loop |
US20050074082A1 (en) * | 2002-04-05 | 2005-04-07 | Citta Richard W. | Data-directed frequency-and-phase lock loop for decoding an offset-QAM modulated signal having a pilot |
US20040013191A1 (en) * | 2002-04-05 | 2004-01-22 | Shidong Chen | Transposed structure for a decision feedback equalizer combined with a trellis decoder |
US20060159214A1 (en) * | 2002-04-05 | 2006-07-20 | Citta Richard W | Data-directed frequency-and-phase lock loop |
US20030214350A1 (en) * | 2002-04-05 | 2003-11-20 | Citta Richard W. | Data-directed frequency acquisition loop |
US6980059B2 (en) | 2002-04-05 | 2005-12-27 | Micronas Semiconductors, Inc. | Data directed frequency acquisition loop that synchronizes to a received signal by using the redundancy of the data in the frequency domain |
US7504890B2 (en) | 2002-04-05 | 2009-03-17 | Micronas Semiconductors, Inc. | Data-directed frequency-and-phase lock loop |
US20030214976A1 (en) * | 2002-04-05 | 2003-11-20 | Shidong Chen | Synchronization symbol re-insertion for a decision feedback equalizer combined with a trellis decoder |
US7272203B2 (en) | 2002-04-05 | 2007-09-18 | Micronas Semiconductors, Inc. | Data-directed frequency-and-phase lock loop for decoding an offset-QAM modulated signal having a pilot |
US7321642B2 (en) | 2002-04-05 | 2008-01-22 | Micronas Semiconductors, Inc. | Synchronization symbol re-insertion for a decision feedback equalizer combined with a trellis decoder |
US7376181B2 (en) | 2002-04-05 | 2008-05-20 | Micronas Semiconductors, Inc. | Transposed structure for a decision feedback equalizer combined with a trellis decoder |
US20070104382A1 (en) * | 2003-11-24 | 2007-05-10 | Koninklijke Philips Electronics N.V. | Detection of local visual space-time details in a video signal |
US20050231635A1 (en) * | 2004-04-16 | 2005-10-20 | Lin Ken K | Automated inverse telecine process |
US20060222267A1 (en) * | 2005-04-01 | 2006-10-05 | Po-Wei Chao | Method and apparatus for pixel interpolation |
US8059920B2 (en) | 2005-04-01 | 2011-11-15 | Realtek Semiconductor Corp. | Method and apparatus for pixel interpolation |
US20060227242A1 (en) * | 2005-04-12 | 2006-10-12 | Po-Wei Chao | Method and apparatus of deinterlacing |
US7822271B2 (en) | 2005-04-12 | 2010-10-26 | Realtek Semiconductor Corp. | Method and apparatus of false color suppression |
US20100074522A1 (en) * | 2005-04-12 | 2010-03-25 | Po-Wei Chao | Method and Apparatus of False Color Suppression |
US7978265B2 (en) | 2005-04-12 | 2011-07-12 | Realtek Semiconductor Corp. | Method and apparatus of deinterlacing |
US20060228022A1 (en) * | 2005-04-12 | 2006-10-12 | Po-Wei Chao | Method and apparatus of false color suppression |
US7634132B2 (en) | 2005-04-12 | 2009-12-15 | Realtek Semiconductor Corp. | Method and apparatus of false color suppression |
US9495728B2 (en) | 2006-08-24 | 2016-11-15 | Realtek Semiconductor Corp. | Method for edge detection, method for motion detection, method for pixel interpolation utilizing up-sampling, and apparatuses thereof |
US20080049977A1 (en) * | 2006-08-24 | 2008-02-28 | Po-Wei Chao | Method for edge detection, method for motion detection, method for pixel interpolation utilizing up-sampling, and apparatuses thereof |
WO2008038152A3 (en) * | 2006-09-29 | 2011-03-03 | Crystal Signal Inc. | Digital scaling |
WO2008038152A2 (en) * | 2006-09-29 | 2008-04-03 | Crystal Signal Inc. | Digital scaling |
US8374234B2 (en) | 2006-09-29 | 2013-02-12 | Francis S. J. Munoz | Digital scaling |
US20080080614A1 (en) * | 2006-09-29 | 2008-04-03 | Munoz Francis S J | Digital scaling |
US8494053B2 (en) * | 2007-01-03 | 2013-07-23 | International Business Machines Corporation | Method and apparatus of temporal filtering for side information interpolation and extrapolation in Wyner-Ziv video compression systems |
US20080159391A1 (en) * | 2007-01-03 | 2008-07-03 | International Business Machines Corporation | Method and apparatus of temporal filtering for side information interpolation and extrapolation in Wyner-Ziv video compression systems |
KR101329075B1 (en) * | 2007-03-20 | 2013-11-12 | 엘지디스플레이 주식회사 | LCD and drive method thereof |
US8669930B2 (en) * | 2007-03-20 | 2014-03-11 | Lg Display Co., Ltd. | Liquid crystal display and method for driving the same |
US20080231616A1 (en) * | 2007-03-20 | 2008-09-25 | Seong Gyun Kim | Liquid crystal display and method for driving the same |
US8175416B2 (en) * | 2007-05-07 | 2012-05-08 | Mstar Semiconductor, Inc. | Pixel interpolation apparatus and method thereof |
US20080279479A1 (en) * | 2007-05-07 | 2008-11-13 | Mstar Semiconductor, Inc. | Pixel interpolation apparatus and method thereof |
US20090002559A1 (en) * | 2007-06-29 | 2009-01-01 | Eunice Poon | Phase Shift Insertion Method For Reducing Motion Artifacts On Hold-Type Displays |
US8098333B2 (en) | 2007-06-29 | 2012-01-17 | Seiko Epson Corporation | Phase shift insertion method for reducing motion artifacts on hold-type displays |
US20090059065A1 (en) * | 2007-08-31 | 2009-03-05 | Kabushiki Kaisha Toshiba | Interpolative frame generating apparatus and method |
US20090087120A1 (en) * | 2007-09-28 | 2009-04-02 | Ati Technologies Ulc | Apparatus and method for generating a detail-enhanced upscaled image |
US8300987B2 (en) | 2007-09-28 | 2012-10-30 | Ati Technologies Ulc | Apparatus and method for generating a detail-enhanced upscaled image |
US8964117B2 (en) | 2007-09-28 | 2015-02-24 | Ati Technologies Ulc | Single-pass motion adaptive deinterlacer and method therefore |
US20090147133A1 (en) * | 2007-12-10 | 2009-06-11 | Ati Technologies Ulc | Method and apparatus for high quality video motion adaptive edge-directional deinterlacing |
US8259228B2 (en) | 2007-12-10 | 2012-09-04 | Ati Technologies Ulc | Method and apparatus for high quality video motion adaptive edge-directional deinterlacing |
US8396129B2 (en) | 2007-12-28 | 2013-03-12 | Ati Technologies Ulc | Apparatus and method for single-pass, gradient-based motion compensated image rate conversion |
US20090167778A1 (en) * | 2007-12-28 | 2009-07-02 | Ati Technologies Ulc | Apparatus and method for single-pass, gradient-based motion compensated image rate conversion |
US8908100B2 (en) * | 2007-12-28 | 2014-12-09 | Entropic Communications, Inc. | Arrangement and approach for motion-based image data processing |
US20110022418A1 (en) * | 2007-12-28 | 2011-01-27 | Haiyan He | Arrangement And Approach For Motion-Based Image Data Processing |
US8787461B2 (en) * | 2008-10-14 | 2014-07-22 | National Taiwan University | High-performance block-matching VLSI architecture with low memory bandwidth for power-efficient multimedia devices |
US20100091862A1 (en) * | 2008-10-14 | 2010-04-15 | Sy-Yen Kuo | High-Performance Block-Matching VLSI Architecture With Low Memory Bandwidth For Power-Efficient Multimedia Devices |
US8718448B2 (en) | 2011-05-04 | 2014-05-06 | Apple Inc. | Video pictures pattern detection |
US9661261B2 (en) | 2011-05-04 | 2017-05-23 | Apple Inc. | Video pictures pattern detection |
DE102017200015A1 (en) * | 2017-01-02 | 2018-07-05 | Siemens Aktiengesellschaft | Determining at least one subsequent data record for a real-time application |
US10264212B1 (en) | 2018-06-27 | 2019-04-16 | The United States Of America As Represented By Secretary Of The Navy | Low-complexity deinterlacing with motion detection and overlay compensation |
US11240465B2 (en) | 2020-02-21 | 2022-02-01 | Alibaba Group Holding Limited | System and method to use decoder information in video super resolution |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5134480A (en) | Time-recursive deinterlace processing for television-type signals | |
Dubois et al. | Noise reduction in image sequences using motion-compensated temporal filtering | |
Wang et al. | Time-recursive deinterlacing for IDTV and pyramid coding | |
Bellers et al. | De-interlacing: A key technology for scan rate conversion | |
US5793435A (en) | Deinterlacing of video using a variable coefficient spatio-temporal filter | |
US5068724A (en) | Adaptive motion compensation for digital television | |
US5488421A (en) | Interlaced-to-progressive scanning converter with a double-smoother and a method therefor | |
US5093720A (en) | Motion compensation for interlaced digital television signals | |
US5969777A (en) | Noise reduction apparatus | |
US5091782A (en) | Apparatus and method for adaptively compressing successive blocks of digital video | |
JP3509165B2 (en) | Half pixel interpolation method and apparatus for motion compensated digital video systems | |
US4985767A (en) | Spatio-temporal sub-sampling of digital video signals representing a succession of interlaced or sequential images, transmission of high-definition television images, and emission and reception stages for such a system | |
US5237413A (en) | Motion filter for digital television system | |
RU2118066C1 (en) | Device for processing of video signals by preprocessor for generation of non-interlaced video signals from interlaced video signals | |
Robbins et al. | Recursive motion compensation: A review | |
Kappagantula et al. | Motion compensated predictive coding | |
US5457481A (en) | Memory system for use in a moving image decoding processor employing motion compensation technique | |
EP0734176A2 (en) | Motion compensation apparatus for use in a video encoding system | |
US5386248A (en) | Method and apparatus for reducing motion estimator hardware and data transmission capacity requirements in video systems | |
AU616971B2 (en) | Device for decoding signals representative of a sequence of images and high definition television image transmission system including such a device | |
JP3946781B2 (en) | Image information conversion apparatus and method | |
RU2154917C2 (en) | Improved final processing method and device for image signal decoding system | |
Belfor et al. | Subsampling of digital image sequences using motion information | |
Oistamo et al. | Reconstruction of quincunx-coded image sequences using vector median | |
Haskell | Interframe coding of monochrome television-a review |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: AMERICAN TELEPHONE AND TELEGRAPH COMPANY, 550 MADI. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.; ASSIGNOR: NETRAVALI, ARUN N.; REEL/FRAME: 005576/0182. Effective date: 19901128. Owner name: TRUSTEES OF COLUMBIA UNIVERSITY IN THE CITY OF NEW. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.; ASSIGNORS: WANG, FENG-MING; ANASTASSIOU, DIMITRIS; REEL/FRAME: 005576/0176. Effective date: 19901130 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| CC | Certificate of correction | |
| CC | Certificate of correction | |
| FPAY | Fee payment | Year of fee payment: 4 |
| CC | Certificate of correction | |
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| FPAY | Fee payment | Year of fee payment: 8 |
| AS | Assignment | Owner name: NATIONAL SCIENCE FOUNDATION, VIRGINIA. Free format text: CONFIRMATORY LICENSE; ASSIGNOR: TRUSTEES OF COLOMBIA UNIVERSITY IN NYC, THE; REEL/FRAME: 013077/0372. Effective date: 19951025 |
| FEPP | Fee payment procedure | Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| FPAY | Fee payment | Year of fee payment: 12 |