CN102542256A - Advanced warning system for giving front conflict alert to pedestrians - Google Patents


Info

Publication number
CN102542256A
CN102542256A · CN2011104045741A · CN201110404574A
Authority
CN
China
Prior art keywords
model
picture
patch
picture point
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011104045741A
Other languages
Chinese (zh)
Other versions
CN102542256B (en)
Inventor
Dan Rosenbaum
Amiad Gurman
Gideon Stein
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mobileye Technologies Ltd
Original Assignee
Mobileye Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mobileye Technologies Ltd filed Critical Mobileye Technologies Ltd
Priority to CN201710344179.6A priority Critical patent/CN107423675B/en
Publication of CN102542256A publication Critical patent/CN102542256A/en
Application granted granted Critical
Publication of CN102542256B publication Critical patent/CN102542256B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/16 Anti-collision systems
    • G08G 1/165 Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/16 Anti-collision systems
    • G08G 1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Abstract

Provided are an advanced warning system and method for giving a forward collision alert to pedestrians, usable with a camera mountable in a vehicle. The method comprises the following steps: acquiring image frames at known time intervals; selecting a patch in at least one of the image frames; tracking the optical flow of multiple image points of the patch between image frames; fitting the image points to at least one model; and determining a time to collision (TTC) if a collision is expected based on the fit of the image points to the at least one model. The image points may be fit to a road surface model, with a portion of the image points modeled as imaged from the road surface; based on the fit of the image points to the road surface model, it may be determined that no collision is expected. The at least one model may further include a mixed model, in which a first portion of the image points is modeled as imaged from the road surface and a second portion is modeled as imaged from a substantially vertical object. The image points may also be fit to a vertical surface model, with a portion of the image points modeled as imaged from a vertical object, and the TTC determined based on the fit of the image points to the vertical surface model.

Description

Forward collision warning trap and pedestrian advanced warning system
Background
1. Technical field
The present invention relates to driver assistance systems which provide forward collision warnings.
2. Description of related art
In recent years, camera-based driver assistance systems (DAS) have come to market, including lane departure warning (LDW), automatic high-beam control (AHC), pedestrian recognition, and forward collision warning (FCW).
Lane departure warning (LDW) systems are designed to give a warning in the case of unintentional lane departure. The warning is given when the vehicle crosses, or is about to cross, a lane marking. Driver intention is determined based on use of turn signals, changes in steering-wheel angle, vehicle speed, and brake activation.
In image processing, the Moravec corner detection algorithm is probably one of the earliest corner detection algorithms; it defines a corner as a point with low self-similarity. The algorithm tests each pixel in the image for the presence of a corner by considering how similar a patch centered on the pixel is to nearby, largely overlapping patches. The similarity is measured by the sum of squared differences (SSD) between the two patches; a lower number indicates greater similarity. An alternative method for detecting corners in an image is based on a method proposed by Harris and Stephens, which is an improvement on the method proposed by Moravec. Harris and Stephens improved Moravec's corner detector by considering the differential of the corner score with respect to direction directly, instead of using Moravec's nearby patches.
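The Harris-Stephens corner test described above can be sketched in a few lines. This is a minimal illustration only; the window size, the constant k = 0.04, and the synthetic test image are arbitrary choices, not taken from the patent:

```python
import numpy as np

def harris_response(img, k=0.04, win=2):
    """Harris-Stephens corner score for each pixel of a grayscale image.

    For every pixel, the 2x2 structure tensor
        M = [[sum dx*dx, sum dx*dy],
             [sum dx*dy, sum dy*dy]]
    is accumulated over a (2*win+1)^2 neighbourhood, and the pixel is
    scored with R = det(M) - k*trace(M)^2, which is large only when both
    eigenvalues of M are strong, i.e. at a corner rather than on an edge
    or in a flat area.
    """
    dy, dx = np.gradient(img.astype(float))
    dxx, dyy, dxy = dx * dx, dy * dy, dx * dy
    h, w = img.shape
    R = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
            m00, m11, m01 = dxx[sl].sum(), dyy[sl].sum(), dxy[sl].sum()
            R[y, x] = (m00 * m11 - m01 * m01) - k * (m00 + m11) ** 2
    return R

# A bright square on a dark background: corners of the square should score
# higher than its edges, and the featureless background should score ~0.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
corner_score = R[5, 5]   # corner of the square
edge_score = R[5, 10]    # middle of the top edge (one strong eigenvalue only)
flat_score = R[2, 2]     # background
```

On this synthetic image the corner pixel yields a clearly positive score, while edge pixels score negative (det(M) is near zero there), which is exactly the two-strong-eigenvalues criterion.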
In computer vision, a widely used differential method for optical flow estimation was developed by Bruce D. Lucas and Takeo Kanade. The Lucas-Kanade method assumes that the flow is essentially constant in a local neighborhood of the pixel under consideration, and solves the basic optical flow equations for all the pixels in that neighborhood by the least squares criterion. By combining information from several nearby pixels, the Lucas-Kanade method can often resolve the inherent ambiguity of the optical flow equation. The method is also less sensitive to image noise than point-wise methods. On the other hand, since it is a purely local method, it cannot provide flow information in the interior of uniform regions of the image.
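The least-squares solve at the heart of the Lucas-Kanade method can be illustrated for a single window as follows. This is a sketch under stated assumptions (one constant displacement over the whole window, a smooth synthetic image), not a production tracker:

```python
import numpy as np

def lucas_kanade_window(I0, I1):
    """Solve the basic optical-flow equation for one window by least squares.

    Assumes a single constant displacement (u, v) over the window, as in
    the Lucas-Kanade method, and solves the normal equations
        [sum Ix*Ix  sum Ix*Iy] [u]   [-sum Ix*It]
        [sum Ix*Iy  sum Iy*Iy] [v] = [-sum Iy*It]
    """
    Iy, Ix = np.gradient(I0.astype(float))
    It = I1.astype(float) - I0.astype(float)
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u, v = np.linalg.solve(A, b)
    return u, v

# A smooth 2-D blob shifted by a small, known amount between two frames.
ys, xs = np.mgrid[0:32, 0:32]
blob = lambda cx, cy: np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 18.0)
I0 = blob(16.0, 16.0)
I1 = blob(16.3, 15.8)   # true displacement: u = +0.3, v = -0.2
u, v = lucas_kanade_window(I0, I1)
```

Because the displacement is sub-pixel and the image is smooth, the linearized solve recovers the shift closely; this is also why, in the tracking procedure described later in this document, Lucas-Kanade is used only for sub-pixel fine-tuning after a coarse exhaustive search.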
Summary
According to features of the present invention, various methods are provided for signaling a forward collision warning using a camera mountable in a motor vehicle. Multiple image frames are acquired at known time intervals. An image patch may be selected in at least one of the image frames. Optical flow may be tracked between the image frames for multiple image points of the patch. The image points may be fit to at least one model. Based on the fit of the image points, it may be determined whether a collision is expected and, if so, a time to collision (TTC) may be determined. The image points may be fit to a road surface model, with a portion of the image points modeled as imaged from the road surface; based on the fit of the image points to the road surface model, it may be determined that no collision is expected. The image points may be fit to a vertical surface model, in which a portion of the image points is modeled as imaged from a vertical object; the time to collision TTC may be determined based on the fit of the image points to the vertical surface model. The image points may be fit to a mixed model, in which a first portion of the image points is modeled as imaged from the road surface and a second portion is modeled as imaged from a substantially vertical or upright object, not an object lying in the road surface.
In the image frames, a candidate image of a pedestrian may be detected, and the patch selected so as to include the pedestrian candidate image. When the best-fit model is the vertical surface model, the candidate image may be verified as the image of an upright pedestrian and not an object lying in the road surface. A vertical line may likewise be detected in the image frames, and the patch selected so as to include the vertical line. When the best-fit model is the vertical surface model, the vertical line may be verified as the image of a vertical object and not of an object lying in the road surface.
In the various methods, a warning may be issued based on the time to collision being less than a threshold. In the various methods, the relative scale of the patch may be determined based on the optical flow between the image frames, and the time to collision (TTC) may be determined responsive to the relative scale and the time interval. The methods may avoid performing object recognition in the patch prior to determining the relative scale.
According to features of the present invention, a system including a camera and a processor is provided. The system may be operable to provide a forward collision warning using the camera mountable in a motor vehicle. The system may further be operable to acquire multiple image frames at known time intervals, to select a patch in at least one of the image frames, to track optical flow between the image frames of multiple image points of the patch, to fit the image points to at least one model, and to determine, based on the fit of the image points to the at least one model, whether a collision is expected and, if so, the time to collision (TTC). The system may also be operable to fit the image points to a road surface model; based on the fit of the image points to the road surface model, it may be determined that no collision is expected.
According to other embodiments of the present invention, a patch may be selected in an image frame corresponding to where the motor vehicle will be after a predetermined time interval. The patch may be monitored; if an object is imaged in the patch, a forward collision warning may be issued. Whether the object is substantially vertical, upright, or not lying in the road surface may be determined by tracking optical flow between the image frames of multiple image points of the object in the patch. The image points may be fit to at least one model, with a portion of the image points modeled as imaged from the object. Based on the fit of the image points to the at least one model, it is determined whether a collision is expected and, if so, the time to collision (TTC) is determined. The forward collision warning may be issued when the best-fit model includes the vertical surface model. The image points may be fit to a road surface model; based on the fit of the image points to the road surface model, it may be determined that no collision is expected.
According to features of the present invention, a system is provided for giving a forward collision warning in a motor vehicle. The system includes a camera mountable in the motor vehicle and a processor. The camera may be operable to acquire multiple image frames at known time intervals. The processor may be operable to select a patch in an image frame corresponding to where the motor vehicle will be after a predetermined time interval, and to issue a forward collision warning if an object imaged in the patch is found to be upright and/or not lying in the road surface. The processor may further be operable to track multiple image points of the object in the patch between the image frames, and to fit the image points to one or more models. The models may include a vertical object model, a road surface model, and/or a mixed model, the mixed model including one or more image points assumed to be from the road surface and one or more image points assumed to be from an upright object not lying in the road surface. Based on the fit of the image points to the models, it is determined whether a collision is expected and, if so, the time to collision (TTC) is determined. The processor may be operable to issue the forward collision warning based on the TTC being less than a threshold.
Brief description of the drawings
The invention is described herein, by way of example only, with reference to the accompanying drawings, wherein:
Figs. 1a and 1b schematically illustrate, according to features of the present invention, two images captured from a forward-looking camera mounted inside a vehicle as the vehicle approaches a metal guardrail.
Fig. 2a illustrates, according to features of the present invention, a method for providing a forward collision warning using a camera mounted in a host vehicle.
Fig. 2b illustrates, according to features of the present invention, further details of the step of determining time to collision shown in Fig. 2a.
Fig. 3a illustrates, according to features of the present invention, an image frame of an upright surface (the back of a van).
Fig. 3c illustrates, according to features of the present invention, a rectangular region which is mostly road surface.
Fig. 3b illustrates, according to features of the present invention, the vertical motion δy of points in Fig. 3a as a function of vertical image position (y).
Fig. 3d illustrates, according to features of the present invention, the vertical motion δy of points in Fig. 3c as a function of vertical image position (y).
Fig. 4a illustrates, according to features of the present invention, an image frame including an image of a metal guardrail with horizontal lines and a rectangular patch.
Figs. 4b and 4c illustrate, according to features of the present invention, further details of the rectangular patch shown in Fig. 4a.
Fig. 4d illustrates, according to features of the present invention, a graph of the vertical motion of points (δy) against vertical point position (y).
Fig. 5 illustrates, according to features of the present invention, another example of looming in an image frame.
Fig. 6 illustrates, according to features of the present invention, a method for providing a forward collision warning trap.
Figs. 7a and 7b illustrate, according to features of the present invention, examples of the forward collision warning trap triggered by a wall.
Fig. 7c illustrates, according to features of the present invention, an example of the forward collision warning trap triggered by a box.
Fig. 7d illustrates, according to features of the present invention, an example of the forward collision warning trap triggered by the side of a car.
Fig. 8a illustrates, according to an aspect of the present invention, an example of an object with strong vertical lines on a box.
Fig. 8b illustrates, according to an aspect of the present invention, an example of an object with strong vertical lines on a lamppost.
Figs. 9 and 10 illustrate, according to aspects of the present invention, a system including a camera or image sensor mounted in a vehicle.
Detailed description
Reference will now be made in detail to features of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numbers refer to like elements throughout. The features are described below, with reference to the drawings, in order to explain the present invention.
Before explaining features of the invention in detail, it is to be understood that the invention is not limited in its application to the details of design and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other features and of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
By way of introduction, embodiments of the present invention relate to forward collision warning (FCW) systems. According to U.S. Patent 7,113,867, an image of a lead vehicle is recognized. The width of the vehicle may be used to detect a change in scale, or relative scale S, between image frames, and the relative scale is used to determine the time to collision. Specifically, the width of the lead vehicle has, for example, a length (measured e.g. in pixels or millimeters) of w(t1) in a first image and w(t2) in a second image. The relative scale is then, optionally, S(t) = w(t2)/w(t1).
According to the teachings of U.S. Patent 7,113,867, a forward collision warning (FCW) system depends on recognizing the image of an obstacle or object, e.g. a lead vehicle recognized in the image frames. In the forward collision warning system disclosed in U.S. Patent 7,113,867, the change in scale of a dimension (e.g. the width) of a detected object (e.g. a vehicle) is used to compute the time to collision (TTC); the object is first detected and segmented from the surrounding scene. The present disclosure describes a system which uses the relative scale change, based on optical flow, to determine the time to collision TTC and the likelihood of collision and, if needed, to issue an FCW warning. Optical flow causes the looming phenomenon: as an imaged object becomes closer, its perceived image appears larger. According to different features of the present invention, object detection and/or recognition may be performed, or may be avoided.
The looming phenomenon has been widely studied in biological systems. Looming appears to be a very low-level visual attention mechanism in humans and can elicit instinctive reactions. In computer vision, there have been several attempts to detect looming, and silicon sensors have even been designed for looming detection under pure translational motion.
Looming detection may be performed in real-world environments with changing lighting conditions and complex scenes including multiple objects, and with host vehicle motion that includes both translation and rotation.
The term "relative scale" as used herein refers to the increase (or decrease) in size of an image patch in one image frame relative to the corresponding image patch in a subsequent image frame.
Reference is now made to Figs. 9 and 10, which illustrate, according to an aspect of the present invention, a system 16 including a camera or image sensor 12 mounted in a vehicle 18. Image sensor 12, imaging a field of view in the forward direction, delivers images in real time, and the images are captured in a time series of image frames 15. An image processor 14 may be used to process image frames 15 simultaneously and/or in parallel to serve a number of driver assistance systems. The driver assistance systems may be implemented using specific hardware circuitry with on-board software and/or software control algorithms in memory 13. Image sensor 12 may be monochrome or black-and-white, i.e. without color separation, or may be color sensitive. By way of example, in Fig. 10, image frames 15 are used to serve pedestrian warning (PW) 20, lane departure warning (LDW) 21, forward collision warning (FCW) 22 based on target detection and tracking according to the teachings of U.S. Patent 7,113,867, forward collision warning based on image looming (FCWL) 209, and/or forward collision warning 601 based on an FCW trap (FCWT) 601. Image processor 14 is used to process image frames 15 to detect looming of an image in the forward field of view of camera 12, for the forward collision warning 209 based on image looming and for FCWT 601. The forward collision warning 209 based on image looming and the forward collision warning based on traps (FCWT) 601 may be performed in parallel with conventional FCW 22 and with the other driver assistance functions: pedestrian detection (PW) 20, lane departure warning (LDW) 21, traffic sign detection, and ego-motion detection. FCWT 601 may be used to validate the conventional signal from FCW 22. The term "FCW signal" as used herein refers to a forward collision warning signal. The terms
"FCW signal", "forward collision warning", and "warning" are used herein interchangeably.
Features of the present invention are illustrated in Figs. 1a and 1b, which show an example of optical flow or looming. Two images captured from a forward-looking camera 12 mounted inside vehicle 18 are shown as vehicle 18 approaches a metal guardrail 30. The image in Fig. 1a shows the field of view and guardrail 30. The image in Fig. 1b shows the same features with vehicle 18 closer to metal guardrail 30; observing the small rectangle p 32 (indicated by a dashed line) in the guardrail, the horizontal lines 34 appear to spread out as vehicle 18 approaches guardrail 30.
Reference is now made to Fig. 2a, which illustrates, according to features of the present invention, a method 201 for providing a forward collision warning 209 (FCWL 209) using camera 12 mounted in host vehicle 18. Method 201 does not depend on object recognition of an object in the forward view of vehicle 18. In step 203, multiple image frames 15 are acquired by camera 12; the time interval between frame captures is Δt. In step 205, a patch 32 in an image frame 15 is selected, and in step 207 the relative scale (S) of patch 32 is determined. In step 209, the time to collision (TTC) is determined based on the relative scale (S) between frames 15 and the time interval (Δt).
Reference is now made to Fig. 2b, which illustrates, according to features of the present invention, further details of the step 209 of determining time to collision shown in Fig. 2a. In step 211, multiple image points in patch 32 may be tracked between image frames 15. In step 213, the image points may be fit to one or more models. A first model may be a vertical surface model, which may include objects such as a pedestrian, a vehicle, a wall, bushes, trees, or a lamppost. A second model may be a road surface model, which considers the characteristics of image points on the road surface. A mixed model may include one or more image points from the road and one or more image points from an upright object. Multiple time-to-collision (TTC) values may be computed for the models which assume that at least a portion of the image points are from an upright object. In step 215, the best fit of the image points to the road surface model, the vertical surface model, or the mixed model enables a time-to-collision (TTC) value to be selected. A warning may be issued when the time to collision (TTC) is less than a threshold and the best-fit model is the vertical surface model or the mixed model.
Optionally, step 213 may also include the detection of a candidate image in image frames 15. The candidate image may be of a pedestrian, or a vertical line of a vertical object such as a lamppost. In the case of a pedestrian or a vertical line, patch 32 may be selected so as to include the candidate image. Once patch 32 has been selected, a validation may be performed that the candidate image is the image of an upright pedestrian and/or of a vertical line. The validation may confirm, when the best-fit model is the vertical surface model, that the candidate image is not an object lying in the road surface.
Referring back to Figs. 1a and 1b, sub-pixel alignment of patch 32 from the first image shown in Fig. 1a to the second image shown in Fig. 1b shows an increase in size of 8%, i.e. a relative scale increase of 8% (S = 1.08) (step 207). Assuming a time difference of Δt = 0.5 second between the images, the time to collision (TTC) can be computed from equation 1 below (step 209):

TTC = Δt / (S - 1) = 0.5 / (1.08 - 1) = 6.25 s    (1)

If the speed v of vehicle 18 is known (v = 4.8 m/s), the distance Z to the target can also be computed, using equation 2:

Z = v · Δt / (S - 1) = 4.8 × 0.5 / (1.08 - 1) = 30 m    (2)
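The two computations above (equations 1 and 2) reduce to one-line functions. The sketch below simply reproduces the worked example with S = 1.08, Δt = 0.5 s, and v = 4.8 m/s:

```python
def ttc_from_scale(S, dt):
    """Time to collision from the relative scale change between two
    frames (equation 1): TTC = dt / (S - 1)."""
    return dt / (S - 1.0)

def distance_from_scale(S, dt, v):
    """Range to the target when the host speed v is known (equation 2):
    Z = v * dt / (S - 1)."""
    return v * dt / (S - 1.0)

# Worked example from the text: 8% scale increase over 0.5 s at 4.8 m/s.
ttc = ttc_from_scale(1.08, 0.5)             # 6.25 s
Z = distance_from_scale(1.08, 0.5, 4.8)     # 30 m
```

Note that the TTC follows from image measurements alone; the vehicle speed is needed only to convert the TTC into a metric range.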
According to features of the present invention, Figs. 3b and 3d illustrate the vertical motion δy of points as a function of vertical image position (y). The vertical motion δy is zero at the horizon and negative below the horizon. The vertical motion of a point is given by equation 3:

δy = ΔZ (y - y0) / Z    (3)

Equation (3) is a linear model relating δy and y and has, in effect, two variables. Two points can be used to solve for the two variables.
For a vertical surface, all points are equidistant, as in the image shown in Fig. 3b, so the motion is zero at the horizon (y0) and varies linearly with image position. For the road surface, points lower in the image are closer (Z is smaller), as shown by equation 4:

Z = f · H / (y - y0)    (4)

The image motion δy therefore increases faster than linearly, in fact quadratically, as shown in equation 5 below and in the graph of Fig. 3d:

δy = ΔZ (y - y0)² / (f · H)    (5)

Equation (5) is a constrained quadratic equation which, in effect, also has two variables. Again, two points can be used to solve for the two variables.
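Since each model has two unknowns, the exact solves are elementary. The following sketch (with illustrative values only) recovers the vertical-surface model of equation 3 from two (y, δy) points, and then, given the horizon y0, the road-surface model of equation 5 from one more point:

```python
def fit_vertical(p1, p2):
    """Exact solve of the vertical-surface model, dy = A * (y - y0)
    (equation 3), from two tracked points given as (y, dy) pairs.
    Returns the slope A and the horizon position y0."""
    (y1, d1), (y2, d2) = p1, p2
    A = (d1 - d2) / (y1 - y2)
    y0 = y1 - d1 / A
    return A, y0

def fit_road(point, y0):
    """Exact solve of the road-surface model, dy = B * (y - y0)**2
    (equation 5), from a single point once the horizon y0 is known."""
    y, d = point
    return d / (y - y0) ** 2

# Synthetic points generated with y0 = 100, A = 0.05, B = 0.0002.
A, y0 = fit_vertical((200.0, 5.0), (300.0, 10.0))
B = fit_road((250.0, 4.5), y0)
```

This two-then-one ordering is the same one used later for the three-point mixed-model solve: equation 3 is solved from two points, and the resulting y0 is reused with the third point to solve equation 5.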
Reference is now made to Figs. 3a and 3c, which show different image frames 15. In Figs. 3a and 3c, two rectangular regions are shown by dotted lines. Fig. 3a shows an upright surface (the back of a van). The square points are tracked points (step 211) whose motion matches the motion model of an upright surface shown in Fig. 3b, in which image motion (δy) is plotted against the vertical position y of the point (step 213). The motion of the triangular points in Fig. 3a does not match the motion model of an upright surface. Fig. 3c shows a region which is mostly road surface. The square points are points which match the road surface model shown in Fig. 3d, in which image motion (δy) is plotted against the vertical position y of the point. The motion of the triangular points does not match the road surface motion model; these points are outliers. In general, the task is thus to determine which points belong to a model (and to which model), and which points are outliers; this may be performed by a robust fitting method, as explained below.
Reference is now made to Figs. 4a, 4b, 4c and 4d, which illustrate, according to features of the present invention, a typical scenario of a mixture of two motion models found in an image. Fig. 4a shows an image frame 15 including an image of metal guardrail 30 with horizontal lines 34, and a rectangular patch 32a. Further details of patch 32a are shown in Figs. 4b and 4c. Fig. 4b shows details of patch 32a in a previous image frame 15, and Fig. 4c shows details of patch 32a in a subsequent image frame 15, when vehicle 18 is closer to guardrail 30. In Figs. 4c and 4d, some image points are shown as squares, triangles, and circles on the vertical obstacle 30, and some image points are shown on the road surface in front of obstacle 30. The tracked points in rectangular region 32a show that some points in the lower part of region 32a correspond to the road model, and some points in the upper part of region 32a correspond to the upright surface model. Fig. 4d shows a graph of the vertical motion of points (δy) against vertical point position (y). The recovered model, also shown, has two parts: a curved (parabolic) portion 38a and a linear portion 38b. The transition point between portions 38a and 38b corresponds to the bottom of upright surface 30; this transition point is also marked by a dotted horizontal line 36 in Fig. 4c. Some points, shown as triangles in Figs. 4b and 4c, were tracked but do not match the model; tracked points which match the model are shown as squares, and points not tracked well are shown as circles.
Reference is now made to Fig. 5, which shows another example of looming in an image frame 15. In image frame 15 of Fig. 5 there is no upright surface in patch 32b, only clear road ahead, and the transition point between the two models is marked at the horizon by dotted line 50.
Estimation of the motion model and time to collision (TTC)
The estimation of the motion model and of the time to collision (TTC) (step 215) assumes that a region 32 is given, for example a rectangular region in image frame 15. Examples of such rectangular regions are rectangles 32a and 32b shown in Figs. 3 and 5. The rectangles may be selected based on a detected object, such as a pedestrian, or based on the motion of host vehicle 18.
1. Tracking points (step 211):
(a) The rectangular region 32 may be subdivided into a grid of 5×20 sub-rectangles.
(b) For each sub-rectangle, an algorithm may be executed to find a corner point of the image, for example using the Harris and Stephens method, and that point may be tracked. Preferably, using 5×5 Harris points, the eigenvalues of the following matrix may be considered,

[ Σδx²    Σδxδy ]
[ Σδxδy   Σδy²  ]    (6)

and two strong eigenvalues sought.
(c) Tracking may be performed by exhaustive search for the best sum-of-squared-differences (SSD) match in a rectangular search region of width W and height H. The exhaustive search is important at the start, because it means that no prior motion is assumed and the measurements from all the sub-rectangles are statistically more independent. After the search, fine-tuning is performed using an optical flow estimate, for example with the Lucas-Kanade method, which allows sub-pixel motion.
2. Robust model fitting (step 213):
(a) Two or three points are picked at random from the 100 tracked points.
(b) The number of pairs selected (N_pairs) depends on the vehicle speed (v), given for example by:

N_pairs = min(40, max(5, 50 - v))    (7)

where v is in metres per second. The number of triplets (N_triplets) is given by:

N_triplets = 50 - N_pairs    (8)
(c) For two points, two models can be fitted (step 213). The first model assumes that both points are on an upright object; the second model assumes that both points are on the road.
(d) For three points, two models can likewise be fitted. The first model assumes that the upper two points are on an upright object and the third (lowest) point is on the road; the second model assumes that the uppermost point is on an upright object and the lower two points are on the road. Both models can be solved for the three points by first solving the first model (equation 3) using two of the points, and then using the resulting y₀ together with the third point to solve the second model (equation 5).
(e) Each model in (d) gives a time-to-collision (TTC) value (step 215). Each model must also receive a goodness-of-fit score based on the other 98 points. This score is given by the Sum of the Clipped Square of the Distance (SCSD) between the measured y motion of each point and the motion predicted by the model. The SCSD value is converted to a probability-like function, where N is the number of points (N = 98).
(f) Based on the TTC value, the speed of vehicle 18, and the assumption that the points lie on a static object, the distance to the points can be computed: Z = v × TTC. From the x image coordinate of each image point and its distance, the lateral position in world coordinates can be computed:

X = xZ/f    (10)

ΔX = δxZ/f    (11)
(g) The lateral position at time TTC is thus computed. A binary lateral score requires that at least one point of the pair or triplet be in the path of vehicle 18.
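Steps 2(e)-2(g) can be sketched as below. The exponential form of the probability-like score is an assumed reconstruction (the exact conversion formula is not reproduced in this text), and the clipping value and path half-width are illustrative:

```python
import numpy as np

def scsd_score(dy_pred, dy_meas, clip=1.0, n_points=98):
    """Goodness-of-fit of step 2(e): Sum of the Clipped Square of the
    Distance (SCSD) between predicted and measured vertical motion of the
    98 remaining points, converted to a probability-like value.
    NOTE: the exp(-SCSD/N) form is an assumed reconstruction."""
    d2 = (np.asarray(dy_meas, float) - np.asarray(dy_pred, float)) ** 2
    scsd = float(np.sum(np.minimum(d2, clip ** 2)))  # clipping tames outliers
    return float(np.exp(-scsd / n_points))

def lateral_at_impact(ttc, v, x, dx, f):
    """Steps 2(f)-2(g): distance Z = v*TTC for a static object, then the
    lateral world position per eqs. (10)-(11): X = x*Z/f, dX = dx*Z/f.
    x and dx are in pixels, f is the focal length in pixels."""
    z = v * ttc
    return z, x * z / f, dx * z / f

def lateral_score(lateral_positions, half_path=1.0):
    """Binary lateral score: at least one sampled point must lie within the
    (assumed +/-1 m half-width) path of the host vehicle."""
    return any(abs(X) <= half_path for X in lateral_positions)
```

A perfect model prediction yields a score of 1.0, and the score decays smoothly as the residual motion of the other points grows.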
3. Multi-frame scores: new models can be generated at each frame 15, each new model having its associated TTC and score. The 200 best (highest-scoring) models from the previous 4 frames 15 can be kept, with the scores weighted as follows:

Score(n) = α^n · Score    (12)

where n = 0..3 is the age of the score and α = 0.95.
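The multi-frame bookkeeping of step 3 can be sketched as follows (the tuple layout and function name are illustrative):

```python
def best_models(models, alpha=0.95, keep=200):
    """Step 3: each model is a (ttc, score, age) tuple, where age n = 0..3
    counts frames since the model was created.  The score is decayed by
    alpha**n per eq. (12) and only the `keep` highest-scoring models from
    the previous 4 frames are retained."""
    decayed = [(ttc, score * alpha ** age) for ttc, score, age in models]
    decayed.sort(key=lambda m: m[1], reverse=True)
    return decayed[:keep]
```

Aging penalizes stale models: a fresh model with score 0.8 outranks a 3-frame-old model with score 0.9, since 0.9 · 0.95³ ≈ 0.77.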
4. FCW decision: a valid FCW warning is issued if any one of the following three conditions occurs:
(a) the TTC of the model with the highest score is below the TTC threshold, its score is greater than 0.75, and [formula omitted in source];
(b) the TTC of the model with the highest score is below the TTC threshold and [formula omitted in source];
(c) [formula omitted in source].
Figs. 3 and 4 have illustrated how a robust FCW warning can be given for the points in a given rectangle 32. How the rectangle is defined depends on the application, as shown by the other example features of Figs. 7a-7d and 8a, 8b.
FCW trap for general stationary objects
Referring now to Fig. 6, which illustrates a method 601, according to a feature of the present invention, for providing a forward collision warning trap (FCWT) 601. In step 203, multiple image frames 15 are acquired by camera 12. In step 605, a patch 32 is selected in an image frame 15, corresponding to the position where motor vehicle 18 will be after a predetermined time interval. The patch 32 is then monitored in step 607. In decision step 609, if a general object is imaged and detected in patch 32, a forward collision warning is issued in step 611; otherwise, capture of image frames continues as in step 203.
Figs. 7a and 7b show examples, according to a feature of the present invention, of FCWT 601 warnings triggered by a wall 70; Fig. 7d shows an example of a warning triggered by the side of an automobile 72; and Fig. 7c shows an example of warnings triggered by boxes 74a and 74b. Figs. 7a-7d are examples of general stationary objects that require no prior class-based detection. The dashed rectangular region is defined at a target distance, with width W = 1 m, the distance being the one the host vehicle will reach after t = 4 s:
Z = vt    (16)

w = fW/Z    (17)

y = fH/Z    (18)
where v is the speed of vehicle 18, H is the height of camera 12, and w and y are respectively the width of the rectangle and its vertical position in the image. This rectangular region is an example of an FCW trap: if an object "falls" into this rectangular region and its TTC is below the threshold, the FCW trap produces a warning. Multiple traps can be used to improve performance:
To improve the detection rate, the FCW trap can be replicated into 5 regions with 50% overlap, producing a total trap zone 3 m wide.
The dynamic position of the FCW trap can be selected according to the yaw rate: the trap zone 32 can be shifted laterally based on the path of vehicle 18 determined from the yaw rate sensor, the speed of vehicle 18, and a dynamic model of host vehicle 18.
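The trap placement of equations (16)-(18), together with the replication and yaw-rate shift just described, can be sketched as follows; the constant-yaw-rate circular-path formula for the lateral shift is an assumed stand-in for the "dynamic model" the text mentions:

```python
import math

def fcw_trap(v, f, cam_height, t=4.0, trap_width=1.0, yaw_rate=0.0):
    """Place the FCW trap at the distance the host vehicle will cover in
    t seconds, per eqs. (16)-(18): Z = v*t, w = f*W/Z, y = f*H/Z.
    For a turning vehicle the trap is shifted laterally; a constant yaw
    rate on a circular path is assumed here for that shift."""
    z = v * t                    # eq. (16): distance travelled in t seconds
    w = f * trap_width / z       # eq. (17): trap width in pixels
    y = f * cam_height / z       # eq. (18): vertical image position
    if yaw_rate:
        lateral = (v / yaw_rate) * (1.0 - math.cos(yaw_rate * t))
    else:
        lateral = 0.0
    return z, w, y, f * lateral / z  # last value: lateral shift in pixels

def replicate_traps(center_x, w, n=5, overlap=0.5):
    """Five copies with 50% overlap cover 3x the width of a single trap
    (3 m for W = 1 m), improving the detection rate."""
    step = w * (1.0 - overlap)
    start = center_x - step * (n - 1) / 2.0
    return [start + i * step for i in range(n)]
```

For example, at v = 10 m/s with f = 1000 px and a camera 1.25 m high, the trap sits at Z = 40 m and is 25 px wide in the image.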
FCW trap used to validate the forward collision warning signal
Specific classes of objects, such as vehicles and pedestrians, can be detected in images 15 using pattern-recognition techniques. These objects are then tracked over time, and the change in scale can be used to generate an FCW 22 signal according to the teachings of US Patent 7,113,867. However, before issuing a warning it is important to validate the FCW 22 signal using an independent technique. Using an independent technique, for example method 209 (Fig. 2b), to validate the FCW 22 signal may be particularly important if system 16 is to activate the brakes. In a radar/vision fusion system, the independent validation can come from the radar; in a vision-only system 16, the independent validation comes from an independent vision algorithm.
Detection of objects (for example pedestrians or lead vehicles) is not itself the problem: very high detection rates can be achieved with only a very low false-alarm rate. A feature of the present invention is to generate a reliable FCW signal without too many false alarms, which would annoy the driver, or worse, could cause the driver to brake unnecessarily. One possible problem with conventional pedestrian FCW systems is avoiding false forward collision warnings, since the number of pedestrians in the scene is large while the number of true forward collision situations is very small. Even a 5% false-alarm rate would mean that the driver might receive frequent false alarms and might never experience a true warning.
Pedestrian targets are particularly challenging for an FCW system because the targets are non-rigid, which makes tracking difficult (according to the teachings of US Patent 7,113,867), and the scale change in particular is very noisy. The robust model (method 209) can therefore be used to validate the forward collision warning on a pedestrian. The rectangular region 32 can be determined by pedestrian detection system 20. The FCW signal is generated only if both the target tracking performed by FCW 22, according to US Patent 7,113,867, and the robust FCW (method 209) give a TTC smaller than one or more predetermined thresholds. Forward collision warning FCW 22 may use a threshold different from the threshold used in the robust model (method 209).
One factor that may increase the number of false alarms is that pedestrians typically appear on less structured roads, where the driver's driving pattern can be quite erratic, including sharp turns and lane changes. Some further constraints on issuing a warning may therefore be applied:
When a curb or lane marking is detected, if the pedestrian is on the far side of the curb or lane marking and none of the following conditions occurs, the FCW signal is inhibited:
1. The pedestrian is crossing the lane marking or curb (or is approaching it fast). For this, detecting the pedestrian's feet may be important.
2. The host vehicle 18 is crossing the lane marking or curb (for example, as detected by the LDW 21 system).
The driver's intentions are difficult to predict. If the driver is driving straight, has not activated the turn signal, and no lane-marking crossing is expected, it is reasonable to assume that the driver will continue straight ahead; thus, if a pedestrian is in the path and the TTC is below the threshold, the FCW signal can be issued. If the driver is turning, however, it is equally possible that he will continue the turn or will stop turning and continue straight ahead. Therefore, when a yaw rate is detected, the FCW signal is issued only if the pedestrian is in the path both under the assumption that vehicle 18 continues to turn at the same yaw rate and under the assumption that the vehicle goes straight.
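The yaw-rate constraint above can be condensed into a small decision sketch (all parameter names and flags are illustrative, not from the patent):

```python
def pedestrian_fcw(ttc, ttc_threshold, in_straight_path,
                   in_turning_path, yaw_rate_detected):
    """Issue the pedestrian FCW only when the TTC is below threshold and,
    if a yaw rate is detected, the pedestrian lies in the predicted path
    both under the keep-turning and the go-straight hypotheses."""
    if ttc >= ttc_threshold:
        return False
    if yaw_rate_detected:
        return in_straight_path and in_turning_path
    return in_straight_path
```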
The concept of the FCW trap 601 can be extended to objects consisting mainly of vertical (or horizontal) lines. A possible problem with point-based techniques on such objects is that good Harris (corner) points are most often produced where the vertical lines on the edge of the object intersect horizontal lines of the distant background; the vertical motion of these points will resemble that of the distant road surface.
Figs. 8a and 8b show examples of objects with significant vertical lines 82: on a lamppost 80 in Fig. 8b and on a box 84 in Fig. 8a. Vertical lines 82 are detected in the trap zone 32, and the detected lines 82 can be tracked between images. Upright objects can be hypothesized by matching lines 82 from frame to frame and computing a TTC model for each line pair, then scoring based on the SCSD of the other lines 82, so as to perform a robust estimation. Since the number of lines may be small, it is usually possible to test all combinations of line pairs; only pairs of lines with significant overlap are used. In the case of horizontal lines, line triplets likewise give two models, just as when points are used.
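The line-pair TTC computation can be sketched as follows; this uses the standard scale-change relation for a rigid upright object (the horizontal gap between two of its vertical lines scales inversely with distance) and omits the SCSD scoring over the remaining lines:

```python
def line_pair_ttc(x1_prev, x2_prev, x1_curr, x2_curr, dt):
    """TTC from one pair of tracked vertical lines.  The gap between lines
    on a rigid upright object scales as 1/Z, so the scale change
    s = gap_curr / gap_prev between frames dt apart gives
    TTC = dt / (s - 1) for a constant closing speed."""
    s = (x1_curr - x2_curr) / (x1_prev - x2_prev)
    if s <= 1.0:
        return float('inf')  # gap not growing: not approaching
    return dt / (s - 1.0)
```

For instance, a gap growing from 10 px to 11 px over 0.1 s implies a TTC of about 1 s.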
The indefinite articles "a" and "an" as used herein, as in "an image" or "a rectangular region", have the meaning of "one or more", that is, "one or more images" or "one or more rectangular regions".
Although selected features of the present invention have been shown and described, it is to be understood that the present invention is not limited to the described features. Rather, it is to be appreciated that changes may be made to these features without departing from the principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (18)

1. A method for providing a forward collision warning using a camera mountable in a motor vehicle, the method comprising:
acquiring a plurality of image frames at known time intervals;
selecting a patch in at least one of the image frames;
tracking optical flow between the image frames of a plurality of image points of the patch;
fitting the image points to at least one model; and
based on the fit of the image points to the at least one model, determining a time-to-collision (TTC) if a collision is expected.
2. the method for claim 1 also comprises:
Said picture point is fitted to road surface model, and at least a portion of wherein said picture point is modeled as imaging from the road surface;
Based on the match of said picture point and said model, confirm that expection does not have collision.
3. the method for claim 1 also comprises:
Said picture point is fitted to the vertical surface model, and at least a portion of wherein said picture point is modeled as imaging from vertical object; And
Based on the match of said picture point and said vertical surface model, confirm said TTC.
4. The method of claim 3, further comprising:
detecting a candidate image of a pedestrian in the image frames, wherein the patch is selected to include the candidate image of the pedestrian; and
when the best-fit model is the vertical surface model, validating that the candidate image is an image of an upright pedestrian rather than an image of an object in the road surface.
5. The method of claim 3, further comprising:
detecting a vertical line in the image frames, wherein the patch is selected to include the vertical line; and
when the best-fit model is the vertical surface model, validating that the vertical line is an image of a vertical object rather than an image of an object in the road surface.
6. the method for claim 1, wherein said at least one model also comprises mixture model, the first of wherein said picture point is modeled as imaging from the road surface, and the second portion of said picture point is modeled as imaging from vertical in fact object.
7. the method for claim 1 also comprises:
Give a warning less than threshold value based on said collision time.
8. A system including a camera mountable in a motor vehicle and a processor, the system operable to provide a forward collision warning, the system being operable to:
acquire a plurality of image frames at known time intervals;
select a patch in at least one of the image frames;
track optical flow between the image frames of a plurality of image points of the patch;
fit the image points to at least one model; and
based on the fit of the image points to the at least one model, determine a time-to-collision (TTC) if a collision is expected.
9. The system of claim 8, further operable to:
fit the image points to a road surface model; and
determine, based on the fit of the image points to the road surface model, that no collision is expected.
10. A method for providing a forward collision warning using a camera and a processor mountable in a motor vehicle, the method comprising:
acquiring a plurality of image frames at known time intervals;
selecting a patch in an image frame, the patch corresponding to where the motor vehicle will be after a predetermined time interval; and
monitoring the patch, and issuing a forward collision warning if an object is imaged in the patch.
11. The method of claim 10, further comprising:
determining whether the object includes a substantially vertical portion.
12. The method of claim 11, wherein said determining is performed by:
tracking optical flow between the image frames of a plurality of image points in the patch; and
fitting the image points to at least one model.
13. The method of claim 11, wherein at least a portion of the image points are modeled as imaged from a vertical object; and
wherein, based on the fit of the image points to the at least one model, a time-to-collision (TTC) is determined if a collision is expected.
14. The method of claim 11, wherein the at least one model includes a road surface model, the method further comprising:
fitting the image points to the road surface model; and
determining, based on the fit of the image points to the road surface model, that no collision is expected.
15. The method of claim 11, further comprising:
issuing said warning when the best-fit model is a vertical surface model.
16. A system for providing a forward collision warning in a motor vehicle, the system comprising:
a camera mountable in the motor vehicle, the camera operable to acquire a plurality of image frames at known time intervals; and
a processor operable to:
select a patch in an image frame, the patch corresponding to where the motor vehicle will be after a predetermined time interval;
monitor the patch; and
issue a forward collision warning if an object is imaged in the patch.
17. The system of claim 16, wherein the processor is further operable to determine whether the object includes a substantially vertical portion, said determining being performed by:
tracking a plurality of image points of the object in the patch between the image frames;
fitting the image points to at least one model; and
based on the fit of the image points to the at least one model, determining a time-to-collision (TTC) if a collision is expected.
18. The system of claim 16, wherein the processor is operable to issue the forward collision warning based on the TTC being less than a threshold.
CN201110404574.1A 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians Active CN102542256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710344179.6A CN107423675B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US42040510P 2010-12-07 2010-12-07
US61/420,405 2010-12-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201710344179.6A Division CN107423675B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Publications (2)

Publication Number Publication Date
CN102542256A true CN102542256A (en) 2012-07-04
CN102542256B CN102542256B (en) 2017-05-31

Family

ID=46349111

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710344179.6A Active CN107423675B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians
CN201110404574.1A Active CN102542256B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201710344179.6A Active CN107423675B (en) 2010-12-07 2011-12-07 Advanced warning system for forward collision warning of traps and pedestrians

Country Status (1)

Country Link
CN (2) CN107423675B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8917169B2 (en) 1993-02-26 2014-12-23 Magna Electronics Inc. Vehicular vision system
US8993951B2 (en) 1996-03-25 2015-03-31 Magna Electronics Inc. Driver assistance system for a vehicle
US9008369B2 (en) 2004-04-15 2015-04-14 Magna Electronics Inc. Vision system for vehicle
US9436880B2 (en) 1999-08-12 2016-09-06 Magna Electronics Inc. Vehicle vision system
CN105981042A (en) * 2014-01-17 2016-09-28 Kpit技术有限责任公司 Vehicle detection system and method thereof
US9555803B2 (en) 2002-05-03 2017-01-31 Magna Electronics Inc. Driver assistance system for vehicle
US10071676B2 (en) 2006-08-11 2018-09-11 Magna Electronics Inc. Vision system for vehicle
CN109716255A (en) * 2016-09-18 2019-05-03 深圳市大疆创新科技有限公司 For operating movable object with the method and system of avoiding barrier
CN111508275A (en) * 2013-07-15 2020-08-07 大众汽车有限公司 Apparatus and method for displaying traffic condition in vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040175019A1 (en) * 2003-03-03 2004-09-09 Lockheed Martin Corporation Correlation based in frame video tracker
CN1654245A (en) * 2004-02-10 2005-08-17 丰田自动车株式会社 Deceleration control apparatus and method for a vehicle
WO2005098782A1 (en) * 2004-04-08 2005-10-20 Mobileye Technologies Limited Collision warning system
US7113867B1 (en) * 2000-11-26 2006-09-26 Mobileye Technologies Limited System and method for detecting obstacles to vehicle motion and determining time to contact therewith using sequences of images
CN101261681A (en) * 2008-03-31 2008-09-10 北京中星微电子有限公司 Road image extraction method and device in intelligent video monitoring
US20100191391A1 (en) * 2009-01-26 2010-07-29 Gm Global Technology Operations, Inc. multiobject fusion module for collision preparation system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3515926B2 (en) * 1999-06-23 2004-04-05 本田技研工業株式会社 Vehicle periphery monitoring device
US7089114B1 (en) * 2003-07-03 2006-08-08 Baojia Huang Vehicle collision avoidance system and method
JP4304517B2 (en) * 2005-11-09 2009-07-29 トヨタ自動車株式会社 Object detection device
EP1837803A3 (en) * 2006-03-24 2008-05-14 MobilEye Technologies, Ltd. Headlight, taillight and streetlight detection
US8050459B2 (en) * 2008-07-25 2011-11-01 GM Global Technology Operations LLC System and method for detecting pedestrians


Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8917169B2 (en) 1993-02-26 2014-12-23 Magna Electronics Inc. Vehicular vision system
US8993951B2 (en) 1996-03-25 2015-03-31 Magna Electronics Inc. Driver assistance system for a vehicle
US9436880B2 (en) 1999-08-12 2016-09-06 Magna Electronics Inc. Vehicle vision system
US9834216B2 (en) 2002-05-03 2017-12-05 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US11203340B2 (en) 2002-05-03 2021-12-21 Magna Electronics Inc. Vehicular vision system using side-viewing camera
US10683008B2 (en) 2002-05-03 2020-06-16 Magna Electronics Inc. Vehicular driving assist system using forward-viewing camera
US10351135B2 (en) 2002-05-03 2019-07-16 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US10118618B2 (en) 2002-05-03 2018-11-06 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US9555803B2 (en) 2002-05-03 2017-01-31 Magna Electronics Inc. Driver assistance system for vehicle
US9643605B2 (en) 2002-05-03 2017-05-09 Magna Electronics Inc. Vision system for vehicle
US9609289B2 (en) 2004-04-15 2017-03-28 Magna Electronics Inc. Vision system for vehicle
US10187615B1 (en) 2004-04-15 2019-01-22 Magna Electronics Inc. Vehicular control system
US9948904B2 (en) 2004-04-15 2018-04-17 Magna Electronics Inc. Vision system for vehicle
US10015452B1 (en) 2004-04-15 2018-07-03 Magna Electronics Inc. Vehicular control system
US11847836B2 (en) 2004-04-15 2023-12-19 Magna Electronics Inc. Vehicular control system with road curvature determination
US10110860B1 (en) 2004-04-15 2018-10-23 Magna Electronics Inc. Vehicular control system
US11503253B2 (en) 2004-04-15 2022-11-15 Magna Electronics Inc. Vehicular control system with traffic lane detection
US10735695B2 (en) 2004-04-15 2020-08-04 Magna Electronics Inc. Vehicular control system with traffic lane detection
US9008369B2 (en) 2004-04-15 2015-04-14 Magna Electronics Inc. Vision system for vehicle
US10306190B1 (en) 2004-04-15 2019-05-28 Magna Electronics Inc. Vehicular control system
US9428192B2 (en) 2004-04-15 2016-08-30 Magna Electronics Inc. Vision system for vehicle
US10462426B2 (en) 2004-04-15 2019-10-29 Magna Electronics Inc. Vehicular control system
US9736435B2 (en) 2004-04-15 2017-08-15 Magna Electronics Inc. Vision system for vehicle
US9191634B2 (en) 2004-04-15 2015-11-17 Magna Electronics Inc. Vision system for vehicle
US10787116B2 (en) 2006-08-11 2020-09-29 Magna Electronics Inc. Adaptive forward lighting system for vehicle comprising a control that adjusts the headlamp beam in response to processing of image data captured by a camera
US11148583B2 (en) 2006-08-11 2021-10-19 Magna Electronics Inc. Vehicular forward viewing image capture system
US11396257B2 (en) 2006-08-11 2022-07-26 Magna Electronics Inc. Vehicular forward viewing image capture system
US11623559B2 (en) 2006-08-11 2023-04-11 Magna Electronics Inc. Vehicular forward viewing image capture system
US10071676B2 (en) 2006-08-11 2018-09-11 Magna Electronics Inc. Vision system for vehicle
US11951900B2 (en) 2006-08-11 2024-04-09 Magna Electronics Inc. Vehicular forward viewing image capture system
CN111508275A (en) * 2013-07-15 2020-08-07 大众汽车有限公司 Apparatus and method for displaying traffic condition in vehicle
CN105981042B (en) * 2014-01-17 2019-12-06 Kpit技术有限责任公司 Vehicle detection system and method
CN105981042A (en) * 2014-01-17 2016-09-28 Kpit技术有限责任公司 Vehicle detection system and method thereof
CN109716255A (en) * 2016-09-18 2019-05-03 深圳市大疆创新科技有限公司 For operating movable object with the method and system of avoiding barrier

Also Published As

Publication number Publication date
CN102542256B (en) 2017-05-31
CN107423675B (en) 2021-07-16
CN107423675A (en) 2017-12-01

Similar Documents

Publication Publication Date Title
US10940818B2 (en) Pedestrian collision warning system
CN102542256A (en) Advanced warning system for giving front conflict alert to pedestrians
US11087148B2 (en) Barrier and guardrail detection using a single camera
US11741696B2 (en) Advanced path prediction
US11915491B2 (en) Controlling host vehicle based on detected door opening events
US9251708B2 (en) Forward collision warning trap and pedestrian advanced warning system
US10274598B2 (en) Navigation based on radar-cued visual imaging
CN112580456A (en) System and method for curb detection and pedestrian hazard assessment
JP3857698B2 (en) Driving environment recognition device
JP2007309799A (en) On-board distance measuring apparatus
Belaroussi et al. Vehicle attitude estimation in adverse weather conditions using a camera, a GPS and a 3D road map

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: WUBISHI VISUAL TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: MOBILEYE TECHNOLOGIES LTD.

Effective date: 20141120

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20141120

Address after: Israel Jerusalem

Applicant after: MOBILEYE TECHNOLOGIES LTD.

Address before: Cyprus Nicosia

Applicant before: Mobileye Technologies Ltd.

GR01 Patent grant
GR01 Patent grant