US9330343B2 - Image analysis apparatus mounted to vehicle - Google Patents

Image analysis apparatus mounted to vehicle

Info

Publication number
US9330343B2
Authority
US
United States
Prior art keywords
vehicle
speed
focus
learning
time
Prior art date: 2012-06-29
Legal status
Active
Application number
US14/411,112
Other versions
US20150146930A1 (en)
Inventor
Hiroki Nakano
Current Assignee
Denso Corp
Original Assignee
Denso Corp
Priority date: 2012-06-29
Filing date: 2013-06-28
Publication date: 2016-05-03
Application filed by Denso Corp
Assigned to DENSO CORPORATION (assignment of assignors interest; see document for details). Assignors: NAKANO, HIROKI
Publication of US20150146930A1
Application granted
Publication of US9330343B2

Classifications

    • G06K9/6262
    • G06K9/00798
    • G06K9/00993
    • G06T7/20 Analysis of motion (under G06T7/00 Image analysis)
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06V10/96 Management of image or video recognition tasks
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06T2207/20081 Training; Learning
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30256 Lane; Road marking

Abstract

An image analysis apparatus picks up an image of a region ahead of a vehicle using a camera, and allows a control unit to analyze the picked-up image data generated by the camera to learn a focus-of-expansion position. The control unit controls the learning performance as follows. Specifically, the control unit does not start learning the focus-of-expansion position until the state in which the detection value of the vehicle speed exceeds a reference speed has continued beyond a specified duration of time. Once that state has continued beyond the specified duration of time, learning of the focus-of-expansion position is started from that time point. The specified duration of time may be determined on the basis of statistics on the durations of simulated runs of the vehicle performed on a chassis dynamometer in a vehicle inspection.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
This application is a U.S. National Phase Application under 35 U.S.C. 371 of International Application No. PCT/JP2013/067817 filed on Jun. 28, 2013 and published in Japanese as WO 2014/003167 A1 on Jan. 3, 2014. This application is based on and claims the benefit of priority from Japanese Patent Application No. 2012-147001 filed Jun. 29, 2012. The entire disclosures of all of the above applications are incorporated herein by reference.
BACKGROUND
1. Technical Field
The present invention relates to an image analysis apparatus mounted to a vehicle, and in particular to an image analysis apparatus mounted to a vehicle, which performs image analysis based on the position of a focus of expansion.
2. Background Art
Recently, systems have been provided for obtaining various pieces of information regarding the running of a vehicle. In such a system, a camera is installed in the vehicle, and image data picked up by the camera are used to obtain the various pieces of information. An example of such a system is disclosed in Patent Literature 1. The system of Patent Literature 1 is an in-vehicle system that analyzes picked-up image data obtained from an in-vehicle camera to calculate the position of a focus of expansion (FOE) and thereby estimate the posture of the in-vehicle camera. The focus of expansion refers to the point at which a group of parallel lines converges in a perspective drawing.
According to an in-vehicle system of this type, picked-up image data are analyzed taking account of the posture of the in-vehicle camera. Thus, for example, the system enables calculation of a running state of the vehicle in relation to the road, or a distance to a vehicle running in a forward direction.
Patent Literature 1: JP-A-H07-077431
Technical Problem
For example, in learning a focus-of-expansion position, steep edges in luminance variation are extracted from picked-up image data to estimate a region defined by road division lines (e.g., white lines, Botts' dots, etc.) shown in the picked-up image data. Then, the edges corresponding to the road division lines are linearly approximated to obtain two straight lines, followed by calculating an intersection position of the two straight lines. For example, candidates of the focus of expansion are calculated on the basis of a weighted time average of the intersection position.
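For illustration only (this sketch is not part of the patent disclosure), the computation just described can be rendered in a few lines of Python/NumPy: two lines are fitted to edge points, their intersection is computed, and the intersection is smoothed with a weighted time average. The function names and the smoothing weight alpha are hypothetical.

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of y = a*x + b to edge points (an N x 2 array of (x, y))."""
    x, y = points[:, 0], points[:, 1]
    a, b = np.polyfit(x, y, deg=1)
    return a, b

def foe_candidate(left_line, right_line):
    """Intersection of two approximated division lines y = a*x + b (assumes a1 != a2)."""
    (a1, b1), (a2, b2) = left_line, right_line
    x = (b2 - b1) / (a1 - a2)
    return np.array([x, a1 * x + b1])

def smooth_candidate(prev, current, alpha=0.1):
    """Weighted time average of intersection positions; alpha is an assumed weight."""
    return current if prev is None else (1.0 - alpha) * prev + alpha * current
```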
For example, probability evaluation is performed for the candidates of the focus of expansion by comparing the candidates with a focus-of-expansion position learned in the past to thereby reject those candidates which have low probability. Then, the candidate that has not been rejected is used as a focus of expansion to thereby learn a focus-of-expansion position. For example, the information of the learned focus-of-expansion position is used in estimating edges that are most probable as road division lines.
A focus-of-expansion position is learned while the vehicle runs. As shown in FIG. 5, when a vehicle is inspected, a vehicle 100 is often placed on a chassis dynamometer 200 to perform a simulated run. In such a vehicle inspection, a focus-of-expansion position may be learned in spite of the fact that the vehicle 100 does not run on a road, and thus error learning of a focus-of-expansion position may occur. For example, when there is a wall 210 ahead of a camera 11, stains on the wall 210, or the shadows cast from nearby constructions onto the wall 210 may be erroneously estimated as road division lines, causing error learning of a focus-of-expansion position.
SUMMARY
In light of such a problem, it is desired to suppress the occurrence of error learning of a focus-of-expansion position in performing a simulated run of a vehicle.
According to a typical example, there is provided an image analysis apparatus mounted to a vehicle. The image analysis apparatus includes a camera, a learning means, a speed detecting means and a controlling means. The camera picks up an image of a region ahead of the vehicle and generates image data that show a picked-up image of the region. The learning means analyzes the image data generated by the camera and learns a focus-of-expansion position. The controlling means allows the learning means to start a performance of learning a focus-of-expansion position on condition that a state where the speed of the vehicle detected by the speed detecting means exceeds a reference speed has continued for not less than a predetermined duration of time.
In the case where a vehicle is subjected to a simulated run on a chassis dynamometer, such as in a vehicle inspection, there is a low probability that the simulated run lasts for a long duration of time. In general, a simulated run of a vehicle on a chassis dynamometer is completed in about one minute.
Accordingly, when a learning performance for a focus-of-expansion position is ensured to be started by the learning means on condition that the state where the speed of the vehicle detected by the speed detecting means exceeds a reference speed has continued for not less than a predetermined duration of time, learning of a focus-of-expansion position is suppressed from being performed in a state where the vehicle is subjected to a simulated run on a chassis dynamometer. Thus, error learning of a focus-of-expansion position is suppressed from occurring due to the learning of a focus-of-expansion position in a simulated run of the vehicle.
In the image analysis apparatus, the learning means may be configured to learn a focus-of-expansion position on the basis of an estimation result for road division lines that are shown in image data. Further, the controlling means may be configured to stop the learning performance of the learning means when the speed of the vehicle detected by the speed detecting means has become not more than a reference speed or a learning-performance inhibition speed that is set in advance to a speed lower than the reference speed.
BRIEF DESCRIPTION OF THE DRAWINGS
In the accompanying drawings:
FIG. 1 is a block diagram illustrating a configuration of a vehicle control system provided with an image analysis apparatus related to an embodiment of the present invention;
FIG. 2 is a block diagram illustrating a correlation between a plurality of processes performed by a control unit of the image analysis apparatus;
FIG. 3 is a flow chart illustrating a learning control process performed by the control unit;
FIG. 4 is a graph explaining operation modes of the control unit, being correlated to vehicle speed; and
FIG. 5 is a diagram illustrating a state of a vehicle subjected to a simulated run on a chassis dynamometer.
DESCRIPTION OF EMBODIMENTS
Referring to FIGS. 1 to 4, hereinafter is described an embodiment of an image analysis apparatus related to the present invention. The image analysis apparatus related to the invention should not be construed as being limited to the embodiment described below, but may be implemented in various modes.
The image analysis apparatus related to the present invention is implemented being incorporated in a vehicle control system which is mounted to a vehicle. FIG. 1 illustrates a configuration of a vehicle control system 1. As shown in the figure, the vehicle control system 1 includes an image analysis apparatus 10 as an in-vehicle type electronic machine, a vehicle control apparatus 20 and a wheel-speed sensor 30.
In the vehicle control system 1, the image analysis apparatus 10, the vehicle control apparatus 20 and the wheel-speed sensor 30 are individually connected to an in-vehicle network and configured to enable mutual communication. Other than the wheel-speed sensor 30, the in-vehicle network is connected with various sensors (not shown), such as an acceleration sensor, that can detect a running state of the vehicle, in a manner of providing detection values.
The image analysis apparatus 10 includes a camera 11, a communication interface 15 and a control unit 17. The camera 11 picks up an image of a field of view ahead of the vehicle that is equipped with the vehicle control system 1 (so-called own vehicle) to generate picked-up image data as image data that show a picked-up image of the field of view and sequentially input picked-up image data to the control unit 17. In the present embodiment, a monocular camera is used as the camera 11, but a stereo camera may be used instead.
The communication interface 15 is controlled by the control unit 17 and configured to enable two-way communication with communication nodes, such as the vehicle control apparatus 20 and the wheel-speed sensor 30, via the in-vehicle network.
The control unit 17 carries out overall control of the image analysis apparatus 10. The control unit 17 includes a CPU 17A, a ROM 17B and a RAM 17C. In the control unit 17, the CPU 17A executes various processes according to programs stored in the ROM 17B to thereby realize various functions as the image analysis apparatus 10. The RAM 17C is used as a working memory when the programs are executed by the CPU 17A.
The control unit 17 performs a focus-of-expansion learning process PR1, a learning control process PR2, a road-division-line estimation process PR3, a running-state estimation process PR4, and the like shown in FIG. 2, according to the programs stored in the ROM 17B. The focus-of-expansion learning process PR1 is a process for learning a focus of expansion (FOE) in picked-up image data according to a well-known technique. A learned focus-of-expansion position is stored in the ROM 17B as a parameter indicating a camera posture. For example, the ROM 17B of the present embodiment is configured as an electrically rewritable flash memory.
Further, the learning control process PR2 is a process for controlling the execution of the focus-of-expansion learning process PR1. The control unit 17 executes the learning control process PR2 to control the start/termination of the focus-of-expansion learning process PR1.
Besides, the road-division-line estimation process PR3 is a process for estimating a region defined by road division lines that are shown in picked-up image data. In the road-division-line estimation process PR3, edges, as candidates of road division lines, are extracted from picked-up image data. Then, based on a positional relationship of the directions of these edges with the learned focus of expansion, edges having a probability of being the road division lines of the road on which the vehicle runs are determined. Thus, the region defined by the road division lines of the road on which the vehicle runs is estimated. If a focus of expansion has not been learned, a focus-of-expansion position calculated from installation parameters of the camera 11, for example, is used as an index to estimate road division lines.
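As a hedged sketch of this positional-relationship test (the names and the pixel tolerance are assumptions, not taken from the patent), an edge segment can be retained as a division-line candidate only if its extension passes near the learned focus of expansion:

```python
import numpy as np

def distance_to_extension(foe, seg_start, seg_end):
    """Perpendicular distance from the FOE to the infinite line through an edge segment."""
    direction = seg_end - seg_start
    normal = np.array([-direction[1], direction[0]], dtype=float)
    normal /= np.linalg.norm(normal)
    return abs(np.dot(foe - seg_start, normal))

def probable_division_lines(segments, foe, tolerance_px=20.0):
    """Keep segments consistent with the learned FOE; tolerance_px is illustrative."""
    return [(p, q) for (p, q) in segments
            if distance_to_extension(foe, np.asarray(p, float), np.asarray(q, float)) <= tolerance_px]
```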
In the focus-of-expansion learning process PR1, a focus-of-expansion position can be learned on the basis of the road division lines estimated through the road-division-line estimation process PR3. For example, an intersection that appears on an extension of two estimated road division lines is detected as a candidate. Then, an error between the position of the detected candidate and the learned focus-of-expansion position is used for the evaluation as to the probability of the candidate's being a focus of expansion. If the error is large and thus the probability is low, the candidate is rejected. If the error is small and thus the probability is high, the candidate is used as a focus of expansion to thereby learn and update the focus-of-expansion position stored in the ROM 17B.
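The accept/reject-and-update rule can be pictured with the following sketch; the error threshold and the update gain are illustrative values, since the patent gives no numbers:

```python
import numpy as np

def evaluate_and_learn(candidate, learned_foe, max_error_px=15.0, gain=0.05):
    """Reject a candidate far from the learned FOE; otherwise blend it in and store it."""
    error = np.linalg.norm(candidate - learned_foe)
    if error > max_error_px:                 # large error: low probability, reject
        return learned_foe, False
    updated = learned_foe + gain * (candidate - learned_foe)
    return updated, True                     # small error: learn and update
```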
Besides, the running-state estimation process PR4 is a process for analyzing picked-up image data using the learned focus-of-expansion position as an index to estimate a running state of the vehicle in relation to the road, or a positional relationship with a different vehicle running in a forward direction. Since the running-state estimation process PR4 is well known, the process is only briefly described. As an example of the running-state estimation process PR4, there is a process for estimating the direction or position of the vehicle relative to the running lane, on the basis of the road division lines (e.g., white lines or Botts' dots) estimated from picked-up image data. Other than this, as an example of the running-state estimation process PR4, there is a process for retrieving and detecting, with reference to a focus-of-expansion position, a vehicle running in a forward direction and shown in picked-up image data, or estimating a positional relationship with a detected different vehicle in a forward direction (e.g., the distance from the vehicle equipped with the system to the vehicle in the forward direction).
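The ranging math is not spelled out in the text, but a standard flat-road pinhole model illustrates why the FOE row matters for forward-vehicle distance: a ground point imaged dy pixels below the FOE row lies at distance f*H/dy, for focal length f in pixels and camera height H. A minimal sketch under those assumptions (zero camera pitch, flat road, image rows increasing downward):

```python
def ground_distance_m(row_px, foe_row_px, focal_px, camera_height_m):
    """Distance along the road to a ground point imaged at row_px (flat-road model)."""
    dy = row_px - foe_row_px
    if dy <= 0:
        raise ValueError("the point must lie below the focus-of-expansion row")
    return focal_px * camera_height_m / dy

# Example: contact point of a forward vehicle 40 px below the FOE row,
# focal length 800 px, camera mounted 1.2 m above the road.
print(ground_distance_m(440, 400, 800.0, 1.2))   # -> 24.0 m
```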
Information resulting from the estimation in the running-state estimation process PR4 is provided, for use in vehicle control, to the vehicle control apparatus 20 via the communication interface 15 and the in-vehicle network. The information includes a running state of the vehicle in relation to the road, and a positional relationship with a different vehicle running in a forward direction. The term “vehicle control” here is used in a broad sense as a control over the devices equipped in the vehicle. The vehicle control apparatus 20 can perform the vehicle control based on the information obtained from the image analysis apparatus 10, the vehicle control including, for example: a process of outputting an audible warning to the vehicle occupants when the vehicle crosses road division lines during the run, or when the vehicle approaches a different vehicle in a forward direction; or a process of controlling braking to keep a proper inter-vehicle distance to a vehicle in a forward direction.
Since the learned value of the focus-of-expansion position is used in estimating road division lines and in estimating the running state, error learning of the focus-of-expansion position is undesirable. However, when focus-of-expansion learning is carried out while the vehicle performs a simulated run on the chassis dynamometer 200 in a vehicle inspection, the camera 11 may pick up images such as of stains on the wall 210 ahead of the vehicle or of shadows cast from nearby structures onto the wall 210. These stains and shadows may induce error learning of the focus-of-expansion position.
Depending on such error learning, a position greatly deviated from the truly correct focus-of-expansion position may be learned as the focus-of-expansion position. If such a deviation occurs, the learned value may no longer be restorable to the correct focus-of-expansion position by the learning process performed afterward, or the restoration may take a long time.
For example, the road-division-line estimation process PR3 uses the learned value of the focus-of-expansion position. In this case, if a focus-of-expansion position obtained through error learning is used as the basis, it may be difficult to determine the correct edges that correspond to road division lines. Moreover, even if the correct focus of expansion is detected as a candidate in the focus-of-expansion learning process PR1, it cannot be used for learning, owing to the deviation of the candidate position from the erroneously learned focus-of-expansion position.
In this regard, in the present embodiment, a process as shown in FIG. 3 is performed as the learning control process PR2. Thus, under the conditions where the vehicle has a high probability of being in a simulated run, being placed on the chassis dynamometer 200, the focus-of-expansion learning process PR1 is ensured not to be started.
The control unit 17 starts the learning control process PR2 shown in FIG. 3 when the ignition switch is turned on, and repeatedly performs the process until the ignition switch is turned off.
Upon start of the learning control process PR2, the control unit 17 compares a detection value of the vehicle speed with a reference speed set in advance at the design stage. The vehicle speed in this case is derived from the wheel-speed sensor 30 via the in-vehicle network and the communication interface 15. As a result of the comparison, the control unit 17 determines whether or not the detection value of the vehicle speed exceeds the reference speed (step S110). It should be noted that the reference speed may be determined by a designer from the viewpoint of whether it enables proper learning of a focus-of-expansion position. The learning of a focus-of-expansion position can be properly performed on a road with good visibility. Accordingly, the reference speed may be set, for example, to about 50 km per hour.
If the detection value of the vehicle speed is determined not to exceed the reference speed (No at step S110), the control unit 17 repeatedly performs the determination step until the detection value of the vehicle speed exceeds the reference speed. Then, if the detection value of the vehicle speed is determined to exceed the reference speed (Yes at step S110), control proceeds to step S120, at which measurement of time is started with this point taken as the origin (see FIG. 4).
After that, the control unit 17 determines whether or not the measured time, which is the duration of time elapsed from the execution of step S120, exceeds a specified duration of time (step S130). If the measured time is determined not to exceed the specified duration of time (No at step S130), it is determined whether or not the detection value of the vehicle speed derived from the wheel-speed sensor 30 has been restored to not more than the reference speed (step S140). If it is determined that the detection value of the vehicle speed has not been restored to not more than the reference speed (No at step S140), steps S130 and S140 are repeatedly performed until the measured time exceeds the specified duration of time, or until the detection value of the vehicle speed is restored to not more than the reference speed.
Then, when the measured time is determined to exceed the specified duration of time (Yes at step S130), the control unit 17 starts the focus-of-expansion learning process PR1 (step S150). On the other hand, when the detection value of the vehicle speed is determined to have been returned to not more than the reference speed prior to exceeding the specified duration of time (Yes at step S140), the learning control process PR2 is temporarily halted without starting the focus-of-expansion learning process PR1.
An explanation on the specified duration of time is provided below. The specified duration of time is determined by a designer of the image analysis apparatus 10, taking account of the duration of time of the simulated run of the vehicle on the chassis dynamometer 200 in a vehicle inspection. Specifically, if an affirmative determination is made at step S110 during a simulated run of the vehicle on the chassis dynamometer 200, a duration of time based on which an affirmative determination is not made at step S130 during the simulated run is determined to be the specified duration of time.
Specifically, the designer of the image analysis apparatus 10 may keep statistics on the durations of time of simulated runs of the vehicle on the chassis dynamometer 200 in a vehicle inspection, and determine a specified duration of time, so that the probability of making an affirmative determination at step S130 will be sufficiently low in a simulated run of the vehicle on the chassis dynamometer 200. For example, the duration of time of a simulated run on the chassis dynamometer 200 in a vehicle inspection is about one minute. Accordingly, a specified duration of time may be determined to be about two to three minutes.
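One plausible way to turn such statistics into a specified duration of time — a hypothetical recipe, not a procedure prescribed by the patent — is to take a high percentile of the observed dynamometer-run durations and apply a safety margin:

```python
import numpy as np

def specified_duration_s(dyno_durations_s, percentile=99.0, margin=2.0):
    """Choose a duration long enough that step S130 rarely fires on a dynamometer."""
    return margin * np.percentile(dyno_durations_s, percentile)

# Toy data: simulated runs of about one minute each.
durations = [55.0, 58.0, 60.0, 62.0, 65.0, 70.0]
print(specified_duration_s(durations))   # ~140 s, in line with the two to three minutes above
```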
By setting the specified duration of time in this way, it is highly probable in a simulated run on the chassis dynamometer 200 that the vehicle speed falls to not more than the reference speed, as indicated by the dashed line in FIG. 4, before the time elapsed since the vehicle speed exceeded the reference speed reaches the specified duration of time. Accordingly, an affirmative determination is made at step S140 and, as a result, the focus-of-expansion learning process is not started. On the other hand, when the vehicle runs on a road, the focus-of-expansion learning process is started with a high probability.
When the focus-of-expansion learning process PR1 is started at step S150 under the conditions described above, the control unit 17 determines whether or not termination conditions of the focus-of-expansion learning process PR1 have been met (step S160). At step S160, it is determined whether or not the detection value of the vehicle speed derived from the wheel-speed sensor 30 has become not more than a learning termination speed (e.g., 50 km per hour) determined in advance within a speed range of not more than the reference speed. If the detection value of the vehicle speed is not more than the learning termination speed, the termination conditions may be determined to have been met. If the detection value of the vehicle speed is larger than the learning termination speed, the termination conditions may be determined not to have been met. The termination conditions may be determined by the designer of the image analysis apparatus 10.
Then, when it is determined that the termination conditions have not been met (No at step S160), the control unit 17 repeatedly performs the determination step of S160 until the termination conditions are met. If the termination conditions are determined to have been met (Yes at step S160), the focus-of-expansion learning process PR1 started at step S150 is terminated (step S170) to temporarily halt the learning control process PR2. The control unit 17 repeatedly performs the learning control process PR2 in such a procedure.
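Putting steps S110 through S170 together, the following sketch mirrors the FIG. 3 flow as a simple polling loop. The callback interface, the polling period, and the exact constants are assumptions for illustration; a production control unit would typically be event-driven rather than polling.

```python
import time
from typing import Callable

REFERENCE_SPEED_KMH = 50.0     # step S110/S140 threshold (example value from the text)
SPECIFIED_DURATION_S = 150.0   # step S130 threshold (within the 2-3 minutes suggested)
TERMINATION_SPEED_KMH = 50.0   # step S160 learning termination speed (example from the text)

def learning_control_process(read_speed_kmh: Callable[[], float],
                             start_learning: Callable[[], None],
                             stop_learning: Callable[[], None]) -> None:
    """One pass of the learning control process PR2 (FIG. 3), as a polling loop."""
    # Step S110: wait until the detected vehicle speed exceeds the reference speed.
    while read_speed_kmh() <= REFERENCE_SPEED_KMH:
        time.sleep(0.1)

    t0 = time.monotonic()  # Step S120: start measuring time from this point.

    while time.monotonic() - t0 <= SPECIFIED_DURATION_S:   # Step S130
        if read_speed_kmh() <= REFERENCE_SPEED_KMH:        # Step S140: speed fell back
            return   # temporarily halt PR2 without starting PR1 (likely a dyno run)
        time.sleep(0.1)

    start_learning()   # Step S150: speed stayed high long enough; start PR1.

    # Step S160: wait until the termination condition is met.
    while read_speed_kmh() > TERMINATION_SPEED_KMH:
        time.sleep(0.1)

    stop_learning()    # Step S170: terminate PR1; PR2 will be run again later.
```

A fuller implementation would also buffer the road-division-line estimates gathered during the S120-to-S130 timing window, since, as noted below, they may be used for learning once the process PR1 starts.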
The vehicle control system 1 of the present embodiment has so far been described. According to the present embodiment, the camera 11 picks up an image of a region ahead of the vehicle. The camera 11 then produces picked-up image data which are analyzed by the control unit 17 to learn a focus-of-expansion position. Specifically, the learning of the focus-of-expansion position is started on condition that a state where the detection value of the vehicle speed derived from the wheel-speed sensor 30 exceeds the reference speed has continued for a predetermined duration of time or more (step S150).
In a vehicle inspection, for example, there is a low probability that the vehicle is in a simulated run for a long duration of time on a chassis dynamometer. Accordingly, when the focus-of-expansion learning is started under the conditions as provided by the present embodiment, the focus-of-expansion learning can be suppressed from being performed in a state where the vehicle is in a simulated run on a chassis dynamometer. Thus, according to the present embodiment, error learning of a focus-of-expansion position is suppressed from occurring during a simulated run of the vehicle. As a result, the vehicle control and the focus-of-expansion learning performed later are suppressed from being unfavorably influenced by the error learning that would otherwise have occurred.
For example, in the focus-of-expansion learning process PR1, the focus-of-expansion position is learned and updated on the basis of the information on the road division lines estimated from the picked-up image data through the road-division-line estimation process PR3, and, in estimating road division lines, the information on the learned focus of expansion is used. Accordingly, if the learned focus-of-expansion position deviates greatly from the correct position due to error learning, road division lines can no longer be correctly estimated. In this case, it takes time to learn and update the focus-of-expansion position with a correct value, and it may even be difficult to do so at all.
According to the present embodiment, the occurrence of such a situation can be suppressed by the control of the learning performance described above. Thus, the vehicle control system 1 formulated accordingly can realize proper vehicle control on the basis of the information on a focus of expansion.
In the case where an affirmative determination is made at step S130, the vehicle is regarded as having been running on a road in the period from the start of the time measurement at step S120 to the affirmative determination at step S130. Accordingly, in the focus-of-expansion learning process PR1 started at step S150, the control unit 17 can also perform the learning of a focus-of-expansion position by using the information on the road division lines estimated from the picked-up image data obtained in the period from the execution of step S120 to the affirmative determination at step S130.
In the foregoing embodiment, the image analysis apparatus 10 corresponds to an example of the electronic machine installed in the vehicle, while the wheel-speed sensor 30 corresponds to an example of the speed detecting means. Further, the focus-of-expansion learning process PR1 performed by the control unit 17 corresponds to an example of the process realized by the learning means. The learning control process PR2 performed by the control unit 17 corresponds to an example of the process realized by the controlling means.
REFERENCE SIGNS LIST
  • 1 . . . Vehicle control system,
  • 10 . . . Image analysis apparatus,
  • 11 . . . Camera,
  • 15 . . . Communication interface,
  • 17 . . . Control unit,
  • 17A . . . CPU,
  • 17B . . . ROM,
  • 17C . . . RAM,
  • 20 . . . Vehicle control apparatus,
  • 30 . . . Wheel-speed sensor,
  • 100 . . . Vehicle,
  • 200 . . . Chassis dynamometer,
  • 210 . . . Wall

Claims (11)

What is claimed is:
1. An image analysis apparatus mounted to a vehicle, comprising:
a camera that picks up an image of a region ahead of a vehicle and generates image data that indicates a picked-up image of the region;
means for analyzing the image data generated by the camera and learning a position of a focus of expansion;
means for detecting a speed of the vehicle;
first means for determining whether or not a state where the speed of the vehicle detected by the detecting means exceeds a reference speed has continued for a predetermined duration of time; and
means for allowing the analyzing means to start a performance of learning a position of the focus of expansion, when the first determining means determines that the state where the speed of the vehicle exceeds the reference speed has continued for the predetermined duration of time.
2. The image analysis apparatus according to claim 1, wherein the analyzing means learns a position of the focus of expansion on the basis of an estimation result for road division lines shown in the image data.
3. The image analysis apparatus according to claim 2, wherein:
the apparatus comprises second means for determining, after start of the learning performance for the position of the focus of expansion by the analyzing means, whether or not the speed of the vehicle detected by the detecting means has become not more than the reference speed or a learning-performance inhibition speed that is set in advance to a speed lower than the reference speed; and
the allowing means stops the learning performance of the analyzing means when the second determining means determines that the speed of the vehicle has become not more than the inhibition speed.
4. The image analysis apparatus according to claim 1, wherein:
the apparatus comprises second means for determining, after start of the learning performance for the position of the focus of expansion by the analyzing means, whether or not the speed of the vehicle detected by the detecting means has become not more than the reference speed or a learning-performance inhibition speed that is set in advance to a speed lower than the reference speed; and
the allowing means stops the learning performance of the analyzing means when the second determining means determines that the speed of the vehicle has become not more than the inhibition speed.
5. An image analysis method, comprising:
picking up an image of a region ahead of a vehicle to generate image data that indicates a picked-up image of the region;
analyzing the generated image data to learn a position of a focus of expansion;
detecting a speed of the vehicle;
determining whether or not a state where a detected speed of the vehicle exceeds a reference speed has continued for a predetermined duration of time; and
starting a performance of learning a position of the focus of expansion when a state where the speed of the vehicle exceeds the reference speed is determined to have continued for the predetermined duration of time.
6. The image analysis apparatus according to claim 1, wherein the predetermined duration of time is greater than a duration of time of a simulated run of the vehicle on a chassis dynamometer in a vehicle inspection.
7. The image analysis apparatus according to claim 1, wherein the predetermined duration of time is at least one minute.
8. The image analysis apparatus according to claim 1, wherein the predetermined duration of time is greater than ten seconds.
9. The image analysis method according to claim 5, wherein the predetermined duration of time is greater than a duration of time of a simulated run of the vehicle on a chassis dynamometer in a vehicle inspection.
10. The image analysis method according to claim 5, wherein the predetermined duration of time is at least one minute.
11. The image analysis method according to claim 5, wherein the predetermined duration of time is greater than ten seconds.
US14/411,112 2012-06-29 2013-06-28 Image analysis apparatus mounted to vehicle Active US9330343B2 (en)

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
JP2012147001A (JP2014010636A, en) | 2012-06-29 | 2012-06-29 | Electronic apparatus
JP2012-147001 | 2012-06-29 | |
PCT/JP2013/067817 (WO2014003167A1, en) | 2012-06-29 | 2013-06-28 | Vehicle-mounted image analysis device

Publications (2)

Publication Number Publication Date
US20150146930A1 US20150146930A1 (en) 2015-05-28
US9330343B2 true US9330343B2 (en) 2016-05-03

Family

ID=49783301

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/411,112 Active US9330343B2 (en) 2012-06-29 2013-06-28 Image analysis apparatus mounted to vehicle

Country Status (3)

Country Link
US (1) US9330343B2 (en)
JP (1) JP2014010636A (en)
WO (1) WO2014003167A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109884618B (en) * 2014-02-20 2023-05-26 御眼视觉技术有限公司 Navigation system for a vehicle, vehicle comprising a navigation system and method of navigating a vehicle
US9890242B2 (en) * 2014-03-11 2018-02-13 Synvina C.V. Polyester and method for preparing such a polyester
JP6406886B2 (en) * 2014-06-11 2018-10-17 キヤノン株式会社 Image processing apparatus, image processing method, and computer program
JP6265095B2 (en) * 2014-09-24 2018-01-24 株式会社デンソー Object detection device
KR102627453B1 (en) 2018-10-17 2024-01-19 삼성전자주식회사 Method and device to estimate position
KR20210034253A (en) 2019-09-20 2021-03-30 삼성전자주식회사 Method and device to estimate location
CN111461024B (en) * 2020-04-02 2023-08-25 福建工程学院 Zombie vehicle identification method based on intelligent vehicle-mounted terminal

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0777431A (en) 1993-09-08 1995-03-20 Sumitomo Electric Ind Ltd Camera attitude parameter calculating method
US5638116A (en) 1993-09-08 1997-06-10 Sumitomo Electric Industries, Ltd. Object recognition apparatus and method
US20010016797A1 (en) * 2000-02-22 2001-08-23 Yazaki Corporation Danger deciding apparatus for motor vehicle and environment monitoring apparatus therefor
JP2002259995A (en) 2001-03-06 2002-09-13 Nissan Motor Co Ltd Position detector
US20030214576A1 (en) * 2002-05-17 2003-11-20 Pioneer Corporation Image pickup apparatus and method of controlling the apparatus
US20040057600A1 (en) * 2002-09-19 2004-03-25 Akimasa Niwa Moving body detecting apparatus
US20100172542A1 (en) * 2007-12-06 2010-07-08 Gideon Stein Bundling of driver assistance systems
US20120182426A1 (en) * 2009-09-30 2012-07-19 Panasonic Corporation Vehicle-surroundings monitoring device
US20120308114A1 (en) * 2011-05-31 2012-12-06 Gabriel Othmezouri Voting strategy for visual ego-motion from stereo

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
International Preliminary Report on Patentability (in Japanese with English Translation) for PCT/JP2013/067817, issued Dec. 31, 2014; ISA/JP.
International Search Report (in Japanese with English Translation) for PCT/JP2013/067817, mailed Sep. 17, 2013; ISA/JP.
Office Action mailed Feb. 23, 2016 in corresponding Japanese Application No. 2012-147001, with English translation.
Office Action mailed Sep. 8, 2015 in corresponding Japanese Application No. 2012-147001, with English translation.

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11297284B2 (en) * 2014-04-08 2022-04-05 Udisense Inc. Monitoring camera and mount
US9530080B2 (en) * 2014-04-08 2016-12-27 Joan And Irwin Jacobs Technion-Cornell Institute Systems and methods for configuring baby monitor cameras to provide uniform data sets for analysis and to provide an advantageous view point of babies
US20170078620A1 (en) * 2014-04-08 2017-03-16 Assaf Glazer Systems and methods for configuring baby monitor cameras to provide uniform data sets for analysis and to provide an advantageous view point of babies
US10165230B2 (en) * 2014-04-08 2018-12-25 Udisense Inc. Systems and methods for configuring baby monitor cameras to provide uniform data sets for analysis and to provide an advantageous view point of babies
US20150288877A1 (en) * 2014-04-08 2015-10-08 Assaf Glazer Systems and methods for configuring baby monitor cameras to provide uniform data sets for analysis and to provide an advantageous view point of babies
US11785187B2 (en) * 2014-04-08 2023-10-10 Udisense Inc. Monitoring camera and mount
US20190306465A1 (en) * 2014-04-08 2019-10-03 Udisense Inc. Systems and methods for configuring baby monitor cameras to provide uniform data sets for analysis and to provide an advantageous view point of babies
US10645349B2 (en) * 2014-04-08 2020-05-05 Udisense Inc. Systems and methods for configuring baby monitor cameras to provide uniform data sets for analysis and to provide an advantageous view point of babies
US10708550B2 (en) * 2014-04-08 2020-07-07 Udisense Inc. Monitoring camera and mount
US20220182585A1 (en) * 2014-04-08 2022-06-09 Udisense Inc. Monitoring camera and mount
USD854074S1 (en) 2016-05-10 2019-07-16 Udisense Inc. Wall-assisted floor-mount for a monitoring camera
USD855684S1 (en) 2017-08-06 2019-08-06 Udisense Inc. Wall mount for a monitoring camera
US10874332B2 (en) 2017-11-22 2020-12-29 Udisense Inc. Respiration monitor
USD900431S1 (en) 2019-01-28 2020-11-03 Udisense Inc. Swaddle blanket with decorative pattern
USD900430S1 (en) 2019-01-28 2020-11-03 Udisense Inc. Swaddle blanket
USD900429S1 (en) 2019-01-28 2020-11-03 Udisense Inc. Swaddle band with decorative pattern
USD900428S1 (en) 2019-01-28 2020-11-03 Udisense Inc. Swaddle band

Also Published As

Publication number Publication date
WO2014003167A1 (en) 2014-01-03
JP2014010636A (en) 2014-01-20
US20150146930A1 (en) 2015-05-28

Similar Documents

Publication Publication Date Title
US9330343B2 (en) Image analysis apparatus mounted to vehicle
US20150294453A1 (en) Image analysis apparatus mounted to vehicle
US20230079730A1 (en) Control device, scanning system, control method, and program
US10767994B2 (en) Sensor output correction apparatus
US9690996B2 (en) On-vehicle image processor
KR101961571B1 (en) Object recognition device using plurality of object detection means
CN110834642B (en) Vehicle deviation identification method and device, vehicle and storage medium
CN109099920B (en) Sensor target accurate positioning method based on multi-sensor association
WO2006101004A1 (en) Vehicle-use image processing system, vehicle-use image processing method, vehicle-use image processing program, vehicle, and method of formulating vehicle-use image processing system
JP2014006243A (en) Abnormality diagnostic device, abnormality diagnostic method, imaging apparatus, moving body control system and moving body
JP6936098B2 (en) Object estimation device
CN111947669A (en) Method for using feature-based positioning maps for vehicles
US20210240991A1 (en) Information processing method, information processing device, non-transitory computer-readable recording medium recording information processing program, and information processing system
KR101628547B1 (en) Apparatus and Method for Checking of Driving Load
JP2010003253A (en) Motion estimation device
US20100023263A1 (en) Position Detection Method And Position Detection Apparatus For A Preceding Vehicle And Data Filtering Method
JP4069919B2 (en) Collision determination device and method
JP6333437B1 (en) Object recognition processing device, object recognition processing method, and vehicle control system
JP4151631B2 (en) Object detection device
JP7334632B2 (en) Object tracking device and object tracking method
CN113167810A (en) Electronic device, correction method, and program
JP2005329765A (en) Run lane recognition device
WO2022230281A1 (en) Outside environment recognition device and outside environment recognition system
WO2022130709A1 (en) Object identification device and object identification method
JP2017072914A (en) Object recognition apparatus

Legal Events

Date Code Title Description
AS Assignment

Owner name: DENSO CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NAKANO, HIROKI;REEL/FRAME:035672/0916

Effective date: 20150127

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8