US6873911B2 - Method and system for vehicle operator assistance improvement - Google Patents

Method and system for vehicle operator assistance improvement

Info

Publication number
US6873911B2
US6873911B2
Authority
US
United States
Prior art keywords
automobile
control input
vehicle
obstacle
obstacle vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US10/356,742
Other versions
US20030187578A1 (en)
Inventor
Hikaru Nishira
Taketoshi Kawabe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nissan Motor Co Ltd
Original Assignee
Nissan Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2002025181A (patent JP3714258B2)
Priority claimed from JP2002243212A (patent JP3832403B2)
Application filed by Nissan Motor Co Ltd filed Critical Nissan Motor Co Ltd
Assigned to NISSAN MOTOR CO., LTD. Assignment of assignors' interest (see document for details). Assignors: KAWABE, TAKETOSHI; NISHIRA, HIKARU
Publication of US20030187578A1
Application granted
Publication of US6873911B2
Anticipated expiration
Legal status: Expired - Lifetime (current)

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60TVEHICLE BRAKE CONTROL SYSTEMS OR PARTS THEREOF; BRAKE CONTROL SYSTEMS OR PARTS THEREOF, IN GENERAL; ARRANGEMENT OF BRAKING ELEMENTS ON VEHICLES IN GENERAL; PORTABLE DEVICES FOR PREVENTING UNWANTED MOVEMENT OF VEHICLES; VEHICLE MODIFICATIONS TO FACILITATE COOLING OF BRAKES
    • B60T7/00Brake-action initiating means
    • B60T7/12Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger
    • B60T7/16Brake-action initiating means for automatic initiation; for initiation not subject to will of driver or passenger operated by remote control, i.e. initiating means not mounted on vehicle
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G08G1/167Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R21/01Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R21/013Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over
    • B60R21/0134Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents including means for detecting collisions, impending collisions or roll-over responsive to imminent contact with an obstacle, e.g. using radar systems
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2013/9318Controlling the steering
    • G01S2013/93185Controlling the brakes
    • G01S2013/9319Controlling the accelerator
    • G01S2013/932Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles using own vehicle data, e.g. ground speed, steering wheel direction
    • G01S2013/9324Alternative operation using ultrasonic waves
    • G01S2013/9327Sensor installation details
    • G01S2013/93271Sensor installation details in the front of the vehicles
    • G01S2013/93272Sensor installation details in the back of the vehicles

Definitions

  • FIGS. 7A and 7B illustrate in broken line grid a set of proposed pairs of control longitudinal and lateral inputs.
  • FIG. 9 is a view illustrating varying of recommended control input with time and a recommended trajectory within a screen of a display.
  • FIG. 13 illustrates one manner of displaying two different recommendations of control input to meet different maneuvers.
  • FIG. 29 is a graph illustrating increasing of a weighting factor with time.
  • FIG. 36 illustrates an example of driving situation, which the present invention is applicable to.
  • the automobile 10 is provided with a camera 12 .
  • the camera 12 is mounted on the automobile 10 , for example, in the vicinity of the internal rear-view mirror in order to detect the presence of lane markings on a road.
  • a signal image processor 14 estimates the presence of the adjacent lane or lanes, if any, on the road.
  • the automobile 10 is provided with front radar 16.
  • the front radar 16 is mounted on the automobile 10, for example, in the middle of the front grille in order to detect the locations of obstacle vehicle(s) in front.
  • the automobile 10 is provided with rear radar 18.
  • the rear radar 18 is mounted on the automobile 10, for example, in the middle of the rear grille in order to detect the locations of obstacle vehicle(s) in the rear.
  • the ESS includes the camera 12 , image processor 14 , front radar 16 , rear radar 18 and side sensors 20 .
  • the predictor equation (17) accounts for interaction between the automobile A and the obstacle vehicles B, C and D. Accounting for such interaction may be omitted in a driving situation illustrated in FIG. 18 .
  • each vehicle operator can keep driving by looking ahead only so that a change in behavior of the automobile A will not have any influence on behaviors of the obstacle vehicles B, C and D. This is the case where the following predictor equations (18) and (19) may be used.
  • we considered the longitudinal model component.
  • we consider the lane change model component. For various reasons, the vehicle operator decides to make a lane change.
  • we consider a lane change model component for passing the preceding vehicle, and explain how to implement it as a predictor model.
  • the lane change model component explained here is made of a first subcomponent to determine whether or not a vehicle operator has decided to make lane change, and a second subcomponent to determine whether or not the execution of lane change is possible.
  • the determination function f LC (x A , x D ) expressed by the equation (23) means a “time headway” to the automobile A which the obstacle vehicle D is following. Under this condition, when the determination function f LC (x A , x D ) exceeds the threshold t 0 , the automobile A can change lane to a position in front of the obstacle vehicle D.
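As an illustrative sketch only, the lane-change determination above can be read as a time-headway threshold test. Equation (23) itself is not reproduced in this excerpt, so the headway form, the function names, and the threshold value below are all assumptions rather than the patent's actual definition of f LC.

```python
def lane_change_headway(pos_a, pos_d, speed_d):
    """Hypothetical form of the determination function f_LC(x_A, x_D):
    time headway from obstacle vehicle D to the automobile A it follows.
    pos_a, pos_d are longitudinal positions [m]; speed_d is D's speed [m/s]."""
    if speed_d <= 0.0:
        return float("inf")  # D is not moving, so the gap never closes
    return (pos_a - pos_d) / speed_d

def lane_change_allowed(pos_a, pos_d, speed_d, threshold=2.0):
    """Lane change to a position in front of D is allowed when the headway
    exceeds the threshold t_0 (the 2 s default here is illustrative)."""
    return lane_change_headway(pos_a, pos_d, speed_d) > threshold
```

For example, with A at 100 m and D at 40 m travelling 20 m/s, the headway is 3 s, which clears a 2 s threshold.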
  • the determination function f LC (x A , x D ) expressed by the equation (23) cannot be used without modification.
  • processing as mentioned above is carried out to make a determination as to lane change.
  • when the determination indicates that it is allowed to change lane, such a vehicle is processed accordingly.
  • the longitudinal control input u x (t) and the lateral control input u y (t) are given.
  • the initial values of X and Y are given by the map 28 .
  • the time integral of the predictor equations (17) and (27) will give predicted future values X(t) and Y(t) when the vehicle operator applies the longitudinal and lateral control inputs u x (t) and u y (t) to the automobile A.
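The time integral described above can be sketched as a forward-Euler integration over the evaluating period. The predictor equations (17) and (27) are not reproduced in this excerpt, so the dynamics function f below is a caller-supplied placeholder, and the function names and step count are illustrative assumptions.

```python
def predict_trajectory(x0, f, u_x, u_y, t0, tf, n_steps):
    """Forward-Euler time integral of a predictor equation dx/dt = f(x, ux, uy).
    x0: initial state (given by the map component); u_x, u_y: longitudinal and
    lateral control inputs as functions of time. Returns sampled times and the
    predicted future states X(t)."""
    dt = (tf - t0) / n_steps
    x, t = list(x0), t0
    ts, xs = [t0], [list(x0)]
    for _ in range(n_steps):
        dx = f(x, u_x(t), u_y(t))
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
        ts.append(t)
        xs.append(list(x))
    return ts, xs
```

With a toy state [position, speed] and constant unit acceleration as the longitudinal input, integrating over one second predicts a final speed of 1 m/s.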
  • Recommendation Component 34
  • the character i at the shoulder of each of u x and u y indicates a positive integer among 1, 2, . . . N. N indicates the number by which the evaluating period [t 0 t f ] is divided.
  • the microprocessor prepares a set of proposed pairs of control inputs for examination to determine relevance with respect to given maneuver(s).
  • there are various manners of preparing the set of proposed pairs of control inputs. Here, we explain one representative example of such manner below.
  • the microprocessor determines whether or not all of the proposed pairs of control inputs have been selected. If this is not the case, the control logic returns to box 74. If the computed results have been stored with respect to all of the prepared proposed pairs of control inputs, the control logic goes to box 82.
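The loop over boxes 74 to 82 can be sketched as an exhaustive search over the prepared grid of proposed control-input pairs (the broken-line grid of FIGS. 7A and 7B), keeping the pair that minimizes the functional J[u x , u y ]. The functional and the candidate grids below are placeholders for the ones the evaluation function generator 32 would supply.

```python
import itertools

def recommend_control(j_functional, ux_candidates, uy_candidates):
    """Examine every proposed pair of longitudinal/lateral control inputs and
    return the pair minimizing the evaluation functional J[ux, uy], together
    with its cost. The functional here is an opaque callable."""
    best_pair, best_cost = None, float("inf")
    for ux, uy in itertools.product(ux_candidates, uy_candidates):
        cost = j_functional(ux, uy)
        if cost < best_cost:
            best_pair, best_cost = (ux, uy), cost
    return best_pair, best_cost
```

With a quadratic toy functional penalizing deviation from (1, 0), the search picks that pair out of the grid.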
  • the microprocessor within the computing device 24 performs reading operation of the signals from the sensing devices 16 , 18 , 20 , 14 , and 22 (see FIG. 1 ).
  • the microprocessor inputs the functional J [u x , u y ] for maneuver(s) from the evaluation function generator 32 .
  • the evaluation function generator 32 sets a functional J to meet operator demand for driving with less acceleration feel at vehicle speeds around a desired value of vehicle speed v d A .
  • FIG. 14 illustrates another manner of informing the vehicle operator of the automobile A.
  • arrows 140 and 142 appear in the present driving situation road map to prompt the vehicle operator to accelerate or decelerate.
  • when the curve 126 recommends acceleration initially, the arrow 140 appears.
  • when the curve 130 recommends deceleration, the arrow 142 appears.
  • the block diagram in FIG. 16 shows, as the additional software components, a control target automatic generator 170 and an actuator commander 180 .
  • the flow chart in FIG. 17 illustrates a control routine 190 .
  • the microprocessor calls the control routine 190 and repeats its execution.
  • the operation of this embodiment is explained along this control routine 190 taking the driving situation in FIG. 18 as an example.
  • referring to FIGS. 20 to 25C, another embodiment is described.
  • This and the first-mentioned embodiments are the same in hardware.
  • this embodiment is different from the first mentioned embodiment in the contents of a behavior predictor 30 and a recommendation generator 34 (see FIG. 2 ).
  • the microprocessor executes the algorithm 220 shown in FIG. 22 to create a new recommended control input u x * (t:t i ). It is to be remembered that the evaluating period differs from one step to another. Thus, the evaluating period of the previous control input u x * (t−Δt:t i −Δt) does not match the present evaluating period in the present step. Accordingly, the time scale of the previous control input u x * (t−Δt:t i −Δt) is corrected to match the present evaluating period.
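The time-scale correction described above amounts to re-indexing the previous recommended input so that it is evaluated one update interval later on the present evaluating period, a common warm start in receding-horizon schemes. The sketch below is an assumption about that re-indexing; the function name is hypothetical.

```python
def shift_previous_input(prev_u, dt):
    """Shift the previous recommended control input forward by the update
    interval dt, so that evaluating the shifted function at time t returns
    the old recommendation at t + dt on the old time scale."""
    return lambda t: prev_u(t + dt)
```

For instance, shifting a ramp input u(t) = 2t by 0.5 s makes the shifted function return 3.0 at t = 1.0.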
  • the microprocessor determines whether or not the time t i has reached T f . If the time t i has reached T f , the content of the storage variable u x * (t) is output as the final recommended control input. If not, the logic returns to box 246 .
  • the evaluation function forming component 24 C inputs the distance to each of the labeled obstacle vehicles to compute an evaluation function or term evaluating the degree of risk which the obstacle vehicle imparts to the automobile A.
  • the microprocessor computes reaction force F using the equation (83) and determines servomotor command needed to produce the reaction force. After box 304 , the routine comes to an end to complete one cycle operation.
  • the weighting factor w i providing the weighting on the evaluation term is equal to zero upon receiving a grant request for granting a label to a newly incoming obstacle vehicle in the label granting field. Subsequently, the weighting factor is increased from zero at a rate with time.
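The patent only states that the weighting factor rises from zero at some rate with time (FIG. 29); a first-order exponential ramp is one plausible shape for such a rise. The function below is therefore an assumption, not the patent's formula, and the parameter names are illustrative.

```python
import math

def weight_ramp(t, w_final=1.0, tau=1.0):
    """Hypothetical weighting factor w_i(t): zero when the label is granted
    at t = 0, then rising smoothly toward w_final with time constant tau."""
    if t < 0.0:
        return 0.0
    return w_final * (1.0 - math.exp(-t / tau))
```

Ramping the weight in this way avoids a step change in the evaluation index when a new obstacle vehicle is labeled.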
  • the microprocessor determines whether or not any one of labeled obstacle vehicles is lost by the sensing system. If this is the case, in box 320 , the microprocessor creates estimates, as expressed by the equation (90), using measures immediately before the labeled obstacle vehicle has been lost.
  • the fully drawn curve, the dotted line curve and the one-dot chain line curve illustrate varying of the optimal solution u x * (t) with time before and after the moment t 0 when the following obstacle vehicle C has gone out of the label holding field. It is assumed, here, that the vehicle operator traces the optimal solution u x * (t) by accelerating or decelerating the automobile A. The scenario is that until the moment t bd , the obstacle vehicles B and C travel as fast as the automobile A. Immediately after the moment t bd , the vehicle C slows down and leaves the label holding field at moment t 0 .
  • the fully drawn line illustrates the case where the weighting factor w 1 (t) decreases at a varying rate.
  • the dotted line curve illustrates the case where the weighting factor w 1 (t) decreases at a fixed rate.
  • the one-dot chain line curve illustrates the case where the weighting factor w 1 is fixed. From the fully drawn curve, it is appreciated that the optimal solution varies smoothly.
  • another embodiment can be understood with reference to FIGS. 36 and 37.
  • the hardware and software components used in this embodiment are the same as those used in the above described embodiment and illustrated in FIG. 27 .
  • the equations (80) to (82) may be used to vary the weighting factors, and the equations (74) to (76) may be used as predictor equations.
  • the evaluating period T of the evaluation index J (73) is varied from zero at a gradual rate to the designed value so as to solve the optimization problem.
  • T(t) = T 0 (1 − exp(−t))  (92)
  • the fully drawn line illustrates the case where the weighting factor increases gradually and the evaluating period increases gradually.
  • the dotted line curve illustrates the case where the weighting factor is fixed and the evaluating period increases gradually.
  • the one-dot chain line curve illustrates the case where the weighting factor is fixed and the evaluating period is fixed. From the fully drawn curve, it is appreciated that the optimal solution varies smoothly immediately after the system switch 266 has been turned on.

Abstract

A method improves operator assistance of an automobile. On a substantially real-time basis, data on the automobile and on intervehicle relationships involving the automobile are collected. The data are processed to determine variables for evaluation. The determined variables are evaluated to recommend a control input.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates in general to the field of vehicle operation, and more particularly, to a method and system for improving the assistance to a vehicle operator.
2. Description of the Background Art
For each automobile on highways, the driver's cognitive load increases to maintain a safe “headway” to the vehicle it is following and to track a desired trajectory including lane change. Human beings have a finite ability to perceive the vehicle environment in which the vehicle is operating, e.g., the road conditions, traffic conditions, etc., to attend to elements of the environment, to cognitively process the stimuli taken in, to draw appropriate meaning from perceptions, and to act appropriately upon those perceived meanings. Furthermore, there is great variation within the driving population in both native and developed abilities to drive. Training and experience can be used. Unfortunately, there is little formal or informal training in the skills involved in driving, beyond the period when people first apply for their licenses. Driver training programs have not proven to be particularly effective, nor is training continued through the driving career. In fact, most people think of driving as a right rather than a privilege. Further, most think of themselves as good drivers and of “the other person” as the one who creates problems. Unless and until change takes place that encourages drivers to wish to improve their driving skill, it seems that technological solutions designed to minimize cognitive load have the potential for improving the safety of the highway transportation system.
To address these safety concerns, there has been proposed a driver assistance system that attempts to minimize cognitive load in making lane change. The system operates continuously, taking in vehicle environment data that encompasses data related to the environment in which the vehicle is operating, e.g., the road conditions, traffic conditions, etc. Sensing devices provide the vehicle environment data. Radar, laser, ultrasonic and video systems can provide a map of objects near the vehicle and their motion relative to the vehicle. JP-A2001-52297 proposes a system of this category. The map provides present locations and speeds of vehicles, which are evaluated to justify a proposed action, e.g., lane change. The concept behind it is to recommend the action or actions that the present environment data allows. Since the data available is limited to what the map provides, the action or actions recommended fail to accomplish a satisfactory level of driving skill. For example, a vehicle operator of improved driving skill would employ accelerating/braking and lane change maneuvers if the present vehicle environment does not allow a lane change. Apparently, s/he foresees the future vehicle environment upon initiating such maneuvers.
For a variety of reasons, it is desirable to develop a method and system for improving assistance to a vehicle operator, which is fit to and thus accepted by the vehicle operator.
SUMMARY OF THE INVENTION
The present invention provides, in one aspect thereof, a method for improving operator assistance of an automobile, the method comprising:
    • collecting, on substantially real time basis, data on the automobile and on intervehicle relationship involving the automobile;
    • processing the data to determine variables for evaluation; and
    • evaluating the determined variables to recommend control input.
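The three claimed steps can be sketched as a single real-time cycle. The three callables below stand in for the sensing, processing and evaluation stages and are placeholders of my own naming, not the patent's implementation.

```python
def assist_step(read_sensors, extract_variables, evaluate):
    """One cycle of the claimed method: collect data on the automobile and
    intervehicle relationships, process the data into variables for
    evaluation, then evaluate those variables to recommend a control input."""
    data = read_sensors()                # collect, on a real-time basis
    variables = extract_variables(data)  # process into evaluation variables
    return evaluate(variables)           # recommend control input
```

A toy usage: collect current and desired speed, reduce them to a speed gap, and recommend acceleration when the gap is positive.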
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be apparent from reading of the following description in conjunction with the accompanying drawings.
FIG. 1 is an automobile installed with a method and system for improving assistance to a vehicle operator in accordance with one exemplary implementation of the present invention.
FIG. 2 is a block diagram illustrating the present invention.
FIG. 3 illustrates an example of driving situation.
FIG. 4 is a block diagram of a behavior predictor component.
FIG. 5 is a flow chart illustrating operation of the behavior predictor component.
FIG. 6 is a flow chart illustrating operation of a recommendation component.
FIGS. 7A and 7B illustrate in broken line grid a set of proposed pairs of control longitudinal and lateral inputs.
FIG. 8 is a flow chart illustrating operation of the present invention.
FIG. 9 is a view illustrating varying of recommended control input with time and a recommended trajectory within a screen of a display.
FIG. 10 is an automobile installed with a method and system for improving assistance to a vehicle operator in accordance with another embodiment of the present invention.
FIG. 11 is a block diagram, similar to FIG. 2, illustrating the present invention.
FIG. 12 illustrates an example of driving situation, which the present invention is applicable to.
FIG. 13 illustrates one manner of displaying two different recommendations of control input to meet different maneuvers.
FIG. 14 illustrates another manner of displaying the two different recommendations of control input to meet different maneuvers.
FIG. 15 is an automobile installed with a method and system for improving assistance to a vehicle operator in accordance with another embodiment of the present invention.
FIG. 16 is a block diagram, similar to FIG. 11, illustrating the embodiment of the present invention.
FIG. 17 is a flow chart illustrating the present invention.
FIG. 18 illustrates an example of driving situation, which the present invention is applicable to.
FIG. 19 illustrates another manner of displaying the recommendation of control input to meet a maneuver including lane change to take route toward a destination.
FIG. 20 illustrates an example of driving situation for explaining the operation of the present invention.
FIGS. 21A, 21B and 21C illustrate three future intervehicle relationships derivable from the illustrated driving situation in FIG. 20.
FIG. 22 is a flow chart illustrating a recommended control input update algorithm.
FIG. 23 is a flow chart illustrating an overall algorithm to recommend control input.
FIGS. 24A-24C illustrate simulation result.
FIGS. 25A-25C illustrate simulation result.
FIG. 26 is an automobile installed with a method and system for improving assistance to a vehicle operator in accordance with the present invention.
FIG. 27 is a block diagram illustrating the present invention.
FIG. 28 illustrates an example of driving situation, which the present invention is applicable to.
FIG. 29 is a graph illustrating increasing of a weighting factor with time.
FIG. 30 is a flow chart illustrating operation of the present invention.
FIG. 31 is a graph illustrating varying of the optimal solution (control input) to the optimization problem.
FIG. 32 illustrates an example of driving situation, which the present invention is applicable to.
FIG. 33 is a graph illustrating decreasing of a weighting factor with time.
FIG. 34 is a flow chart illustrating operation of the present invention.
FIG. 35 is a graph illustrating varying of the optimal solution (control input) to the optimization problem.
FIG. 36 illustrates an example of driving situation, which the present invention is applicable to.
FIG. 37 is a graph illustrating varying of the optimal solution (control input) to the optimization problem.
DETAILED DESCRIPTION OF THE INVENTION
As used throughout this specification, the terms vehicle operator and driver are used interchangeably and each is used to refer to the person operating an automobile. The term automobile is used to refer to the automobile operated by a vehicle operator and installed with a method and system for improving operator assistance. The term obstacle vehicle is used to refer to one of a group of obstacle vehicles located in and coming into a monitored field around the automobile.
Referring to FIGS. 1 and 2, an automobile is generally designated at 10 although it is labeled “A” in each of the driving situations. The automobile 10 is installed with an environment sensing system (ESS) and a state sensing system (SSS). In the embodiment, the ESS detects current locations of a group of obstacle vehicles on a road in a monitored field around the automobile 10 and lane markings to recognize locations of lanes on the road. The SSS detects vehicle speed of the automobile 10.
The automobile 10 is provided with a camera 12. The camera 12 is mounted on the automobile 10, for example, in the vicinity of the internal rear-view mirror in order to detect the presence of lane markings on a road. In response to signals from the camera 12, a signal image processor 14 estimates the presence of the adjacent lane or lanes, if any, on the road. The automobile 10 is provided with front radar 16. The front radar 16 is mounted on the automobile 10, for example, in the middle of the front grille in order to detect the locations of obstacle vehicle(s) in front. The automobile 10 is provided with rear radar 18. The rear radar 18 is mounted on the automobile 10, for example, in the middle of the rear grille in order to detect the locations of obstacle vehicle(s) in the rear. The automobile 10 is provided with two side sensors, only one shown at 20. The side sensors 20 are mounted on the automobile 10, for example, in appropriate portions viewing the adjacent lateral traffic conditions in order to detect the locations of obstacle vehicle(s) in the adjacent lane(s). Each side sensor 20 may be in the form of an ultrasonic sensor or a camera combined with an image processor. Of course, radar may be used as each side sensor 20. The camera 12 and image processor 14 are used to complement, if need be, the information derived from the front radar 16.
In the embodiment, the ESS includes the camera 12, image processor 14, front radar 16, rear radar 18 and side sensors 20.
The automobile 10 is provided with a vehicle speed sensor that includes a rotary encoder 22. The rotary encoder 22 is mounted on a road wheel of the automobile in order to generate a pulse train whose period varies with the revolution speed of the road wheel.
In the embodiment, the SSS includes the vehicle speed sensor incorporating the rotary encoder 22.
The automobile 10 is provided with a computing device 24. The computing device 24 includes a microprocessor-based controller that includes a microprocessor in communication with its peripheral devices. The microprocessor is in communication with a computer-readable storage medium. As will be appreciated by those skilled in the art, the computer-readable storage medium, for example, may include a random access memory (RAM), a read-only memory (ROM), and/or a keep-alive memory (KAM). The computer-readable storage medium has stored therein data relating to computer-readable instructions for the microprocessor to perform a method for improving the assistance to the vehicle operator in driving the automobile 10. The microprocessor processes incoming signals from the image processor 14, front radar 16, rear radar 18, side sensors 20 and rotary encoder 22 to recommend control input. An example of a vehicle application area is the field of driver assistance. In the illustrated embodiment, the computing device 24 applies the recommended future control input to a trajectory processor coupled with an interface 42 having a display 26. The trajectory processor includes a microprocessor in communication with its peripheral devices.
With particular reference to FIG. 2, the system includes a map component 28, a behavior predictor component 30, an evaluation component 32, and a recommendation component 34. In FIG. 2, the trajectory processor is illustrated at 36. Boxes 38, 40 and 42 represent automobile environment data and automobile state data carried by output signals of the ESS (12, 14, 16, 18, and 20) and SSS (22). The map component 28, behavior predictor component 30, evaluation component 32 and recommendation component 34 are hardware or software components, respectively. They are illustrated in FIG. 2 as separate elements for purposes of clarity and discussion. It will be appreciated that these components may be integrated into a single module within the computing device 24.
FIG. 3 demonstrates a driving situation within a roadway system having two lanes. In this driving situation, the automobile 10, now labeled A, is traveling at a longitudinal speed of vA, and a group of obstacle vehicles, labeled B, C and D, are within the monitored range around the automobile A. In the same lane, the automobile A has the leading vehicle B and the following vehicle C. These obstacle vehicles B and C are traveling at longitudinal speeds of vB and vC, respectively. In the adjacent next right lane 1, the obstacle vehicle D is traveling at a longitudinal vehicle speed of vD. Using a Cartesian coordinate system fixed to the automobile A, the driving situation may be described. The intersection of the x- and y-axes is fixed at a point on the front bumper of the automobile A. The x-axis extends in the traveling direction of the automobile A. The y-axis extends in the lateral direction of the automobile A.
Map Component 28:
With reference to the above-mentioned driving situation, we explain the system illustrated in FIG. 2 in detail. This system includes the map 28, which is coupled to the ESS and SSS. The ESS provides the environment data 38 and 40, which contain information of: the distance to each of the obstacle vehicles B, C and D, the relative speed to each obstacle vehicle, and the location of lanes on the road. Millimeter-wave radar, if employed, can directly provide information of the relative speed to each obstacle vehicle. If another type of radar is used, a derivative filter may be used to provide the derivative of the output signal as information of relative speed. The SSS provides the vehicle state data 42, which contains information of: the vehicle speed of the automobile A. In the embodiment, such pieces of information are used as inputs of the map 28 and processed for describing the present location and speed of each of the automobile A and obstacle vehicles B, C and D.
The map 28 recognizes which of the lanes each of the automobile A and obstacle vehicles B, C and D is in. When an obstacle vehicle is in the same lane as the automobile A, the situation is described as y=0. When an obstacle vehicle is in the adjacent next lane, the situation is described as y=1 if the adjacent next lane is on the right-hand side of the lane 0, or as y=−1 if the adjacent next lane is on the left-hand side of the lane 0. For example, in the driving situation illustrated in FIG. 3, it gives y=0 for the obstacle vehicles B and C, and y=1 for the obstacle vehicle D. The map 28 computes the position on the x-axis of the nearest one of the head and tail of each of the obstacle vehicles B, C, and D. For example, in the driving situation illustrated in FIG. 3, it gives x=xB for the obstacle vehicle B, x=xC for the obstacle vehicle C, and x=xD for the obstacle vehicle D. The map 28 computes a longitudinal vehicle speed of each of the obstacle vehicles based on the vehicle speed of the automobile A and the relative speed to the obstacle vehicle. For example, in the illustrated driving situation, it gives v=vB for the obstacle vehicle B, v=vC for the obstacle vehicle C and v=vD for the obstacle vehicle D.
Thus, the map 28 describes the current driving situation as
    • (x=0, v=vA, y=0) for the automobile A;
    • (x=xB, v=vB, y=0) for the obstacle vehicle B;
    • (x=xC, v=vC, y=0) for the obstacle vehicle C; and
    • (x=xD, v=vD, y=1) for the obstacle vehicle D.
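The (x, v, y) description produced by the map 28 can be sketched in code. The following Python fragment is purely illustrative; the class name, function name and numeric values are invented for the example and are not part of the patent:

```python
# Hypothetical sketch of how the map component might encode the FIG. 3
# situation; names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float  # longitudinal position on the x-axis [m]
    v: float  # longitudinal speed [m/s]
    y: int    # lane index: 0 = own lane, 1 = adjacent right, -1 = adjacent left

def describe_situation(xB, vB, xC, vC, xD, vD, vA):
    """Return the per-vehicle (x, v, y) description produced by the map."""
    return {
        "A": VehicleState(0.0, vA, 0),   # origin fixed to A's front bumper
        "B": VehicleState(xB, vB, 0),    # leading vehicle, same lane
        "C": VehicleState(xC, vC, 0),    # following vehicle, same lane
        "D": VehicleState(xD, vD, 1),    # adjacent right lane
    }

states = describe_situation(xB=40.0, vB=25.0, xC=-30.0, vC=28.0,
                            xD=-10.0, vD=30.0, vA=27.0)
print(states["A"].x, states["D"].y)  # 0.0 1
```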
The detailed description on the technique of computing the location of each lane and the distance to each obstacle vehicle is hereby omitted because it belongs to the prior art including, for example, JP-A 9-142236.
Behavior Predictor Component 30:
With reference also to FIGS. 4 and 5, the future behavior of the obstacle vehicles B, C and D may vary with different control input to the automobile A. Knowing in advance how each control input influences the future behavior is important. The behavior predictor 30 presents a predictor equation for simulation to give how control input to the automobile A influences the future behavior of the obstacle vehicles B, C and D. For further discussion, control input may be expressed as a set of longitudinal control input ux(t) and lane change or lateral control input uy(t). In the embodiment, the longitudinal control input ux(t) is expressed by a command for acceleration/deceleration. The lane change or lateral control input uy(t) may be expressed as

    u_y = \begin{cases} -1 & \text{lane change to the left} \\ 0 & \text{stay in the present lane} \\ 1 & \text{lane change to the right} \end{cases}   (1)
We consider a vehicle model applicable to the vehicles in the illustrated driving situation in FIG. 3. The vehicle model is made of a longitudinal model component and a lane change model component. We explain the longitudinal model component below, and the lane change model component later.
First, we consider a first longitudinal model component in which a vehicle follows the preceding vehicle in the same lane with time headway kept constant. This longitudinal model component may be expressed as

    \dot{x} = v
    \dot{v} = k_1 (x_p - x - hv) + k_2 (v_p - v)   (2)
where
    • x and v are the location and the vehicle speed of the vehicle following the preceding vehicle;
    • {dot over (v)} is the vehicle acceleration;
    • xp and vp are the location and the vehicle speed of the preceding vehicle;
    • h is the desired value of time headway, defined by D/v, where D is the sum of the intervehicle spacing and the length of the preceding vehicle; and
    • k1 and k2 are the characteristic parameters expressing the dynamics of the vehicle following the preceding vehicle.
Second, we consider a second longitudinal model component in which a vehicle has no preceding vehicle to follow and travels at a desired value of vehicle speed. This longitudinal model component may be expressed as
    \dot{x} = v
    \dot{v} = k_2 (v_d - v)   (3)
where
    • vd is the desired value of vehicle speed.
Combining the equations (2) and (3), we have

    \frac{d}{dt}\mathbf{x} = A_0 \mathbf{x} + B_0 \mathbf{x}_p   (4)

where

    \mathbf{x} = \begin{bmatrix} x \\ v \end{bmatrix}, \quad A_0 = \begin{bmatrix} 0 & 1 \\ -k_1 & -hk_1 - k_2 \end{bmatrix}, \quad B_0 = \begin{bmatrix} 0 & 0 \\ k_1 & k_2 \end{bmatrix}   (5)

    \mathbf{x}_p = \begin{cases} (x_p \;\; v_p)^T & \text{for a vehicle having a preceding vehicle} \\ (x + hv \;\; v_d)^T & \text{for a vehicle having no vehicle to follow} \end{cases}   (6)

where the superscript T denotes transposition.
The equation (6) clearly indicates that the absence of, or failure to detect, the preceding vehicle is equivalent to the presence of a virtual preceding vehicle having the state (x + hv  v_d)^T.
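The car-following law of equations (2), (3) and (6) can be sketched directly. In the fragment below, the parameter values (h, k1, k2, vd) and both function names are illustrative assumptions, not values from the patent:

```python
def follow_accel(x, v, xp, vp, h, k1, k2):
    """Acceleration of a vehicle tracking its (possibly virtual) leader,
    per equation (2): v_dot = k1*(xp - x - h*v) + k2*(vp - v)."""
    return k1 * (xp - x - h * v) + k2 * (vp - v)

def virtual_leader(x, v, h, vd):
    """Equation (6): with no leader, act as if following a virtual
    vehicle in the state (x + h*v, vd)."""
    return (x + h * v, vd)

# With the virtual leader, the spacing term of eq (2) vanishes and only
# k2*(vd - v) remains, recovering equation (3).
h, k1, k2, vd = 1.5, 0.2, 0.6, 30.0
xp, vp = virtual_leader(x=0.0, v=25.0, h=h, vd=vd)
a = follow_accel(0.0, 25.0, xp, vp, h, k1, k2)
print(a)  # k2*(vd - v) = 0.6*5.0 = 3.0
```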
In each of the above-mentioned longitudinal model components, the state of the preceding vehicle determines the rule that the following vehicle should obey. In a group of vehicles, each vehicle therefore needs a description of the vehicle it is following. Taking the illustrated driving situation in FIG. 3 as an example, the obstacle vehicle B is leading the automobile A, which is, in turn, leading the obstacle vehicle C. The obstacle vehicles C and D do not lead any vehicle. Let it be defined that x^i is the state vector of a vehicle i, x_p^i the state vector of the preceding vehicle that the vehicle i is following, and x_d^i the desired state vector of the vehicle i, where i = A, B, C or D. The illustrated driving situation in FIG. 3 may be described as
    x_p^A = x^B, \quad x_p^B = x_d^B, \quad x_p^C = x^A, \quad x_p^D = x_d^D   (7)
where

    x_d^i = \begin{bmatrix} x^i + hv^i \\ v_d^i \end{bmatrix}, \quad i \in \{A, B, C, D\}.   (8)
Combining the state vectors, we define

    X_p = \begin{bmatrix} x_p^A \\ x_p^B \\ x_p^C \\ x_p^D \end{bmatrix}, \quad X = \begin{bmatrix} x^A \\ x^B \\ x^C \\ x^D \end{bmatrix}.   (9)
Describing the relationship (7), we have
    X_p = EX + E_d X_d   (10)

where

    X_d = \begin{bmatrix} x_d^A \\ x_d^B \\ x_d^C \\ x_d^D \end{bmatrix}   (11)

    E = \begin{bmatrix} 0 & I & 0 & 0 \\ 0 & 0 & 0 & 0 \\ I & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad E_d = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I \end{bmatrix}   (12)

    • I represents the second order unit matrix, and 0 the second order zero matrix.
The matrices E and Ed, each of which is often called an “Interaction Relation Matrix”, express the intervehicle positional relationship. They are updated whenever the intervehicle positional relationship changes. For example, if the automobile A moves to a location in front of the obstacle vehicle D after lane change, the matrices E and Ed are updated to express this new intervehicle positional relationship as follows:

    E = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ I & 0 & 0 & 0 \end{bmatrix}, \quad E_d = \begin{bmatrix} I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.   (13)
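A minimal sketch of how the interaction relation matrices of equation (12) might be assembled from a follower-to-leader map is given below. The boolean-block representation (1 standing for the identity block I, 0 for the zero block) and all names are illustrative assumptions, not the patent's implementation:

```python
# Illustrative reconstruction of the interaction relation matrices E and
# Ed as 4x4 block-boolean arrays, one block row per vehicle, in the
# order [A, B, C, D].
ORDER = ["A", "B", "C", "D"]

def interaction_matrices(leader_of):
    """leader_of maps each vehicle to its leader's name, or None when it
    follows its own virtual (desired-state) leader. Returns (E, Ed) with
    1 standing for the 2x2 identity block I and 0 for the zero block."""
    n = len(ORDER)
    E  = [[0] * n for _ in range(n)]
    Ed = [[0] * n for _ in range(n)]
    for i, veh in enumerate(ORDER):
        leader = leader_of[veh]
        if leader is None:
            Ed[i][i] = 1                      # X_p^i = x_d^i
        else:
            E[i][ORDER.index(leader)] = 1     # X_p^i = x^leader
    return E, Ed

# FIG. 3 relationship (7): B leads A, A leads C; B and D have no leader.
E, Ed = interaction_matrices({"A": "B", "B": None, "C": "A", "D": None})
print(E)   # [[0,1,0,0],[0,0,0,0],[1,0,0,0],[0,0,0,0]]
print(Ed)  # [[0,0,0,0],[0,1,0,0],[0,0,0,0],[0,0,0,1]]
```

The printed block patterns reproduce equation (12); feeding in the post-lane-change relationship reproduces equation (13).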
The preceding description covers the first and second longitudinal model components. We now explain a third longitudinal model component appropriate for describing the automobile A. The third longitudinal model component includes the vehicle operator longitudinal command ux. The third longitudinal model component may be expressed as
    \dot{x}^A = v^A
    \dot{v}^A = u_x.   (14)
Combining this third longitudinal model component with the equation (4) and arranging the vehicles in the order A, B, C, D, we have

    \frac{d}{dt} X = AX + BX_p + B_A u_x   (15)

where

    A = \begin{bmatrix} A_A & 0 & 0 & 0 \\ 0 & A_0 & 0 & 0 \\ 0 & 0 & A_0 & 0 \\ 0 & 0 & 0 & A_0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & B_0 & 0 & 0 \\ 0 & 0 & B_0 & 0 \\ 0 & 0 & 0 & B_0 \end{bmatrix}, \quad B_A = \begin{bmatrix} b_A \\ 0 \\ 0 \\ 0 \end{bmatrix}

    A_A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad b_A = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.   (16)
Incorporating the equation (10) into the equation (15), we have a predictor equation as

    \frac{d}{dt} X = (A + BE) X + B E_d X_d + B_A u_x.   (17)
The predictor equation (17) defines the development of X with time in response to a time series pattern of ux, if A, B and Xd are given. This development of X in response to the time series pattern of ux is nothing but the future behavior in the x-direction of the vehicles in response to a time series pattern of vehicle operator longitudinal commands. The behavior predictor 30 presents the predictor equation (17), which describes the future behavior of the group of the obstacle vehicles B, C and D in the x-direction in response to future vehicle operator longitudinal commands.
The predictor equation (17) accounts for interaction between the automobile A and the obstacle vehicles B, C and D. Accounting for such interaction may be omitted in a driving situation illustrated in FIG. 18. In this illustrated driving situation, each vehicle operator can keep driving by looking ahead only, so that a change in behavior of the automobile A will not have any influence on behaviors of the obstacle vehicles B, C and D. This is the case where the following predictor equations (18) and (19) may be used.

    \frac{d}{dt} X' = (A' + B'E') X' + B'E'_d X'_d   (18)

    \frac{d}{dt} x^A = A_A x^A + b_A u_x   (19)

where

    X' = \begin{bmatrix} x^B \\ x^C \\ x^D \end{bmatrix}, \quad X'_d = \begin{bmatrix} x_d^B \\ x_d^C \\ x_d^D \end{bmatrix}, \quad A' = \begin{bmatrix} A_0 & 0 & 0 \\ 0 & A_0 & 0 \\ 0 & 0 & A_0 \end{bmatrix}, \quad B' = \begin{bmatrix} B_0 & 0 & 0 \\ 0 & B_0 & 0 \\ 0 & 0 & B_0 \end{bmatrix}   (20)

    E' = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & I & 0 \end{bmatrix}, \quad E'_d = \begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & 0 \end{bmatrix}.   (21)
The predictor equations (18) and (19) provide almost the same result as the predictor equation (17) does in creating a recommended trajectory for guiding the automobile A in the lane 0 to the appropriate point for lane change to the lane 1. If it is required to create the trajectory after the lane change, accounting for the interaction between the automobile A and the obstacle vehicle D cannot be omitted. In this case, too, the predictor equations (18) and (19) may be used to create another trajectory after the lane change by employing a rule to neglect the following obstacle vehicle D. In this manner, the predictor equations (18) and (19) can provide continuous assistance to the vehicle operator.
In the preceding description, we considered the longitudinal model component. Now, we consider the lane change model component. For various reasons, the vehicle operator decides to change lanes. In the embodiment, we consider a lane change model component for passing the preceding vehicle, and explain how to implement it as a predictor model. The lane change model component explained here is made of a first subcomponent to determine whether or not a vehicle operator has decided to make a lane change, and a second subcomponent to determine whether or not the execution of the lane change is possible.
First Subcomponent of Lane Change Model Component:
With continuing reference to the driving situation illustrated in FIG. 3, we proceed with our explanation by regarding the automobile A as a vehicle behaving in the same manner as the obstacle vehicles B, C and D do in accordance with the model. Consider now the driving scenario in which the automobile A starts decelerating upon catching up with the preceding obstacle vehicle B. The automobile A then keeps traveling at longitudinal vehicle speed vB, which is lower than the longitudinal vehicle speed vA at which the automobile A was traveling before deceleration. Let it be assumed that the operator of the automobile A has a desired vehicle speed v_d^A and s/he will take a driving maneuver involving lane change at appropriate timing to pass the preceding obstacle vehicle that keeps traveling at a vehicle speed lower than the desired vehicle speed v_d^A. We now define a variable z_A(t) representing the growth of the operator's will to change lane. The variable z_A(t) may be expressed as

    z_A(t) = \int_{t_0}^{t} (v_d^A - v^A)\, dt   (22)
where
    • t0 is the moment when the automobile A starts decelerating.
      A threshold value z_0^A is established. The variable z_A(t) is compared to the threshold value z_0^A. When the variable z_A(t) exceeds the threshold value z_0^A, it is determined that the vehicle operator of the automobile A has decided to change lane and starts looking at the adjacent lane to find a space allowing the lane change.
For each of the obstacle vehicles B, C and D, the automobile A computes the variables zB(t), zC(t) and zD(t).
In the case where the automobile A is equipped with a device to detect direction indicators of the obstacle vehicles, the variable zB is initialized or increased to a value exceeding the threshold value upon recognition that the direction indicator of the preceding vehicle B, for example, clearly shows that the lane change is imminent.
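The growth of the will-to-change-lane variable of equation (22) amounts to a running integral of the speed deficit, cut off at a threshold. A discrete sketch is given below; the threshold value, time step and speeds are invented for illustration:

```python
def update_lane_change_will(z, vd, v, dt):
    """Discrete update of equation (22): z grows by (vd - v)*dt while
    the vehicle is held below its desired speed vd."""
    return z + (vd - v) * dt

# A vehicle wanting 30 m/s but stuck at 25 m/s crosses a (hypothetical)
# threshold z0 = 20 after 4 s of following: lane-change will is declared.
z, z0, dt = 0.0, 20.0, 0.5
for _ in range(9):          # 4.5 s of simulated following
    z = update_lane_change_will(z, vd=30.0, v=25.0, dt=dt)
decided = z > z0
print(z, decided)  # 22.5 True
```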
Second Subcomponent of Lane Change Model Component:
In the driving situation illustrated in FIG. 3, the obstacle vehicle D is in the adjacent next lane. An index is computed, on which it is determined whether or not the intervehicle positional relationship allows the automobile A to change lane to a position in front of the obstacle vehicle D. As one example of the index, we introduce a determination function f_LC(x^A, x^D), which may be expressed as

    f_{LC}(x^A, x^D) = \frac{x^A - x^D - \frac{1}{2d}(v^A - v^D)^2\, s(v^D - v^A)}{v^D}   (23)
where
    • d is the appropriate value having the dimension of deceleration and indicating the absolute value of the upper limit of a range of deceleration degrees, which the vehicle operator of the obstacle vehicle D experiences during normal braking.
      In the equation (23), the function “s” is used. The function “s” may be expressed as

    s(x) = \begin{cases} 0 & x < 0 \\ 1 & x \geq 0. \end{cases}   (24)
      A determination function threshold f_LC^0 is established. When the determination function f_LC(x^A, x^D) exceeds the established threshold f_LC^0, it is determined that the intervehicle positional relationship allows the automobile A to change lane to a position in front of the obstacle vehicle D.
We now consider the case when the relative vehicle speed is zero or positive (vA ≥ vD) so that the automobile A is traveling as fast as or faster than the obstacle vehicle D is. When the relative vehicle speed is zero or positive, the determination function f_LC(x^A, x^D) expressed by the equation (23) means a “time headway” to the automobile A that the obstacle vehicle D is following. Under this condition, when the determination function f_LC(x^A, x^D) exceeds the threshold f_LC^0, the automobile A can change lane to a position in front of the obstacle vehicle D.
Next, we consider the case when the relative vehicle speed is negative (vA < vD) so that the obstacle vehicle D is traveling faster than the automobile A is. When the relative vehicle speed is negative, the determination function f_LC(x^A, x^D) expressed by the equation (23) means a “time headway” to the automobile A recognized at the moment immediately after the vehicle speed of the obstacle vehicle D has decreased to the vehicle speed of the automobile A as a result of deceleration of the obstacle vehicle D at the value d of deceleration. Under this condition, as the absolute value of the negative relative vehicle speed becomes large, the value of the determination function f_LC(x^A, x^D) becomes small, making it hard for the automobile A to change lane.
Let us now consider another driving situation where an obstacle vehicle D* is in the adjacent next lane ahead of the automobile A. In this driving situation, the determination function f_LC(x^A, x^D) expressed by the equation (23) cannot be used without modification. The modified determination function f_LC*(x^A, x^{D*}) may be expressed as

    f_{LC}^{*}(x^A, x^{D*}) = \frac{x^{D*} - x^A - \frac{1}{2d}(v^{D*} - v^A)^2\, s(v^A - v^{D*})}{v^A}.   (25)
Let us now consider another driving situation where two obstacle vehicles D and D* are in the adjacent next lane. In this driving situation, both determination functions f_LC(x^A, x^D) and f_LC*(x^A, x^{D*}) are computed, and the smaller one of them is compared to the threshold f_LC^0 in determining the possibility of lane change.
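The determination functions can be sketched directly in code. In the fragment below, the numeric inputs and the deceleration bound d are illustrative assumptions, and the D*-ahead variant follows our reading of equation (25) as the mirror image of equation (23):

```python
def s(x):
    """Step function of equation (24)."""
    return 1.0 if x >= 0 else 0.0

def f_lc(xA, vA, xD, vD, d):
    """Equation (23): lane-change margin against an obstacle vehicle D
    behind the automobile A in the target lane; d is the comfortable
    deceleration bound [m/s^2]."""
    return (xA - xD - (vA - vD) ** 2 * s(vD - vA) / (2.0 * d)) / vD

def f_lc_star(xA, vA, xDs, vDs, d):
    """Mirrored margin (25) against an obstacle vehicle D* ahead of A
    in the target lane."""
    return (xDs - xA - (vDs - vA) ** 2 * s(vA - vDs) / (2.0 * d)) / vA

# D is 30 m behind and 5 m/s faster: the gap is discounted by the
# distance D needs to shed the speed difference at deceleration d.
m = f_lc(xA=0.0, vA=25.0, xD=-30.0, vD=30.0, d=2.0)
print(round(m, 3))  # (30 - 6.25)/30 = 0.792
```

Comparing m against a threshold f_LC^0 then decides whether the lane change in front of D is admissible, as in the text above.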
For each vehicle, the processing mentioned above is carried out to make a determination as to lane change. When the determination indicates that the lane change is allowed, such a vehicle is processed accordingly.
For description of a group of vehicles in the driving situation, we introduce a vector Y, which contains the information of the lane in which each vehicle is. The vector Y may be expressed as

    Y = \begin{bmatrix} y^A \\ y^B \\ y^C \\ y^D \end{bmatrix}   (26)
We now consider an automaton H^i(X, Y), i ∈ {B, C, D}. The automaton contains the first and second subcomponents of the lane change model component, which are expressed by the equations (22) and (23), and provides an output uy(t) as expressed by the equation (1). A model expressing the variation of the vector Y with time may be expressed as
    Y(t + \Delta t) = Y(t) + H(X(t), Y(t)) + D u_y   (27)

where

    H(X(t), Y(t)) = \begin{bmatrix} 0 \\ H^B(X(t), Y(t)) \\ H^C(X(t), Y(t)) \\ H^D(X(t), Y(t)) \end{bmatrix}, \quad D = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}   (28)

    H^i(X, Y) = \begin{cases} 1 & z^i(t) > z_0^i \text{ and } y^i = 0 \text{ and } f_{LC}(x^i, x^j) > f_{LC}^0 \;\; \forall j : y^j = 1 \\ -1 & z^i(t) > z_0^i \text{ and } y^i = 1 \text{ and } f_{LC}(x^i, x^k) > f_{LC}^0 \;\; \forall k : y^k = 0 \\ 0 & \text{otherwise} \end{cases}   (29)
where
    • the conditions on f_LC are required to hold for all such j and all such k;
    • Δt is the update period.
A change in Y causes a change in the intervehicle positional relationship in the driving situation, making it necessary to adjust the interaction matrices E and Ed to the new intervehicle positional relationship. Besides, it is necessary to initialize the internal variable z(t) of the automaton H to 0 (zero).
All of the preceding description on the behavior predictor 30 can be understood with reference to the block diagram in FIG. 4 or the flow chart in FIG. 5.
The block diagram in FIG. 4 clearly illustrates the longitudinal and lane change behavior predictor equations, which are presented by the behavior predictor 30. An example of how a microprocessor would implement the behavior predictor 30 can be understood with reference to FIG. 5. The flow chart in FIG. 5 illustrates a control routine 50 of operation of the behavior predictor 30.
In box 52, the microprocessor inputs X(t) and Y(t). In the next box 54, the microprocessor inputs Xd(t). In box 56, the microprocessor defines E(X, Y) and Ed(X, Y). In box 58, the microprocessor computes Xp. In boxes 60 and 62, the microprocessor computes the behavior predictor equations. In box 64, the microprocessor increases the timer t by Δt. In box 66, the microprocessor determines whether or not the timer t matches the terminal time tf. If this is not the case, the logic returns to box 52. In summary, the behavior predictor 30 presents the predictor equations (17) and (27). The longitudinal control input ux(t) and the lateral control input uy(t) are given. The initial values of X and Y are given by the map 28. Given these data, the time integral of the predictor equations (17) and (27) gives the predicted future values X(t) and Y(t) when the vehicle operator applies the longitudinal and lateral control inputs ux(t) and uy(t) to the automobile A.
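The loop of boxes 52 to 66 amounts to a forward time integration of the predictor equations. A much-reduced Euler sketch for just two vehicles (A under an operator command ux, B running at its desired speed via the virtual leader of equation (6)) might look like the following; the function name and all parameter values are invented for illustration:

```python
# Minimal Euler sketch (illustrative, not the patent's code) of the
# prediction loop in FIG. 5 for a reduced situation: A applies command
# ux(t) while B follows its virtual leader, i.e. drives at desired
# speed vdB per equation (3).
def predict(xA, vA, xB, vB, ux, vdB, k2, dt, tf):
    t = 0.0
    while t < tf:
        aA = ux(t)                 # third model component, eq (14)
        aB = k2 * (vdB - vB)       # eq (3) via the virtual leader
        xA, vA = xA + vA * dt, vA + aA * dt
        xB, vB = xB + vB * dt, vB + aB * dt
        t += dt
    return xA, vA, xB, vB

xA, vA, xB, vB = predict(0.0, 25.0, 40.0, 25.0,
                         ux=lambda t: 0.0, vdB=25.0,
                         k2=0.6, dt=0.1, tf=5.0)
print(round(xB - xA, 6))  # spacing stays 40.0 at equal constant speeds
```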
Evaluation Component 32:
The evaluation component 32 presents an evaluation function, which may be used to evaluate the predicted behavior to determine the relevance of control inputs with respect to the maneuver(s) designated. The evaluation component 32 may be mathematically described as an evaluation index J that is, in this example, a functional with regard to the two control inputs ux(t) and uy(t). The evaluation index J may be expressed in generalized form as

    J[u_x, u_y] = \psi(X(t_f), Y(t_f)) + \int_{t_0}^{t_f} \{ L(X, Y) + M(u_x, u_y) \}\, dt   (30)
where
    • t0 is the present time;
    • tf is the terminal time when prediction is terminated;
    • Ψ is the evaluation function to evaluate the group of vehicles at time tf when evaluation is terminated;
    • L is the evaluation function to evaluate behaviors of the group of vehicles during the evaluating period [t0 tf];
    • M is the evaluation function to evaluate ux(t) and uy(t) during the evaluating period.
We can designate various maneuvers by altering the manner of taking the three different kinds of evaluation functions Ψ, L and M. Simple examples are as follows.
1. To meet operator demand for driving at vehicle speeds around a desired value of vehicle speed v_d^A,

    J[u_x, u_y] = \int_{t_0}^{t_f} q (v_d^A - v^A)^2\, dt.   (31)
2. To meet operator demand for advancing the automobile A as far as possible in the adjacent next right lane by the time tf,

    J[u_x, u_y] = -p_x x^A(t_f) + p_y (y^A(t_f) - 1)   (32)
3. To meet operator demand for driving with less acceleration feel,

    J[u_x, u_y] = \int_{t_0}^{t_f} r u_x^2\, dt.   (33)
4. To meet operator demand for reaching a point (x0, y0) as soon as possible,

    J[u_x, u_y] = \int_{t_0}^{t_f} 1\, dt \quad \text{with} \quad x^A(t_f) = x_0, \; y^A(t_f) = y_0.   (34)
5. To meet operator demand for driving with sufficient intervehicle spacing in the same lane,

    J[u_x, u_y] = \int_{t_0}^{t_f} \sum_i I(x^A, y^A, x^i, y^i)\, dt   (35)

where

    I(x^A, y^A, x^i, y^i) = \frac{\bar{\delta}(y^A, y^i)}{(x^A - x^i)^2 + \varepsilon}, \quad \bar{\delta}(y^A, y^i) = \begin{cases} 0 & \text{if } y^A \neq y^i \\ 1 & \text{if } y^A = y^i. \end{cases}   (36)
In the equations (31) to (36), p_x, p_y, q, and r are positive values weighting the associated evaluations, respectively, and ε is a small positive value for preventing the associated term from becoming infinite. In the equation (34) for the case 4, the terminal time tf appears explicitly, and the location of the automobile A at the terminal time tf (the terminal conditions) is designated explicitly. The manner of treating the terminal time and the terminal conditions may slightly vary with different maneuvers applied. However, the subsequent processing of the equations (31) to (36) remains basically the same.
At least some of the above-mentioned evaluation functions may be used collectively. An evaluation function in the later described equation (41) is one example, in which the evaluation functions for the cases 1, 3, and 5 are mixed. Mixing the evaluation functions makes it possible to account for different operator demands in tradeoff manner.
Adjusting the values of the weighting parameters q and r determines the order in which the different operator demands are preferentially met. For example, it is required that a future trajectory of the automobile A does not interfere with a future trajectory of any of the obstacle vehicles B, C and D. This essential requirement is taken into account by the evaluation function expressed by the equation (35). Accordingly, mixing at least one of the evaluation functions expressed by the equations (31) to (34) with this evaluation function allows creation of manipulated variables for collision avoidance.
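A hypothetical numerical sketch of how terms like (31), (33) and (35) could be mixed into one evaluation index by summing over sampled trajectory points is given below; the weights, the trajectory format and the toy values are all invented for the example:

```python
# Illustrative mixed evaluation index combining the speed-tracking term
# (31), the control-effort term (33) and the spacing term (35)/(36).
def evaluate(traj, q, r, eps, dt):
    """traj: list of (vA, ux, gaps, vd) samples over [t0, tf]; gaps
    holds the signed x-distance to each same-lane obstacle vehicle."""
    J = 0.0
    for vA, ux, gaps, vd in traj:
        J += q * (vd - vA) ** 2 * dt           # eq (31): speed tracking
        J += r * ux ** 2 * dt                  # eq (33): control effort
        for g in gaps:                         # eq (35)/(36): spacing
            J += dt / (g ** 2 + eps)
    return J

# 1 s of a constant toy trajectory sampled at dt = 0.1.
traj = [(25.0, 1.0, [40.0, -30.0], 30.0)] * 10
J = evaluate(traj, q=0.1, r=0.5, eps=1e-3, dt=0.1)
print(round(J, 4))
```

Raising q relative to r shifts the tradeoff toward speed tracking and away from ride comfort, which is the weighting behavior described in the text.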
Recommendation Component 34:
Before entering into description on the recommendation component 34, we hereby summarize the preceding description on the map 28, behavior predictor 30, and evaluation 32. The map 28 provides the present data on the intervehicle relationship in the form of the vector X(t0) expressed by the equation (9) and the vector Y(t0) expressed by the equation (26). The behavior predictor 30 presents the predictor equations (17) and (27). The predictor equation (17) may be replaced by the predictor equations (18) and (19) in certain driving situations. Here, we give a set of proposed time-series pairs of control inputs ux^i(t) and uy^i(t). The superscript i on each of ux and uy is a positive integer from the set 1, 2, . . . , N, where N indicates the number by which the evaluating period [t0 tf] is divided. Given the set of proposed time-series pairs of control inputs {ux^i(t), uy^i(t)}, the time integral of the predictor equations (17) and (27) predicts future values X(t) and Y(t) indicative of the future behavior of the obstacle vehicle group. The evaluation 32 has a maneuver or maneuvers. The evaluation 32 evaluates the predicted future values X(t) and Y(t) to determine the relevance of each member of the set of proposed time-series pairs of control inputs ux^i(t) and uy^i(t) with respect to the maneuver(s). Based on the determined relevance, the recommendation 34 determines whether or not each member of the set of proposed time-series pairs of control inputs ux^i(t) and uy^i(t) should be recommended. The operation of the recommendation 34 can be understood with reference to the flow chart in FIG. 6.
The flow chart in FIG. 6 illustrates a control routine 70 of one exemplary implementation of the recommendation 34.
In box 72, the microprocessor prepares a set of proposed pairs of control inputs for examination to determine their relevance with respect to the given maneuver(s). There are various manners of preparing the set of proposed pairs of control inputs. Here, we explain one representative example below.
A. First, we divide the evaluating period [t0 tf] by N to provide a time interval (1/N)(tf − t0) of a set of proposed time-series pairs of control inputs. The set of proposed time-series pairs of control inputs is described as

    u_x(t_0),\; u_x\{t_0 + \tfrac{1}{N}(t_f - t_0)\},\; \ldots,\; u_x\{t_0 + \tfrac{i}{N}(t_f - t_0)\},\; \ldots,\; u_x\{t_f - \tfrac{1}{N}(t_f - t_0)\}
    u_y(t_0),\; u_y\{t_0 + \tfrac{1}{N}(t_f - t_0)\},\; \ldots,\; u_y\{t_0 + \tfrac{i}{N}(t_f - t_0)\},\; \ldots,\; u_y\{t_f - \tfrac{1}{N}(t_f - t_0)\}   (37)
B. Second, we consider an allowable range of values which each of the control inputs ux(t) and uy(t) may take at each of the N moments within the evaluating period [t0 tf].
As is clear from the equation (1), uy(t) may take only the three (3) values −1, 0, 1.
Here, we define the allowable range of values which the control input ux(t) may take at a given moment of the N moments within the evaluating period [t0 tf] as

    u_{min} \leq u_x(t) \leq u_{max}.
We quantize the numerical space (umax − umin) to obtain n different values. The generalized form of the n different values is

    u_x(t) \in \left\{ u_{min} + \frac{j}{n-1}(u_{max} - u_{min}),\; j = 0, \ldots, n-1 \right\}   (38)
As illustrated in FIGS. 7A and 7B, we now have (3n)N number of values which the control inputs ux(t) and uy(t) may take during the evaluating period [t0 tf].
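The quantization of equation (38) and the resulting (3n)^N candidate set can be sketched as follows; the function names and the numeric bounds are illustrative:

```python
from itertools import product

def quantize_ux(u_min, u_max, n):
    """Equation (38): n evenly spaced longitudinal command values."""
    return [u_min + j * (u_max - u_min) / (n - 1) for j in range(n)]

def candidate_pairs(u_min, u_max, n, N):
    """All (3n)^N time-series pairs: at each of the N moments, pick one
    of n longitudinal values and one of the 3 lane commands {-1, 0, 1}."""
    per_moment = [(ux, uy) for ux in quantize_ux(u_min, u_max, n)
                  for uy in (-1, 0, 1)]
    return product(per_moment, repeat=N)

cands = list(candidate_pairs(u_min=-2.0, u_max=2.0, n=3, N=2))
print(len(cands))  # (3*3)^2 = 81
```

The exponential growth of the candidate count with N is exactly why the text next turns to pruning by driving scenario or by closed-loop control laws.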
Next, we explain another representative example of the manner of providing the set of proposed pairs of control inputs. In the first-mentioned example, all of the (3n)^N values need to be submitted for examination to determine the relevance of each value. According to this example, we assume some driving scenarios derivable from the present intervehicle positional relationship and pick out some probable values, which fit the assumed driving scenarios, out of the whole (3n)^N.
Next, we explain another representative example, which uses mathematical equations expressing control laws rather than setting the time-series values. We provide a number (for example, m) of pairs, each pair containing a control law governing operator longitudinal input and logic governing operator lateral input (lane change). Each pair may be expressed as {fi(X, Y), Hi(X, Y)}. The term fi(X, Y) expresses a control law governing operator longitudinal input, as expressed, for example, by the second equation of equation (2). The term Hi(X, Y) expresses control logic governing operator lateral input (lane change), as expressed, for example, by equation (29). The notation i is the index (i=1, 2, . . . m). In this case, substituting ux=fi(X, Y) and uy=Hi(X, Y) into the predictor equations (17) and (27) to give closed loops, respectively, enables the time integration in box 76, as distinct from explicitly substituting the time-series values ux and uy into them.
In the next box 74, the microprocessor selects one pair from the prepared set of proposed pairs of control inputs.
In box 76, the microprocessor predicts future behavior X(t), Y(t) of all of the vehicles A, B, C and D with respect to the selected proposed pair of control inputs. The microprocessor obtains the result by time integrating the predictor equations (17) and (27) after substituting the selected proposed pair into them, respectively.
In box 78, the microprocessor evaluates the predicted future behavior X(t), Y(t) to determine the relevance of the selected pair with respect to the given maneuver(s). The microprocessor substitutes the predicted behavior X(t), Y(t) and the selected pair ux, uy into the evaluation component 32, see also FIG. 2, that is, into the functional J[ux, uy], equation (30), to yield a result as the relevance to the maneuver(s). We have referred to examples of such maneuvers, which may be described in the functional J[ux, uy], as the equations (31) to (35). The relationship between the computing result given by the functional J[ux, uy] and the relevance may be set in any desired manner. In the embodiment, the computing result decreases as the relevance of the selected pair ux, uy with respect to the maneuver(s) rises. The computing result is stored in association with the selected pair of control inputs ux, uy.
In the next box 80, the microprocessor determines whether or not all of the proposed pairs of control inputs have been selected. If this is not the case, the control logic returns to box 74. If the computed results have been stored with respect to all of the prepared proposed pairs of control inputs, the control logic goes to box 82.
In box 82, the microprocessor extracts, as a recommended pair of control inputs for a future moment within the evaluating period [t0, tf], at least one proposed pair of control inputs out of the prepared set. The extraction is based on the determined relevance of each proposed pair of the prepared set, which is expressed by the stored computed results of the functional J[ux, uy]. In the embodiment, the microprocessor extracts the proposed pair of control inputs having the minimum computed value among the computed results for each moment within the evaluating period [t0, tf]. Upon or immediately after the completion of extraction over all of the moments within the evaluating period [t0, tf], the microprocessor outputs the extracted proposed pairs of control inputs as a set of recommended pairs of control inputs. If the prepared set is given by the mathematical model {fi(X, Y), Hi(X, Y)}, the microprocessor performs the conversion necessary to give time-series values as the set of recommended pairs of control inputs.
The flow chart in FIG. 6 shows only one of various example algorithms. Another example is to terminate predicting future behavior upon finding, as a recommended pair of control inputs for a given moment within the evaluating period [t0, tf], a proposed pair of control inputs whose computed value of the functional J[ux, uy] is less than or equal to a predetermined value. A further example is to output plural proposed pairs of control inputs whose computed values of the functional J[ux, uy] are less than or equal to a predetermined value.
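The selection loop of boxes 74 to 82 can be sketched in code as follows. This is a minimal illustration only: `predict_behavior` and `cost` are hypothetical placeholders standing in for the predictor equations (17), (27) and the functional J[ux, uy], not the patented implementation.

```python
# Minimal sketch of the candidate-selection loop (boxes 74-82 of FIG. 6).
# predict_behavior and cost are hypothetical placeholders for the predictor
# equations (17), (27) and the functional J[ux, uy].

def predict_behavior(ux, uy):
    # Placeholder predictor: returns a short predicted trajectory for a pair.
    return [(ux * t, uy * t) for t in range(5)]

def cost(trajectory, ux, uy):
    # Placeholder functional J[ux, uy]: a lower value means higher relevance.
    return sum(x * x + y * y for x, y in trajectory) + ux * ux

def recommend(candidate_pairs):
    results = {}
    for ux, uy in candidate_pairs:              # box 74: select a proposed pair
        traj = predict_behavior(ux, uy)         # box 76: predict X(t), Y(t)
        results[(ux, uy)] = cost(traj, ux, uy)  # box 78: evaluate and store
    return min(results, key=results.get)        # box 82: extract the minimum

best = recommend([(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)])
```

The early-termination variant mentioned in the text corresponds to returning from the loop as soon as a pair's computed value falls below the predetermined threshold.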
Trajectory Processor 36:
With reference again to FIG. 2, the trajectory processor 36 is coupled with the recommendation generator 34 and also with the prediction generator 30. In order to compute predicted future trajectories of all of the vehicles A, B, C and D, the trajectory processor 36 time integrates the predictor equations (17), (27), presented by the behavior predictor 30, after substituting the set of recommended pairs of control inputs ux(t), uy(t). The computed results are provided to an interface 42. In the embodiment, the interface 42 includes the display 26.
Interface 42:
The interface 42 is coupled to the trajectory processor 36 to form one of various examples of vehicle control applications. In the embodiment, the interface 42 processes the computed results from the trajectory processor 36 to present image and/or voice information to the vehicle operator in a manner to prompt the vehicle operator to apply the set of recommended pairs of control inputs ux(t), uy(t) to the automobile A. An example of image information to be presented includes a trajectory that the automobile is recommended to track, with or without future trajectories of the obstacle vehicles B, C and D. An example of voice information includes verbal guidance to prompt the vehicle operator to apply the set of recommended pairs of control inputs ux(t), uy(t) to the automobile A. Another example of vehicle control application includes controlling the reaction force opposed to manual effort of acceleration so as to prompt the vehicle operator to apply the recommended control input to the automobile A.
This section describes updating of the set of recommended pairs of control inputs. The terminal time tf of the prediction is finite, making it necessary to repeat, at regular intervals, processing to create an updated set of recommended pairs of control inputs. The vehicle environment around the automobile A changes as vehicles enter and/or leave the area detectable by the onboard sensing devices. The vehicle environment also changes if one of the obstacle vehicles B, C and D should take unexpected behavior. These cases demand updating of the recommended pairs of control inputs.
Thus, according to the embodiment of the present invention, we use the latest prediction of behavior of the obstacle vehicles as a criterion in determining whether or not updating is required. This latest prediction may be expressed as
X̂(t), Ŷ(t)  (39)
Expressing the current behavior of the obstacle vehicles as X(t) and Y(t), we define a deviation e as follows:
e = kx(X(t) − X̂(t))ᵀ(X(t) − X̂(t)) + ky(Y(t) − Ŷ(t))ᵀ(Y(t) − Ŷ(t))  (40)
where
    • kx and ky are weighting values.
A deviation threshold eth is established. The microprocessor initiates processing to create an updated set of recommended pairs of control inputs when the deviation e exceeds the threshold eth.
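The update criterion of equations (39) and (40) amounts to a weighted squared-error test against a threshold. A minimal sketch, with illustrative weights kx, ky and threshold eth (the actual values are application dependent):

```python
# Sketch of the update trigger of equations (39)-(40): compare the current
# behavior X(t), Y(t) against the latest prediction Xhat(t), Yhat(t).
# The weights kx, ky and threshold e_th are illustrative values.

def deviation(X, Xhat, Y, Yhat, kx=1.0, ky=1.0):
    # e = kx (X - Xhat)^T (X - Xhat) + ky (Y - Yhat)^T (Y - Yhat)
    ex = sum((a - b) ** 2 for a, b in zip(X, Xhat))
    ey = sum((a - b) ** 2 for a, b in zip(Y, Yhat))
    return kx * ex + ky * ey

def update_required(X, Xhat, Y, Yhat, e_th=0.5):
    # Recompute the recommendation only when the deviation exceeds e_th.
    return deviation(X, Xhat, Y, Yhat) > e_th

# The prediction tracks reality closely: no update needed.
ok = update_required([0.0, 1.0], [0.1, 1.0], [0.0], [0.0])
# An obstacle vehicle behaved unexpectedly: update needed.
bad = update_required([0.0, 1.0], [2.0, 1.0], [0.0], [1.0])
```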
Referring next to FIG. 8, the operation of the previously described embodiment is explained along with the illustrated driving situation in FIG. 3. The flow chart in FIG. 8 illustrates a control routine 90. At regular intervals, the microprocessor calls this control routine 90 and repeats its execution.
In box 92, the microprocessor within the computing device 24 performs reading operation of the signals from the sensing devices 16, 18, 20, 14, and 22 (see FIG. 1).
In box 94, the microprocessor performs the operation of the map creator 28 (see FIG. 2). Specifically, the microprocessor computes present values of state vectors X(t0) and Y(t0). Taking the driving situation in FIG. 3 as an example, the state vectors X(t0) and Y(t0) may be expressed as
X(t0) = [0  v  RB  v+ṘB  RC  v+ṘC  RD  v+ṘD]ᵀ,  Y(t0) = [0 0 0 1]ᵀ  (41)
where
    • v is the vehicle speed of the automobile A;
    • Ri is the measure of the intervehicle distance between the automobile A and the obstacle vehicle i (i={B, C, D}); and
    • Ṙi is the measure or estimate of the relative speed between the automobile A and the obstacle vehicle i (i={B, C, D})
In box 96, the microprocessor determines whether or not updating of the set of recommended pairs of control inputs is required. The conditions that demand the updating have been described, so repetition of them is omitted for brevity. If the updating is not required, the execution of the routine comes to an end. If the updating is required, the control logic goes to box 98.
In box 98, the behavior predictor 30 (see FIG. 2) is updated. Specifically, the microprocessor creates models of the obstacle vehicles B, C and D, respectively, by setting appropriate values as parameters of each model and setting the initial values of the state vectors as shown in equation (41). What is done here is to initialize the predictor equations (17) and (27).
In box 100, the microprocessor inputs the functional J[ux, uy] for maneuver(s) from the evaluation function generator 32. For example, we assume that the evaluation function generator 32 sets a functional J to meet operator demand for driving with less acceleration feel at vehicle speeds around a desired value of vehicle speed vdA. This functional J may be expressed as
J[ux, uy] = ∫t0tf [ Σi∈{B,C,D} l(xA, yA, xi, yi) + q(vdA − vA)² + rux² ] dt  (42)
where
    • q and r are the appropriate positive values.
In the next box 102, the algorithm of the recommendation generator 34 is activated. As explained before along with the flow chart in FIG. 6, computing the equation (42) generates a set of recommended pairs of control inputs. FIG. 9 shows a curved line 110 illustrating the recommended variation of longitudinal control input with time within the evaluating period from t0 to tf and another, pulse-like line 112 illustrating the recommended variation of lateral control input with time within the same evaluating period. What is recommended here is to step on the accelerator for several seconds, to change lane to the right at a moment immediately after the peak of acceleration, and to gradually release the accelerator for deceleration toward the desired vehicle speed after having entered the right lane.
With reference back to the flow chart in FIG. 8, in box 104, the microprocessor computes a predicted behavior X(t) and Y(t) (predicted trajectories of each of the vehicles A, B, C and D) by integrating with respect to time the predictor equations (17), (27) after substituting the recommended control inputs ux and uy into them. With the predicted trajectories, the microprocessor updates the existing recommended trajectories.
In box 106, the microprocessor transfers the updated recommended trajectories for presentation at the display 26. After box 106, the execution of the routine comes to an end. One example of presentation at the display 26 is illustrated in FIG. 9.
From the preceding description of the embodiment, it will be appreciated as an advantage that the behavior predictor 30 and evaluation function generator 32 enable the recommendation generator 34 to provide enhanced recommended pairs of control inputs.
Another embodiment of the present invention can be understood with reference to FIGS. 10 to 14. In FIG. 10, the automobile, now generally designated at 10A, is substantially the same as the automobile 10 illustrated in FIG. 1. However, the automobile 10A is equipped with a road map based guidance system 120, which is often referred to as a GPS navigation system. The road map based guidance system 120 uses the Global Positioning System (GPS) with a combination of computer hardware and software components. The components include a map database, a GPS receiver and a CPU. The map database stores an electronic map of the road structure. This database includes detailed information on lanes and ramps at interchanges on highways as well as a directory of potential travel destinations and businesses in the region. The GPS receiver picks up GPS signals that locate the automobile's position as it travels. The computer CPU works with information received from each component of the system 120 to display the automobile's position along a road. A computing device now designated at 24A is substantially the same as the computing device 24 in FIG. 1 except for the provision of a component performing an additional function that will be described below.
The block diagram in FIG. 11 illustrates the operation of this embodiment of the present invention. Comparing FIG. 11 with FIG. 2 reveals that the provision of the road map based guidance system 120 is the only difference between them. The road map based guidance system 120 provides a map creator 28 and a behavior predictor 30 with additional information on the road. Taking the driving situation in FIG. 12 as an example, the map creator 28 can receive information from the system 120 and recognize that the adjacent next left lane is a ramp road of an interchange, which will merge at a position xend in the lane ahead of the automobile A.
To illustrate this driving situation, the map creator 28 provides the state vectors as
X(t0) = [XA XB XC XD XE]ᵀ,  Y(t0) = [0 −1 0 0 0]ᵀ  (43)
where
XA = [0  v]ᵀ,  Xi = [Ri  v+Ṙi]ᵀ,  i = {B, C, D, E}
We now consider a behavior predictor 30. We can use the previously described models to describe the automobile A and obstacle vehicles C, D and E. However, an obstacle vehicle B in the ramp road needs another model. One example of the model for obstacle vehicle B is described below.
Let it be assumed that the obstacle vehicle B follows the longitudinal control law expressed by the equation (3) and the lane change control law expressed by the equation (23). In this case, as the vehicle operator will change lane, it is not necessary to calculate the variable zB. It is clear that the vehicle operator will decelerate the vehicle B as it approaches the end of the lane. Taking this into account, a longitudinal model component for the obstacle vehicle B may be expressed as
ẋB = vB,  v̇B = k2(vdB − vB) − k1·vB/(xend − xB + ε)  (44)
where
    • ε is a small positive constant.
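The effect of equation (44) is that the braking term grows as the vehicle B approaches the lane end xend. A numerical sketch using simple Euler integration; the gains, speeds and distances are illustrative assumptions, not values from the patent:

```python
# Sketch of the longitudinal model (44) for the merging vehicle B:
#   xdot_B = v_B
#   vdot_B = k2 * (vd_B - v_B) - k1 * v_B / (x_end - x_B + eps)
# All gains and initial values below are illustrative assumptions.

def simulate_vehicle_b(x0, v0, x_end, vd=30.0, k1=50.0, k2=0.5,
                       eps=1.0, dt=0.1, steps=10):
    x, v = x0, v0
    for _ in range(steps):
        a = k2 * (vd - v) - k1 * v / (x_end - x + eps)
        x += dt * v
        v += dt * a
    return x, v

# Far from the lane end the braking term is negligible; near the lane end
# it dominates and the vehicle decelerates, as the model intends.
x_far, v_far = simulate_vehicle_b(x0=0.0, v0=25.0, x_end=1000.0)
x_near, v_near = simulate_vehicle_b(x0=950.0, v0=25.0, x_end=1000.0)
```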
We now define
xdi = [xi + hvi  vdi]ᵀ,  i = {A, C, D, E},
xdB = [xB + hvB − 1/(xend − xB + ε)  vdB]ᵀ,
Xd = [xdA xdB xdC xdD xdE]ᵀ  (45)
Then, we have a predictor equation (in the longitudinal direction) as
(d/dt)X = (A + BE)X + BEdXd + BAux  (46)
where (rows separated by semicolons)
E = [0 0 I 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 I 0],  Ed = [0 0 0 0 0; 0 I 0 0 0; 0 0 I 0 0; 0 0 0 I 0; 0 0 0 0 0],  A = diag(AA, Ao, Ao, Ao, Ao)  (47)
B = diag(0, Bo, Bo, Bo, Bo),  BA = [bA 0 0 0 0]ᵀ  (48)
We also have a predictor equation (in lateral direction) as
Y(t+Δt)=Y(t)+H(X(t),Y(t))+Du y  (49)
where
H(X(t), Y(t)) = [0  HB(X(t), Y(t))  HC(X(t), Y(t))  HD(X(t), Y(t))  HE(X(t), Y(t))]ᵀ,  D = [1 0 0 0 0]ᵀ  (50)
Since, as mentioned above, the lane change of the vehicle B is certain, the variable zB within HB(X, Y) should be initialized to a sufficiently large value for ease of lane change.
It is seen that the predictor equations (46) and (49) are the same, in form, as the predictor equations (17) and (27), respectively.
The functions of the evaluation function generator 32, recommendation generator 34 and trajectory processor 36 are substantially the same as those of their counterparts previously described in connection with FIG. 2. Thus, their description is hereby omitted for brevity.
Let us now consider, for the evaluation function generator 32, the following two different forms of the evaluation index J:
J[ux, uy] = ∫t0tf { Σi∈{B,C,D,E} l(xA, yA, xi, yi) + q(vdA − vA)² + rux² } dt  (51)
J[ux, uy] = ∫t0tf { Σi∈{B,C,D,E} l(xA, yA, xi, yi) + q·yB(t)² + rux² } dt  (52)
It is noted that the equation (51) and the before-mentioned equation (42) express the same maneuver. The maneuver expressed by the equation (52) additionally accounts for making room ahead to make it easy for the obstacle vehicle B to change lane.
With reference now to FIG. 13, the curve 126 illustrates the variation of recommended control input ux with time from the present time t0 to the future time tf if the evaluation index J (51) is used in the illustrated driving situation in FIG. 12. The time tf terminates the evaluating period. Using the recommended control input 126, the trajectory processor 36 integrates the predictor equations (17) and (27) with respect to time to find a future behavior of the vehicles in the illustrated driving situation in FIG. 12. One example of displaying the future behavior is shown at 128.
In FIG. 13, the curve 130 illustrates the variation of recommended control input ux with time from the present time t0 to the future time tf if the evaluation index J (52) is used. Using the recommended control input 130, the trajectory processor 36 integrates the predictor equations (17) and (27) with respect to time to find a future behavior of the vehicles in the illustrated driving situation in FIG. 12. The future behavior is displayed at 132.
FIG. 14 illustrates another manner of informing the vehicle operator of the automobile A. In this case, arrows 140 and 142 appear in the road map of the present driving situation to prompt the vehicle operator to accelerate or decelerate. For example, as the curve 126 recommends acceleration initially, the arrow 140 appears. When the curve 130 recommends deceleration, the arrow 142 appears.
With reference now to FIGS. 15 to 19, another embodiment is described. In FIG. 15, the automobile, now generally designated at 10B, is substantially the same as the automobile 10A illustrated in FIG. 10. However, the automobile 10B is different from the automobile 10A in that a throttle 150 of an engine 152, a transmission 154, wheel brakes 156, and a steering actuator 158 are under control of a computing device 24B. The computing device 24B is substantially the same as the computing device 24A of the automobile 10A. But the computing device 24B has additional software components.
The block diagram in FIG. 16 shows, as the additional software components, a control target automatic generator 170 and an actuator commander 180.
The control target automatic generator 170 receives information from a road map based guidance system 120 and automatically selects the appropriate one or ones among various evaluation functions for use in an evaluation index generator 32. The control target automatic generator 170 also automatically selects the appropriate terminal conditions among various terminal conditions for use in a recommendation generator 34. Using this function, it is now possible to accomplish a maneuver involving a lane change before the road diverges, in order to take the route along one branch road toward a destination that is set in the road map based guidance system 120.
The actuator commander 180 computes actuator commands necessary to realize the acceleration/deceleration command and lane change command expressed by the recommended control input created at the recommendation generator 34. The actuator commands are applied to the appropriate one or ones of the actuators for throttle, transmission, brakes and steering.
With reference to FIGS. 17 and 18, the operation of this embodiment is described. The driving situation in FIG. 18 indicates that, in the left lane (lane 0), the automobile A is following an obstacle vehicle B, while, in the right lane, an obstacle vehicle D is following an obstacle vehicle C. At a junction 1 km ahead of the automobile A, the road it travels diverges into two branches. Taking the right branch is the only option toward the destination.
The flow chart in FIG. 17 illustrates a control routine 190. At regular intervals, the microprocessor calls the control routine 190 and repeats its execution. The operation of this embodiment is explained along this control routine 190 taking the driving situation in FIG. 18 as an example.
The control routine 190 may be regarded as a modification of the previously described control routine 90 illustrated in FIG. 8. Thus, the control routines 190 and 90 have the same boxes 92, 94, 96 and 98 immediately after the start of the routine. Although not identical, boxes 200, 202 and 204 of the control routine 190 may be regarded as substantially the same as the boxes 102, 104 and 106 of the control routine 90. The control routine 190 is different from the control routine 90 in that boxes 192, 194, 196 and 198 have replaced the box 100.
In FIG. 17, in box 94, the state vectors X(t0) and Y(t0) are initialized to describe the illustrated driving situation in FIG. 18. The state vector X(t0) is initialized as shown in equation (41), and the state vector Y(t0) is initialized as follows:
Y(t0) = [0 0 1 1]ᵀ  (53)
The matrices E and Ed are initialized as shown in equation (13).
In box 192, the microprocessor inputs the automobile position from the road map based guidance system 120. This is the moment when the microprocessor recognizes that the junction is located 1 km ahead.
In box 194, the microprocessor determines whether or not the lane the automobile A is traveling in is appropriate for tracking the route to the destination. In the driving situation in FIG. 18, the left lane the automobile A travels is inappropriate, so the logic goes from box 194 to box 198. If the automobile A travels in the right lane, the logic goes from box 194 to box 196. In box 196, the microprocessor inputs the same evaluation index J that has been determined by the evaluation index generator 32.
In box 198, the microprocessor picks up and inputs a new evaluation function and terminal conditions involving lane change as one of the items to be evaluated. The evaluation function and terminal conditions are, for example,
J[ux, uy] = ∫t0tf { Σi∈{B,C,D} l(xA, yA, xi, yi) + q(vdA − vA)² + rux² } dt,  y(tf) = 1  (54)
In this embodiment, as different from the previously described embodiment, the terminal conditions appear explicitly.
In box 200, the microprocessor calls the algorithm of the recommendation generator 34. The optimum control input ux, uy is determined, which minimizes the functional J[ux, uy] (54). In this case, as the terminal conditions are explicit, the optimal control input is selected among proposed control inputs that include a lane change to the right.
With reference to FIG. 19, the curves 210 and 212 illustrate the recommended optimal control input best fit to the driving situation in FIG. 18. They clearly teach (1) temporarily accelerating the automobile A until it reaches a point facing an intervehicle spacing in the next adjacent right lane, (2) changing lane, (3) gradually decelerating to increase the distance to the obstacle vehicle C, and (4) moving the automobile A at such a speed as to maintain the headway. This scenario may be confirmed from the illustration in FIG. 19.
Returning to FIG. 17, in box 202, the microprocessor updates the existing control input with the newly recommended control input.
In box 204, the microprocessor transfers the updated recommended control input to the actuator commander 180. The updated recommended control input causes the actuator commander 180 to alter the amount of one or some of the actuator commands. The altered amounts of such actuator commands are transferred to the associated actuators to update the old amounts of the actuator commands. Each of the actuators operates in accordance with the present amount of the actuator command given until updated with a new one. After box 204, the routine comes to an end. The actuators may operate to fully accomplish the recommended control input. If desired, the actuators may alter the amount of reaction force in order to prompt the vehicle operator to manipulate the steering wheel and/or the accelerator to accomplish the recommended control input.
For brevity of description, the control routine 190 in FIG. 17 does not include box or boxes responsible for data transfer to the trajectory processor 36 and computation of predicted trajectories. If need arises, such blocks may be inserted after the updating job in box 202.
With reference now to FIGS. 20 to 25C, another embodiment is described. This and the first-mentioned embodiments (see FIGS. 1 and 2) are the same in hardware. However, this embodiment is different from the first mentioned embodiment in the contents of a behavior predictor 30 and a recommendation generator 34 (see FIG. 2).
We will now consider vehicle modeling of the illustrated driving situation in FIG. 20. In the left lane of a two-lane road, an automobile A is traveling at a vehicle speed of vA and following the preceding obstacle vehicle B that is traveling at a vehicle speed of vB. The intervehicle spacing is too large. Thus, the vehicle operator of the automobile A has the intention to adjust the intervehicle spacing to a desired distance. In the adjacent right lane, an obstacle vehicle C is traveling at a vehicle speed vC. The obstacle vehicle C is turning on a direction indicator to express an intention to change lane to the left. The vehicle speed vC is less than the vehicle speed vB. For brevity, let it be assumed that the vehicles A and B will keep the present lane, so that only the longitudinal component ux needs to be determined because the lane change or lateral component uy remains 0.
The predictor equations (2) and (3) constitute the behavior predictor 30 of this embodiment. For brevity, it is assumed that the obstacle vehicles B and C are traveling at their desired speeds, respectively, and will maintain those speeds until recognition of a preceding vehicle.
Accounting for the illustrated three future intervehicle relationships identified by q=1, 2 and 3 in FIGS. 21A, 21B and 21C, we have a predictor equation in the longitudinal direction.
{dot over (x)}=f(x, u x , q)  (55)
Here, we define
x = (xA, vA, xB, vB, xC, vC)ᵀ  (56)
and
f(x, ux, 1) = (vA, ux, vB, 0, vC, 0)ᵀ
f(x, ux, 2) = (vA, ux, vB, 0, vC, k1(xC − xB − hvC) + k2(vC − vB))ᵀ
f(x, ux, 3) = (vA, ux, vB, 0, vC, k1(xC − xA − hvC) + k2(vC − vA))ᵀ  (57)
Let us now consider the lane change model of the obstacle vehicle C. We use the determination functions (23) and (25), but we do not compute the variable zA(t) expressed by the equation (22) because the lane change intention is explicit. The shift conditions of q may be described as
q: 1 → 3  if  fLC(xC, xB) > f0 ∧ fLC(xC, xA) > f0 ∧ xC ≥ xA
q: 1 → 2  if  fLC(xC, xA) > f0 ∧ xC < xA  (58)
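The shift conditions of q can be sketched as a small state machine. Note that `f_lc` below is a hypothetical stand-in for the determination function fLC of equation (23), which is not reproduced in this passage; its gap-based form and the threshold f0 are illustrative assumptions:

```python
# Sketch of the mode-shift conditions (58) for the lane-change intention of
# the obstacle vehicle C. f_lc is a hypothetical stand-in for the
# determination function fLC of equation (23); its form and the threshold
# f0 are illustrative assumptions.

def f_lc(x_self, x_other, scale=10.0):
    # Illustrative gap measure: grows with the distance to the other vehicle.
    return abs(x_other - x_self) / scale

def shift_mode(q, x_a, x_b, x_c, f0=1.0):
    # q = 1: C still in the right lane; 2 and 3: the two post-change modes.
    if q != 1:
        return q
    if f_lc(x_c, x_b) > f0 and f_lc(x_c, x_a) > f0 and x_c >= x_a:
        return 3
    if f_lc(x_c, x_a) > f0 and x_c < x_a:
        return 2
    return 1

ahead = shift_mode(1, x_a=0.0, x_b=100.0, x_c=50.0)    # large gaps, C not behind A
behind = shift_mode(1, x_a=0.0, x_b=100.0, x_c=-50.0)  # large gap, C behind A
stay = shift_mode(1, x_a=0.0, x_b=12.0, x_c=5.0)       # gaps too small to change
```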
The content of the evaluation index generator 32 will change depending on a change in the intervehicle positional relationship. To describe the driving situation in FIG. 20, which has the illustrated three future intervehicle relationships in FIGS. 21A, 21B and 21C, we have
J[ux(t)] = ∫t0tf L(x, ux, q) dt  (59)
Here, we define
L(x, ux, 1) = rLu(ux) + wBLf(xA, xB)
L(x, ux, 2) = rLu(ux) + wCLf(xA, xC)
L(x, ux, 3) = rLu(ux) + wBLf(xA, xB) + wCLb(xA, xC)  (60)
where
    • r, wB and wC are the weighting factors, each in the form of a positive real number.
Here, we explain what the evaluation functions Lu, Lf, and Lb incorporated in the evaluation index J mean.
Lu(ux) = (1/2)ux² expresses the demand for less acceleration/deceleration.
Lf(xA, xB) = a(xB − xA) + b/(xB − xA) expresses the demand for a reasonable distance to the preceding vehicle, where a and b are the parameters determining the form of the evaluation function.
Lb(xA, xC) = 1/(xA − xC) expresses the demand for a reasonable distance to the following obstacle vehicle.
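The mode-dependent stage cost of equations (59) and (60) can be sketched directly from the definitions of Lu, Lf and Lb. The weights r, wB, wC and the shape parameters a, b are illustrative assumptions:

```python
# Sketch of the evaluation functions Lu, Lf, Lb and the mode-dependent
# stage cost L(x, ux, q) of equation (60). The weights R, WB, WC and the
# shape parameters A_COEF, B_COEF are illustrative assumptions.

A_COEF, B_COEF = 0.01, 100.0   # parameters a, b of Lf
R, WB, WC = 1.0, 1.0, 1.0      # weighting factors r, wB, wC

def L_u(ux):
    # Demand for less acceleration/deceleration.
    return 0.5 * ux ** 2

def L_f(x_a, x_lead):
    # Demand for a reasonable distance to the preceding vehicle.
    return A_COEF * (x_lead - x_a) + B_COEF / (x_lead - x_a)

def L_b(x_a, x_c):
    # Demand for a reasonable distance to the following obstacle vehicle.
    return 1.0 / (x_a - x_c)

def stage_cost(x, ux, q):
    # x = (xA, vA, xB, vB, xC, vC); q selects the intervehicle relationship.
    xA, _, xB, _, xC, _ = x
    if q == 1:
        return R * L_u(ux) + WB * L_f(xA, xB)
    if q == 2:
        return R * L_u(ux) + WC * L_f(xA, xC)
    return R * L_u(ux) + WB * L_f(xA, xB) + WC * L_b(xA, xC)

x = (0.0, 25.0, 100.0, 25.0, -20.0, 22.0)   # C behind A, mode q = 3
cost_q1 = stage_cost(x, 0.0, 1)
cost_q3 = stage_cost(x, 0.0, 3)
```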
Combining the lane change model (58) with the predictor equation (55) and the evaluation index (59) yields a typical optimal control problem. For such a typical optimization problem, the mathematical conditions (necessary conditions for optimality) that the optimal solution (recommended control input) must satisfy are well known. Thus, the data set structure of the proposed control input is limited using such well-known conditions. This is beneficial in determining the recommended control input quickly. One implementation is disclosed in T. Ohtsuka, "Continuation/GMRES Method for Fast Algorithm of Nonlinear Receding Horizon Control," in Proceedings of the 39th IEEE Conference on Decision and Control, pp. 766-771, 2000, which has been incorporated by reference in its entirety.
We now explain the necessary conditions for optimality used to derive the algorithm.
We now define the following Hamiltonian from the evaluation function L and the predictor equation f:
H(x, ux, λ, q) = L(x, ux, q) + λᵀ(t)f(x, ux, q)  (61)
λ(t) is the vector variable having the same order of components as the predictor equation. In this case, we describe the necessary conditions for optimality as
∂H(x, ux, λ, q)/∂ux = 0  (62)
ẋ = f(x, ux, q),  x(t0) = x0  (63)
λ̇ = −(∂H(x, ux, λ, q)/∂x)ᵀ,  λ(tf) = 0  (64)
where
    • x0 is the state of the vehicle cluster at t=t0
With reference to FIG. 22, we now explain a recommended control input update algorithm 220.
In box 222, the microprocessor loads the previous recommended control input ux*(t: ti−Δt) that was given by the previous cycle.
In box 224, using the previous control input ux*(t: ti−Δt) as ux, the microprocessor integrates with respect to time the equations (55) and (58) from ti−Δt to ti to yield the states x(ti) and q(ti) at time ti.
In box 226, the microprocessor shifts the previous control input ux*(t: ti−Δt) to begin at the moment ti. Using the shifted control input and setting x(ti) and q(ti) as initial conditions, the microprocessor integrates the equations (63) and (58) from ti to ti+T. The microprocessor checks q in parallel with checking x and changes the value of q when the conditions for lane change are met. Immediately after a change in the value of q, this change is included in the integration of the function f for the rest of the period of integration. Here, T is the evaluating period of time.
In box 228, using ux*, x*(t) and q*(t), the microprocessor integrates the equation (64) backward from ti+T to ti to yield the result λ as λ*(t). A change in the value of q*(t) is included in integrating the function f of the equation (64), causing the Hamiltonian H to change.
In box 230, using x*(t), q*(t) and λ*(t), the microprocessor solves the equation (62) with respect to ux to yield the solution ux as a new recommended control input ux*(t: ti) at ti. The microprocessor updates the recommended control input with the new recommended control input ux*(t: ti).
In the present embodiment, the equation (62) is expressed as
∂H/∂ux = rux(t) + λ2(t) = 0  (65)
Here, λ2(t) is the second component of the vector variable λ(t). As λ*(t) is given by computation, the recommended control input is given by computing the following equation:
ux*(t: ti) = −(1/r)λ2*(t)  (66)
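Because the Hamiltonian is quadratic in ux, box 230 reduces to the closed-form equations (65) and (66): once the costate component λ2*(t) has been integrated, the recommended control input follows by a scalar multiplication. A sketch with illustrative costate samples:

```python
# Sketch of equations (65)-(66): with the Hamiltonian quadratic in ux,
# the optimal input is ux(t) = -(1/r) * lambda2(t). The costate samples
# and the weight r below are illustrative assumptions.

def recommended_input(lambda2_samples, r=2.0):
    # r*ux(t) + lambda2(t) = 0  =>  ux(t) = -(1/r) * lambda2(t)
    return [-(1.0 / r) * lam for lam in lambda2_samples]

lambda2_star = [4.0, 2.0, -2.0, -6.0]       # costate samples over [t0, tf]
ux_star = recommended_input(lambda2_star)   # -> [-2.0, -1.0, 1.0, 3.0]
```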
An initial value needs to be set as the recommended control input at the initial moment upon starting execution of the flow chart in FIG. 22. The initial value is loaded from stored optimum control inputs for known driving situations when the detected driving situation belongs to one of them. If, strictly speaking, no available value is the optimum initial value, a value may be selected out of control inputs similar to the optimum control input for a given driving situation and subsequently corrected, by repeating execution of the algorithm in FIG. 22, for good approximation to the optimum initial value. If an initial value is unknown, an apparent optimum control input at zero evaluating period is used as the initial value of the recommended control input, and the subsequent correction process approximates the optimum initial value by increasing the length of the evaluating period. In this case, the algorithm may be illustrated by the flow chart in FIG. 23. This algorithm provides a recommended trajectory over the period of time ts ≦ t ≦ tF.
The flow chart in FIG. 23 illustrates the algorithm 240. In box 242, the microprocessor finds the state x(ts), q(ts) at the initial moment ti = ts.
In box 244, the microprocessor determines an initial value of the recommended control input. Specifically, we obtain an apparent optimum recommended control input ux*(ts) that is constant if the evaluating period is set equal to 0 (zero), by solving the equation (62), ∂H/∂ux = 0, with respect to t = ts after setting x*(t) = x(ts), λ*(t) = 0, and q*(t) = q(ts). A storage variable ux*(t) is prepared and initialized as follows.
ux*(t) = ux*(ts),  ts≦t≦tF  (67)
In box 246, the microprocessor advances the time ti by one step Δt (ti ← ti + Δt).
In box 248, the microprocessor updates the evaluating period T. The evaluating period T increases from zero toward its maximum. The microprocessor determines the evaluating period by computing the following equation:
T = Tf(1 − exp(−α(t − ts)))  (68)
where
    • Tf is the maximum value of the evaluating period;
    • α is an appropriate positive real constant
In box 250, the microprocessor executes the algorithm 220 shown in FIG. 22 to create a new recommended control input ux*(t: ti). It is to be remembered that the evaluating period differs from one step to another. Thus, the evaluating period of the previous control input ux*(t−Δt: ti−Δt) does not match the present evaluating period in the present step, and so the time scale of the previous control input ux*(t−Δt: ti−Δt) is corrected to match the present evaluating period. That is, if Tp denotes the previous evaluating period, ux*(t−Δt: ti−Δt) is replaced with ux*((T/Tp)(t−Δt): ti−Δt), which value is used for the optimization.
In box 252, the microprocessor updates the corresponding portion of the storage variable ux*(t) with the control input ux*(t: ti), ti≦t≦ti+Δt, that is created in box 250.
In box 254, the microprocessor determines whether or not the time ti has reached tF. If the time ti has reached tF, the content of the storage variable ux*(t) is output as the final recommended control input. If not, the logic returns to box 246.
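The horizon handling of boxes 246 to 248 can be sketched as a simple schedule: the evaluating period starts at zero and approaches its maximum Tf, here using the form T(t) = Tf(1 − exp(−α(t − ts))), consistent with a period that increases from zero toward Tf. The values of Tf, α and the one-second sampling are illustrative assumptions:

```python
# Sketch of the growing evaluating period used in box 248. T(t) rises from
# zero toward its maximum Tf as T(t) = Tf * (1 - exp(-alpha * (t - ts)));
# Tf, alpha and the sampling step below are illustrative assumptions.
import math

def horizon(t, ts, Tf=10.0, alpha=0.5):
    return Tf * (1.0 - math.exp(-alpha * (t - ts)))

ts = 0.0
Ts = [horizon(ts + k, ts) for k in range(6)]
# Ts starts at 0.0 and increases monotonically toward Tf = 10.0.
```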
The vehicle control application utilizing the obtained recommended control input is the same as each of the previously described embodiments.
FIGS. 24A-24C and 25A-25C are simulation results confirming the effectiveness of the embodiment just described. FIGS. 24A-24C illustrate the case where the initial relative distance to the obstacle vehicle B is 40 meters. FIGS. 25A-25C illustrate the case where the initial relative distance to the obstacle vehicle B is 60 meters. FIGS. 24A and 25A illustrate the variation of the recommended control input ux with time. FIGS. 24B and 25B illustrate the variation of the relative distance to each of the obstacle vehicles B and C with time. FIGS. 24C and 25C illustrate the variation of the velocity (=vehicle speed) of each of the automobile A and the obstacle vehicles B and C with time.
What is illustrated in FIGS. 24A-24C includes an initial acceleration of the automobile A to prevent the obstacle vehicle C from changing lanes. What is illustrated in FIGS. 25A-25C includes an initial deceleration of the automobile A to allow the obstacle vehicle C to change lanes.
From the simulation results, it will be well appreciated that the optimum recommended control input is computed in less computational time and presented quickly.
Another embodiment of the present invention can be understood with reference to FIGS. 26 to 31. In FIG. 26, an automobile, now generally designated at 10C, is substantially the same as the automobile 10 in FIG. 1 except for the vehicle control application. The recommended control input ux(t), uy(t) at each time is determined so as to minimize the evaluation index J[ux, uy]. In the vehicle control application of the automobile 10, substituting the recommended control input ux(t), uy(t) into the predictor equation yields a recommended trajectory displayed on the screen of the interface 42. In the vehicle control application of the automobile 10C, the recommended control input ux(t) is used to determine a desired accelerator position of an accelerator 260. An accelerator sensor 262, which belongs to a state sensing system, detects an accelerator position of the accelerator 260. A computing device 24C is operatively coupled with the accelerator sensor 262. The computing device 24C is also coupled with a servomotor 264 for altering the reaction force opposed to the operator's manual effort to step on the accelerator 260. When a system switch 266 is turned on, the computing device 24C determines the control signal such that the reaction force prompts the operator to adjust the accelerator 260 to the desired accelerator position.
The computing device 24C is substantially the same as the computing device 24 in FIG. 1 except for the provision of an algorithm for controlling the servomotor 264 to control the accelerator pedal reaction force.
With reference to FIG. 27, the computing device 24C has hardware or software components. They include a component for forming an evaluation function 270, and a component 272 for finding a control input. The component 270 includes a label granting component 274, a label eliminating component 276, and a label managing component 278. The component 272 includes a weighting factor changing component 280.
Referring to the driving situation in FIG. 28, the embodiment is further described below.
FIG. 28 illustrates a road having a single lane on one side. The automobile 10C, which is now designated by the reference character A, is traveling along the lane. The automobile A has a label granting field and a label holding field. If a need arises for avoiding undesired repetition of granting and eliminating labels, the label granting field should fall within the label holding field. Turning back to the traffic, an obstacle vehicle C has come into the label granting field and is following the automobile A. The driving situation shows that another obstacle vehicle B has just come into the label granting field. The vehicle B is traveling at a speed lower than a speed at which the automobile A is traveling.
In order to improve assistance to the operator of the automobile A, the vehicle operator has to turn on the system switch 266. Upon or immediately after turning on the switch 266, the environment sensing system (12, 16, 18, 20) starts detecting obstacles within the label granting field. The label granting component 274 generates a grant request for granting a label to any one of the obstacles and/or obstacle vehicles that the environment sensing system has detected within the label granting field. In order to identify the detected obstacle vehicle, the label has one of different real numbers, for example, 1, 2, 3, . . . The grant requests are applied to the label managing component 278. After receiving the grant requests, the label managing component 278 grants the labels to the associated obstacle vehicles, respectively. The relationship is recorded and held as labeled obstacle vehicles.
The evaluation function forming component 270 inputs the distance to each of the labeled obstacle vehicles to compute an evaluation function or term evaluating the degree of risk that the obstacle vehicle imparts to the automobile A.
Subsequently, when it determines that the environment sensing system detects an incoming obstacle vehicle that has come into the label granting field, the label granting component 274 generates a new grant request for application to the label managing component 278. After receiving this grant request, the label managing component 278 grants a new label to the incoming obstacle vehicle. The label managing component 278 records the relationship and holds the record.
Subsequently, when it determines that the environment sensing system has lost the labeled vehicle, the label eliminating component 276 generates an elimination request for eliminating the label from the labeled vehicle. When it determines that the environment sensing system detects an outgoing labeled obstacle vehicle that has left the label holding field, the label eliminating component 276 generates an elimination request for eliminating the label from the outgoing labeled obstacle vehicle. The elimination requests are applied to the label managing component 278. In response to receiving each of the elimination requests, the label managing component 278 eliminates the label from the labeled obstacle vehicle. The label managing component 278 cancels the records on label-eliminated obstacle vehicles.
Let us review the functions of the label granting component 274 and label managing component 278 along with the driving situation in FIG. 28.
In FIG. 28, it is assumed that the label managing component 278 has a record that a label “1” is granted to the obstacle vehicle C. At moment t0, the obstacle vehicle B has come into the label granting field. Immediately after the moment t0, the label granting component 274 generates a grant request for granting a label to the obstacle vehicle B. In response to this request, the label managing component 278 grants a label “2” to the obstacle vehicle B. The label managing component 278 holds a record that the label “2” is granted to the obstacle vehicle B. Subsequently, the obstacle vehicles B and C are treated as labeled vehicles, respectively, in forming variables for evaluation.
The following sections provide description on evaluation functions, which are evaluation terms of an evaluation index. The evaluation index may be regarded as an evaluation function.
Now, we consider the risk that a labeled obstacle vehicle imparts to the automobile A. One measure of risk is the time-to-collision (TTC). As is well known, the TTC is expressed as (xi−xA)/vA with respect to label i. In order to take its minimum when the risk with respect to the label i is lowest, an evaluation term or function is given as
li = (vA/(xi − xA))²  (69)
Next, we consider evaluating a control input ux to the automobile A and present another evaluation term or function, which is expressed as
lx = ux²  (70)
Further, we consider evaluating state of the automobile A and present another evaluation term or function, which is expressed as
lv = (vA − vd)²  (71)
Using the above-mentioned evaluation terms or functions, we present the following function L, which is a weighted sum of the evaluation terms. The function L is expressed as
L = wx lx + wv lv + Σi wi li  (72)
where
    • wx, wv and wi are weighting factors.
We now define an evaluation index or function as
J = ∫t t+T L dτ  (73)
where
    • T is the evaluating period.
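Under the definitions above, the stage cost L of equation (72) and the evaluation index J of equation (73) can be sketched numerically as follows; the function names and the rectangle-rule quadrature are illustrative choices, not taken from the patent.

```python
import numpy as np

def stage_cost(u_x, v_A, v_d, x_A, obstacles, w_x, w_v, w_i):
    """Stage cost L of eq. (72), built from the terms (69)-(71).
    `obstacles` maps a label i to the obstacle position x_i; the
    argument names are illustrative."""
    l_x = u_x ** 2                       # eq. (70): control effort
    l_v = (v_A - v_d) ** 2               # eq. (71): speed deviation
    L = w_x * l_x + w_v * l_v
    for i, x_i in obstacles.items():
        l_i = (v_A / (x_i - x_A)) ** 2   # eq. (69): inverse-TTC squared
        L += w_i[i] * l_i
    return L

def evaluation_index(L_samples, dt):
    """Eq. (73): J as the integral of L over the evaluating period,
    approximated here with a simple rectangle rule."""
    return float(np.sum(L_samples) * dt)
```

With the TTC term squared as in equation (69), the cost grows rapidly as the gap xi − xA closes at a given speed, which is what makes the optimization trade control effort against risk.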
In order to determine control input ux(τ) at each time τ so as to minimize the evaluation index J, it is necessary to predict future behaviors of each of vehicles A, B and C in FIG. 28. We now define appropriate predictor equations as follows.
With respect to the automobile A, we define a predictor equation as
{dot over (x)}A=vA
{dot over (v)}A=ux  (74)
With respect to the labeled obstacle vehicle C, we define a predictor equation as
{dot over (x)}C=vC
{dot over (v)}C=k1(xA−xC−hCvC)+k2(vA−vC)  (75)
where
    • k1 and k2 are parameters characterizing the car-following behavior of the labeled obstacle vehicle C;
    • hC is the time headway with respect to the automobile A.
      This predictor equation is formed using a model in which the labeled obstacle vehicle C is following the automobile A.
With respect to the labeled obstacle vehicle B, we define a predictor equation as
{dot over (x)}B=vB
{dot over (v)}B=0  (76)
This predictor equation is formed based on a model that the labeled obstacle vehicle B travels at a constant speed.
Solving the predictor equations (74), (75) and (76) yields states of the automobile A and the labeled obstacle vehicles B and C over the estimated period t≦τ≦t+T. With the states given, we can determine control input ux(τ) at each time τ so as to minimize the evaluation index J.
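A minimal sketch of forward-integrating the predictor equations (74)-(76) is given below, assuming an explicit Euler scheme (the patent does not prescribe a particular integrator); all identifiers are illustrative.

```python
def predict(states, u_x, h_C, k1, k2, dt, steps):
    """Forward-integrate the predictor equations (74)-(76).
    `states` holds an (x, v) pair for each of the vehicles
    'A', 'B' and 'C'; returns the list of states over the horizon."""
    traj = [dict(states)]
    s = dict(states)
    for _ in range(steps):
        xA, vA = s['A']
        xB, vB = s['B']
        xC, vC = s['C']
        aA = u_x                                         # eq. (74)
        aB = 0.0                                         # eq. (76): constant speed
        aC = k1 * (xA - xC - h_C * vC) + k2 * (vA - vC)  # eq. (75): car following
        s = {'A': (xA + vA * dt, vA + aA * dt),
             'B': (xB + vB * dt, vB + aB * dt),
             'C': (xC + vC * dt, vC + aC * dt)}
        traj.append(dict(s))
    return traj
```

Evaluating the stage cost along such a predicted trajectory, for a candidate input profile ux(τ), is what the minimization of J operates on.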
For the performance of the algorithm, the evaluation equation L (equation (72)) needs to vary continuously in order to ensure continuous variation of the control input ux(τ) with respect to time. To provide this continuity, the weighting factor changing component 280 is provided.
With reference to the driving situation in FIG. 28, the operation of the weighting factor changing component 280 is explained.
Before the moment t0 when the preceding vehicle B is labeled, the evaluation equation L is
L = wx lx + wv lv + w1 l1  (77)
Upon or immediately after the moment t0, the evaluation equation L becomes
L = wx lx + wv lv + w1 l1 + w2 l2  (78)
In this case, the term w2 l2 causes a discontinuity of the evaluation equation L. In order to avoid this discontinuity, the weighting factor w2 is made time dependent, w2(t). Substituting w2(t) into the equation (78), we have
L = wx lx + wv lv + w1 l1 + w2(t) l2  (79)
Setting w2(t0)=0, we have continuity from the equation (77) to the equation (79). After the moment t0, the time dependent weighting factor w2(t) increases from zero toward w2 at a gradual rate as illustrated in FIG. 29. Expressed mathematically, we have
w2(t) = (w2/Tt)(t − t0) for t0 ≦ t ≦ t0 + Tt;
w2(t) = w2 for t > t0 + Tt  (80)
where
    • Tt is the parameter that determines the rate of variation of w2(t).
      Here, if we set a large value for the parameter Tt, the evaluation function L varies gradually, and a gradual variation of the control input can be expected. However, if the parameter Tt is too long, the evaluation for a newly detected obstacle vehicle cannot be reflected satisfactorily. Accordingly, in the embodiment, we define the parameter Tt using the TTC Tc as
Tt = Tt min for Tc < Tc min;
Tt = Tt min + ((Tt max − Tt min)/(Tc max − Tc min))(Tc − Tc min) for Tc min ≦ Tc ≦ Tc max;
Tt = Tt max for Tc > Tc max  (81)
      Here, we define Tc as
Tc = −(x2 − xA)/(v2 − vA) for vA > v2;
Tc = ∞ for vA ≦ v2  (82)
      In the formula (81), Tt min and Tt max define the lower and upper limits of the adjustable range of the parameter Tt. Tc min and Tc max are appropriate values having the dimension of time.
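Equations (80) and (81) can be sketched as follows; the function names are hypothetical, and the TTC Tc is assumed to be supplied by the caller per equation (82).

```python
def ramp_weight(t, t0, w2, Tt):
    """Eq. (80): ramp the new weighting factor linearly from 0 at t0
    to its designed value w2 at t0 + Tt."""
    if t < t0:
        return 0.0
    if t <= t0 + Tt:
        return w2 * (t - t0) / Tt
    return w2

def ramp_duration(Tc, Tt_min, Tt_max, Tc_min, Tc_max):
    """Eq. (81): interpolate the ramp duration Tt linearly between its
    limits according to the time-to-collision Tc."""
    if Tc < Tc_min:
        return Tt_min
    if Tc > Tc_max:
        return Tt_max
    return Tt_min + (Tt_max - Tt_min) / (Tc_max - Tc_min) * (Tc - Tc_min)
```

A small Tc (imminent risk) yields a short ramp, so the new obstacle's term enters the cost quickly; a large Tc yields a long ramp and a gentler change in the control input.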
This section provides the description of one example of determining the accelerator reaction force F. Let ux*(t) be the value of the optimal solution, with respect to the present moment t, determined so as to minimize the evaluation index or function (73), and let θ*(t) be the accelerator angle of the accelerator pedal 260 for accomplishing the vehicle acceleration indicated by ux*(t). Further, let θ(t) be the actual accelerator angle of the accelerator pedal, and let F(θ) be the usual reaction force characteristic of the accelerator pedal 260. Then, the computing device 24C determines the servomotor control signal so as to produce the reaction force F, which is expressed as
F=F(θ(t))+sat−f f(K(θ(t)−θ*(t)))  (83)
where
    • K is an appropriate gain;
    • f is the upper limit of the reaction force correction value.
      Here, we define
sat−f f(x) = −f for x < −f;
sat−f f(x) = x for −f ≦ x ≦ f;
sat−f f(x) = f for x > f  (84)
      If the actual accelerator angle θ(t) is greater than the accelerator angle determined by the optimal solution, the reaction force increases to prompt the vehicle operator to decelerate the automobile A. If the actual accelerator pedal angle θ(t) is less than the accelerator angle determined by the optimal solution, the reaction force is reduced to inform the vehicle operator that s/he can accelerate the automobile A.
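The reaction force computation of equations (83) and (84) can be sketched as follows. This is an illustrative sketch: `base_force` stands in for the usual pedal characteristic F(θ), and the sign of the correction term is chosen to match the behavior described in the text (pressing beyond the optimal angle increases the force).

```python
def sat(x, f):
    """Eq. (84): symmetric saturation of x to the interval [-f, f]."""
    return max(-f, min(f, x))

def reaction_force(theta, theta_opt, base_force, K, f):
    """Eq. (83): base pedal characteristic plus a saturated correction
    proportional to the deviation from the optimal accelerator angle."""
    return base_force(theta) + sat(K * (theta - theta_opt), f)
```

The saturation bound f keeps the corrective component of the pedal force within a fixed band, so the assistance nudges rather than overpowers the operator.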
The flow chart in FIG. 30 illustrates a control routine 290 implementing the above-mentioned embodiment.
In box 292, the microprocessor reads signals from sensing devices 16, 18, 20, 14 and 22 to locate any obstacle and/or obstacle vehicle.
In box 294, the microprocessor determines whether or not there is any new incoming obstacle vehicle within the label granting field. If this is the case, the microprocessor creates an evaluation term (in box 296) and a predictor equation (in box 298) for the newly incoming obstacle vehicle.
In box 300, the microprocessor updates the weighting factor(s) by incrementing any gradually increasing weighting factor and setting zero (0) as the value of any newly appearing weighting factor.
In box 302, the microprocessor computes to solve the optimization problem to determine control input so as to minimize the evaluation index (73).
In box 304, the microprocessor computes reaction force F using the equation (83) and determines servomotor command needed to produce the reaction force. After box 304, the routine comes to an end to complete one cycle operation.
With reference to FIG. 31, the fully drawn curve and the one-dot chain line curve illustrate the variation of the optimal solution ux*(t) with time before and after the moment t0 when the preceding obstacle vehicle B has come into the label granting field. It is assumed, here, that the vehicle operator traces the optimal solution ux*(t) by accelerating or decelerating the automobile A. The fully drawn line illustrates the case where the weighting factor w2(t) increases at a gradual rate. The one-dot chain line curve illustrates the case where the weighting factor w2 is fixed.
From the preceding description, it will be appreciated that the occurrence of discontinuity of the evaluation index J (73) has been avoided by providing the time dependent weighting factor w2(t) that is used in the evaluation equation L (72).
Because the discontinuity of the evaluation index J is avoided, making assumption that the optimal solution is continuous can shorten the computational time. The performance of algorithm is maintained. Smooth variation in accelerator reaction force has been accomplished without causing any objectionable feel to the vehicle operator.
As described above, the weighting factor wi providing the weighting on the evaluation term wi li is set to zero upon receiving a grant request for granting a label to a newly incoming obstacle vehicle in the label granting field. Subsequently, the weighting factor is increased from zero at a rate with time.
The rate at which the weighting factor is increased is determined by TTC of the automobile with respect to the labeled obstacle vehicle.
If desired, the rate at which the weighting factor is increased is determined by the TTC of the automobile with respect to the labeled obstacle vehicle after an initial stage of the increase of the weighting factor and before a final stage thereof. During the initial and final stages, the rate at which the weighting factor is increased is set in the neighborhood of zero.
Another embodiment can be understood with reference to FIGS. 32 to 35. The hardware and software components used in this embodiment are the same as those used in the above described embodiment.
FIG. 32 illustrates a situation at the moment t0 when the labeled obstacle vehicle C has lost speed and left the label holding field. The evaluation equation L (78) holds immediately before the moment t0.
With reference also to FIG. 27, immediately after the moment t0, the label eliminating component 276 generates an elimination request for eliminating the label “1” from the labeled obstacle vehicle C. If the label managing component 278 eliminates the label “1” from the obstacle vehicle C immediately in response to the elimination request, the evaluation equation L is rewritten as
L = wx lx + wv lv + w2 l2  (85)
The elimination of the term will cause a reduction in L, amounting to a discontinuity in the evaluation equation. According to this embodiment, therefore, the label managing component 278 does not eliminate the label “1” upon receiving the elimination request. Instead, the label managing component 278 issues a command asking the weighting factor changing component 280 to alter the weighting factor w1 for the label “1”. That is, the evaluation equation L is written as
L(t) = wx lx + wv lv + w1(t) l1 + w2 l2  (86)
The time dependent weighting factor w1(t) reduces from the designed value w1 toward zero at a gradual rate. The label managing component 278 eliminates the label “1” from the obstacle vehicle C when the weighting factor w1(t) has sufficiently reduced toward zero (0). Using such time dependent weighting factor w1(t), the discontinuity of the evaluation equation L is avoided.
The rate of reduction in the weighting factor w1(t) may be constant, in a manner similar to that expressed by the equation (80). However, quick response is not required in this case. Thus, the rate is varied to provide a smooth change in the weighting factor w1(t), as illustrated by the fully drawn curve in FIG. 33. Mathematically, the weighting factor w1(t) is given as
w1(t) = (w1/2)(1 + cos(π(t − t0)/Tt)) for t0 ≦ t ≦ t0 + Tt;
w1(t) = 0 for t > t0 + Tt  (87)
If the weighting factor w1(t) varies as governed by the equation (87), the following relation holds at the initial and final stages of this transient change from the designed value w1 to zero (0):
dw1/dt|t=t0 = dw1/dt|t=t0+Tt = 0  (88)
The equation clearly indicates that the rate of reduction in w1(t) is zero at the initial stage and the final stage. The parameter Tt cannot be given as a function of the TTC Tc (see equation (81)) because the TTC becomes infinite. Thus, in the present case, the parameter Tt is given as
Tt = Tt min for |vA − vC| < Rd min;
Tt = Tt min + ((Tt max − Tt min)/(Rd max − Rd min))(|vA − vC| − Rd min) for Rd min ≦ |vA − vC| ≦ Rd max;
Tt = Tt max for |vA − vC| > Rd max  (89)
where
    • Rd max and Rd min are appropriate real numbers having the dimension of velocity.
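The smooth fade-out of equation (87), with the zero-rate endpoints of equation (88), can be sketched as follows; the function name is hypothetical.

```python
import math

def fade_weight(t, t0, w1, Tt):
    """Eq. (87): reduce w1 smoothly from its designed value at t0 to
    zero at t0 + Tt, with zero rate of change at both ends (eq. (88))."""
    if t < t0:
        return w1
    if t > t0 + Tt:
        return 0.0
    return 0.5 * w1 * (1.0 + math.cos(math.pi * (t - t0) / Tt))
```

The half-cosine profile is what gives the zero slope at both the initial and final stages, so the cost, and hence the optimal solution, leaves and arrives without a kink.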
We have to consider the case where the separating obstacle vehicle disappears from the detectable range of the sensing system of the automobile A before the weighting factor reaches zero (0). In this case, in order to continue the calculation of the evaluation term (69), estimates are created using the measures xC c1 and vC c1, which were detected at the moment tc1 immediately before the disappearance of the obstacle vehicle C. The estimates are given as
{dot over (x)}C=vC c1, xC(tc1)=xC c1
{dot over (v)}C=0, vC(tc1)=vC c1  (90)
The flow chart in FIG. 34 illustrates a control routine 310 implementing the above-mentioned embodiment.
In box 312, the microprocessor reads signals from sensing devices 16, 18, 20, 14 and 22 to locate any obstacle and/or obstacle vehicle.
In box 314, the microprocessor determines whether or not there is any outgoing labeled vehicle from the label holding field. If this is the case, in box 316, the microprocessor requests elimination of label from the outgoing vehicle.
In box 318, the microprocessor determines whether or not any one of labeled obstacle vehicles is lost by the sensing system. If this is the case, in box 320, the microprocessor creates estimates, as expressed by the equation (90), using measures immediately before the labeled obstacle vehicle has been lost.
In box 322, the microprocessor updates weighting factor(s) by decreasing with respect to a gradually decreasing weighting factor and leaving the other weighting factor(s) as they are.
In box 324, the microprocessor determines whether or not there is any weighting factor that has changed to zero. If this is the case, in box 326, the microprocessor eliminates the label, its evaluation term, and its predictor equation.
In box 328, the microprocessor computes to solve the optimization problem to determine the control input so as to minimize the evaluation index (73).
In box 330, the microprocessor computes reaction force F using the equation (83) and determines servomotor command needed to produce the reaction force. After box 330, the routine comes to an end to complete one cycle operation.
With reference to FIG. 35, the fully drawn curve, the dotted line curve and the one-dot chain line curve illustrate the variation of the optimal solution ux*(t) with time before and after the moment t0 when the following obstacle vehicle C has gone out of the label holding field. It is assumed, here, that the vehicle operator traces the optimal solution ux*(t) by accelerating or decelerating the automobile A. The scenario is that, until the moment tbd, the obstacle vehicles B and C travel as fast as the automobile A. Immediately after the moment tbd, the vehicle C slows down and leaves the label holding field at moment t0. The fully drawn line illustrates the case where the weighting factor w1(t) decreases at a varying rate. The dotted line curve illustrates the case where the weighting factor w1(t) decreases at a fixed rate. The one-dot chain line curve illustrates the case where the weighting factor w1 is fixed. From the fully drawn curve, it is appreciated that the optimal solution varies smoothly.
Another embodiment can be understood with reference to FIGS. 36 and 37. The hardware and software components used in this embodiment are the same as those used in the above described embodiment and illustrated in FIG. 27.
FIG. 36 illustrates a situation where the automobile A travels as fast as the obstacle vehicles B and C within the label granting field. Under this condition, we consider the case where the system switch 266 is turned on at the moment t=0.
Immediately after the system switch 266 has been turned on, the environment sensing system detects the obstacle vehicles B and C within the label granting field. The label granting component 274 generates grant requests for granting labels to the obstacle vehicles B and C, respectively. Upon receiving such grant requests, the label managing component 278 grants a label “1” to the obstacle vehicle B and a label “2” to the obstacle vehicle C.
Since both of the obstacle vehicles B and C are regarded as new incoming vehicles, the evaluation equation L(t) is given as
L(t) = wx lx + wv lv + w1(t) l1 + w2(t) l2  (91)
In this embodiment, too, the equations (80) to (82) may be used to vary the weighting factors, and the equations (74) to (76) may be used as predictor equations.
Immediately after the system switch 266 has been turned on, no information as to the optimal solution is available. Under this condition, as mentioned before, the evaluating period T of the evaluation index J (73) is varied from zero at a gradual rate to the designed value so as to solve the optimization problem.
The evaluating period T is given as
T(t)=T0(1−exp(−αt))  (92)
where
    • T(t) is the evaluating period at moment t;
    • T0 is the designed value of the evaluating period;
    • α is the appropriate positive real number.
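The growth of the evaluating period per equation (92) can be sketched as follows; the function name is illustrative.

```python
import math

def evaluating_period(t, T0, alpha):
    """Eq. (92): grow the evaluating period from zero toward the
    designed value T0 after the system switch is turned on at t = 0."""
    return T0 * (1.0 - math.exp(-alpha * t))
```

Starting the horizon at zero means the first optimization problems after switch-on are trivially small, so a meaningful solution is available immediately and is refined as T(t) approaches T0.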
With reference to FIG. 37, the fully drawn curve, the dotted line curve and the one-dot chain line curve illustrate the variation of the optimal solution ux*(t) with time after the moment t=0 when the system switch 266 has been turned on. The fully drawn line illustrates the case where the weighting factor increases gradually and the evaluating period increases gradually. The dotted line curve illustrates the case where the weighting factor is fixed and the evaluating period increases gradually. The one-dot chain line curve illustrates the case where the weighting factor is fixed and the evaluating period is fixed. From the fully drawn curve, it is appreciated that the optimal solution varies smoothly immediately after the system switch 266 has been turned on.
While the present invention has been particularly described, in conjunction with various implementations of the present invention, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art in light of the foregoing description. It is therefore contemplated that the appended claims will embrace any such alternatives, modifications and variations as falling within the true scope and spirit of the present invention.
This application claims the priority of Japanese patent applications no. 2002-025181, filed Feb. 1, 2002, and no. 2002-243212, filed Aug. 23, 2002, the disclosure of each of which is hereby incorporated by reference in its entirety.

Claims (53)

1. A method for improving operator assistance of an automobile, the method comprising:
collecting, on substantially real time basis, data on the automobile and on intervehicle relationship involving the automobile;
processing the data to determine variables for evaluation; and
evaluating the determined variables to recommend a control input.
2. The method as claimed in claim 1,
wherein the data processing includes:
predicting future behavior of obstacle vehicle around the automobile in response to a given control input to the automobile; and
wherein the evaluating includes:
correlating the predicted future behavior with the given control input in determining whether or not the given control input is to be recommended.
3. The method as claimed in claim 2, further comprising:
generating a control signal in response to the recommended control input.
4. The method as claimed in claim 3, wherein the control signal prompts the operator to apply the recommended control input to the automobile.
5. The method as claimed in claim 4, wherein the control signal is an actuator command.
6. The method as claimed in claim 4, wherein the collected data include data on locations of the obstacle vehicle(s) around the automobile and lane(s) adjacent the automobile.
7. The method as claimed in claim 6, wherein the collected data include data on vehicle speed of the automobile.
8. The method as claimed in claim 2, wherein the predicting obstacle vehicle behavior includes:
detecting location of the obstacle vehicle around the automobile;
detecting lanes adjacent the automobile;
detecting vehicle speed of the automobile;
determining relative position to the obstacle vehicle and vehicle speed of the obstacle vehicle to create a map; and
presenting a predictor equation expressing time dependent variations of the intervehicle relationship in the map.
9. The method as claimed in claim 8, wherein the evaluating includes:
correlating the time dependent variations of the intervehicle relationship in the map with future time dependent variation of the recommended control input.
10. The method as claimed in claim 8, wherein the map and the predictor equation are used in the evaluating the determined variables to recommend a control input.
11. The method as claimed in claim 1, wherein the data processing includes:
generating a grant request for granting a label on one of the obstacle vehicles around the automobile;
generating an elimination request for eliminating a label from the one obstacle vehicle;
recording relationship between each of the obstacle vehicles and a label;
updating the relationship in response to generation of the grant request and the elimination request;
providing an evaluation function including a weighted sum of a first evaluation term for control input to the automobile, a second evaluation term for state of the automobile, and a third evaluation term for risk applied to the automobile by the labeled obstacle vehicle; and
modifying weighting on the third term upon receiving one of the grant request and elimination request with respect to labeling of the associated obstacle vehicle.
12. The method as claimed in claim 8, wherein the data collecting includes:
monitoring an environment sensing system of the automobile and a state sensing system of the automobile.
13. The method as claimed in claim 12, wherein the evaluating includes:
computing an evaluation index indicative of the result of evaluation of the future time dependent variation of the intervehicle relationship of the obstacle vehicles with respect to future time dependent variation of any given control input; and
finding a control input with the best score of the computed evaluation index at each of future time.
14. The method as claimed in claim 13, wherein the evaluating includes:
providing terminal conditions indicative of a desired future behavior of the automobile; and wherein
the evaluation index is computed with the restraint defined by the terminal conditions.
15. The method as claimed in claim 13, wherein the predictor equation accounts for interaction between the automobile and the obstacle vehicle.
16. The method as claimed in claim 13, wherein, when the automobile determines lane change intention of one of obstacle vehicles via attitude taken by the obstacle vehicle, the predictor equation for the obstacle vehicle is modified.
17. The method as claimed in claim 13, wherein, when the automobile is equipped with a road map based guidance system, the predictor equation is modified in response to information provided by the road map based guidance system.
18. The method as claimed in claim 13, wherein the evaluation index includes a plurality of different evaluation functions.
19. The method as claimed in claim 18, wherein the plurality of different evaluation functions are selectively included by the evaluation index in response to a desired maneuver preferred by the operator.
20. The method as claimed in claim 18, wherein, when the automobile is equipped with a road map based guidance system, the plurality of different evaluation functions are selectively included by the evaluation index in response to information provided by the road map based guidance system.
21. The method as claimed in claim 14, wherein, when the automobile is equipped with a road map based guidance system, the terminal conditions are created in response to one of control input by the operator and information provided by the road map based guidance system.
22. The method as claimed in claim 12, wherein the recommended control input is updated, and wherein the recommended control input before updating is used to predict lane change of one of the obstacle vehicles, and the predicted result of lane change is used to change the form of the predictor equation and the form of the evaluation index.
23. The method as claimed in claim 12, further comprising processing the predictor equation using the recommended control input to predict trajectories of the automobile and the obstacle vehicles.
24. The method as claimed in claim 12, wherein the step of evaluating the determined variables to recommend a control input is repeated on time frame driven basis.
25. The method as claimed in claim 12, wherein the step of evaluating the determined variables to recommend a control input is repeated on event driven basis.
26. The method as claimed in claim 12, wherein the step of evaluating the determined variables to recommend a control input is repeated upon event driven basis including occurrence of error in predicting the trajectories.
27. The method as claimed in claim 1, wherein the generating a grant request includes:
generating a grant request for granting a label on an incoming obstacle vehicle, which has just come into a label granting field around the automobile; and
wherein the generating an elimination request includes:
generating an elimination request for eliminating a label from an outgoing obstacle vehicle, which has just left a label holding field around the automobile, and an elimination request for eliminating a label from an obstacle vehicle, which has just disappeared.
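The label bookkeeping recited in claim 27 can be sketched as a simple set computation over tracked obstacle identifiers; the set-based interface and all names below are illustrative assumptions, not part of the claimed method.

```python
def label_requests(labeled, in_grant_field, in_hold_field, visible):
    """Sketch of claim 27's label bookkeeping. For each tracked obstacle id,
    emit a grant request when an unlabeled vehicle enters the label granting
    field, and an elimination request when a labeled vehicle leaves the
    label holding field or disappears from the sensors.
    """
    # Grant: newly arrived in the granting field, not yet labeled.
    grants = {v for v in in_grant_field if v not in labeled}
    # Eliminate: labeled but outside the holding field, or no longer visible.
    eliminations = {v for v in labeled
                    if v not in in_hold_field or v not in visible}
    return grants, eliminations
```

For example, a vehicle present in the granting field but not yet labeled produces a grant request, while a labeled vehicle that has left the holding field produces an elimination request.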
28. The method as claimed in claim 11, wherein the updating the relationship includes:
setting a weighting factor providing the weighting on the third term equal to zero upon receiving a grant request for granting a label on the associated obstacle vehicle; and
subsequently increasing the weighting factor from zero at a rate with time.
29. The method as claimed in claim 28, wherein the rate at which the weighting factor is increased is determined by a time-to-collision (TTC) of the automobile with respect to the obstacle vehicle.
30. The method as claimed in claim 28, wherein the rate at which the weighting factor is increased is determined by a time-to-collision (TTC) of the automobile with respect to the obstacle vehicle after an initial stage of increasing of the weighting factor and before a final stage thereof, and during the initial and final stages, the rate at which the weighting factor is increased is set in the neighborhood of zero.
31. The method as claimed in claim 11, wherein the updating the relationship includes:
reducing, at a gradual rate, a weighting factor providing the weighting on the third term toward zero upon receiving an elimination request for eliminating a label from the associated obstacle vehicle; and
subsequently eliminating the label from the associated obstacle vehicle.
32. The method as claimed in claim 31, wherein, when the associated obstacle vehicle disappears before reduction of the weighting factor to zero, the third term is estimated by distance to and relative speed to the associated obstacle vehicle immediately before the disappearance.
33. The method as claimed in claim 31, wherein the rate at which the weighting factor is reduced is determined by a time-to-collision (TTC) of the automobile with respect to the obstacle vehicle.
34. The method as claimed in claim 31, wherein the rate at which the weighting factor is reduced is determined by a time-to-collision (TTC) of the automobile with respect to the obstacle vehicle after an initial stage of reducing of the weighting factor and before a final stage thereof, and during the initial and final stages, the rate at which the weighting factor is reduced is set in the neighborhood of zero.
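The weighting-factor behavior of claims 28-34 can be sketched as a ramp whose rate scales with the inverse of the time-to-collision, so nearer and faster-closing obstacles are weighted in (on a grant) or out (on an elimination) more quickly. The function name, the linear ramp form, and the constants below are illustrative assumptions, not the patent's actual update law.

```python
def update_weight(w, granted, distance, closing_speed, dt,
                  w_max=1.0, k=0.5):
    """Ramp a risk-term weighting factor up after a label grant and down
    after an elimination request, at a rate determined by the
    time-to-collision (TTC = distance / closing speed)."""
    # TTC is only finite while the gap to the obstacle is closing.
    ttc = distance / closing_speed if closing_speed > 0 else float("inf")
    rate = k / ttc if ttc != float("inf") else 0.0  # faster ramp at small TTC
    if granted:
        # Label granted: increase the weighting factor from zero.
        w = min(w_max, w + rate * dt)
    else:
        # Elimination requested: reduce the weighting factor toward zero.
        w = max(0.0, w - rate * dt)
    return w
```

Claims 30 and 34 additionally hold the rate near zero during the initial and final stages of the ramp, which a practical implementation might realize by smoothing the ramp ends (e.g., with an S-shaped profile) rather than the plain linear ramp shown here.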
35. The method as claimed in claim 11, wherein the control input includes an operator acceleration/deceleration command.
36. The method as claimed in claim 11, wherein the state of the automobile includes vehicle speed of the automobile.
37. The method as claimed in claim 11, wherein the risk increases as the distance to the labeled obstacle vehicle reduces and the relative speed to the labeled vehicle increases.
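A risk term with the monotonicity recited in claim 37 can be sketched as below; the reciprocal-distance form and the coefficients are illustrative assumptions, not the claimed evaluation function.

```python
def risk_term(distance, relative_speed, weight, a=1.0, b=1.0, eps=1e-6):
    """Sketch of the third (risk) term per claim 37: risk grows as the
    distance to the labeled obstacle vehicle shrinks and as the closing
    relative speed grows, scaled by the current weighting factor."""
    # Only a positive (closing) relative speed contributes to the risk.
    return weight * (a / (distance + eps) + b * max(relative_speed, 0.0))
```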
38. A system for improving operator assistance of an automobile, the system comprising:
sensing devices for collecting, on substantially real time basis, data on the automobile and on intervehicle relationship involving the automobile;
a component for processing the data to determine variables for evaluation; and
a component for evaluating the determined variables to recommend a control input.
39. A method for improving operator assistance of an automobile, the method comprising:
collecting, on substantially real time basis, data on the automobile and on intervehicle relationship involving the automobile;
presenting behavior predictor equations for each of the automobile and obstacle vehicles forming the intervehicle relationship;
presenting an evaluation index including at least one evaluation function;
determining a solution so as to minimize the evaluation index using the behavior predictor equations; and
recommending the determined solution as a control input.
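The steps of claim 39 can be sketched as a search over candidate control inputs, each evaluated by simulating simple predictor equations over a short horizon and scoring the result with an evaluation index; the kinematic predictor, the quadratic speed-tracking term, the reciprocal-gap risk term, and all names and constants are illustrative assumptions, not the claimed implementation.

```python
def recommend_control(host_speed, gap, obstacle_speed,
                      candidates=(-2.0, -1.0, 0.0, 1.0, 2.0),
                      v_des=25.0, horizon=5, dt=0.5,
                      w_track=1.0, w_risk=50.0):
    """Sketch of claim 39: evaluate each candidate control input (here a
    constant acceleration) with a predictor equation and recommend the
    one minimizing the evaluation index."""
    best, best_cost = None, float("inf")
    for a in candidates:
        v, g, cost = host_speed, gap, 0.0
        for _ in range(horizon):
            # Predictor equations: host accelerates; obstacle holds speed.
            v = max(0.0, v + a * dt)
            g = max(0.1, g + (obstacle_speed - v) * dt)
            # Evaluation index: speed-tracking error plus proximity risk.
            cost += w_track * (v - v_des) ** 2 + w_risk / g
        if cost < best_cost:
            best, best_cost = a, cost
    return best
```

With a wide gap and a host below its desired speed the search recommends acceleration; with a small, closing gap it recommends braking. The patent's cited non-patent reference (Ohtsuka's C/GMRES method) solves the same kind of receding-horizon minimization continuously rather than by the coarse enumeration shown here.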
40. A system for improving operator assistance of an automobile, the system comprising:
means for collecting, on substantially real time basis, data on the automobile and on intervehicle relationship involving the automobile;
means for processing the data to determine variables for evaluation; and
means for evaluating the determined variables to recommend a control input.
41. The system as claimed in claim 38,
wherein the data processing component predicts future behavior of obstacle vehicle around the automobile in response to a given control input to the automobile by detecting location of the obstacle vehicle around the automobile; detecting lanes adjacent the automobile; detecting vehicle speed of the automobile; determining relative position to the obstacle vehicle and vehicle speed of the obstacle vehicle to create a map; and presenting a predictor equation expressing time dependent variations of the intervehicle relationship in the map; and
wherein the evaluating component correlates the predicted future behavior with the given control input in determining whether or not the given control input is to be recommended.
42. The system as claimed in claim 41, wherein the evaluating component correlates the time dependent variations of the intervehicle relationship in the map with future time dependent variation of the recommended control input.
43. The system as claimed in claim 41, wherein the map and the predictor equation are used in the evaluating the determined variables to recommend a control input.
44. The system as claimed in claim 41, wherein the sensing devices collect the data by monitoring an environment sensing system of the automobile and a state sensing system of the automobile.
45. The system as claimed in claim 44, wherein the evaluating component computes an evaluation index indicative of the result of evaluation of the future time dependent variation of the intervehicle relationship of the obstacle vehicles with respect to future time dependent variation of any given control input; and finds a control input with the best score of the computed evaluation index at each of future time.
46. The system as claimed in claim 45,
wherein the evaluating component provides terminal conditions indicative of a desired future behavior of the automobile; and
wherein the evaluation index is computed with the restraint defined by the terminal conditions.
47. The system as claimed in claim 45, wherein the predictor equation accounts for interaction between the automobile and the obstacle vehicle.
48. The system as claimed in claim 45, wherein, when the automobile determines lane change intention of one of obstacle vehicles via attitude taken by the obstacle vehicle, the predictor equation for the obstacle vehicle is modified.
49. The system as claimed in claim 45, wherein the evaluation index includes a plurality of different evaluation functions.
50. The system as claimed in claim 49, wherein, when the automobile is equipped with a road map based guidance system, the plurality of different evaluation functions are selectively included by the evaluation index in response to information provided by the road map based guidance system.
51. The system as claimed in claim 44, wherein the recommended control input is updated, and wherein the recommended control input before updating is used to predict lane change of one of the obstacle vehicles, and the predicted result of lane change is used to change the form of the predictor equation and the form of the evaluation index.
52. The system as claimed in claim 44, further comprising means for processing the predictor equation using the recommended control input to predict trajectories of the automobile and the obstacle vehicles.
53. The system as claimed in claim 44, wherein the evaluating component repeats evaluating the determined variables to recommend a control input on event driven basis.
US10/356,742 2002-02-01 2003-02-03 Method and system for vehicle operator assistance improvement Expired - Lifetime US6873911B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2002025181A JP3714258B2 (en) 2002-02-01 2002-02-01 Recommended operation amount generator for vehicles
JP2002-025181 2002-02-01
JP2002-243212 2002-08-23
JP2002243212A JP3832403B2 (en) 2002-08-23 2002-08-23 Driving assistance device

Publications (2)

Publication Number Publication Date
US20030187578A1 US20030187578A1 (en) 2003-10-02
US6873911B2 true US6873911B2 (en) 2005-03-29

Family

ID=26625673

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/356,742 Expired - Lifetime US6873911B2 (en) 2002-02-01 2003-02-03 Method and system for vehicle operator assistance improvement

Country Status (3)

Country Link
US (1) US6873911B2 (en)
EP (1) EP1332910B1 (en)
DE (1) DE60329876D1 (en)

Cited By (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040080405A1 (en) * 2002-06-12 2004-04-29 Nissan Motor Co., Ltd. Driving assist system for vehicle
US20040182629A1 (en) * 2003-03-20 2004-09-23 Honda Motor Co., Ltd. Apparatus for a vehicle for protection of a colliding object
US20040210364A1 (en) * 2003-04-17 2004-10-21 Fuji Jukogyo Kabushiki Kaisha Vehicle drive assist system
US20040249550A1 (en) * 2003-06-04 2004-12-09 Nissan Motor Co., Ltd Driving assist system for vehicle
US20040262063A1 (en) * 2003-06-11 2004-12-30 Kaufmann Timothy W. Steering system with lane keeping integration
US20050015203A1 (en) * 2003-07-18 2005-01-20 Nissan Motor Co., Ltd. Lane-changing support system
US20050080565A1 (en) * 2003-10-14 2005-04-14 Olney Ross D. Driver adaptive collision warning system
US20050090984A1 (en) * 2003-10-23 2005-04-28 Nissan Motor Co., Ltd. Driving assist system for vehicle
US20050137756A1 (en) * 2003-12-18 2005-06-23 Nissan Motor Co., Ltd. Vehicle driving support system and vehicle driving support program
US20050256630A1 (en) * 2004-05-17 2005-11-17 Nissan Motor Co., Ltd. Lane change assist system
US20070035385A1 (en) * 2005-08-12 2007-02-15 Shunji Miyahara Single camera system and method for range and lateral position measurement of a preceding vehicle
US20070067100A1 (en) * 2005-09-14 2007-03-22 Denso Corporation Merge support system
WO2007048029A2 (en) * 2005-10-21 2007-04-26 Deere & Company Systems and methods for obstacle avoidance
US20070111857A1 (en) * 2005-11-17 2007-05-17 Autoliv Asp, Inc. Fuel saving sensor system
WO2007070159A2 (en) * 2005-12-09 2007-06-21 Gm Global Technology Operations, Inc. Method for detecting or predicting vehicle cut-ins
US20080040039A1 (en) * 2006-05-17 2008-02-14 Denso Corporation Road environment recognition device and method of recognizing road environment
US20080234900A1 (en) * 2007-03-20 2008-09-25 Bennett James D Look ahead vehicle suspension system
US20090051516A1 (en) * 2006-02-23 2009-02-26 Continental Automotive Gmbh Assistance System for Assisting a Driver
US20090273674A1 (en) * 2006-11-09 2009-11-05 Bayerische Motoren Werke Aktiengesellschaft Method of Producing a Total Image of the Environment Surrounding a Motor Vehicle
US20100010699A1 (en) * 2006-11-01 2010-01-14 Koji Taguchi Cruise control plan evaluation device and method
US20100023296A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle u-turn maneuvers
US20100019880A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on traffic sensing
US20100023180A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on lane-change maneuvers
US20100023197A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on behavioral diagnosis
US20100019964A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition and road condition recognition
US20100023223A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition
US20100023245A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on headway distance
US20100023181A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle passing maneuvers
US20100023265A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with integrated driving style recognition
US20100023196A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle launching
US20100023183A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with integrated maneuver-based driving style recognition
US20100036578A1 (en) * 2006-11-10 2010-02-11 Toyota Jidosha Kabushiki Kaisha Automatic operation control apparatus, automatic operation control method,vehicle cruise system, and method for controlling the vehicle cruise system
US20100042282A1 (en) * 2006-11-20 2010-02-18 Toyota Jidosha Kabushiki Kaisha Travel control plan generation system and computer program
US20100117585A1 (en) * 2008-11-12 2010-05-13 Osa Edward Fitch Multi Mode Safety Control Module
US20100152951A1 (en) * 2008-12-15 2010-06-17 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle accelerating and decelerating
US20100152950A1 (en) * 2008-12-15 2010-06-17 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle stopping
US20100191433A1 (en) * 2009-01-29 2010-07-29 Valeo Vision Method for monitoring the environment of an automatic vehicle
US20100209887A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operation, Inc. Vehicle stability enhancement control adaptation to driving skill based on vehicle backup maneuver
US20100211270A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill based on highway on/off ramp maneuver
US20100209881A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on behavioral diagnosis
US20100209890A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill with integrated driving skill recognition
US20100209892A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on manual transmission shift behavior
US20100209886A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on u-turn performance
US20100209883A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill based on passing maneuver
US20100209885A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill based on lane change maneuver
US20100209884A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on vehicle left and right turns
US20100209889A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill based on multiple types of maneuvers
US20100209888A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill based on curve-handling maneuvers
US20100209891A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on stop-and-go driving behavior
US20100209882A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on straight-line driving behavior
US20100228419A1 (en) * 2009-03-09 2010-09-09 Gm Global Technology Operations, Inc. method to assess risk associated with operating an autonomic vehicle control system
US20110054793A1 (en) * 2009-08-25 2011-03-03 Toyota Jidosha Kabushiki Kaisha Environment prediction device
US20110144907A1 (en) * 2009-12-10 2011-06-16 Aisin Aw Co., Ltd. Travel guiding apparatus for vehicle, travel guiding method for vehicle, and computer-readable storage medium
US8195341B2 (en) 2008-07-24 2012-06-05 GM Global Technology Operations LLC Adaptive vehicle control system with driving style recognition based on maneuvers at highway on/off ramps
US20120235819A1 (en) * 2011-03-18 2012-09-20 Battelle Memorial Institute Apparatuses and methods of determining if a person operating equipment is experiencing an elevated cognitive load
US20130079991A1 (en) * 2011-08-30 2013-03-28 GM Global Technology Operations LLC Motor vehicle, in particular automobile, and method for controlling a motor vehicle, in particular an automobile
US20130158799A1 (en) * 2010-09-10 2013-06-20 Toyota Jidosha Kabushiki Kaisha Suspension apparatus
US20130184926A1 (en) * 2012-01-17 2013-07-18 Ford Global Technologies, Llc Autonomous lane control system
US20130180500A1 (en) * 2012-01-06 2013-07-18 Fuji Jukogyo Kabushiki Kaisha Idling stop device
US8849512B2 (en) 2010-07-29 2014-09-30 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US8972106B2 (en) 2010-07-29 2015-03-03 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US20150158425A1 (en) * 2013-12-11 2015-06-11 Hyundai Motor Company Biologically controlled vehicle and method of controlling the same
US20150260530A1 (en) * 2014-03-11 2015-09-17 Volvo Car Corporation Method and system for determining a position of a vehicle
US9213522B2 (en) 2010-07-29 2015-12-15 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US20160121892A1 (en) * 2013-06-18 2016-05-05 Continental Automotive Gmbh Method and device for determining a driving state of an external motor vehicle
US20170091272A1 (en) * 2015-09-30 2017-03-30 International Business Machines Corporation Precision Adaptive Vehicle Trajectory Query Plan Optimization
US20180025234A1 (en) * 2016-07-20 2018-01-25 Ford Global Technologies, Llc Rear camera lane detection
US20180061236A1 (en) * 2015-03-18 2018-03-01 Nec Corporation Driving control device, driving control method, and vehicle-to-vehicle communication system
US20180079419A1 (en) * 2015-03-18 2018-03-22 Nec Corporation Driving control device, driving control method and vehicle-to-vehicle communication system
US9956956B2 (en) * 2016-01-11 2018-05-01 Denso Corporation Adaptive driving system
US10029697B1 (en) * 2017-01-23 2018-07-24 GM Global Technology Operations LLC Systems and methods for classifying driver skill level
US10124807B2 (en) 2017-01-23 2018-11-13 GM Global Technology Operations LLC Systems and methods for classifying driver skill level and handling type
US10442427B2 (en) 2017-01-23 2019-10-15 GM Global Technology Operations LLC Vehicle dynamics actuator control systems and methods
US20190359223A1 (en) * 2018-05-22 2019-11-28 International Business Machines Corporation Providing a notification based on a deviation from a determined driving behavior

Families Citing this family (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7805442B1 (en) 2000-12-05 2010-09-28 Navteq North America, Llc Method and system for representation of geographical features in a computer-based system
JP3531640B2 (en) * 2002-01-10 2004-05-31 日産自動車株式会社 Driving operation assist device for vehicles
JP3879696B2 (en) * 2003-04-25 2007-02-14 日産自動車株式会社 Driving assistance device
US8892356B1 (en) 2003-06-19 2014-11-18 Here Global B.V. Method and system for representing traffic signals in a road network database
US9341485B1 (en) * 2003-06-19 2016-05-17 Here Global B.V. Method and apparatus for representing road intersections
JP2005297855A (en) * 2004-04-14 2005-10-27 Toyota Motor Corp Deceleration control device of vehicle
DE102004037704B4 (en) * 2004-08-04 2014-07-10 Daimler Ag Motor vehicle with a preventive protection system
EP1775188B1 (en) * 2004-08-06 2011-05-04 Honda Motor Co., Ltd. Control device for vehicle
DE102004041851A1 (en) * 2004-08-27 2006-03-16 Daimlerchrysler Ag Object acquisition method for use in motor vehicle environment, involves using parameters as input quantities which are acquired by sensors, such that acquired parameters are additionally used for dynamically projecting traffic parameters
JP4483486B2 (en) * 2004-09-01 2010-06-16 マツダ株式会社 Vehicle travel control device
JP4367293B2 (en) * 2004-09-01 2009-11-18 マツダ株式会社 Vehicle travel control device
JP4451315B2 (en) * 2005-01-06 2010-04-14 富士重工業株式会社 Vehicle driving support device
DE102005008974A1 (en) * 2005-02-28 2006-08-31 Robert Bosch Gmbh Estimation of coordinates of object using velocity model, for pedestrian protection system, involves determining vehicle velocity and acceleration for second-order velocity model
KR101235815B1 (en) * 2005-04-25 2013-02-21 가부시키가이샤 지오 기쥬츠켄큐쇼 Imaging position analyzing device, imaging position analyzing method, recording medium, and image data acquiring device
WO2007018188A1 (en) * 2005-08-05 2007-02-15 Honda Motor Co., Ltd. Vehicle control device
JP4792866B2 (en) * 2005-08-05 2011-10-12 アイシン・エィ・ダブリュ株式会社 Navigation system
JP4645516B2 (en) * 2005-08-24 2011-03-09 株式会社デンソー Navigation device and program
US7706978B2 (en) * 2005-09-02 2010-04-27 Delphi Technologies, Inc. Method for estimating unknown parameters for a vehicle object detection system
SE0502819L (en) * 2005-12-13 2006-12-19 Scania Cv Abp Data Generation System
DE102006009656A1 (en) * 2006-03-02 2007-09-06 Robert Bosch Gmbh Driver assistance system with course prediction module
DE102006021177A1 (en) * 2006-05-06 2007-11-08 Bayerische Motoren Werke Ag Method for the follow-up control of a motor vehicle
JP4933962B2 (en) * 2007-06-22 2012-05-16 富士重工業株式会社 Branch entry judgment device
JP2010064725A (en) * 2008-09-15 2010-03-25 Denso Corp On-vehicle captured image display controller, program for on-vehicle captured image display controller, and on-vehicle captured image display system
DE112009004844B4 (en) * 2009-06-02 2015-05-13 Toyota Jidosha Kabushiki Kaisha VEHICLE MONITORING DEVICE ENVIRONMENT
US8686845B2 (en) * 2010-02-25 2014-04-01 Ford Global Technologies, Llc Automotive vehicle and method for advising a driver therein
JP5126336B2 (en) * 2010-05-13 2013-01-23 株式会社デンソー Vehicle speed control device
JP5278419B2 (en) 2010-12-17 2013-09-04 株式会社デンソー Driving scene transition prediction device and vehicle recommended driving operation presentation device
US9771070B2 (en) * 2011-12-09 2017-09-26 GM Global Technology Operations LLC Method and system for controlling a host vehicle
US9620017B2 (en) * 2011-12-14 2017-04-11 Robert Bosch Gmbh Vehicle merge assistance system and method
NL1039416C2 (en) * 2012-02-28 2013-09-02 Valoridat B V TRAFFIC DETECTION DEVICE.
DE102012021973A1 (en) * 2012-11-08 2014-05-08 Valeo Schalter Und Sensoren Gmbh Method for operating a radar sensor of a motor vehicle, driver assistance device and motor vehicle
KR101500361B1 (en) * 2013-06-07 2015-03-10 현대자동차 주식회사 Apparatus and method of determining short term driving tendency
CN103337186B (en) * 2013-06-08 2015-08-26 华中科技大学 A kind of crossing driving assist system for motor vehicle
US9809219B2 (en) * 2014-01-29 2017-11-07 Continental Automotive Systems, Inc. System for accommodating a pedestrian during autonomous vehicle operation
US9776639B2 (en) * 2014-08-05 2017-10-03 Launch Tech Co., Ltd. Method, and apparatus, and system for generating driving behavior guiding information
KR20170016177A (en) * 2015-08-03 2017-02-13 엘지전자 주식회사 Vehicle and control method for the same
US9909894B2 (en) 2016-01-07 2018-03-06 Here Global B.V. Componentized junction models
US20190016339A1 (en) * 2016-02-16 2019-01-17 Honda Motor Co., Ltd. Vehicle control device, vehicle control method, and vehicle control program
JP6768787B2 (en) * 2016-03-15 2020-10-14 本田技研工業株式会社 Vehicle control systems, vehicle control methods, and vehicle control programs
US10234294B2 (en) 2016-04-01 2019-03-19 Here Global B.V. Road geometry matching with componentized junction models
DE102016209232B4 (en) * 2016-05-27 2022-12-22 Volkswagen Aktiengesellschaft Method, device and computer-readable storage medium with instructions for determining the lateral position of a vehicle relative to the lanes of a roadway
JP6616275B2 (en) * 2016-12-15 2019-12-04 株式会社Soken Driving assistance device
US10133275B1 (en) 2017-03-01 2018-11-20 Zoox, Inc. Trajectory generation using temporal logic and tree search
US10671076B1 (en) 2017-03-01 2020-06-02 Zoox, Inc. Trajectory prediction of third-party objects using temporal logic and tree search
KR102262579B1 (en) 2017-03-14 2021-06-09 현대자동차주식회사 Apparatus for changing lane of vehicle, system having the same and method thereof
EP3409553B1 (en) * 2017-06-01 2021-08-04 Honda Research Institute Europe GmbH System and method for automated execution of a maneuver or behavior of a system
CN110799383B (en) * 2017-06-12 2022-07-29 大陆汽车有限公司 Rear portion anticollision safety coefficient
US11378955B2 (en) * 2017-09-08 2022-07-05 Motional Ad Llc Planning autonomous motion
JP6937658B2 (en) 2017-10-17 2021-09-22 日立Astemo株式会社 Predictive controller and method
US10955851B2 (en) 2018-02-14 2021-03-23 Zoox, Inc. Detecting blocking objects
US10414395B1 (en) * 2018-04-06 2019-09-17 Zoox, Inc. Feature-based prediction
US11126873B2 (en) 2018-05-17 2021-09-21 Zoox, Inc. Vehicle lighting state determination
JP7125286B2 (en) * 2018-06-22 2022-08-24 本田技研工業株式会社 Behavior prediction device and automatic driving device
CN111813099B (en) * 2019-03-25 2024-03-05 广州汽车集团股份有限公司 Driving control method and device for unmanned vehicle, computer equipment and vehicle
CN110209754B (en) * 2019-06-06 2020-08-14 广东电网有限责任公司 Road planning navigation system capable of automatically generating survey map


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6452297A 1987-08-24 1989-02-28 Hitachi Ltd Semiconductor integrated circuit with storing part

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5465079A (en) * 1992-08-14 1995-11-07 Vorad Safety Systems, Inc. Method and apparatus for determining driver fitness in real time
US5765116A (en) 1993-08-28 1998-06-09 Lucas Industries Public Limited Company Driver assistance system for a vehicle
JPH07104850A (en) 1993-08-28 1995-04-21 Lucas Ind Plc Operator assistance system for vehicle
US5913375A (en) 1995-08-31 1999-06-22 Honda Giken Kogyo Kabushiki Kaisha Vehicle steering force correction system
JPH09142236A (en) 1995-11-17 1997-06-03 Mitsubishi Electric Corp Periphery monitoring method and device for vehicle, and trouble deciding method and device for periphery monitoring device
US6026347A (en) 1997-05-30 2000-02-15 Raytheon Company Obstacle avoidance processing method for vehicles using an automated highway system
US5926126A (en) 1997-09-08 1999-07-20 Ford Global Technologies, Inc. Method and system for detecting an in-path target obstacle in front of a vehicle
DE19821163A1 (en) 1998-05-12 1999-11-18 Volkswagen Ag Driver assist method for vehicle used as autonomous intelligent cruise control
JP2000108721A (en) 1998-08-04 2000-04-18 Denso Corp Inter-vehicular distance control device and recording medium
US6353785B1 (en) * 1999-03-12 2002-03-05 Navagation Technologies Corp. Method and system for an in-vehicle computer architecture
US6577937B1 (en) * 1999-03-12 2003-06-10 Navigation Technologies Corp. Method and system for an in-vehicle computing architecture
JP2001052297A (en) 1999-08-06 2001-02-23 Fujitsu Ltd Method and device for supporting safe travel and recording medium
US6577943B2 (en) * 2000-04-21 2003-06-10 Sumitomo Rubber Industries, Ltd. System for distributing road surface information, system for collecting and distributing vehicle information, device for transmitting vehicle information and program for controlling vehicle
DE10048102A1 (en) 2000-09-28 2002-04-18 Adc Automotive Dist Control Method for operating a driver assistance system for motor vehicles
DE10137292A1 (en) 2001-08-01 2003-03-06 Continental Teves Ag & Co Ohg Driver assistance system and method for its operation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Continuation/GMRES Method for Fast Algorithm of Nonlinear Receding Horizon Control", Toshiyuki Ohtsuka, Proceedings of the 39th IEEE Conference on Decision and Control, pp. 766-771.

Cited By (143)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040080405A1 (en) * 2002-06-12 2004-04-29 Nissan Motor Co., Ltd. Driving assist system for vehicle
US7006917B2 (en) * 2002-12-06 2006-02-28 Nissan Motor Co., Ltd. Driving assist system for vehicle
US20040182629A1 (en) * 2003-03-20 2004-09-23 Honda Motor Co., Ltd. Apparatus for a vehicle for protection of a colliding object
US7143856B2 (en) * 2003-03-20 2006-12-05 Honda Motor Co., Ltd. Apparatus for a vehicle for protection of a colliding object
US20040210364A1 (en) * 2003-04-17 2004-10-21 Fuji Jukogyo Kabushiki Kaisha Vehicle drive assist system
US7302325B2 (en) * 2003-04-17 2007-11-27 Fuji Jukogyo Kabushiki Kaisha Vehicle drive assist system
US7136755B2 (en) 2003-06-04 2006-11-14 Nissan Motor Co., Ltd. Driving assist system for vehicle
US20040249550A1 (en) * 2003-06-04 2004-12-09 Nissan Motor Co., Ltd Driving assist system for vehicle
US20040262063A1 (en) * 2003-06-11 2004-12-30 Kaufmann Timothy W. Steering system with lane keeping integration
US7510038B2 (en) * 2003-06-11 2009-03-31 Delphi Technologies, Inc. Steering system with lane keeping integration
US20050015203A1 (en) * 2003-07-18 2005-01-20 Nissan Motor Co., Ltd. Lane-changing support system
US20050080565A1 (en) * 2003-10-14 2005-04-14 Olney Ross D. Driver adaptive collision warning system
US7206697B2 (en) * 2003-10-14 2007-04-17 Delphi Technologies, Inc. Driver adaptive collision warning system
US7302344B2 (en) * 2003-10-14 2007-11-27 Delphi Technologies, Inc. Driver adaptive collision warning system
US20070198191A1 (en) * 2003-10-14 2007-08-23 Delphi Technologies, Inc. Driver adaptive collision warning system
US7155342B2 (en) 2003-10-23 2006-12-26 Nissan Motor Co., Ltd. Driving assist system for vehicle
US20050090984A1 (en) * 2003-10-23 2005-04-28 Nissan Motor Co., Ltd. Driving assist system for vehicle
US20050137756A1 (en) * 2003-12-18 2005-06-23 Nissan Motor Co., Ltd. Vehicle driving support system and vehicle driving support program
US20050256630A1 (en) * 2004-05-17 2005-11-17 Nissan Motor Co., Ltd. Lane change assist system
US8219298B2 (en) * 2004-05-17 2012-07-10 Nissan Motor Co., Ltd. Lane change assist system
US20080189012A1 (en) * 2004-06-10 2008-08-07 Delphi Technologies Inc. Steering system with lane keeping integration
US7711464B2 (en) * 2004-06-10 2010-05-04 Gm Global Technology Operations, Inc. Steering system with lane keeping integration
US20070035385A1 (en) * 2005-08-12 2007-02-15 Shunji Miyahara Single camera system and method for range and lateral position measurement of a preceding vehicle
US7545956B2 (en) 2005-08-12 2009-06-09 Visteon Global Technologies, Inc. Single camera system and method for range and lateral position measurement of a preceding vehicle
US20070067100A1 (en) * 2005-09-14 2007-03-22 Denso Corporation Merge support system
US7711485B2 (en) * 2005-09-14 2010-05-04 Denso Corporation Merge support system
US20110071718A1 (en) * 2005-10-21 2011-03-24 William Robert Norris Systems and Methods for Switching Between Autonomous and Manual Operation of a Vehicle
US9043016B2 (en) 2005-10-21 2015-05-26 Deere & Company Versatile robotic control module
US9098080B2 (en) 2005-10-21 2015-08-04 Deere & Company Systems and methods for switching between autonomous and manual operation of a vehicle
AU2006304838B2 (en) * 2005-10-21 2011-07-28 Deere & Company Systems and methods for obstacle avoidance
US20120046820A1 (en) * 2005-10-21 2012-02-23 James Allard Systems and Methods for Obstacle Avoidance
US20070198145A1 (en) * 2005-10-21 2007-08-23 Norris William R Systems and methods for switching between autonomous and manual operation of a vehicle
WO2007048029A3 (en) * 2005-10-21 2007-07-26 Deere & Co Systems and methods for obstacle avoidance
US8020657B2 (en) 2005-10-21 2011-09-20 Deere & Company Systems and methods for obstacle avoidance
US7894951B2 (en) 2005-10-21 2011-02-22 Deere & Company Systems and methods for switching between autonomous and manual operation of a vehicle
US8473140B2 (en) 2005-10-21 2013-06-25 Deere & Company Networked multi-role robotic vehicle
US8874300B2 (en) * 2005-10-21 2014-10-28 Deere & Company Systems and methods for obstacle avoidance
DE112006003007B4 (en) 2005-10-21 2021-09-02 Deere & Company Systems and methods for avoiding obstacles
WO2007048029A2 (en) * 2005-10-21 2007-04-26 Deere & Company Systems and methods for obstacle avoidance
US9429944B2 (en) 2005-10-21 2016-08-30 Deere & Company Versatile robotic control module
US20070111857A1 (en) * 2005-11-17 2007-05-17 Autoliv Asp, Inc. Fuel saving sensor system
US7404784B2 (en) * 2005-11-17 2008-07-29 Autoliv Asp, Inc. Fuel saving sensor system
US7444241B2 (en) * 2005-12-09 2008-10-28 Gm Global Technology Operations, Inc. Method for detecting or predicting vehicle cut-ins
WO2007070159A2 (en) * 2005-12-09 2007-06-21 Gm Global Technology Operations, Inc. Method for detecting or predicting vehicle cut-ins
US20070150196A1 (en) * 2005-12-09 2007-06-28 Grimm Donald K Method for detecting or predicting vehicle cut-ins
WO2007070159A3 (en) * 2005-12-09 2007-11-22 Gm Global Tech Operations Inc Method for detecting or predicting vehicle cut-ins
US20090051516A1 (en) * 2006-02-23 2009-02-26 Continental Automotive Gmbh Assistance System for Assisting a Driver
US8694236B2 (en) * 2006-05-17 2014-04-08 Denso Corporation Road environment recognition device and method of recognizing road environment
US20080040039A1 (en) * 2006-05-17 2008-02-14 Denso Corporation Road environment recognition device and method of recognizing road environment
US20100010699A1 (en) * 2006-11-01 2010-01-14 Koji Taguchi Cruise control plan evaluation device and method
US9224299B2 (en) 2006-11-01 2015-12-29 Toyota Jidosha Kabushiki Kaisha Cruise control plan evaluation device and method
US20090273674A1 (en) * 2006-11-09 2009-11-05 Bayerische Motoren Werke Aktiengesellschaft Method of Producing a Total Image of the Environment Surrounding a Motor Vehicle
US8908035B2 (en) * 2006-11-09 2014-12-09 Bayerische Motoren Werke Aktiengesellschaft Method of producing a total image of the environment surrounding a motor vehicle
US20100036578A1 (en) * 2006-11-10 2010-02-11 Toyota Jidosha Kabushiki Kaisha Automatic operation control apparatus, automatic operation control method, vehicle cruise system, and method for controlling the vehicle cruise system
US9076338B2 (en) * 2006-11-20 2015-07-07 Toyota Jidosha Kabushiki Kaisha Travel control plan generation system and computer program
US20100042282A1 (en) * 2006-11-20 2010-02-18 Toyota Jidosha Kabushiki Kaisha Travel control plan generation system and computer program
US9527363B2 (en) * 2007-03-20 2016-12-27 Enpulz, Llc Look ahead vehicle suspension system
US8285447B2 (en) * 2007-03-20 2012-10-09 Enpulz, L.L.C. Look ahead vehicle suspension system
US20080234900A1 (en) * 2007-03-20 2008-09-25 Bennett James D Look ahead vehicle suspension system
US20150006030A1 (en) * 2007-03-20 2015-01-01 Enpulz, L.L.C. Look ahead vehicle suspension system
US7831407B2 (en) 2008-07-24 2010-11-09 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle U-turn maneuvers
US20100023196A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle launching
US20100023296A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle u-turn maneuvers
US20100019880A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on traffic sensing
US20100023180A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on lane-change maneuvers
US20100023197A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on behavioral diagnosis
US20100019964A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition and road condition recognition
US20100023223A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition
US20100023245A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on headway distance
US20100023181A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle passing maneuvers
US20100023265A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with integrated driving style recognition
US8195341B2 (en) 2008-07-24 2012-06-05 GM Global Technology Operations LLC Adaptive vehicle control system with driving style recognition based on maneuvers at highway on/off ramps
US20100023183A1 (en) * 2008-07-24 2010-01-28 Gm Global Technology Operations, Inc. Adaptive vehicle control system with integrated maneuver-based driving style recognition
US8170740B2 (en) 2008-07-24 2012-05-01 GM Global Technology Operations LLC Adaptive vehicle control system with driving style recognition based on vehicle launching
CN101633359B (en) * 2008-07-24 2013-05-29 通用汽车环球科技运作公司 Adaptive vehicle control system with driving style recognition
US8280601B2 (en) * 2008-07-24 2012-10-02 GM Global Technology Operations LLC Adaptive vehicle control system with integrated maneuver-based driving style recognition
US8280560B2 (en) * 2008-07-24 2012-10-02 GM Global Technology Operations LLC Adaptive vehicle control system with driving style recognition based on headway distance
US8060260B2 (en) 2008-07-24 2011-11-15 GM Global Technology Operations LLC Adaptive vehicle control system with driving style recognition based on vehicle passing maneuvers
US8260515B2 (en) * 2008-07-24 2012-09-04 GM Global Technology Operations LLC Adaptive vehicle control system with driving style recognition
US20100117585A1 (en) * 2008-11-12 2010-05-13 Osa Edward Fitch Multi Mode Safety Control Module
US8237389B2 (en) 2008-11-12 2012-08-07 Irobot Corporation Multi mode safety control module
US20100152950A1 (en) * 2008-12-15 2010-06-17 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle stopping
US20100152951A1 (en) * 2008-12-15 2010-06-17 Gm Global Technology Operations, Inc. Adaptive vehicle control system with driving style recognition based on vehicle accelerating and decelerating
US20100191433A1 (en) * 2009-01-29 2010-07-29 Valeo Vision Method for monitoring the environment of an automatic vehicle
US8452506B2 (en) 2009-01-29 2013-05-28 Valeo Vision Method for monitoring the environment of an automatic vehicle
US20100209887A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operation, Inc. Vehicle stability enhancement control adaptation to driving skill based on vehicle backup maneuver
US20100209885A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill based on lane change maneuver
US20100209892A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on manual transmission shift behavior
US20100211270A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill based on highway on/off ramp maneuver
US20100209890A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill with integrated driving skill recognition
US20100209882A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on straight-line driving behavior
US20100209884A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on vehicle left and right turns
US8170725B2 (en) 2009-02-18 2012-05-01 GM Global Technology Operations LLC Vehicle stability enhancement control adaptation to driving skill based on highway on/off ramp maneuver
US20100209888A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill based on curve-handling maneuvers
US20100209891A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on stop-and-go driving behavior
US20100209881A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on behavioral diagnosis
US20100209883A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill based on passing maneuver
US20100209886A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Driving skill recognition based on u-turn performance
US20100209889A1 (en) * 2009-02-18 2010-08-19 Gm Global Technology Operations, Inc. Vehicle stability enhancement control adaptation to driving skill based on multiple types of maneuvers
US20100228419A1 (en) * 2009-03-09 2010-09-09 Gm Global Technology Operations, Inc. Method to assess risk associated with operating an autonomic vehicle control system
US8244408B2 (en) * 2009-03-09 2012-08-14 GM Global Technology Operations LLC Method to assess risk associated with operating an autonomic vehicle control system
US20110054793A1 (en) * 2009-08-25 2011-03-03 Toyota Jidosha Kabushiki Kaisha Environment prediction device
US8364390B2 (en) * 2009-08-25 2013-01-29 Toyota Jidosha Kabushiki Kaisha Environment prediction device
US8504293B2 (en) * 2009-12-10 2013-08-06 Aisin Aw Co., Ltd. Travel guiding apparatus for vehicle, travel guiding method for vehicle, and computer-readable storage medium
US20110144907A1 (en) * 2009-12-10 2011-06-16 Aisin Aw Co., Ltd. Travel guiding apparatus for vehicle, travel guiding method for vehicle, and computer-readable storage medium
US8849512B2 (en) 2010-07-29 2014-09-30 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US8924079B2 (en) 2010-07-29 2014-12-30 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US8886397B2 (en) 2010-07-29 2014-11-11 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US8972106B2 (en) 2010-07-29 2015-03-03 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US9213522B2 (en) 2010-07-29 2015-12-15 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US8914192B2 (en) 2010-07-29 2014-12-16 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US9141584B2 (en) 2010-07-29 2015-09-22 Ford Global Technologies, Llc Systems and methods for scheduling driver interface tasks based on driver workload
US20130158799A1 (en) * 2010-09-10 2013-06-20 Toyota Jidosha Kabushiki Kaisha Suspension apparatus
US9055905B2 (en) * 2011-03-18 2015-06-16 Battelle Memorial Institute Apparatuses and methods of determining if a person operating equipment is experiencing an elevated cognitive load
US20120235819A1 (en) * 2011-03-18 2012-09-20 Battelle Memorial Institute Apparatuses and methods of determining if a person operating equipment is experiencing an elevated cognitive load
US8825302B2 (en) * 2011-08-30 2014-09-02 GM Global Technology Operations LLC Motor vehicle, in particular automobile, and method for controlling a motor vehicle, in particular an automobile
US20130079991A1 (en) * 2011-08-30 2013-03-28 GM Global Technology Operations LLC Motor vehicle, in particular automobile, and method for controlling a motor vehicle, in particular an automobile
US20130180500A1 (en) * 2012-01-06 2013-07-18 Fuji Jukogyo Kabushiki Kaisha Idling stop device
US9644592B2 (en) * 2012-01-06 2017-05-09 Fuji Jukogyo Kabushiki Kaisha Idling stop device
US9187117B2 (en) * 2012-01-17 2015-11-17 Ford Global Technologies, Llc Autonomous lane control system
US20130184926A1 (en) * 2012-01-17 2013-07-18 Ford Global Technologies, Llc Autonomous lane control system
US9616925B2 (en) 2012-01-17 2017-04-11 Ford Global Technologies, Llc Autonomous lane control system
US9616924B2 (en) 2012-01-17 2017-04-11 Ford Global Technologies, Llc Autonomous lane control system
US20160121892A1 (en) * 2013-06-18 2016-05-05 Continental Automotive Gmbh Method and device for determining a driving state of an external motor vehicle
US10246092B2 (en) * 2013-06-18 2019-04-02 Continental Automotive Gmbh Method and device for determining a driving state of an external motor vehicle
US9409517B2 (en) * 2013-12-11 2016-08-09 Hyundai Motor Company Biologically controlled vehicle and method of controlling the same
US20150158425A1 (en) * 2013-12-11 2015-06-11 Hyundai Motor Company Biologically controlled vehicle and method of controlling the same
US20150260530A1 (en) * 2014-03-11 2015-09-17 Volvo Car Corporation Method and system for determining a position of a vehicle
US9644975B2 (en) * 2014-03-11 2017-05-09 Volvo Car Corporation Method and system for determining a position of a vehicle
US20180079419A1 (en) * 2015-03-18 2018-03-22 Nec Corporation Driving control device, driving control method and vehicle-to-vehicle communication system
US20180061236A1 (en) * 2015-03-18 2018-03-01 Nec Corporation Driving control device, driving control method, and vehicle-to-vehicle communication system
US10621869B2 (en) * 2015-03-18 2020-04-14 Nec Corporation Driving control device, driving control method, and vehicle-to-vehicle communication system
US10486701B2 (en) * 2015-03-18 2019-11-26 Nec Corporation Driving control device, driving control method and vehicle-to-vehicle communication system
US10102247B2 (en) * 2015-09-30 2018-10-16 International Business Machines Corporation Precision adaptive vehicle trajectory query plan optimization
US20170091272A1 (en) * 2015-09-30 2017-03-30 International Business Machines Corporation Precision Adaptive Vehicle Trajectory Query Plan Optimization
US9956956B2 (en) * 2016-01-11 2018-05-01 Denso Corporation Adaptive driving system
US20180025234A1 (en) * 2016-07-20 2018-01-25 Ford Global Technologies, Llc Rear camera lane detection
US10762358B2 (en) * 2016-07-20 2020-09-01 Ford Global Technologies, Llc Rear camera lane detection
US10124807B2 (en) 2017-01-23 2018-11-13 GM Global Technology Operations LLC Systems and methods for classifying driver skill level and handling type
US10442427B2 (en) 2017-01-23 2019-10-15 GM Global Technology Operations LLC Vehicle dynamics actuator control systems and methods
US10029697B1 (en) * 2017-01-23 2018-07-24 GM Global Technology Operations LLC Systems and methods for classifying driver skill level
US20190359223A1 (en) * 2018-05-22 2019-11-28 International Business Machines Corporation Providing a notification based on a deviation from a determined driving behavior
US11001273B2 (en) * 2018-05-22 2021-05-11 International Business Machines Corporation Providing a notification based on a deviation from a determined driving behavior

Also Published As

Publication number Publication date
EP1332910A1 (en) 2003-08-06
EP1332910B1 (en) 2009-11-04
DE60329876D1 (en) 2009-12-17
US20030187578A1 (en) 2003-10-02

Similar Documents

Publication Publication Date Title
US6873911B2 (en) Method and system for vehicle operator assistance improvement
US11142204B2 (en) Vehicle control device and vehicle control method
EP3683782B1 (en) Method for assisting a driver, driver assistance system, and vehicle including such driver assistance system
EP3699051A1 (en) Vehicle control device
US20200238980A1 (en) Vehicle control device
EP3699047A1 (en) Vehicle control apparatus
US8428843B2 (en) Method to adaptively control vehicle operation using an autonomic vehicle control system
EP3699049A1 (en) Vehicle control device
CN102834852B (en) Vehicle driving assistance device
US20200353918A1 (en) Vehicle control device
EP2042365B1 (en) Vehicle speed control system
JP3714258B2 (en) Recommended operation amount generator for vehicles
EP3581449A1 (en) Driving assist control device
US20150142207A1 (en) Method and driver assistance device for supporting lane changes or passing maneuvers of a motor vehicle
US11091035B2 (en) Automatic driving system
US20110190972A1 (en) Grid unlock
EP3715204A1 (en) Vehicle control device
CN110466522B (en) Automatic lane changing method, system, vehicle-mounted computer and storage medium
US20210179092A1 (en) Active safety assistance system for pre-adjusting speed and control method using the same
US11285957B2 (en) Traveling control apparatus, traveling control method, and non-transitory computer-readable storage medium storing program
EP3666612A1 (en) Vehicle control device
US11465627B2 (en) Traveling control apparatus, traveling control method, and non-transitory computer-readable storage medium storing program for controlling traveling of a vehicle
EP3738849A1 (en) Vehicle control device
JP3973008B2 (en) Safe driving support device, method and recording medium
US20220375349A1 (en) Method and device for lane-changing prediction of target vehicle

Legal Events

Date Code Title Description
AS Assignment

Owner name: NISSAN MOTOR CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NISHIRA, HIKARU;KAWABE, TAKETOSHI;REEL/FRAME:014158/0006;SIGNING DATES FROM 20030519 TO 20030520

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12