US20080166015A1 - Method for finding paths in video

Info

Publication number: US20080166015A1
Authority: US (United States)
Prior art keywords: target; path; behavior; model; respect
Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: US11/739,208
Inventors
Niels Haering
Zeeshan Rasheed
Li Yu
Andrew J. Chosak
Geoffrey Egnal
Alan J. Lipton
Haiying Liu
Peter L. Venetianer
Wei Hong Yin
Liang Y. Yu
Zhong Zhang
Current Assignee: Objectvideo Inc (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Objectvideo Inc
Priority claimed from US10/948,751 (published as US20060066719A1)
Application filed by Objectvideo Inc
Priority to US11/739,208 (published as US20080166015A1)
Assigned to OBJECT VIDEO, INC. (assignment of assignors interest; assignors: YU, LIANG Y; EGNAL, GEOFFREY; HAERING, NIELS; LIPTON, ALAN J; RASHEED, ZEESHAN; CHOSAK, ANDREW J; LIU, HAIYING; VENETIANER, PETER L; YIN, WEI HONG; YU, LI; ZHANG, ZHONG)
Priority to PCT/US2008/004814 (published as WO2009008939A2)
Priority to TW97114339A (published as TW200905575A)
Publication of US20080166015A1
Assigned to RJF OV, LLC (grant of security interest in patent rights; assignor: OBJECTVIDEO, INC.)
Priority to US13/354,141 (published as US8823804B2)
Assigned to OBJECTVIDEO, INC. (release of security agreement/interest; assignor: RJF OV, LLC)
Priority to US14/455,868 (published as US10291884B2)
Priority to US16/385,814 (published as US20190246073A1)

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00: Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78: Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782: Systems for determining direction or deviation from predetermined direction
    • G01S3/785: Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
    • G01S3/786: Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system, the desired condition being maintained automatically
    • G01S3/7864: T.V. type tracking systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00: Burglar, theft or intruder alarms
    • G08B13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189: Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems
    • G08B13/194: Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196: Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602: Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19613: Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B31/00: Predictive alarm systems characterised by extrapolation or other computation using updated historic data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20092: Interactive image processing based on input by user
    • G06T2207/20101: Interactive definition of point of interest, landmark or seed
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30232: Surveillance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30241: Trajectory
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a single remote source

Abstract

A system for detecting behavior of a target may include: a target detection engine, adapted to detect at least one target from one or more objects from a video surveillance system recording a scene; a path builder, adapted to create at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the at least one path model; and a target behavior analyzer, adapted to analyze and identify target behavior with respect to the at least one mature path model. The system may further include an alert generator, adapted to generate an alert based on the identified behavior.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS

  • This application is a continuation-in-part of U.S. application Ser. No. 10/948,751, entitled “METHOD FOR FINDING PATHS IN VIDEO,” filed Sep. 24, 2004, the contents of which are incorporated herein in their entirety.

    FIELD OF THE INVENTION

  • The present invention is related to video-based surveillance and monitoring. More specifically, specific embodiments of the invention relate to context-sensitive video-based surveillance and monitoring systems, with applications in market research and/or statistical/contextual target modeling.

    BACKGROUND OF THE INVENTION

  • Many businesses and other facilities, such as banks, stores, and airports, make use of security systems. Among such systems are video-based systems, in which a sensing device, like a video camera, obtains and records images within its sensory field. For example, a video camera will provide a video record of whatever is within the field-of-view of its lens. Such video images may be monitored by a human operator and/or reviewed later by a human operator. Recent progress has allowed such video images to be monitored also by an automated system, improving detection rates and saving human labor.
  • In many situations it would be desirable to specify the detection of targets using relative modifiers such as fast, slow, tall, flat, wide, or narrow, without quantifying these adjectives. Likewise, it would be desirable for state-of-the-art surveillance systems to adapt to the peculiarities of the scene; current systems are unable to do so, even when they have been monitoring the same scene for many years.

    SUMMARY OF THE INVENTION
  • Embodiments of the present invention are directed to enabling the automatic extraction and use of contextual information. Furthermore, embodiments of the present invention may provide contextual information about moving targets. This contextual information may be used to enable context-sensitive event detection, and it may improve target detection, improve tracking and classification, and decrease the false alarm rate of video surveillance systems.
  • Embodiments of the invention may include a system that builds path models from analysis of a plurality of targets observed in a surveillance video sequence. The mature path models may be used to identify whether a target's behavior is consistent with expected target behavior, to predict a target's subsequent path based on its observed behavior, and to classify a target's type.
  • Embodiments of the invention may also include building a statistical model of targets' behavior with respect to their path models, which may be used to analyze a target's interaction with scene elements and with other targets.
  • A method of video processing may include automatic extraction and use of contextual information about moving targets in a surveillance video. The contextual information may be gathered in the form of statistical models representing the expected behavior of targets. These models may be used to detect context-sensitive events when a target's behavior does not conform to the expected behavior. Furthermore, detection, tracking, and classification of targets may also be improved using the contextual information.
  • In one embodiment, a system for detecting behavior of a target may include: a target detection engine, adapted to detect at least one target from one or more objects from a video surveillance system recording a scene; a path builder, adapted to create at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the at least one path model; a target behavior analyzer, adapted to analyze and identify target behavior with respect to the at least one mature path model; and an alert generator, adapted to generate an alert based on the identified behavior.
  • In another embodiment, a computer-based method of target behavior analysis may include the steps of: processing an input video sequence to obtain target information for at least one target from one or more objects from a video surveillance system recording a scene; building at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the at least one path model; analyzing and identifying target behavior of a target with respect to the at least one mature path model; and generating an alert based on the identified target behavior.
  • In another embodiment, a computer-readable medium may contain instructions that, when executed by a processor, cause the processor to perform operations including: processing an input video sequence to obtain target information for at least one target from one or more objects from a video of a scene; building at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the at least one path model; and analyzing and identifying target behavior of a target with respect to the at least one mature path model.
  • The invention may be embodied in the form of hardware, software, or firmware, or in combinations thereof.

    DEFINITIONS

  • The following definitions are applicable throughout this disclosure, including in the above.
  • A “video” may refer to motion pictures represented in analog and/or digital form. Examples of video include: television, movies, image sequences from a video camera or other observer, and computer-generated image sequences.
  • A “frame” may refer to a particular image or other discrete unit within a video.
  • An “object” may refer to an item of interest in a video. Examples of an object include: a person, a vehicle, an animal, and a physical subject.
  • A “target” may refer to a computer's model of an object. A target may be derived via image processing, and there is a one-to-one correspondence between targets and objects.
  • A “target instance,” or “instance,” may refer to a sighting of an object in a frame.
  • An “activity” may refer to one or more actions and/or one or more composites of actions of one or more objects. Examples of an activity include: entering; exiting; stopping; moving; raising; lowering; growing; and shrinking.
  • A “location” may refer to a space where an activity may occur. A location may be, for example, scene-based or image-based. Examples of a scene-based location include: a public space; a store; a retail space; an office; a warehouse; a hotel room; a hotel lobby; a lobby of a building; a casino; a bus station; a train station; an airport; a port; a bus; a train; an airplane; and a ship. Examples of an image-based location include: a video image; a line in a video image; an area in a video image; a rectangular section of a video image; and a polygonal section of a video image.
  • An “event” may refer to one or more objects engaged in an activity. The event may be referenced with respect to a location and/or a time.
  • A “computer” may refer to one or more apparatus and/or one or more systems that are capable of accepting a structured input, processing the structured input according to prescribed rules, and producing results of the processing as output. Examples of a computer may include: a computer; a stationary and/or portable computer; a computer having a single processor, multiple processors, or multi-core processors, which may operate in parallel and/or not in parallel; a general purpose computer; a supercomputer; a mainframe; a super mini-computer; a mini-computer; a workstation; a micro-computer; a server; a client; an interactive television; a web appliance; a telecommunications device with internet access; a hybrid combination of a computer and an interactive television; a portable computer; a personal digital assistant (PDA); a portable telephone; application-specific hardware to emulate a computer and/or software, such as, for example, a digital signal processor (DSP), a field-programmable gate array (FPGA), a chip, chips, or a chip set; an optical computer; a quantum computer; a biological computer; and an apparatus that may accept data, may process data in accordance with one or more stored software programs, may generate results, and typically may include input, output, storage, arithmetic, logic, and control units.
  • A “computer-readable medium” may refer to any storage device used for storing data accessible by a computer. Examples of a computer-readable medium may include: a magnetic hard disk; a floppy disk; an optical disk, such as a CD-ROM and a DVD; a magnetic tape; and a memory chip.
  • “Software” may refer to prescribed rules to operate a computer. Examples of software may include: software; code segments; instructions; applets; pre-compiled code; compiled code; computer programs; and programmed logic.
  • A “computer system” may refer to a system having one or more computers, where each computer may include a computer-readable medium embodying software to operate the computer. Examples of a computer system may include: a distributed computer system for processing information via computer systems linked by a network; two or more computer systems connected together via a network for transmitting and/or receiving information between the computer systems; and one or more apparatuses and/or one or more systems that may accept data, may process data in accordance with one or more stored software programs, may generate results, and typically may include input, output, storage, arithmetic, logic, and control units.
  • A “network” may refer to a number of computers and associated devices that may be connected by communication facilities. A network may involve permanent connections such as cables or temporary connections such as those made through telephone or other communication links. Examples of a network may include: an internet, such as the Internet; an intranet; a local area network (LAN); a wide area network (WAN); and a combination of networks, such as an internet and an intranet.
  • A “sensing device” may refer to any apparatus for obtaining visual information. Examples include: color and monochrome cameras, video cameras, closed-circuit television (CCTV) cameras, charge-coupled device (CCD) sensors, analog and digital cameras, PC cameras, web cameras, and infra-red imaging devices. If not more specifically described, a “camera” refers to any sensing device.
  • A “blob” may refer generally to any object in an image (usually, in the context of video). Examples of blobs include moving objects (e.g., people and vehicles) and stationary objects (e.g., bags, furniture, and consumer goods on shelves in a store).
  • A “target property map” may refer to a mapping of target properties, or functions of target properties, to image locations. Target property maps are built by recording and modeling a target property or a function of one or more target properties at each image location. For instance, a width model at image location (x,y) may be obtained by recording the widths of all targets that pass through the pixel at location (x,y). A model may be used to represent this record and to provide statistical information, which may include the average width of targets at location (x,y), the standard deviation from the average at this location, etc. A collection of such models, one for each image location, is called a target property map. (A minimal illustrative sketch of such a map follows these definitions.)
  • A “path” may refer to an image region, not necessarily connected, that represents the loci of targets: a) whose trajectories start near the start point of the path; b) whose trajectories end near the end point of the path; and c) whose trajectories overlap significantly with the path.
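  • As an illustrative aside (not part of the original patent text), a target property map might be sketched as follows, assuming a per-location running mean and standard deviation (Welford's online update) and a simple observation-count maturity test; the class name, array layout, and threshold are assumptions.

```python
import numpy as np

class TargetPropertyMap:
    """One statistical model per image location (e.g., a width model)."""

    def __init__(self, height, width, maturity_threshold=100):
        self.count = np.zeros((height, width), dtype=np.int64)
        self.mean = np.zeros((height, width), dtype=np.float64)
        self.m2 = np.zeros((height, width), dtype=np.float64)  # sum of sq. diffs
        self.maturity_threshold = maturity_threshold

    def update(self, x, y, value):
        """Record one property observation (e.g., target width) at (x, y)."""
        self.count[y, x] += 1
        delta = value - self.mean[y, x]
        self.mean[y, x] += delta / self.count[y, x]
        self.m2[y, x] += delta * (value - self.mean[y, x])

    def is_mature(self, x, y):
        """A location answers queries only after enough observations."""
        return self.count[y, x] >= self.maturity_threshold

    def statistics(self, x, y):
        """Return (mean, standard deviation) at (x, y), or None if immature."""
        if not self.is_mature(x, y):
            return None  # immature models are not queried
        variance = self.m2[y, x] / self.count[y, x]
        return float(self.mean[y, x]), float(np.sqrt(variance))

# Record widths of targets passing through one pixel:
width_map = TargetPropertyMap(height=480, width=640, maturity_threshold=50)
for w in (14.0, 15.5, 13.2):
    width_map.update(x=320, y=240, value=w)
print(width_map.statistics(320, 240))  # None until 50 observations arrive
```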
    BRIEF DESCRIPTION OF THE DRAWINGS

  • Specific embodiments of the invention will now be described in further detail in conjunction with the attached drawings, in which:
  • FIG. 1 depicts a flowchart of a content analysis system that may include embodiments of the invention;
  • FIG. 2 depicts a flowchart describing training of paths, according to an embodiment of the invention;
  • FIG. 3 depicts a flowchart describing the training of target property maps, according to an embodiment of the invention;
  • FIG. 4 depicts a flowchart describing the use of target property maps, according to an embodiment of the invention;
  • FIG. 5 depicts a block diagram of a system that may be used in implementing some embodiments of the invention; and
  • FIG. 6 depicts a block diagram of a system according to embodiments of the present invention.
    DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS OF THE INVENTION

  • Embodiments of the invention may comprise part of a general surveillance system. A potential embodiment is illustrated in FIG. 1.
  • Target property information is extracted from the video sequence by detection (11), tracking (12), and classification (13) modules. These modules may utilize known or yet-to-be-developed techniques. The target property information may be extracted from live video or from previously recorded video.
  • The resulting information is passed to an event detection module (14) that matches observed target properties against properties deemed threatening by a user. For example, the user may be able to specify such threatening properties by using a graphical user interface (GUI) (15) or another input/output (I/O) interface with the system.
  • The path builder (16) monitors and models the data extracted by the upstream components (11), (12), and (13), and it may further provide information to those components. Data models may be based on target properties, which may include, but are not limited to, the target's location, width, height, size, speed, direction of motion, time of sighting, and age. This information may be further filtered, interpolated, and/or extrapolated to achieve spatially and temporally smooth and continuous representations.
    LEARNING PATHS BY OBSERVATION

  • According to some embodiments of the invention, paths may need to be learned by observation before the paths can be used. To signal the validity of a path model, the path model is labeled “mature” only after a statistically meaningful amount of data has been observed. Queries to path models that have not yet matured are not answered. This strategy leaves the system in a default mode until at least some of the models have matured. When a path model has matured, it may provide information that may be incorporated into the decision-making processes of connected algorithmic components. The availability of this additional information may help the algorithmic components to make better decisions.
  • Not all targets or their instances are necessarily used for training. The upstream components (11), (12), and (13) that gather target properties may fail, and it is important that the models are shielded from faulty data. One technique for dealing with this problem is to devise algorithms that carefully analyze the quality of the target properties. In other embodiments of the invention, a simple algorithm may be used that rejects targets and target instances if there is any doubt about their quality. This latter approach likely extends the time until target property maps achieve maturity. However, the prolonged time that many video surveillance systems spend viewing a scene makes this option attractive, in that the length of time to maturity is not likely to be problematic.
  • An overview of an exemplary method for learning path models according to an embodiment of the invention is shown in FIG. 2. The major components may include initialization of the path model (201), training of size maps (202), training of entry/exit maps (203), and training of path models (204).
  • Size maps may be generated in Block 202 and may be used by the entry/exit map training algorithm (203) to associate trajectories with entry/exit regions. Entry/exit regions that are close compared to the normal size of the targets that pass through them are merged. Otherwise, they are treated as separate entry/exit regions.
  • Entry/exit maps, which may be generated in Block 203, may in turn form the basis for path models. When entry/exit regions have matured, they can be used to measure target movement statistics between them. These statistics may be used to form the basis for path models in Block 204.
  • The size and entry/exit maps are types of target property maps, and they may be trained (built) using a target property map training algorithm, which is described in co-pending, commonly-assigned U.S. Publication No. 2006-0072010A1 (U.S. patent application Ser. No. 10/948,785), filed on Sep. 24, 2004, entitled “Target Property Maps for Surveillance Systems,” and incorporated herein by reference. The target property map training algorithm may be used several times in the process shown in FIG. 2. To simplify the description of this process, the target property map training algorithm is explained here in detail and then referenced later in the algorithm detailing the extraction of path models.
  • FIG. 3 depicts a flowchart of an algorithm for building target property maps, according to an embodiment of the invention.
  • The algorithm may begin by appropriately initializing an array corresponding to the size of the target property map (in general, this may correspond to an image size) in Block 301.
  • In Block 302, a next target may be considered. This portion of the process may begin with initialization of a buffer, which may be a ring buffer, of filtered target instances, in Block 303.
  • The procedure may then proceed to Block 304, where a next instance (which may be stored in the buffer) of the target under consideration may be addressed.
  • In Block 305, it is determined whether the target is finished; this is the case if all of the target's instances have been considered.
  • If the target is finished, the process may proceed to Block 309 (to be discussed below). Otherwise, the process may then proceed to Block 306, to determine if the target is bad; this is the case if this latest instance reveals a severe failure of the target's handling, labeling, or identification by the upstream processes. If this is the case, the process may loop back to Block 302, to consider the next target. Otherwise, the process may proceed with Block 307, to determine if the particular instance under consideration is a bad instance; this is the case if the latest instance reveals a limited inconsistency in the target's handling, labeling, or identification by the upstream process. If a bad instance was found, that instance is ignored and the process proceeds to Block 304, to consider the next target instance. Otherwise, the process may proceed with Block 308 and may update the buffer of filtered target instances, before returning to Block 304, to consider the next target instance.
  • In Block 309, it is determined which, if any, target instances may be considered to be “mature.” According to an embodiment of the invention, if the buffer is found to be full, the oldest target instance in the buffer may be marked “mature.” If all instances of the target have been considered (i.e., if the target is finished), then all target instances in the buffer may be marked “mature.”
  • In Block 310, target property map models may be updated at the map locations corresponding to the mature target instances.
  • The process may then determine, in Block 311, whether or not each model is mature. In particular, if the number of target instances for a given location is larger than a preset number of instances required for maturity, the map location may be marked “mature.” As discussed above, only mature locations may be used in addressing inquiries.
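  • As a concrete illustration of this loop (Blocks 302-310), the following is a hedged sketch. It assumes each target is simply a sequence of (x, y, value) instances, and that is_bad_target and is_bad_instance stand in for the upstream quality checks; the buffer capacity and all names are hypothetical.

```python
from collections import deque

RING_BUFFER_SIZE = 16  # assumed capacity; the patent does not fix a size

def train_property_map(targets, prop_map, is_bad_target, is_bad_instance):
    for target in targets:                       # Block 302: consider next target
        buffer = deque(maxlen=RING_BUFFER_SIZE)  # Block 303: init ring buffer
        for instance in target:                  # Block 304: next instance
            if is_bad_target(instance):          # Block 306: severe failure ->
                buffer.clear()                   # discard and skip this target
                break
            if is_bad_instance(instance):        # Block 307: limited
                continue                         # inconsistency -> ignore it
            if len(buffer) == buffer.maxlen:     # Block 309: buffer full ->
                x, y, value = buffer.popleft()   # oldest instance is "mature"
                prop_map.update(x, y, value)     # Block 310: update the model
            buffer.append(instance)              # Block 308: buffer the instance
        else:
            for x, y, value in buffer:           # target finished: remaining
                prop_map.update(x, y, value)     # buffered instances mature

# Usage with the TargetPropertyMap sketched earlier:
# train_property_map([[(320, 240, 14.0), (321, 240, 14.2)]],
#                    width_map, lambda i: False, lambda i: False)
```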
  • A path model may be initialized at the outset of the process (Block 201). This may be done, for example, by initializing an array, which may be the size of an image (e.g., of a video frame).
  • The process of FIG. 2 may then proceed to Block 202, training of size maps.
  • The process of Block 202 uses the target property map training algorithm of FIG. 3 to train one or more size maps.
  • The generic target property training algorithm of FIG. 3 may be changed to perform this particular type of training by modifying Blocks 301, 308, and 310. All three of these blocks, in Block 202 of FIG. 2, operate on size map instances of the generic target property map objects.
  • Component 308 extracts size information from the target instance stream that enters the path builder (component 16 in FIG. 1). Separate size maps may be maintained for each target type and for several time ranges.
  • The process of FIG. 2 may then train entry/exit region maps (Block 203).
  • Again, the algorithm of FIG. 3 may be used to perform the map training. To do so, the instantiations of the initialization component (301), the extraction of target origin and destination information (308), and the target property model update component (310) may all be changed to suit this particular type of map training.
  • Component 301 may operate on entry/exit map instances of the generic target property map objects.
  • Component 308 may extract target scene entry and exit information from the target instance stream that enters the path builder (component 16 in FIG. 1).
  • Component 309 may determine a set of entry and exit regions that represent a statistically significant number of trajectories.
  • Component 310 may update the entry/exit region model to reflect changes to the shapes and/or target coverage of the entry/exit regions. This process may use information provided by a size map trained in Block 202 to decide whether adjacent entry or exit regions need to be merged. Entry regions that are close to each other may be merged into a single region if the targets that use them are large compared to the distance between them. Otherwise, they may remain separate regions. The same approach may be used for exit regions. This enables maintaining separate paths even when the targets on them appear to be close to each other at a great distance from the camera.
  • The projective transformation that governs image formation is the cause of the apparent close proximity of distant objects. One may therefore use the ratio of target size to entry/exit region distance (target size / distance between regions), for example, as it is practically invariant under perspective transformation and thus simplifies the region maintenance algorithm.
  • Separate size maps may be maintained for each target type and for several time ranges.
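  • The merge test above reduces to a single comparison once the statistics exist. The following is a rough sketch under assumed representations: regions are reduced to their image-plane centers, the typical target size comes from the trained size map, and the threshold value is an illustrative assumption.

```python
import math

def should_merge(center_a, center_b, typical_target_size, ratio_threshold=2.0):
    """Merge two entry/exit regions when targets are large relative to the gap."""
    distance = math.dist(center_a, center_b)
    if distance == 0.0:
        return True  # coincident regions are always merged
    # The size/distance ratio is roughly perspective-invariant, so one
    # threshold works both near to and far from the camera.
    return typical_target_size / distance >= ratio_threshold

# Two candidate entry regions 10 px apart, with typical target size 30 px:
print(should_merge((100.0, 200.0), (108.0, 206.0), typical_target_size=30.0))
```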
  • Path models may then be trained (Block 204). This may begin with initialization of a path data structure. The process may then use the information contained in the entry and exit region map to build a table with a row for each entry region and a column for every exit region in the entry and exit region map.
  • Each trajectory may be associated with an entry region from which it originates and an exit region where it terminates.
  • The set of trajectories associated with an entry/exit region pair is used to define the locus of the path. A path may be determined by taking the intersection of all trajectories in the set, by taking the union of those trajectories, or by defining the path to correspond to some minimum percentage of trajectories in the set.
  • The path data structure combines the information gathered about each path: the start and end points of the path, the number or fraction of trajectories it represents, and two indices into the entry/exit region map that indicate which entry and exit regions in that data structure it corresponds to.
  • Separate path models may be maintained for each type of target and for several time ranges.
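  • The following sketch illustrates one way such a table and path loci might be represented, under the intersection/union/minimum-percentage options just described; the data structures and names are assumptions, not the patent's.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class PathModel:
    entry_region: int              # index of the entry region in the region map
    exit_region: int               # index of the exit region
    trajectory_count: int = 0
    coverage: dict = field(default_factory=lambda: defaultdict(int))

    def add_trajectory(self, pixels):
        """Associate one trajectory (a sequence of (x, y) pixels) with this path."""
        self.trajectory_count += 1
        for p in set(pixels):
            self.coverage[p] += 1

    def locus(self, min_fraction=0.5):
        """Pixels covered by at least min_fraction of member trajectories.
        min_fraction=1.0 yields the intersection; a value near 0, the union."""
        cutoff = min_fraction * self.trajectory_count
        return {p for p, n in self.coverage.items() if n >= cutoff}

# One table cell per (entry region, exit region) pair, as described above.
paths = {}

def add_trajectory(entry, exit_, pixels):
    model = paths.setdefault((entry, exit_), PathModel(entry, exit_))
    model.add_trajectory(pixels)

add_trajectory(0, 1, [(1, 1), (2, 1), (3, 1)])
add_trajectory(0, 1, [(1, 1), (2, 2), (3, 1)])
print(paths[(0, 1)].locus(min_fraction=1.0))  # intersection: {(1, 1), (3, 1)}
```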
  • Path models may be obtained and maintained using information from an existing surveillance system. However, to make path models useful, the path models must also be able to provide information to the system. Path models may allow prediction of a target's destination, given the target's location and its observed trajectory. For example, a target path in a path model for a hardware store may describe that targets leaving the power-tools department tend to stop at the department check-out. In another example, a target path in a path model may describe that targets traveling the path tend to reach the other end of the path within a specific time frame, e.g., two minutes.
  • Path models may also allow classification of a target's path or of the target, based on the path type. For example, targets that are vehicles, pedestrians, trains or airplanes tend to travel, respectively, on roads, sidewalks, railroad tracks or runways.
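  • A speculative sketch of destination prediction follows: each path is scored by the fraction of the target's observed trajectory that falls on the path's locus, and the exit region of the best match is predicted. It reuses the hypothetical PathModel table from the previous sketch; the overlap threshold is an assumption.

```python
def predict_exit_region(observed_pixels, paths, min_overlap=0.6):
    """Predict the exit region of the path best matching the trajectory so far."""
    observed = set(observed_pixels)
    if not observed:
        return None
    best_exit, best_score = None, 0.0
    for (entry, exit_), model in paths.items():
        locus = model.locus(min_fraction=0.5)
        score = len(observed & locus) / len(observed)  # trajectory overlap
        if score > best_score:
            best_exit, best_score = exit_, score
    return best_exit if best_score >= min_overlap else None

print(predict_exit_region([(1, 1), (2, 1)], paths))  # -> 1 for the data above
```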
  • FIG. 6 depicts a block diagram of a system for creating and using path models, according to embodiments of the present invention.
  • The system may include a target detection engine 602.
  • Target detection engine 602 may detect one or more targets 604 from one or more objects from a video surveillance system recording a scene (not shown).
  • Targets 604 may be provided to path builder 16 for the creation of a mature path model 606 , as described above.
  • Mature path model 606 may include a model of expected target behavior with respect to the mature path model.
  • A target behavior analyzer 608 may analyze and identify behavior of later-detected targets 604 with respect to the mature path model 606.
  • An alert generator 610 may receive the results of the analysis and may generate an alert 612 when a specified behavior is detected and identified. Examples of the use of the system are illustrated below.
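  • Purely as an illustration, the FIG. 6 data flow might be wired together as below; every class is a placeholder standing in for the corresponding component (602, 16, 608, 610), and the decision logic is deliberately stubbed out.

```python
class TargetDetectionEngine:                  # stands in for engine 602
    def detect(self, frame):
        return []                             # placeholder: targets per frame

class PathBuilder:                            # stands in for path builder 16
    def __init__(self):
        self.mature_paths = {}                # e.g., the PathModel table above
    def observe(self, target):
        pass                                  # placeholder: keep training models

class TargetBehaviorAnalyzer:                 # stands in for analyzer 608
    def __init__(self, mature_paths):
        self.mature_paths = mature_paths
    def analyze(self, target):
        # Placeholder decision: a real analyzer would compare the target's
        # trajectory against the mature path models (see earlier sketches).
        return {"target": target, "off_path": False}

class AlertGenerator:                         # stands in for generator 610
    def emit(self, finding):
        if finding["off_path"]:
            print("ALERT:", finding)          # alert 612

def run_pipeline(frames):
    engine, builder = TargetDetectionEngine(), PathBuilder()
    analyzer = TargetBehaviorAnalyzer(builder.mature_paths)
    alerts = AlertGenerator()
    for frame in frames:
        for target in engine.detect(frame):   # targets 604
            builder.observe(target)           # models mature over time
            alerts.emit(analyzer.analyze(target))

run_pipeline(frames=[])                       # no-op demo on an empty video
```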
  • Path models may also allow analysis of target properties.
  • Market research and/or statistical/contextual target modeling may benefit from the following information determined from path models.
  • Information about target dwell times and locations along learned paths may help to determine, e.g., where shoppers spend their time while on-site, on which aisle and/or in front of which products, which products customers compare, and which products they select with or without comparison to other products.
  • Information about relative dwell locations along learned paths may help to determine, e.g., whether customers that were interested in product A also look at product B and with what probability C and dwell time D.
  • Information about target properties associated with paths, dwell locations and times may help to associate, for example, a target type with a target size, or a target's clothing or uniform.
  • Information about interactions of targets on paths with other targets of the same or different type may help detection, for example, of when vehicles stop next to each other while traveling to and from a sensitive site.
  • Information about interactions of targets on a path with scene elements may help to determine, for example, how many (distinct) customers make use of an aisle-barcode reader, or how many vehicles actually stop at a four-way-stop intersection.
  • Information about temporal patterns of target properties on a path may help with determining normal building access patterns after-hours for security applications.
  • Information about deviations from normal target properties along a path, where the norm may depend on time of day/week/year, location, target type, and/or traffic density (for instance, normal access-pattern information), may help to determine suspicious building access.
  • Gathering statistical data of target behavior on a path may provide a range of target properties on the path, for example, normal speed, size, width, and/or height of moving objects.
  • For instance, law enforcement may use this information to determine the normal speed, size, width, and/or height of objects moving on, e.g., footpaths, parking lots, roads, water channels, canals, lakes, ports, and/or airport taxiways/runways.
  • The statistical information can be used further to determine deviations from normal object properties in subsequently observed targets.
  • Gathering statistical data of target behavior on a path may provide a range of, for example, normal driving regions, directions, and object entry and exit probabilities.
  • For instance, traffic planning, reconnaissance, or surveillance applications may use this information to determine traffic statistics that can highlight, e.g., choke points, popular access points, underutilized access points, and/or traffic patterns.
  • Gathering statistical data of target behavior on a path may provide higher order statistics of objects. For instance, traffic planners may use this information to determine the expected deviation from normal object behavior. This information can be used further to determine deviations from the expected deviation from normal object behavior.
  • Path models may also allow detection of unusual target properties and/or behavior, such as, for example, when a target deviates from its path. For instance, information about a target's deviation from a path may help to detect targets that travel in parts of the scene not associated with any known path, or to detect targets that enter the scene outside known entry points/regions and/or known exit points/regions. In another example, a target leaving a path at a point other than the exit point/region expected for targets on the path may be detected. This information may help to detect, for example, vehicles that fail to travel between designated checkpoints.
  • Deviation from a path may also be determined by detection of a failure to arrive on time or at the desired location. For instance, security and surveillance applications may use this information to determine whether a person or vehicle passes swiftly and directly between checkpoints. In production process monitoring, this information may be used to determine whether a manufacturing process is functioning as intended.
  • Similarly, a target joining a path at a point other than the entry point/region expected for targets on the path may be detected. This information may help to detect, for example, customers leaving the premises of a shop without passing a checkout or service desk.
  • Information about a target switching paths may help to detect, for example, targets that travel first on an employee or customer or visitor path, and then switch to a path associated with security guards.
  • Information about a target crossing a path may help to detect, for example, vehicles in a parking lot (each starting from mutually disjoint world locations), that are expected to merge into the exit lanes, rather than crossing them.
  • Information about a target traveling on an infrequently used path may help to detect, for example, access to a dangerous area at a refinery.
  • Information about a target traveling unusually slowly, unusually fast or stopping where targets do not usually stop may help to detect, for example, vehicles that stop between border checkpoints of neighboring countries. In traffic monitoring applications, this information may help to detect vehicles traveling above the speed limit.
  • Information about a target traveling on a path may help to detect, for example, unauthorized access to a closed facility at nighttime, even if the same facility is accessible by day. This information may also allow the comparison of current target behavior with access patterns normal for a particular time of day to detect potential trespassers.
  • Information about a target traveling on a path, but in an unusual direction, may help to detect, for example, “ghost drivers” traveling in the wrong direction along a highway. In another application, this information may be used to determine that a target's heading is going to bring it too close to a sensitive site.
  • Information about a target traveling on a path that is not normally associated with targets of the target's type may help to detect, for example, vehicles on a sidewalk or an urban pedestrian area.
  • Information about unusual properties of a target on a certain path may help to detect targets whose width, height, size, area, target perimeter length, color (hue, saturation, luminance), texture, compactness, shape, and/or time of appearance is unexpected.
  • Information about two or more events may be combined to detect unusual co-occurrences.
  • One or more detected unusual target behaviors may be combined with each other, or with target behaviors detected in the context of a statistical model, to detect unusual co-occurrences.
  • For example, surveillance applications may combine information about a detected site access with the detection of an unmanned guard post to detect an unauthorized access.
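  • Several of the cues listed above (off-path travel, wrong-way motion, unusual speed) reduce to simple tests once the learned statistics exist. The following is a rough sketch under assumed representations; the thresholds and the locus/heading encodings are illustrative, not from the patent.

```python
import math

def off_path(position, path_locus):
    """True when the instance's pixel lies outside the path's locus."""
    return position not in path_locus

def wrong_way(heading_deg, normal_heading_deg, tolerance_deg=90.0):
    """True when motion opposes the direction normal for the path."""
    diff = abs((heading_deg - normal_heading_deg + 180.0) % 360.0 - 180.0)
    return diff > tolerance_deg

def unusual_speed(speed, mean_speed, std_speed, z_threshold=3.0):
    """True when speed deviates strongly from the learned statistics."""
    if std_speed == 0:
        return speed != mean_speed
    return abs(speed - mean_speed) / std_speed > z_threshold

print(off_path((5, 5), {(1, 1), (2, 1)}))  # True: not on the locus
print(wrong_way(10.0, 185.0))              # True: nearly opposite headings
print(unusual_speed(95.0, 50.0, 10.0))     # True: 4.5 standard deviations
```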
  • FIG. 4 depicts a flowchart of an algorithm for querying path models (e.g., by one or more components of a surveillance system) to obtain contextual information, according to an embodiment of the invention.
  • The algorithm of FIG. 4 may begin by considering a next target, in Block 41. It may then proceed to Block 42, to determine if the requested path model has been defined. If it has not, the process may loop back to Block 41, to consider a next target.
  • Otherwise, the process may then consider a next target instance, in Block 43. If the instance indicates that the target is finished, in Block 44, the process may loop back to Block 41 to consider a next target. A target is considered finished if all of its instances have been considered. If the target is not finished, the process may proceed to Block 45 and may determine if the target property map model at the location of the target instance under consideration has matured. If it has not matured, the process may loop back to Block 43 to consider a next target instance. Otherwise, the process may proceed to Block 46, where the target context may be updated. The context of a target may be updated by recording the degree of its conformance with the target property map maintained by this algorithm.
  • From Block 46, the process may proceed to Block 47, to determine normalcy properties of the target based on its target context. The context of each target is maintained to determine whether it acted in a manner that is inconsistent with the behavior or observations predicted by the target property map model. Finally, the procedure may return to Block 41, to consider a next target.
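  • This query loop might look like the sketch below, reusing the earlier hypothetical TargetPropertyMap; scoring conformance as a z-score is an assumption, since the patent leaves the conformance measure open.

```python
def query_target(instances, prop_map):
    """Return a normalcy score for one target, or None without evidence."""
    conformance = []                       # the target's accumulated context
    for x, y, value in instances:          # Block 43: next target instance
        stats = prop_map.statistics(x, y)  # Block 45: mature locations only
        if stats is None:
            continue                       # immature model: skip instance
        mean, std = stats
        z = 0.0 if std == 0 else abs(value - mean) / std
        conformance.append(z)              # Block 46: update target context
    if not conformance:
        return None                        # no mature evidence: no judgment
    return sum(conformance) / len(conformance)  # Block 47: normalcy score

# Usage with the TargetPropertyMap sketched earlier:
# score = query_target([(320, 240, 14.1)], width_map)
```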
  • The computer system of FIG. 5 may include at least one processor 52, with associated system memory 51, which may store, for example, operating system software and the like.
  • The system may further include additional memory 53, which may, for example, include software instructions to perform various applications.
  • The system may also include one or more input/output (I/O) devices 54, for example (but not limited to), a keyboard, mouse, trackball, printer, display, network connection, etc.
  • The present invention may be embodied as software instructions that may be stored in system memory 51 or in additional memory 53. Such software instructions may also be stored in removable or remote media (for example, but not limited to, compact disks, floppy disks, etc.), which may be read through an I/O device 54 (for example, but not limited to, a floppy disk drive). Furthermore, the software instructions may also be transmitted to the computer system via an I/O device 54, for example, a network connection; in such a case, a signal containing the software instructions may be considered to be a machine-readable medium.

Abstract

A system for detecting behavior of a target may include: a target detection engine, adapted to detect at least one target from one or more objects from a video surveillance system recording a scene; a path builder, adapted to create at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the at least one path model; and a target behavior analyzer, adapted to analyze and identify target behavior with respect to the at least one mature path model. The system may further include an alert generator, adapted to generate an alert based on the identified behavior.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation in part of U.S. application Ser. No. 10/948,751, entitled “METHOD FOR FINDING PATHS IN VIDEO,” filed Sep. 24, 2004, the contents of which are incorporated herein in their entirety.
  • FIELD OF THE INVENTION
  • The present invention is related to video-based surveillance and monitoring. More specifically, specific embodiments of the invention relate to context-sensitive video-based surveillance and monitoring systems, with applications in market research and/or statistical/contextual target modeling.
  • BACKGROUND OF THE INVENTION
  • Many businesses and other facilities, such as banks, stores, airports, etc., make use of security systems. Among such systems are video-based systems, in which a sensing device, like a video camera, obtains and records images within its sensory field.
  • For example, a video camera will provide a video record of whatever is within the field-of-view of its lens. Such video images may be monitored by a human operator and/or reviewed later by a human operator. Recent progress has allowed such video images to be monitored also by an automated system, improving detection rates and saving human labor.
  • In many situations it would be desirable to specify the detection of targets using relative modifiers such as fast, slow, tall, flat, wide, narrow, etc., without quantifying these adjectives. Likewise it would be desirable for state-of-the-art surveillance systems to adapt to the peculiarities of the scene, as current systems are unable to do so, even if the same systems have been monitoring the same scene for many years.
  • SUMMARY OF THE INVENTION
  • Embodiments of the present invention are directed to enabling the automatic extraction and use of contextual information. Furthermore, embodiments of the present invention may provide contextual information about moving targets. This contextual information may be used to enable context-sensitive event detection, and it may improve target detection, improve tracking and classification, and decrease the false alarm rate of video surveillance systems.
  • The embodiments of the invention may include a system that builds path models from analysis of a plurality of targets observed from a surveillance video sequence. The mature path models may be used to identify whether a target's behavior is consistent with respect to the expected target behavior, to predict a target's subsequent path based on the target's observed behavior and to classify a target's type. The embodiments of the invention may also include building a statistical model of targets' behavior with respect to their path models, which may be used to analyze a target's interaction with scene elements and with other targets.
  • A method of video processing may include automatic extraction and use of contextual information about moving targets in a surveillance video. The contextual information may be gathered in the form of statistical models representing the expected behavior of targets. These models may be used to detect context sensitive events when a target's behavior does not conform to the expected behavior. Furthermore, detection, tracking and classification of targets may also be improved using the contextual information.
  • In one embodiment, a system for detecting behavior of a target, may include: a target detection engine, adapted to detect at least one target from one or more objects from a video surveillance system recording a scene; a path builder, adapted to create at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the at least one path model; a target behavior analyzer, adapted to analyze and identify target behavior with respect to the at least one mature path model; and an alert generator, adapted to generate an alert based on the identified behavior.
  • In another embodiment, a computer-based method of target behavior analysis may include the steps of: processing an input video sequence to obtain target information for at least one target from one or more objects from a video surveillance system recording a scene; building at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the at least one path model; analyzing and identifying target behavior of a target with respect to the at least one mature path model; and generating an alert based on the identified target behavior.
  • In another embodiment, a computer-readable medium may contain instructions that, when executed by a processor, cause the processor to perform operations including:
  • processing an input video sequence to obtain target information for at least one target from one or more objects from a video of a scene; building at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein said at least one mature path model includes a model of expected target behavior with respect to said at least one path model; and analyzing and identifying target behavior of a target with respect to said at least one mature path model.
  • The invention may be embodied in the form of hardware, software, or firmware, or in the form of combinations thereof.
  • DEFINITIONS
  • The following definitions are applicable throughout this disclosure, including in the above.
  • A “video” may refer to motion pictures represented in analog and/or digital form.
  • Examples of video include: television, movies, image sequences from a video camera or other observer, and computer-generated image sequences.
  • A “frame” may refer to a particular image or other discrete unit within a video.
  • An “object” may refer to an item of interest in a video. Examples of an object include: a person, a vehicle, an animal, and a physical subject.
  • A “target” may refer to a computer's model of an object. A target may be derived via image processing, and there is a one-to-one correspondence between targets and objects.
  • A “target instance,” or “instance,” may refer to a sighting of an object in a frame.
  • An “activity” may refer to one or more actions and/or one or more composites of actions of one or more objects. Examples of an activity include: entering; exiting;
  • stopping; moving; raising; lowering; growing; and shrinking.
  • A “location” may refer to a space where an activity may occur. A location may be, for example, scene-based or image-based. Examples of a scene-based location include: a public space; a store; a retail space; an office; a warehouse; a hotel room; a hotel lobby; a lobby of a building; a casino; a bus station; a train station; an airport; a port; a bus; a train; an airplane; and a ship. Examples of an image-based location include: a video image; a line in a video image; an area in a video image; a rectangular section of a video image; and a polygonal section of a video image.
  • An “event” may refer to one or more objects engaged in an activity. The event may be referenced with respect to a location and/or a time.
  • A “computer” may refer to one or more apparatus and/or one or more systems that are capable of accepting a structured input, processing the structured input according to prescribed rules, and producing results of the processing as output. Examples of a computer may include: a computer; a stationary and/or portable computer; a computer having a single processor, multiple processors, or multi-core processors, which may operate in parallel and/or not in parallel; a general purpose computer; a supercomputer; a mainframe; a super mini-computer; a mini-computer; a workstation; a micro-computer; a server; a client; an interactive television; a web appliance; a telecommunications device with internet access; a hybrid combination of a computer and an interactive television; a portable computer; a personal digital assistant (PDA); a portable telephone; application-specific hardware to emulate a computer and/or software, such as, for example, a digital signal processor (DSP), a field-programmable gate array (FPGA), a chip, chips, or a chip set; an optical computer; a quantum computer; a biological computer; and an apparatus that may accept data, may process data in accordance with one or more stored software programs, may generate results, and typically may include input, output, storage, arithmetic, logic, and control units.
  • A “computer-readable medium” may refer to any storage device used for storing data accessible by a computer. Examples of a computer-readable medium may include: a magnetic hard disk; a floppy disk; an optical disk, such as a CD-ROM and a DVD; a magnetic tape; and a memory chip.
  • “Software” may refer to prescribed rules to operate a computer. Examples of software may include: software; code segments; instructions; applets; pre-compiled code; compiled code; computer programs; and programmed logic.
  • A “computer system” may refer to a system having one or more computers, where each computer may include a computer-readable medium embodying software to operate the computer. Examples of a computer system may include: a distributed computer system for processing information via computer systems linked by a network; two or more computer systems connected together via a network for transmitting and/or receiving information between the computer systems; and one or more apparatuses and/or one or more systems that may accept data, may process data in accordance with one or more stored software programs, may generate results, and typically may include input, output, storage, arithmetic, logic, and control units.
  • A “network” may refer to a number of computers and associated devices that may be connected by communication facilities. A network may involve permanent connections such as cables or temporary connections such as those made through telephone or other communication links. Examples of a network may include: an internet, such as the Internet; an intranet; a local area network (LAN); a wide area network (WAN); and a combination of networks, such as an internet and an intranet.
  • A “sensing device” may refer to any apparatus for obtaining visual information. Examples include: color and monochrome cameras, video cameras, closed-circuit television (CCTV) cameras, charge-coupled device (CCD) sensors, analog and digital cameras, PC cameras, web cameras, and infra-red imaging devices. If not more specifically described, a “camera” refers to any sensing device.
  • A “blob” may refer generally to any object in an image (usually, in the context of video). Examples of blobs include moving objects (e.g., people and vehicles) and stationary objects (e.g., bags, furniture and consumer goods on shelves in a store).
  • A “target property map” may refer to a mapping of target properties or functions of target properties to image locations. Target property maps are built by recording and modeling a target property or function of one or more target properties at each image location. For instance, a width model at image location (x,y) may be obtained by recording the widths of all targets that pass through the pixel at location (x,y). A model may be used to represent this record and to provide statistical information, which may include the average width of targets at location (x,y), the standard deviation from the average at this location, etc. Collections of such models, one for each image location, are called a target property map.
  • A “path” may refer to an image region, not necessarily connected, that represents the loci of targets: a) whose trajectories start near the start point of the path; b) whose trajectories end near the end point of the path; and c) whose trajectories overlap significantly with the path.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Specific embodiments of the invention will now be described in further detail in conjunction with the attached drawings, in which:
  • FIG. 1 depicts a flowchart of a content analysis system that may include embodiments of the invention;
  • FIG. 2 depicts a flowchart describing training of paths, according to an embodiment of the invention;
  • FIG. 3 depicts a flowchart describing the training of target property maps according to an embodiment of the invention;
  • FIG. 4 depicts a flowchart describing the use of target property maps according to an embodiment of the invention;
  • FIG. 5 depicts a block diagram of a system that may be used in implementing some embodiments of the invention; and
  • FIG. 6 depicts a block diagram of a system according to embodiments of the present invention.
  • DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS OF THE INVENTION
  • Embodiments of the invention may comprise part of a general surveillance system. A potential embodiment is illustrated in FIG. 1. Target property information is extracted from the video sequence by detection (11), tracking (12) and classification (13) modules. These modules may utilize known or as yet to be developed techniques. The target property information may be extracted from live video or from previously recorded video. The resulting information is passed to an event detection module (14) that matches observed target properties against properties deemed threatening by a user. For example, the user may be able to specify such threatening properties by using a graphical user interface (GUI) (15) or other input/output (I/O) interface with the system. The path builder (16) monitors and models the data extracted by the up-stream components (11), (12), and (13), and it may further provide information to those components. Data models may be based on target properties, which may include, but which are not limited to, the target's location, width, height, size, speed, direction-of-motion, time of sighting, age, etc. This information may be further filtered, interpolated and/or extrapolated to achieve spatially and temporally smooth and continuous representations.
  • LEARNING PATHS BY OBSERVATION
  • According to some embodiments of the invention, paths may need to be learned by observation before the paths can be used. To signal the validity of a path model, the path model is labeled “mature” only after a statistically meaningful amount of data has been observed. Queries to path models that have not yet matured are not answered. This strategy leaves the system in a default mode until at least some of the models have matured. When a path model has matured, it may provide information that may be incorporated into the decision making processes of connected algorithmic components.
  • The availability of this additional information may help the algorithmic components to make better decisions.
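The maturity rule described above might be wrapped as a small gate around any learned model. This is a minimal sketch; the observation threshold is an assumed tuning parameter, since the disclosure requires only “a statistically meaningful amount of data”.

```python
class MaturityGate:
    """Answers queries only after enough observations have been recorded."""

    def __init__(self, min_observations: int = 500):  # assumed threshold
        self.min_observations = min_observations
        self.observations = 0

    def observe(self) -> None:
        self.observations += 1

    @property
    def mature(self) -> bool:
        return self.observations >= self.min_observations

    def answer(self, compute_answer):
        # Queries to immature models go unanswered; the caller stays in
        # its default mode until the model has matured.
        return compute_answer() if self.mature else None
```

A component wrapping its model in such a gate simply falls back to its default behavior whenever answer() returns None.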
  • Not all targets or their instances are necessarily used for training. The upstream components (11), (12), and (13) that gather target properties may fail, and it is important that the models be shielded from faulty data. One technique for dealing with this problem is to devise algorithms that carefully analyze the quality of the target properties.
  • In other embodiments of the invention, a simple algorithm may be used that rejects targets and target instances whenever there is doubt about their quality. This latter approach likely extends the time until target property maps achieve maturity. However, because many video surveillance systems view the same scene for prolonged periods, this option remains attractive: the added time to maturity is unlikely to be problematic.
  • An overview of an exemplary method for learning path models according to an embodiment of the invention is shown in FIG. 2. The major components may include initialization of the path model (201), training of size maps (202), training of entry/exit maps (203), and training of path models (204).
  • Size maps may be generated in Block 202 and may be used by the entry/exit map training algorithm (203) to associate trajectories with entry/exit regions. Entry/exit regions that are close together relative to the normal size of the targets passing through them are merged; otherwise, they are treated as separate entry/exit regions.
  • Entry/exit maps, which may be generated in Block 203, may in turn form the basis for path models. When entry/exit regions have matured, they can be used to measure target movement statistics between them. These statistics may form the basis for the path models built in Block 204.
  • The size and entry/exit maps are types of target property maps, and they may be trained (built) using a target property map training algorithm, which is described in co-pending, commonly-assigned U.S. Publication No. 2006-0072010A1 (U.S. patent application Ser. No. 10/948,785), filed on Sep. 24, 2004, entitled, “Target Property Maps for Surveillance Systems,” and incorporated herein by reference. The target property map training algorithm may be used several times in the process shown in FIG. 2. To simplify the description of this process, the target property map training algorithm is explained here in detail and then referenced later in the algorithm detailing the extraction of path models.
  • FIG. 3 depicts a flowchart of an algorithm for building target property maps, according to an embodiment of the invention. The algorithm may begin by appropriately initializing an array corresponding to the size of the target property map (in general, this may correspond to an image size) in Block 301. In Block 302, a next target may be considered. This portion of the process may begin with initialization of a buffer, which may be a ring buffer, of filtered target instances, in Block 303. The procedure may then proceed to Block 304, where a next instance (which may be stored in the buffer) of the target under consideration may be addressed. In Block 305, it is determined whether the target is finished; this is the case if all of the target's instances have been considered. If the target is finished, the process may proceed to Block 309 (to be discussed below). Otherwise, the process may then proceed to Block 306, to determine if the target is bad; this is the case if this latest instance reveals a severe failure of the target's handling, labeling or identification by the up-stream processes. If this is the case, the process may loop back to Block 302, to consider the next target. Otherwise, the process may proceed with Block 307, to determine if the particular instance under consideration is a bad instance; this is the case if the latest instance reveals a limited inconsistency in the target's handling, labeling or identification by the up-stream process. If a bad instance was found, that instance is ignored and the process proceeds to Block 304, to consider the next target instance. Otherwise, the process may proceed with Block 308 and may update the buffer of filtered target instances, before returning to Block 304, to consider the next target instance.
  • Following Block 305 (as discussed above), the algorithm may proceed with Block 309, where it is determined which, if any, target instances may be considered to be “mature.” According to an embodiment of the invention, if the buffer is found to be full, the oldest target instance in the buffer may be marked “mature.” If all instances of the target have been considered (i.e., if the target is finished), then all target instances in the buffer may be marked “mature.”
  • The process may then proceed to Block 310, where target property map models may be updated at the map locations corresponding to the mature target instances.
  • Following this map updating, the process may determine, in Block 311, whether or not each model is mature. In particular, if the number of target instances for a given location is larger than a preset number of instances required for maturity, the map location may be marked “mature.” As discussed above, only mature locations may be used in addressing inquiries.
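The FIG. 3 loop might be transcribed as in the sketch below, which reuses the PropertyModel sketched earlier. The buffer size, maturity threshold, and quality predicates are assumptions standing in for the up-stream checks, not values from the disclosure.

```python
from collections import deque

RING_CAPACITY = 16         # assumed ring buffer size (Block 303)
MATURITY_THRESHOLD = 100   # assumed per-location count for maturity (Block 311)

def train_property_map(targets, prop_map):
    for target in targets:                            # Block 302: next target
        ring = deque()                                # Block 303: init buffer
        rejected = False
        for inst in target.instances:                 # Block 304: next instance
            if is_bad_target(target, inst):           # Block 306: severe failure,
                rejected = True                       # reject the whole target
                break
            if is_bad_instance(inst):                 # Block 307: limited
                continue                              # inconsistency, skip it
            ring.append(inst)                         # Block 308: update buffer
            if len(ring) > RING_CAPACITY:             # Block 309: buffer full,
                update_map(prop_map, ring.popleft())  # oldest instance matures
        if not rejected:                              # Block 305: target finished,
            while ring:                               # all buffered instances mature
                update_map(prop_map, ring.popleft())

def update_map(prop_map, inst):
    prop_map[inst.y][inst.x].update(inst.width)       # Block 310: update model

def is_mature(prop_map, x, y):                        # Block 311: only mature
    return prop_map[y][x].count >= MATURITY_THRESHOLD # locations answer queries

def is_bad_target(target, inst):                      # placeholder quality
    return False                                      # predicate (assumption)

def is_bad_instance(inst):                            # placeholder quality
    return False                                      # predicate (assumption)
```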
  • Returning, now, to the process of FIG. 2, the target property map training algorithm of FIG. 3 will be referenced in describing the process of training path models. As discussed above, in Block 201, a path model may be initialized at the outset of the process. This may be done, for example, by initializing an array, which may be the size of an image (e.g., of a video frame).
  • The process of FIG. 2 may then proceed to Block 202, training of size maps. In an embodiment of the invention, the process of Block 202 uses the target property map training algorithm of FIG. 3 to train one or more size maps. The generic target property training algorithm of FIG. 3 may be changed to perform this particular type of training by modifying Blocks 301, 308, and 310. All three of these blocks, in Block 202 of FIG. 2, operate on size map instances of the generic target property map objects. Component 308 extracts size information from the target instance stream that enters the path builder (component 16 in FIG. 1). Separate size maps may be maintained for each target type and for several time ranges.
  • The process of FIG. 2 may then train entry/exit region maps (Block 203). Once again, the algorithm of FIG. 3 may be used to perform the map training. To do so, the instantiations of the initialization component (301), the extraction of target origin and destination information (308), and the target property model update component (310) may all be changed to suit this particular type of map training. Component 301 may operate on entry/exit map instances of the generic target property map objects. Component 308 may extract target scene entry and exit information from the target instance stream that enters the path builder (component 16 in FIG. 1). Component 309 may determine a set of entry and exit regions that represent a statistically significant number of trajectories. These regions are deemed to deserve representation and may be annotated with target statistics, such as, but not limited to, the region size and location, the percentage of targets in the scene that enter or exit through the region, etc. Component 310 may update the entry/exit region model to reflect changes to the shapes and/or target coverage of the entry/exit regions. This process may use information provided by a size map trained in Block 202 to decide whether adjacent entry or exit regions need to be merged. Entry regions that are close to each other may be merged into a single region if the targets that use them are large compared to the distance between them. Otherwise, they may remain separate regions. The same approach may be used for exit regions. This enables maintaining separate paths even when the targets on them appear to be close to each other at a great distance from the camera. The projective transformation that controls image formation is the cause for the apparent close proximity of distant objects. One may use the ratio of target size over entry/exit region distance,
  • target size / distance between regions, for example, as this ratio is practically invariant under perspective transformation and thus simplifies the region maintenance algorithm. Separate entry/exit region maps may be maintained for each target type and for several time ranges.
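As a concrete illustration of this merge test, the following is a minimal sketch under stated assumptions: regions expose centroid coordinates (cx, cy), the size map cells are PropertyModel instances as sketched earlier, and the merge threshold is an invented parameter.

```python
import math

MERGE_RATIO = 1.0  # assumed: merge when typical target size >= gap size

def should_merge(region_a, region_b, size_map) -> bool:
    gap = math.hypot(region_a.cx - region_b.cx, region_a.cy - region_b.cy)
    if gap == 0.0:
        return True
    # Typical size of targets observed midway between the two regions;
    # the size/gap ratio is nearly invariant under perspective projection.
    mx = int((region_a.cx + region_b.cx) / 2)
    my = int((region_a.cy + region_b.cy) / 2)
    typical_size = size_map[my][mx].mean
    return typical_size / gap >= MERGE_RATIO
```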
  • Path models may then be trained (Block 204). According to an embodiment of the invention, this may begin with initialization of a path data structure. The process may then use the information contained in the entry and exit region map to build a table with a row for each entry region and a column for each exit region. Each trajectory may be associated with the entry region from which it originates and the exit region where it terminates. The set of trajectories associated with an entry/exit region pair is used to define the locus of the path. According to various embodiments of the invention, a path may be determined by taking the intersection of all trajectories in the set, by taking their union, or by defining the path to correspond to some minimum percentage of trajectories in the set. The path data structure combines the information gathered about each path: the start and end points of the path, the number or fraction of trajectories it represents, and two indices into the entry/exit region map indicating the entry and exit regions to which the path corresponds. Separate path models may be maintained for each type of target and for several time ranges.
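A minimal sketch of this table-building step follows, assuming each trajectory records its entry region, exit region, and covered pixels; min_fraction is an assumed parameter implementing the “minimum percentage” variant (1.0 would reproduce the intersection variant, and any positive coverage the union variant).

```python
from collections import defaultdict

def build_path_models(trajectories, min_fraction=0.8):
    # One table cell per (entry region, exit region) pair.
    table = defaultdict(list)
    for traj in trajectories:
        table[(traj.entry_region, traj.exit_region)].append(traj)

    paths = {}
    for (entry, exit_), trajs in table.items():
        # Count how many of the pair's trajectories cover each pixel.
        coverage = defaultdict(int)
        for traj in trajs:
            for pixel in set(traj.pixels):
                coverage[pixel] += 1
        needed = min_fraction * len(trajs)
        locus = {p for p, n in coverage.items() if n >= needed}
        paths[(entry, exit_)] = {
            "locus": locus,             # image region, not necessarily connected
            "trajectories": len(trajs), # number of trajectories represented
            "entry": entry,             # index into the entry/exit region map
            "exit": exit_,
        }
    return paths
```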
  • USING PATH MODELS
  • Path models may be obtained and maintained using information from an existing surveillance system. However, to make path models useful, the path models must also be able to provide information to the system. Path models may allow prediction of a target's destination, given the target's location and its observed trajectory. For example, a target path in a path model for a hardware store may describe that targets leaving the power-tools department tend to stop at the department check-out. In another example, a target path in a path model may describe that targets traveling the path tend to reach the other end of the path within a specific time frame, e.g., two minutes.
  • Path models may also allow classification of a target's path or of the target, based on the path type. For example, targets that are vehicles, pedestrians, trains or airplanes tend to travel, respectively, on roads, sidewalks, railroad tracks or runways.
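Building on the path table sketched above, one plausible way to realize both uses is a simple overlap score between the trajectory observed so far and each matured path's locus; the scoring rule here is an assumption, since the disclosure does not prescribe a particular matching method.

```python
def predict_destination(observed_pixels, paths):
    """Predict the exit region of the best-matching matured path."""
    observed = set(observed_pixels)
    best, best_score = None, 0.0
    for path in paths.values():
        score = len(observed & path["locus"]) / max(len(observed), 1)
        if score > best_score:
            best, best_score = path, score
    return (best["exit"], best_score) if best else (None, 0.0)
```

The same lookup supports classification: a target whose trajectory matches a path normally traveled by vehicles can be provisionally classified as a vehicle, and vice versa.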
  • FIG. 6 depicts a block diagram of a system for creating and using path models, according to embodiments of the present invention. The system may include a target detection engine 602. Target detection engine 602 may detect one or more targets 604 from one or more objects from a video surveillance system recording a scene (not shown). Targets 604 may be provided to path builder 16 for the creation of a mature path model 606, as described above. Mature path model 606 may include a model of expected target behavior with respect to the mature path model. A target behavior analyzer 608 may analyze and identify behavior of later detected targets 604 with respect to the mature path model 606. An alert generator 610 may receive the results of the analysis and may generate an alert 612 when a specified behavior is detected and identified. Examples of the use of the system are illustrated below.
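The dataflow of FIG. 6 might be wired together as in the sketch below; the component interfaces are assumptions chosen for illustration, not interfaces defined by the disclosure.

```python
def run_system(frames, detector, path_builder, analyzer, alert_generator):
    for frame in frames:
        for target in detector.detect(frame):           # 602 -> targets 604
            path_builder.observe(target)                # trains path model 606
            model = path_builder.mature_model()         # None until matured
            if model is None:
                continue                                # stay in default mode
            behavior = analyzer.analyze(target, model)  # 608
            if behavior is not None and behavior.matches_specified():
                alert_generator.emit(behavior)          # 610 -> alert 612
```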
  • USING PATH MODELS IN MARKET RESEARCH AND/OR STATISTICAL/CONTEXTUAL TARGET MODELING
  • Path models may also allow analysis of target properties. In an exemplary embodiment, market research and/or statistical/contextual target modeling may benefit from the following information determined from path models.
  • Information about target dwell times and locations along learned paths may help to determine, e.g., where shoppers spend their time while on-site, on which aisle and/or in front of which products, which products customers compare, and which products they select with or without comparison to other products.
  • Information about relative dwell locations along learned paths may help to determine, e.g., whether customers that were interested in product A also look at product B and with what probability C and dwell time D.
  • Information about target properties associated with paths, dwell locations and times may help to associate, for example, a target type with a target size, or a target's clothing or uniform.
  • Information about interactions of targets on paths with other targets of the same or different type may help detection, for example, of when vehicles stop next to each other while traveling to and from a sensitive site.
  • Information about interactions of targets on a path with scene elements, such as, e.g., buildings, roads, sidewalks, grass/lawn regions, and/or water regions, may help to determine, for example, how many (distinct) customers make use of an aisle-barcode reader, or how many vehicles actually stop at a four-way-stop intersection.
  • Information about temporal patterns of target properties on a path, such as weekday vs. weekend, morning vs. noon vs. evening vs. nighttime, summer vs. winter, may help with determining normal building access patterns after-hours for security applications.
  • Information about deviations from normal target properties along a path due to time of day/week/year, location, target type, and/or traffic density, for instance, normal access pattern information, may help to determine suspicious building access.
  • In addition, the information described above may be combined in many ways to provide further benefit to market research and/or statistical/contextual target modeling.
  • STATISTICAL MODELING FOR PUBLIC SAFETY AND PLANNING
  • Gathering statistical data of target behavior on a path may provide a range of target properties on the path, for example, the normal speed, size, width, and/or height of moving objects. In one application, law enforcement may use this information to determine the normal speed, size, width, and/or height of objects moving on, e.g., footpaths, parking lots, roads, water channels, canals, lakes, ports, and/or airport taxiways/runways. The statistical information can be used further to determine deviations from normal object properties in subsequently observed targets.
  • Gathering statistical data of target behavior on a path may provide a range of, for example, normal driving regions, directions, and object entry and exit probabilities. Traffic planning, reconnaissance, or surveillance applications, for example, may use this information to derive traffic statistics that highlight, e.g., choke points, popular access points, underutilized access points, and/or traffic patterns.
  • Gathering statistical data of target behavior on a path may provide higher order statistics of objects. For instance, traffic planners may use this information to determine the expected deviation from normal object behavior. This information can be used further to determine deviations from the expected deviation from normal object behavior.
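One plausible reading of these deviation measures is a z-score against the per-path statistics; the sketch below reuses the PropertyModel from earlier and is an assumption about how the statistics might be applied, not a method fixed by the disclosure.

```python
def deviation_score(value: float, model) -> float:
    """z-score of an observed property (e.g., speed) against a path model."""
    if model.count < 2 or model.stddev == 0.0:
        return 0.0
    return abs(value - model.mean) / model.stddev

# For the higher-order statistics mentioned above, the scores themselves can
# be fed into a second PropertyModel, so that deviations from the expected
# deviation can be flagged in turn.
```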
  • ANALYSIS AND DETECTION OF UNUSUAL TARGET BEHAVIOR ON A PATH
  • Path models may also allow detection of unusual target properties and/or behavior, such as, for example, when a target deviates from its path. For instance, information about a target's deviation from a path may help to detect targets that travel in parts of the scene not associated with any known path, or to detect targets that enter the scene outside known entry points/regions and/or known exit points/regions. In another example, a target leaving a path at a point other than the exit point/region expected for targets on the path may be detected. This information may help to detect, for example, vehicles that fail to travel between designated checkpoints.
  • Deviation from a path may also be determined by detection of a failure to arrive on time or at the desired location. For instance, security and surveillance applications may use this information to determine whether a person or vehicle passes swiftly and directly between checkpoints. In production process monitoring, this information may be used to determine whether a manufacturing process is functioning as intended.
  • In another example, a target joining a path at a point other than the entry point/region expected for targets on the path may be detected. This information may help to detect, for example, customers leaving the premises of a shop without passing a checkout or service desk.
  • Information about a target switching paths may help to detect, for example, targets that travel first on an employee or customer or visitor path, and then switch to a path associated with security guards.
  • Information about a target crossing a path may help to detect, for example, vehicles in a parking lot (each starting from mutually disjoint world locations), that are expected to merge into the exit lanes, rather than crossing them.
  • Information about a target traveling on an infrequently used path may help to detect, for example, access to a dangerous area at a refinery.
  • Information about a target traveling unusually slowly, unusually fast or stopping where targets do not usually stop may help to detect, for example, vehicles that stop between border checkpoints of neighboring countries. In traffic monitoring applications, this information may help to detect vehicles traveling above the speed limit.
  • Information about a target traveling on a path, but at an unusual time, may help to detect, for example, unauthorized access to a closed facility at nighttime, even if the same facility is accessible by day. This information may also allow the comparison of current target behavior with access patterns normal for a particular time of day to detect potential trespassers.
  • Information about a target traveling on a path, but in unusual direction, may help to detect, for example, “ghost drivers” traveling in the wrong direction along a highway.
  • In another example, this information may be used to determine that a target's heading will bring it too close to a sensitive site.
  • Information about a target traveling on a path that is not normally associated with targets of the target's type may help to detect, for example, vehicles on a sidewalk or an urban pedestrian area.
  • Information about properties of the target on a certain path that are unusual may help to detect targets whose width, height, size, area, target perimeter length, color (hue, saturation, luminance), texture, compactness, shape and/or time of appearance is unexpected.
  • In addition, the information described above may be combined in many ways to provide further benefit to detection of dangerous, unauthorized, suspicious, or otherwise noteworthy behavior.
  • Information about two or more events may be combined to detect unusual co-occurrences. One or more detected unusual target behaviors may be combined with each other, or with target behaviors detected in the context of a statistical model, to detect such co-occurrences. For instance, surveillance applications may combine information about a detected site access with detection of an un-manned guard post to detect an unauthorized access.
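A minimal sketch of such a combination follows, using the guard-post example; the event fields and the time window are assumptions made for illustration.

```python
def unauthorized_access_detected(recent_events, window_seconds=60):
    """Flag a site access that co-occurs with an un-manned guard post."""
    accesses = [e for e in recent_events if e.kind == "site_access"]
    unmanned = [e for e in recent_events if e.kind == "guard_post_unmanned"]
    return any(abs(a.time - u.time) <= window_seconds
               for a in accesses for u in unmanned)
```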
  • FIG. 4 depicts a flowchart of an algorithm for querying path models (e.g., by one or more components of a surveillance system) to obtain contextual information, according to an embodiment of the invention.
  • The algorithm of FIG. 4 may begin by considering a next target, in Block 41. It may then proceed to Block 42, to determine if the requested path model has been defined.
  • If not, the information about the target is unavailable, and the process may loop back to Block 41, to consider a next target.
  • If the requested path model is determined to be available, the process may then consider a next target instance, in Block 43. If the instance indicates that the target is finished, in Block 44, the process may loop back to Block 41 to consider a next target. A target is considered finished if all of its instances have been considered. If the target is not finished, the process may proceed to Block 45 and may determine if the target property map model at the location of the target instance under consideration has matured. If it has not matured, the process may loop back to Block 43 to consider a next target instance. Otherwise, the process may proceed to Block 46, where target context may be updated. The context of a target may be updated by recording the degree of its conformance with the target property map maintained by this algorithm. Following Block 46, the process may proceed to Block 47 to determine normalcy properties of the target based on its target context. The context of each target is maintained to determine whether it acted in a manner that is inconsistent with the behavior or observations predicted by the target property map model. Finally, following Block 47, the procedure may return to Block 41 to consider a next target.
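The FIG. 4 loop might be transcribed as follows; the conformance measure is an assumed placeholder for “recording the degree of conformance”, reusing deviation_score and the thresholds sketched earlier.

```python
def query_path_models(targets, prop_map):
    if prop_map is None:                                   # Block 42: undefined
        return
    for target in targets:                                 # Block 41: next target
        context = []                                       # per-target context
        for inst in target.instances:                      # Blocks 43-44
            cell = prop_map[inst.y][inst.x]
            if cell.count < MATURITY_THRESHOLD:            # Block 45: immature
                continue
            context.append(deviation_score(inst.width, cell))  # Block 46
        if context:                                        # Block 47: normalcy
            target.normalcy = sum(context) / len(context)
```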
  • Some embodiments of the invention, as discussed above, may be embodied in the form of software instructions on a machine-readable medium. Such an embodiment is illustrated in FIG. 5. The computer system of FIG. 5 may include at least one processor 52, with associated system memory 51, which may store, for example, operating system software and the like. The system may further include additional memory 53, which may, for example, include software instructions to perform various applications. The system may also include one or more input/output (I/O) devices 54, for example (but not limited to), a keyboard, mouse, trackball, printer, display, network connection, etc. The present invention may be embodied as software instructions that may be stored in system memory 51 or in additional memory 53. Such software instructions may also be stored in removable or remote media (for example, but not limited to, compact disks, floppy disks, etc.), which may be read through an I/O device 54 (for example, but not limited to, a floppy disk drive). Furthermore, the software instructions may also be transmitted to the computer system via an I/O device 54, for example, a network connection; in such a case, a signal containing the software instructions may be considered to be a machine-readable medium.
  • The invention has been described in detail with respect to various embodiments, and it will now be apparent from the foregoing to those skilled in the art that changes and modifications may be made without departing from the invention in its broader aspects. The invention, therefore, as defined in the appended claims, is intended to cover all such changes and modifications as fall within the true spirit of the invention.

Claims (31)

1. A system for detecting behavior of a target, comprising:
a target detection engine, adapted to detect at least one target from one or more objects from a video of a scene;
a path builder, adapted to create at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein said at least one mature path model includes a model of expected target behavior with respect to said at least one path model; and
a target behavior analyzer, adapted to analyze and identify target behavior with respect to said at least one mature path model.
2. The system according to claim 1, wherein said target behavior analyzer is adapted to identify behavior inconsistent with the model of expected behavior, comprising detecting at least one of:
a target deviating from the path;
a target traveling off of the path;
a target switching from the path to another path;
a target crossing the path;
a target traveling the path, wherein the path is infrequently traveled;
a target traveling at an unusual speed on the path;
a target stopping at an unusual stopping point on the path;
a target traveling at an unusual time on the path;
a target traveling in an unusual direction on the path;
a target of a type not normally found on the path; or
a target having an unusual physical property on the path.
3. The system according to claim 1, wherein said target behavior analyzer is adapted to predict a target's subsequent path based on said at least one path model and the target's observed behavior.
4. The system according to claim 1, wherein said target behavior analyzer is adapted to classify at least one of the target type based on a path type, or the path type based on the target type.
5. The system according to claim 1, wherein the target behavior analyzer is further adapted to build a statistical model of target behavior with respect to the path model.
6. The system according to claim 5, wherein the target behavior is analyzed with respect to a statistical model of at least one of:
a target dwell time duration on the path;
a target dwell location on the path;
a target property associated with at least one of: the path, a dwell location on the path, and/or a dwell time on the path;
an interaction of a target with another target of the same type on the path;
an interaction of a target with another target of a different type on the path;
an interaction of a target with an element in the scene;
a temporal pattern of a target property on the path; or
a deviation from normal target properties on the path.
7. The system according to claim 6, wherein the target behavior analyzer is adapted to identify target behavior from a combination of:
at least two detected behaviors inconsistent with a path model;
at least two detected target behaviors with respect to a statistical model; or
at least one detected inconsistent behavior and at least one detected behavior with respect to a statistical model.
8. The system according to claim 1, wherein said video is received from a video surveillance system.
9. The system according to claim 1, wherein said system is implemented in application-specific hardware to emulate at least one of a computer or software.
10. The system according to claim 1, further comprising an alert generator, adapted to generate an alert based on said identified behavior.
11. A computer-based method of target behavior analysis, comprising:
processing an input video sequence to obtain target information for at least one target from one or more objects from a video of a scene;
building at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein said at least one mature path model includes a model of expected target behavior with respect to said at least one path model; and
analyzing and identifying target behavior of a target with respect to said at least one mature path model.
12. The method according to claim 11, further comprising:
generating an alert based on said identified target behavior.
13. A computer-readable medium containing instructions that, when executed by a processor, cause the processor to perform the method according to claim 11.
14. A video processing system comprising:
a computer system; and
the computer-readable medium according to claim 13.
15. A video surveillance system comprising:
at least one camera to generate an input video sequence; and
the video processing system according to claim 14.
16. The method according to claim 11, wherein processing an input video sequence comprises processing video from a video surveillance system.
17. The method according to claim 11, wherein said analyzing and identifying target behavior includes identifying target behavior inconsistent with the model of expected behavior, comprising detecting at least one of:
a target deviating from the path;
a target traveling off of the path;
a target switching from the path to another path;
a target crossing the path;
a target traveling the path, wherein the path is infrequently traveled;
a target traveling at an unusual speed on the path;
a target stopping at an unusual stopping point on the path;
a target traveling at an unusual time on the path;
a target traveling in an unusual direction on the path;
a target of a type not normally found on the path; or
a target having an unusual physical property on the path.
18. The method according to claim 11, further comprising predicting a target's subsequent path based on said at least one path model and the target's observed behavior.
19. The method according to claim 11, wherein said analyzing and identifying target behavior includes classifying at least one of the target type based on a path type, and/or the path type based on the target type.
20. The method according to claim 11, wherein said analyzing and identifying target behavior further includes building a statistical model of target behavior with respect to the path model.
21. The method according to claim 20, wherein said analyzing and identifying target behavior includes analyzing target behavior with respect to a statistical model of at least one of:
a target dwell time duration on the path;
a target dwell location on the path;
a target property associated with at least one of: the path, a dwell location on the path, and/or a dwell time on the path;
an interaction of a target with another target of the same type on the path;
an interaction of a target with another target of a different type on the path;
an interaction of a target with an element in the scene;
a temporal pattern of a target property on the path; or
a deviation from normal target properties on the path.
22. The method according to claim 21, wherein said analyzing and identifying target behavior includes identifying target behavior from a combination of:
at least two detected behaviors inconsistent with a path model;
at least two detected target behaviors with respect to a statistical model; or
at least one detected inconsistent behavior and at least one detected behavior with respect to a statistical model.
23. A computer-readable medium containing instructions that, when executed by a processor, cause the processor to perform operations comprising:
processing an input video sequence to obtain target information for at least one target from one or more objects from a video of a scene;
building at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein said at least one mature path model includes a model of expected target behavior with respect to said at least one path model; and
analyzing and identifying target behavior of a target with respect to said at least one mature path model.
24. The computer-readable medium according to claim 23, the operations further comprising:
generating an alert based on said identified target behavior.
25. The computer-readable medium according to claim 23, wherein processing an input video sequence comprises processing video from a video surveillance system.
26. The computer-readable medium according to claim 23, wherein said analyzing and identifying target behavior includes identifying target behavior inconsistent with the model of expected behavior, comprising detecting at least one of:
a target deviating from the path;
a target traveling off of the path;
a target switching from the path to another path;
a target crossing the path;
a target traveling the path, wherein the path is infrequently traveled;
a target traveling at an unusual speed on the path;
a target stopping at an unusual stopping point on the path;
a target traveling at an unusual time on the path;
a target traveling in an unusual direction on the path;
a target of a type not normally found on the path; or
a target having an unusual physical property on the path.
27. The computer-readable medium according to claim 23, the operations further comprising predicting a target's subsequent path based on said at least one path model and the target's observed behavior.
28. The computer-readable medium according to claim 23, wherein said analyzing and identifying target behavior includes classifying at least one of the target type based on a path type, or the path type based on the target type.
29. The computer-readable medium according to claim 23, wherein said analyzing and identifying target behavior further includes building a statistical model of target behavior with respect to the path model.
30. The computer-readable medium according to claim 29, wherein said analyzing and identifying target behavior includes analyzing target behavior with respect to a statistical model of at least one of:
a target dwell time duration on the path;
a target dwell location on the path;
a target property associated with at least one of: the path, a dwell location on the path, and/or a dwell time on the path;
an interaction of a target with another target of the same type on the path;
an interaction of a target with another target of a different type on the path;
an interaction of a target with an element in the scene;
a temporal pattern of a target property on the path; or
a deviation from normal target properties on the path.
31. The computer-readable medium according to claim 30, wherein said analyzing and identifying target behavior includes identifying target behavior from a combination of:
at least two detected behaviors inconsistent with a path model;
at least two detected target behaviors with respect to a statistical model; or
at least one detected inconsistent behavior and at least one detected behavior with respect to a statistical model.
US11/739,208 2004-09-24 2007-04-24 Method for finding paths in video Abandoned US20080166015A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US11/739,208 US20080166015A1 (en) 2004-09-24 2007-04-24 Method for finding paths in video
PCT/US2008/004814 WO2009008939A2 (en) 2007-04-24 2008-04-14 Method for finding paths in video
TW97114339A TW200905575A (en) 2007-04-24 2008-04-18 Method for finding paths in video
US13/354,141 US8823804B2 (en) 2004-09-24 2012-01-19 Method for finding paths in video
US14/455,868 US10291884B2 (en) 2004-09-24 2014-08-08 Video processing system using target property map
US16/385,814 US20190246073A1 (en) 2004-09-24 2019-04-16 Method for finding paths in video

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/948,751 US20060066719A1 (en) 2004-09-24 2004-09-24 Method for finding paths in video
US11/739,208 US20080166015A1 (en) 2004-09-24 2007-04-24 Method for finding paths in video

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/948,751 Continuation-In-Part US20060066719A1 (en) 2004-09-24 2004-09-24 Method for finding paths in video

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/354,141 Continuation US8823804B2 (en) 2004-09-24 2012-01-19 Method for finding paths in video

Publications (1)

Publication Number Publication Date
US20080166015A1 true US20080166015A1 (en) 2008-07-10

Family

ID=40229322

Family Applications (4)

Application Number Title Priority Date Filing Date
US11/739,208 Abandoned US20080166015A1 (en) 2004-09-24 2007-04-24 Method for finding paths in video
US13/354,141 Active US8823804B2 (en) 2004-09-24 2012-01-19 Method for finding paths in video
US14/455,868 Active US10291884B2 (en) 2004-09-24 2014-08-08 Video processing system using target property map
US16/385,814 Abandoned US20190246073A1 (en) 2004-09-24 2019-04-16 Method for finding paths in video

Family Applications After (3)

Application Number Title Priority Date Filing Date
US13/354,141 Active US8823804B2 (en) 2004-09-24 2012-01-19 Method for finding paths in video
US14/455,868 Active US10291884B2 (en) 2004-09-24 2014-08-08 Video processing system using target property map
US16/385,814 Abandoned US20190246073A1 (en) 2004-09-24 2019-04-16 Method for finding paths in video

Country Status (3)

Country Link
US (4) US20080166015A1 (en)
TW (1) TW200905575A (en)
WO (1) WO2009008939A2 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI397866B (en) * 2009-04-21 2013-06-01 Hon Hai Prec Ind Co Ltd System and method for detecting object
US9213781B1 (en) 2012-09-19 2015-12-15 Placemeter LLC System and method for processing image data
US10373470B2 (en) 2013-04-29 2019-08-06 Intelliview Technologies, Inc. Object detection
WO2014176693A1 (en) * 2013-04-29 2014-11-06 Intelliview Technologies Inc. Object detection
CA2847707C (en) 2014-03-28 2021-03-30 Intelliview Technologies Inc. Leak detection
EP3149909A4 (en) 2014-05-30 2018-03-07 Placemeter Inc. System and method for activity monitoring using video data
US9934453B2 (en) * 2014-06-19 2018-04-03 Bae Systems Information And Electronic Systems Integration Inc. Multi-source multi-modal activity recognition in aerial video surveillance
US10943357B2 (en) 2014-08-19 2021-03-09 Intelliview Technologies Inc. Video based indoor leak detection
US10110856B2 (en) 2014-12-05 2018-10-23 Avigilon Fortress Corporation Systems and methods for video analysis rules based on map data
CA2967495C (en) 2014-12-15 2021-06-08 Miovision Technologies Incorporated System and method for compressing video data
US10043078B2 (en) 2015-04-21 2018-08-07 Placemeter LLC Virtual turnstile system and method
US11334751B2 (en) 2015-04-21 2022-05-17 Placemeter Inc. Systems and methods for processing video data for activity monitoring
US11138442B2 (en) 2015-06-01 2021-10-05 Placemeter, Inc. Robust, adaptive and efficient object detection, classification and tracking
EP3340104B1 (en) * 2016-12-21 2023-11-29 Axis AB A method for generating alerts in a video surveillance system
KR20210149169A (en) 2019-04-09 2021-12-08 아비질론 코포레이션 Anomaly detection method, system and computer readable medium
CN110443975A (en) * 2019-07-31 2019-11-12 深圳奥尼电子股份有限公司 Smart security guard and alarm method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6392704B1 (en) * 1997-11-07 2002-05-21 Esco Electronics Corporation Compact video processing system for remote sensing applications
US8979646B2 (en) * 2002-06-12 2015-03-17 Igt Casino patron tracking and information use
US7587064B2 (en) * 2004-02-03 2009-09-08 Hrl Laboratories, Llc Active learning system for object fingerprinting

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5379236A (en) * 1991-05-23 1995-01-03 Yozan Inc. Moving object tracking method
US6985172B1 (en) * 1995-12-01 2006-01-10 Southwest Research Institute Model-based incident detection system with motion classification
US5969755A (en) * 1996-02-05 1999-10-19 Texas Instruments Incorporated Motion based event detection system and method
US6816184B1 (en) * 1998-04-30 2004-11-09 Texas Instruments Incorporated Method and apparatus for mapping a location from a video image to a map
US6628835B1 (en) * 1998-08-31 2003-09-30 Texas Instruments Incorporated Method and system for defining and recognizing complex events in a video sequence
US6542621B1 (en) * 1998-08-31 2003-04-01 Texas Instruments Incorporated Method of dealing with occlusion when tracking multiple objects and people in video sequences
US6643387B1 (en) * 1999-01-28 2003-11-04 Sarnoff Corporation Apparatus and method for context-based indexing and retrieval of image sequences
US6678413B1 (en) * 2000-11-24 2004-01-13 Yiqing Liang System and method for object identification and behavior characterization using video analysis
US20040130620A1 (en) * 2002-11-12 2004-07-08 Buehler Christopher J. Method and system for tracking and behavioral monitoring of multiple objects moving through multiple fields-of-view
US20050163346A1 (en) * 2003-12-03 2005-07-28 Safehouse International Limited Monitoring an output from a camera
US20060279630A1 (en) * 2004-07-28 2006-12-14 Manoj Aggarwal Method and apparatus for total situational awareness and monitoring
US20060072010A1 (en) * 2004-09-24 2006-04-06 Objectvideo, Inc. Target property maps for surveillance systems
US20060222209A1 (en) * 2005-04-05 2006-10-05 Objectvideo, Inc. Wide-area site-based video surveillance system

Cited By (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8891852B2 (en) 2004-06-09 2014-11-18 Cognex Technology And Investment Corporation Method and apparatus for configuring and testing a machine vision detector
US8127247B2 (en) 2004-06-09 2012-02-28 Cognex Corporation Human-machine-interface and method for manipulating data in a machine vision system
US20050276461A1 (en) * 2004-06-09 2005-12-15 Silver William M Method and apparatus for automatic visual detection, recording, and retrieval of events
US7545949B2 (en) 2004-06-09 2009-06-09 Cognex Technology And Investment Corporation Method for setting parameters of a vision detector using production line information
US20090273668A1 (en) * 2004-06-09 2009-11-05 Cognex Corporation Method for setting parameters of a vision detector using production line information
US8249329B2 (en) 2004-06-09 2012-08-21 Cognex Technology And Investment Corporation Method and apparatus for detecting and characterizing an object
US8290238B2 (en) 2004-06-09 2012-10-16 Cognex Technology And Investment Corporation Method and apparatus for locating objects
US9183443B2 (en) 2004-06-09 2015-11-10 Cognex Technology And Investment Llc Method and apparatus for configuring and testing a machine vision detector
US20050276445A1 (en) * 2004-06-09 2005-12-15 Silver William M Method and apparatus for automatic visual detection, recording, and retrieval of events
US8249297B2 (en) 2004-06-09 2012-08-21 Cognex Technology And Investment Corporation Method and apparatus for automatic visual event detection
US8782553B2 (en) 2004-06-09 2014-07-15 Cognex Corporation Human-machine-interface and method for manipulating data in a machine vision system
US8630478B2 (en) 2004-06-09 2014-01-14 Cognex Technology And Investment Corporation Method and apparatus for locating objects
US8243986B2 (en) * 2004-06-09 2012-08-14 Cognex Technology And Investment Corporation Method and apparatus for automatic visual event detection
US8249296B2 (en) 2004-06-09 2012-08-21 Cognex Technology And Investment Corporation Method and apparatus for automatic visual event detection
US9094588B2 (en) 2004-06-09 2015-07-28 Cognex Corporation Human machine-interface and method for manipulating data in a machine vision system
US9092841B2 (en) 2004-06-09 2015-07-28 Cognex Technology And Investment Llc Method and apparatus for visual detection and inspection of objects
US20050275728A1 (en) * 2004-06-09 2005-12-15 Mirtich Brian V Method for setting parameters of a vision detector using production line information
US8295552B2 (en) 2004-06-09 2012-10-23 Cognex Technology And Investment Corporation Method for setting parameters of a vision detector using production line information
USRE44353E1 (en) 2004-11-12 2013-07-09 Cognex Technology And Investment Corporation System and method for assigning analysis parameters to vision detector using a graphical interface
US20100241981A1 (en) * 2004-11-12 2010-09-23 Mirtich Brian V System and method for displaying and using non-numeric graphic elements to control and monitor a vision system
US8582925B2 (en) 2004-11-12 2013-11-12 Cognex Technology And Investment Corporation System and method for displaying and using non-numeric graphic elements to control and monitor a vision system
US9292187B2 (en) 2004-11-12 2016-03-22 Cognex Corporation System, method and graphical user interface for displaying and controlling vision system operating parameters
US9413956B2 (en) 2006-11-09 2016-08-09 Innovative Signal Analysis, Inc. System for extending a field-of-view of an image acquisition device
US20120140042A1 (en) * 2007-01-12 2012-06-07 International Business Machines Corporation Warning a user about adverse behaviors of others within an environment based on a 3d captured image stream
US9412011B2 (en) 2007-01-12 2016-08-09 International Business Machines Corporation Warning a user about adverse behaviors of others within an environment based on a 3D captured image stream
US10354127B2 (en) 2007-01-12 2019-07-16 Sinoeast Concept Limited System, method, and computer program product for alerting a supervising user of adverse behavior of others within an environment by providing warning signals to alert the supervising user that a predicted behavior of a monitored user represents an adverse behavior
US9208678B2 (en) * 2007-01-12 2015-12-08 International Business Machines Corporation Predicting adverse behaviors of others within an environment based on a 3D captured image stream
US8237099B2 (en) 2007-06-15 2012-08-07 Cognex Corporation Method and system for optoelectronic detection and location of objects
US8103085B1 (en) 2007-09-25 2012-01-24 Cognex Corporation System and method for detecting flaws in objects using machine vision
US10121079B2 (en) 2008-05-09 2018-11-06 Intuvision Inc. Video tracking systems and methods employing cognitive vision
US9019381B2 (en) 2008-05-09 2015-04-28 Intuvision Inc. Video tracking systems and methods employing cognitive vision
US20090315996A1 (en) * 2008-05-09 2009-12-24 Sadiye Zeyno Guler Video tracking systems and methods employing cognitive vision
US8965047B1 (en) 2008-06-10 2015-02-24 Mindmancer AB Selective viewing of a scene
US9172919B2 (en) 2008-06-10 2015-10-27 Mindmancer AB Selective viewing of a scene
US8311275B1 (en) 2008-06-10 2012-11-13 Mindmancer AB Selective viewing of a scene
US9430923B2 (en) * 2009-11-30 2016-08-30 Innovative Signal Analysis, Inc. Moving object detection, tracking, and displaying systems
US10510231B2 (en) 2009-11-30 2019-12-17 Innovative Signal Analysis, Inc. Moving object detection, tracking, and displaying systems
US20110169867A1 (en) * 2009-11-30 2011-07-14 Innovative Signal Analysis, Inc. Moving object detection, tracking, and displaying systems
US8929588B2 (en) 2011-07-22 2015-01-06 Honeywell International Inc. Object tracking
US8885887B1 (en) * 2012-01-23 2014-11-11 Hrl Laboratories, Llc System for object detection and recognition in videos using stabilization
US9576214B1 (en) 2012-01-23 2017-02-21 Hrl Laboratories, Llc Robust object recognition from moving platforms by combining form and motion detection with bio-inspired classification
US20140358639A1 (en) * 2013-05-30 2014-12-04 Panasonic Corporation Customer category analysis device, customer category analysis system and customer category analysis method
CN104200466A (en) * 2014-08-20 2014-12-10 深圳市中控生物识别技术有限公司 Early warning method and camera
US10139819B2 (en) 2014-08-22 2018-11-27 Innovative Signal Analysis, Inc. Video enabled inspection using unmanned aerial vehicles
US10614391B2 (en) * 2015-02-26 2020-04-07 Hitachi, Ltd. Method and apparatus for work quality control
US20160253618A1 (en) * 2015-02-26 2016-09-01 Hitachi, Ltd. Method and apparatus for work quality control
US10417484B2 (en) * 2017-05-30 2019-09-17 Wipro Limited Method and system for determining an intent of a subject using behavioural pattern
US20190012549A1 (en) * 2017-07-10 2019-01-10 Nanjing Yuanjue Information and Technology Company Scene analysis method and visual navigation device
US10614323B2 (en) * 2017-07-10 2020-04-07 Nanjing Yuanjue Information and Technology Company Scene analysis method and visual navigation device
US10650547B2 (en) 2018-07-26 2020-05-12 Microsoft Technology Licensing, Llc Blob detection using feature match scores
US20210390711A1 (en) * 2020-06-16 2021-12-16 Sony Corporation Apparatus, method and computer program product for predicting whether an object moving across a surface will reach a target destination
US11615539B2 (en) * 2020-06-16 2023-03-28 Sony Group Corporation Apparatus, method and computer program product for predicting whether an object moving across a surface will reach a target destination
US20210409655A1 (en) * 2020-06-25 2021-12-30 Innovative Signal Analysis, Inc. Multi-source 3-dimensional detection and tracking
US11770506B2 (en) * 2020-06-25 2023-09-26 Innovative Signal Analysis, Inc. Multi-source 3-dimensional detection and tracking
CN112419367A (en) * 2020-12-02 2021-02-26 中国人民解放军军事科学院国防科技创新研究院 Method and device for identifying specific target object
CN115394026A (en) * 2022-07-15 2022-11-25 安徽电信规划设计有限责任公司 Intelligent monitoring method and system based on 5G technology

Also Published As

Publication number Publication date
US10291884B2 (en) 2019-05-14
WO2009008939A2 (en) 2009-01-15
US8823804B2 (en) 2014-09-02
US20140341433A1 (en) 2014-11-20
US20190246073A1 (en) 2019-08-08
US20120268594A1 (en) 2012-10-25
WO2009008939A3 (en) 2009-03-05
TW200905575A (en) 2009-02-01

Similar Documents

Publication Publication Date Title
US20190246073A1 (en) Method for finding paths in video
Pavlidis et al. Urban surveillance systems: from the laboratory to the commercial world
CN107832680B (en) Computerized method, system and storage medium for video analytics
US10346688B2 (en) Congestion-state-monitoring system
US7796780B2 (en) Target detection and tracking from overhead video streams
Haering et al. The evolution of video surveillance: an overview
Dedeoğlu Moving object detection, tracking and classification for smart video surveillance
US20080018738A1 (en) Video analytics for retail business process monitoring
WO2006036578A2 (en) Method for finding paths in video
Turek et al. Unsupervised learning of functional categories in video scenes
JP2004534315A (en) Method and system for monitoring moving objects
EP1405504A1 (en) Surveillance system and methods regarding same
TW200903386A (en) Target detection and tracking from video streams
JP2004537790A (en) Moving object evaluation system and method
CA2583425A1 (en) Target property maps for surveillance systems
Morellas et al. DETER: Detection of events for threat evaluation and recognition
Feris et al. Case study: IBM smart surveillance system
Zhang et al. A robust human detection and tracking system using a human-model-based camera calibration
Makris et al. Learning scene semantics
US20220366575A1 (en) Method and system for gathering information of an object moving in an area of interest
Salih et al. Visual surveillance for hajj and umrah: a review
Brooks et al. Towards intelligent networked video surveillance for the detection of suspicious behaviours
GDANSK Deliverable 2.1–Review of existing smart video surveillance systems capable of being integrated with ADDPRIV. ADDPRIV consortium
Malkapur Video Object Tracking and Segmentation Using Artificial Neural Network for Surveillance System

Legal Events

Date Code Title Description
AS Assignment

Owner name: OBJECT VIDEO, INC., VIRGINIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HAERING, NIELS;RASHEED, ZEESHAN;YU, LI;AND OTHERS;REEL/FRAME:019203/0085;SIGNING DATES FROM 20070328 TO 20070405

AS Assignment

Owner name: RJF OV, LLC, DISTRICT OF COLUMBIA

Free format text: GRANT OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:OBJECTVIDEO, INC.;REEL/FRAME:021744/0464

Effective date: 20081016


AS Assignment

Owner name: OBJECTVIDEO, INC., VIRGINIA

Free format text: RELEASE OF SECURITY AGREEMENT/INTEREST;ASSIGNOR:RJF OV, LLC;REEL/FRAME:027810/0117

Effective date: 20101230

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION