US20140105459A1 - Location-aware event detection - Google Patents

Location-aware event detection

Info

Publication number
US20140105459A1
Authority
US
United States
Prior art keywords
event
events
interest
item
overlapping regions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/133,206
Inventor
Russell Patrick Bobbitt
Quanfu Fan
Arun Hampapur
Frederick KJELDSEN
Sharathchandra Umapathirao Pankanti
Akira YANAGAWA
Yun Zhai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Global Commerce Solutions Holdings Corp
Original Assignee
Toshiba Global Commerce Solutions Holdings Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Global Commerce Solutions Holdings Corp filed Critical Toshiba Global Commerce Solutions Holdings Corp
Priority to US14/133,206
Publication of US20140105459A1
Abandoned legal-status Critical Current

Classifications

    • G06K9/00342
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06K9/00771
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Definitions

  • one or more embodiments of the invention use multiple overlapped ROIs, wherein each item is guaranteed to be in one ROI (that is, at least one ROI corresponds to an event), but the correspondence is missing. Additionally, the techniques described herein can apply MIL to resolve the correspondence problem (that is, identify the missing correspondence) and learn better event models.
  • FIG. 5 is a flow diagram illustrating techniques for detecting one or more events, according to an embodiment of the present invention.
  • the events can include events (for example, cashier activity) at a point of sale.
  • the events at a point of sale can include a pickup, a scan and a drop, wherein a pickup includes a cashier picking up an item, a scan includes a cashier at least one of reading the barcode of an item via a scanner and weighing an item, and a drop includes a cashier placing an item on the take-away belt.
  • Step 502 includes using one or more regions of interest on a video sequence to cover a location for one or more events, wherein each event is associated with at least one of the one or more regions of interest.
  • regions of interest on a video sequence can include, for example, overlapping one or more regions of interest on a video sequence.
  • the regions of interest can be of one or more shapes as well as one or more sizes.
  • Step 504 includes applying multiple-instance learning to the video sequence to construct one or more location-aware event models.
  • Step 506 includes applying the models to the video sequence to determine the one or more regions of interest that are associated with the one or more events.
  • the techniques depicted in FIG. 5 can also include using a support vector machine (SVM)-based MIL technique to learn event models for a pickup and a drop. Additionally, one or more embodiments of the invention include extracting features (for example, color, edge, motion, etc.) from each region of interest.
  • At least one embodiment of the invention can be implemented in the form of a computer product including a computer usable medium with computer usable program code for performing the method steps indicated.
  • at least one embodiment of the invention can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps.
  • Such a system might employ, for example, a processor 602 , a memory 604 , and an input and/or output interface formed, for example, by a display 606 and a keyboard 608 .
  • The term “processor” as used herein is intended to include any processing device, such as, for example, one that includes a CPU (central processing unit) and/or other forms of processing circuitry. Further, the term “processor” may refer to more than one individual processor.
  • The term “memory” is intended to include memory associated with a processor or CPU, such as, for example, RAM (random access memory), ROM (read only memory), a fixed memory device (for example, hard drive), a removable memory device (for example, diskette), a flash memory and the like.
  • The term “input and/or output interface” is intended to include, for example, one or more mechanisms for inputting data to the processing unit (for example, mouse), and one or more mechanisms for providing results associated with the processing unit (for example, printer).
  • the processor 602 , memory 604 , and input and/or output interface such as display 606 and keyboard 608 can be interconnected, for example, via bus 610 as part of a data processing unit 612 .
  • Suitable interconnections can also be provided to a network interface 614 , such as a network card, which can be provided to interface with a computer network, and to a media interface 616 , such as a diskette or CD-ROM drive, which can be provided to interface with media 618 .
  • computer software including instructions or code for performing the methodologies of the invention, as described herein, may be stored in one or more of the associated memory devices (for example, ROM, fixed or removable memory) and, when ready to be utilized, loaded in part or in whole (for example, into RAM) and executed by a CPU.
  • Such software could include, but is not limited to, firmware, resident software, microcode, and the like.
  • the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium (for example, media 618 ) providing program code for use by or in connection with a computer or any instruction execution system.
  • a computer usable or computer readable medium can be any apparatus for use by or in connection with the instruction execution system, apparatus, or device.
  • the medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
  • Examples of a computer-readable medium include a semiconductor or solid-state memory (for example, memory 604 ), magnetic tape, a removable computer diskette (for example, media 618 ), a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk.
  • Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read and/or write (CD-R/W) and DVD.
  • a data processing system suitable for storing and/or executing program code will include at least one processor 602 coupled directly or indirectly to memory elements 604 through a system bus 610 .
  • the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • Input and/or output or I/O devices can be coupled to the system either directly (such as via bus 610 ) or through intervening I/O controllers (omitted for clarity).
  • Network adapters such as network interface 614 may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
  • At least one embodiment of the invention may provide one or more beneficial effects, such as, for example, using multiple ROIs or sensors to cover all possible locations for events such that each event can be associated with at least one sensor, and applying multiple-instance learning to select one or more appropriate sensors for building event detection models.

Abstract

Techniques for detecting one or more events are provided. The techniques include using multiple overlapping regions of interest on a video sequence to cover a location for one or more events, wherein each event is associated with at least one of the multiple overlapping regions of interest, applying multiple-instance learning to the video sequence to select one or more of the multiple overlapping regions of interest to construct one or more location-aware event models, and applying the models to the video sequence to detect the one or more events and to determine the one or more regions of interest that are associated with the one or more events.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of co-pending U.S. patent application Ser. No. 13/464,328, filed May 4, 2012, which is a continuation of U.S. patent application Ser. No. 12/325,178, filed on Nov. 29, 2008, now U.S. Pat. No. 8,253,831, and is also related to U.S. application Ser. No. 12/325,176, now U.S. Pat. No. 8,165,349, filed Nov. 29, 2008, and is related to U.S. patent application Serial No. 12/325,177, filed Nov. 29, 2008, and is related to U.S. application Ser. No. 12/262,446, now U.S. Pat. No. 8,345,101, filed on Oct. 31, 2008, and is related to U.S. application Ser. No. 12/262,454, now U.S. Pat. No. 8,429,016, filed on Oct. 31, 2008, and is related to U.S. application Ser. No. 12/262,458, now U.S. Pat. No. 7,962,365, filed on Oct. 31, 2008, and is also related to U.S. application Ser. No. 12/262,467, now U.S. Pat. No. 8,612,286, filed on Oct. 31, 2008, the disclosures of which are incorporated by reference herein in their entirety.
  • FIELD OF THE INVENTION
  • Embodiments of the invention generally relate to information technology, and, more particularly, to retail loss prevention.
  • BACKGROUND
  • Event detection is critical to any video analytics surveillance system. Events are often location-dependent, and knowing where an event occurs is as important as knowing when it occurs. For example, during checkouts at a grocery store, the cashier repeatedly picks up items from the lead-in belt (pickup), scans them by a scanner for purchase (scan), and places them onto the take-away belt area (drop). The pickup-scan-drop sequences are repetitive, but the locations of pickup and drop operations can vary each time. This un-oriented interaction between the cashier's hand(s) and the belt area poses a problem for learning event models where features need to be extracted from some known location.
  • A large portion of event models are built to detect events at a pre-specified region of interest (ROI). However, one problem may arise in some scenarios when it comes to defining an appropriate ROI for the model. In the retail example mentioned above, the cashier may pick up (or place) products anywhere in the transaction area. An overly large ROI would include many irrelevant features from bagging activity and customer interventions, while an overly small region would miss many products that are presented outside of the region. In such an instance, one could use a sliding window to exhaustively test every possible location, but such an approach is extremely inefficient and normally requires a non-trivial post-process to merge similar detected results that are nearby.
  • SUMMARY
  • Principles and embodiments of the invention provide techniques for location-aware event detection. An exemplary method (which may be computer-implemented) for detecting one or more events, according to one aspect of the invention, can include steps of using multiple overlapping regions of interest on a video sequence to cover a location for one or more events, wherein each event is associated with at least one of the multiple overlapping regions of interest, applying multiple-instance learning to the video sequence to select one or more of the multiple overlapping regions of interest to construct one or more location-aware event models, and applying the models to the video sequence to detect the one or more events and to determine the one or more regions of interest that are associated with the one or more events.
  • One or more embodiments of the invention or elements thereof can be implemented in the form of a computer product including a computer usable medium with computer usable program code for performing the method steps indicated. Furthermore, one or more embodiments of the invention or elements thereof can be implemented in the form of an apparatus or system including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps.
  • Yet further, in another aspect, one or more embodiments of the invention or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include hardware module(s), software module(s), or a combination of hardware and software modules.
  • These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram illustrating exemplary supervised learning and multiple-instance learning (MIL), according to an embodiment of the present invention;
  • FIG. 2 is a diagram illustrating detecting cashier operations at a POS, according to an embodiment of the present invention;
  • FIG. 3 is a diagram illustrating small and large ROIs, according to an embodiment of the present invention;
  • FIG. 4 is a diagram illustrating using multiple overlapped ROIs, according to an embodiment of the present invention;
  • FIG. 5 is a flow diagram illustrating techniques for detecting one or more events, according to an embodiment of the present invention; and
  • FIG. 6 is a system diagram of an exemplary computer system on which at least one embodiment of the present invention can be implemented.
  • DETAILED DESCRIPTION
  • Principles of the invention include location-aware event detection via multiple-instance learning. One or more embodiments of the invention include using multiple regions of interest (ROIs) (also called sensors here) on a video sequence to cover all possible locations for events such that each event can be associated with at least one ROI (or sensor). Also, one can use motion-based segmentation techniques to identify candidates for one or more events at one or more ROIs.
  • Further, one can also apply the multiple-instance learning techniques to the video sequence to select one or more appropriate sensors for building location-aware event detection models. Also, one can apply the models to determine and/or detect the events as well as the associated regions of interest. Further, the techniques described herein are efficient, easy to implement, as well as flexible and applicable to many learning paradigms and event detection techniques.
  • Multiple-instance learning (MIL) is a variation of supervised learning, where the task is to learn a concept (or model) from a set of incompletely labeled data. The training data can include a set of positive and negative bags of instances (for example, feature vectors). In a positive bag, there is at least one instance (positive) associated with the concept to be learned, but which instances are positive is not known. In a negative bag, all instances are negative, that is, irrelevant to the concept. By way of example, MIL algorithms include Diverse Density (DD), Expectation-Maximization DD (EM-DD), support vector machine-multiple instance learning (SVM-MIL) and citation-k-nearest neighbor (kNN).
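  • By way of illustration only, the bag-of-instances structure just described can be sketched in Python; the toy 2-D feature vectors and the hand-made instance rule below are hypothetical stand-ins, not part of the disclosed embodiments:

```python
# MIL data: labels attach to bags of instances, not to single instances.
# A positive bag holds at least one positive instance; a negative bag
# holds only negatives. Toy 2-D feature vectors for illustration.
positive_bags = [
    [(0.9, 0.8), (0.1, 0.2)],  # one instance matches the concept
    [(0.2, 0.1), (0.8, 0.9)],
]
negative_bags = [
    [(0.1, 0.1), (0.2, 0.3)],  # every instance is irrelevant
]

def bag_label(bag, instance_classifier):
    """A bag is positive iff any of its instances is classified positive."""
    return int(any(instance_classifier(x) for x in bag))

# A hand-made instance rule standing in for a learned concept model:
concept = lambda x: x[0] > 0.5 and x[1] > 0.5

labels = [bag_label(b, concept) for b in positive_bags + negative_bags]
# labels == [1, 1, 0]: the max-over-instances rule recovers the bag labels.
```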
  • FIG. 1 is a diagram illustrating exemplary supervised learning and multiple-instance learning (MIL), according to an embodiment of the present invention. By way of illustration, FIG. 1 depicts exemplary supervised learning which includes positive elements 102 and negative elements 104. FIG. 1 also depicts exemplary MIL which includes positive bags 106, 108, and negative bags 110 and 112.
  • As detailed herein, one or more embodiments of the invention include the use of multiple sensors and multiple-instance learning. As illustrated in FIG. 1, events can be represented as positive bags, and features extracted from a sensor associated with a manually annotated event are instances in a positive bag. Also, negative instances can be constructed automatically by considering time periods when no events are annotated.
  • Additionally, one or more embodiments of the invention specify multiple ROIs (for example, overlapped ROIs) to cover all possible locations for events. ROIs can be any shape (for example, polygons are often used) and ROIs do not need to be the same size. The techniques described herein can also extract features (for example, color, edge, motion, etc.) from each ROI as well as select a learning technique (for example, Support Vector Machines (SVMs)) and build event models under multiple-instance learning contexts. Also, one or more embodiments of the invention perform event detection with the event models learned from MIL.
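  • As an illustrative sketch of per-ROI feature extraction (here only an intensity histogram; edge and motion channels would be computed analogously), with frames modeled as plain 2-D lists and rectangular ROIs assumed for simplicity:

```python
def roi_pixels(frame, roi):
    """Collect pixel intensities inside a rectangular ROI.
    frame: 2-D list of grayscale values; roi: (top, left, bottom, right)."""
    t, l, b, r = roi
    return [frame[y][x] for y in range(t, b) for x in range(l, r)]

def histogram(values, bins=4, lo=0, hi=256):
    """Normalized intensity histogram as a fixed-length feature vector."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    total = max(len(values), 1)
    return [c / total for c in counts]

frame = [[0, 64, 128, 192],
         [0, 64, 128, 192]]
left_roi, right_roi = (0, 0, 2, 2), (0, 2, 2, 4)
print(histogram(roi_pixels(frame, left_roi)))   # mass in the low bins
print(histogram(roi_pixels(frame, right_roi)))  # mass in the high bins
```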
  • Also, one or more embodiments of the invention divide a transaction area into three parts: the lead-in belt area where a customer unloads the merchandise, the scan area where a scanner is installed, and the take-away area where scanned items are deposited. A complete process to transact one item at the POS is referred to herein as a visual scan. A visual scan can include three major operations from the cashier: picking up an item from the lead-in belt, reading the bar code on the item via the scanner (or weighing an item if it has no bar code) for registration and then placing the item onto the take-away belt for bagging. These three operations are referred to herein as pickup, scan and drop, respectively. These operations are the primary primitive events (or primitives), as described herein.
  • As noted above, a pickup (or drop) event can be considered as an interaction between the cashier's hand(s) and the lead-in (or take-away) area. However, this interaction is un-oriented, and can occur almost anywhere in the transaction area. This poses a problem for defining an appropriate ROI for the event model. While an ideal ROI should be large enough to cover all possible locations of the events to be detected, it likely includes many irrelevant features that result from the bagging person or the customer. As such, one or more embodiments of the invention apply the multiple instance learning technique to build location-aware event models.
  • The techniques described herein use multiple overlapped ROIs to cover a transaction area as much as possible so that each event is guaranteed to be in an ROI. A motion-based segmentation algorithm is applied to identify segments as candidates for primitives in the video sequence of each ROI. As noted herein, however, a supervised learning paradigm is not suited for multiple ROIs because the correspondence between events and ROIs is unknown. As such, one or more embodiments of the invention use multiple-instance learning (MIL), which is effective in resolving problems where correspondences are missing.
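  • The motion-based segmentation algorithm itself is not specified in the text; as a hedged stand-in, thresholded frame differencing can propose candidate segments. The flat-list frame format and the threshold below are illustrative assumptions:

```python
def motion_energy(prev_frame, frame):
    """Sum of absolute pixel differences between consecutive frames
    (frames modeled as flat lists of grayscale values)."""
    return sum(abs(a - b) for a, b in zip(prev_frame, frame))

def candidate_segments(frames, threshold=10):
    """Group consecutive high-motion frames into candidate event segments,
    returned as (start, end) frame-index pairs."""
    segments, start = [], None
    for i in range(1, len(frames)):
        active = motion_energy(frames[i - 1], frames[i]) > threshold
        if active and start is None:
            start = i
        elif not active and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(frames)))
    return segments

# A still scene, a brief burst of motion, then still again:
frames = [[0, 0, 0, 0]] * 3 + [[20, 20, 0, 0]] + [[0, 0, 0, 0]] * 3
print(candidate_segments(frames))  # [(3, 5)]
```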
  • MIL, as described herein, solves the problem of learning from incompletely labeled data. Unlike supervised learning, in which every training instance is associated with a label, MIL deals with data where labels (for example, binary, either 0 or 1) are assigned to bags of instances instead of an individual instance. A positive bag has at least one positive instance that is related to a concept of interest, while all instances in a negative bag are negative. The goal of MIL is to learn a model of the concept from the incompletely labeled data for classification of unseen bags or instances.
  • Learning event models from multiple ROIs is connected to MIL in that each event corresponds to at least one ROI, but the correspondence is not specified. For each annotated event, one or more embodiments of the invention create a positive bag, the instances of which are the features extracted from all the ROIs with regards to color, edge, motion information, etc. Negative bags can be generated in a similar way by considering those video segments with sufficient motion change but no primitives annotated in the ground truth.
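  • The bag construction described above can be sketched as follows, with feature extraction abstracted as a function from an ROI and a video segment to a feature vector (function names and data layout are our assumptions). Each annotated event yields one positive bag with one instance per ROI; high-motion segments with no annotated primitive yield negative bags the same way.

```python
def make_bag(rois, segment, extract_features):
    """One instance per ROI for a single video segment."""
    return [extract_features(roi, segment) for roi in rois]

def build_training_bags(rois, event_segments, hard_negative_segments,
                        extract_features):
    """Positive bags from annotated events; negative bags from
    high-motion segments with no primitive in the ground truth."""
    pos_bags = [make_bag(rois, s, extract_features) for s in event_segments]
    neg_bags = [make_bag(rois, s, extract_features)
                for s in hard_negative_segments]
    return pos_bags, neg_bags
```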
  • Additionally, one or more embodiments of the invention use an SVM-based MIL technique (MIL-SVM) to learn event models for pickup and drop. Scan events are confined to a small region around the scanner, so one or more embodiments of the invention use a single ROI for them.
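  • An MIL-SVM-style training loop can be sketched as follows. This is a hedged illustration of the general iterative scheme, not the patent's implementation: a perceptron stands in for the SVM so the example stays dependency-free, and all names and constants are our assumptions. Instance labels inside positive bags are latent, so they are initialized from the bag label, the classifier is trained, and the instances are relabeled by classifier output while at least one positive instance is kept per positive bag.

```python
def train_perceptron(X, y, epochs=20, lr=0.1):
    """Tiny linear classifier standing in for an SVM; last weight is the bias."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for x, t in zip(X, y):
            pred = 1 if predict(w, x) > 0 else 0
            for i in range(len(x)):
                w[i] += lr * (t - pred) * x[i]
            w[-1] += lr * (t - pred)
    return w

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + w[-1]

def mi_svm(pos_bags, neg_bags, iters=5):
    """Iterative MIL training: bags are lists of feature vectors."""
    labels = [[1] * len(b) for b in pos_bags]   # init: inherit the bag label
    w = None
    for _ in range(iters):
        X = [x for b in pos_bags for x in b] + [x for b in neg_bags for x in b]
        y = ([l for ls in labels for l in ls]
             + [0] * sum(len(b) for b in neg_bags))
        w = train_perceptron(X, y)
        # Relabel instances in positive bags by classifier output,
        # forcing at least one positive instance per positive bag.
        for bi, b in enumerate(pos_bags):
            scores = [predict(w, x) for x in b]
            new = [1 if s > 0 else 0 for s in scores]
            if not any(new):
                new[scores.index(max(scores))] = 1
            labels[bi] = new
    return w
```

On toy data where the true concept is "first feature is large", the loop drives the model toward the witness instances inside each positive bag.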
  • FIG. 2 is a diagram illustrating detecting cashier operations at a POS 202, according to an embodiment of the present invention. As depicted in FIG. 2, cashier operations at a POS 202 can include picking up an item (pickup), placing an item onto the belt (drop), scanning an item (scan), etc. Cashier operations can also include un-oriented interactions between the hand and the belt, so it is advantageous to know where a pickup (or drop) occurs.
  • FIG. 3 is a diagram illustrating small and large ROIs, according to an embodiment of the present invention. By way of illustration, FIG. 3 depicts a small ROI 302 and a large ROI 304. As described herein, building event models requires specifying a ROI. However, if the ROI is too small, it may miss many items. And if the ROI is too large, it may include too much noise from bagging or customer intervention.
  • FIG. 4 is a diagram illustrating using multiple overlapped ROIs, according to an embodiment of the present invention. By way of illustration, FIG. 4 depicts multiple overlapped ROIs 402 and 404. FIG. 4 also depicts bags of features (which are represented by histograms of visual words here) extracted from all the ROIs 406 and 408, a MIL component 410 and an event model 412.
  • As illustrated in FIG. 4, one or more embodiments of the invention use multiple overlapped ROIs, wherein each item is guaranteed to be in one ROI (that is, at least one ROI corresponds to an event), but the correspondence is missing. Additionally, the techniques described herein can apply MIL to resolve the correspondence problem (that is, identify the missing correspondence) and learn better event models.
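  • Once an event model can score a feature vector, the missing correspondence is resolved at detection time by a simple selection step (a sketch under our own naming; the patent does not prescribe this exact rule): the ROI associated with a detected event is the one whose features the model scores highest.

```python
def resolve_roi(model_score, roi_features):
    """Return (index, score) of the ROI best explaining the event."""
    scores = [model_score(f) for f in roi_features]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best, scores[best]
```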
  • FIG. 5 is a flow diagram illustrating techniques for detecting one or more events, according to an embodiment of the present invention. The events can include events (for example, cashier activity) at a point of sale. For example, the events at a point of sale can include a pickup, a scan and a drop, wherein a pickup includes a cashier picking up an item, a scan includes a cashier at least one of reading the barcode of an item via a scanner and weighing an item, and a drop includes a cashier placing an item on the take-away belt.
  • Step 502 includes using one or more regions of interest on a video sequence to cover a location for one or more events, wherein each event is associated with at least one of the one or more regions of interest. Using regions of interest on a video sequence can include, for example, overlapping one or more regions of interest on a video sequence. Also, the regions of interest can be of one or more shapes as well as one or more sizes.
  • Step 504 includes applying multiple-instance learning to the video sequence to construct one or more location-aware event models. Step 506 includes applying the models to the video sequence to determine the one or more regions of interest that are associated with the one or more events.
  • The techniques depicted in FIG. 5 can also include using a support vector machine (SVM)-based MIL technique to learn event models for a pickup and a drop. Additionally, one or more embodiments of the invention include extracting features (for example, color, edge, motion, etc.) from each region of interest.
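  • Per-ROI feature extraction along these lines can be sketched as follows. The patent names color, edge, and motion features represented as histograms; here each cue is reduced to a fixed-size normalized histogram and the three are concatenated. The bin counts, value ranges, and function names are our assumptions for illustration only.

```python
def histogram(values, bins, lo, hi):
    """Fixed-range histogram, normalized to sum to 1 (empty input -> zeros)."""
    counts = [0] * bins
    for v in values:
        i = min(bins - 1, max(0, int((v - lo) / (hi - lo) * bins)))
        counts[i] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def roi_feature_vector(color_vals, edge_vals, motion_vals, bins=8):
    """Concatenate color, edge, and motion histograms for one ROI."""
    return (histogram(color_vals, bins, 0, 256)     # e.g., intensity values
            + histogram(edge_vals, bins, 0, 1)      # e.g., edge strengths
            + histogram(motion_vals, bins, 0, 50))  # e.g., flow magnitudes
```

These vectors are the instances placed into the bags used for MIL training.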
  • A variety of techniques, utilizing dedicated hardware, general purpose processors, software, or a combination of the foregoing may be employed to implement the present invention. At least one embodiment of the invention can be implemented in the form of a computer product including a computer usable medium with computer usable program code for performing the method steps indicated. Furthermore, at least one embodiment of the invention can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps.
  • At present, it is believed that the preferred implementation will make substantial use of software running on a general-purpose computer or workstation. With reference to FIG. 6, such an implementation might employ, for example, a processor 602, a memory 604, and an input and/or output interface formed, for example, by a display 606 and a keyboard 608. The term “processor” as used herein is intended to include any processing device, such as, for example, one that includes a CPU (central processing unit) and/or other forms of processing circuitry. Further, the term “processor” may refer to more than one individual processor. The term “memory” is intended to include memory associated with a processor or CPU, such as, for example, RAM (random access memory), ROM (read only memory), a fixed memory device (for example, hard drive), a removable memory device (for example, diskette), a flash memory and the like. In addition, the phrase “input and/or output interface” as used herein, is intended to include, for example, one or more mechanisms for inputting data to the processing unit (for example, mouse), and one or more mechanisms for providing results associated with the processing unit (for example, printer). The processor 602, memory 604, and input and/or output interface such as display 606 and keyboard 608 can be interconnected, for example, via bus 610 as part of a data processing unit 612. Suitable interconnections, for example via bus 610, can also be provided to a network interface 614, such as a network card, which can be provided to interface with a computer network, and to a media interface 616, such as a diskette or CD-ROM drive, which can be provided to interface with media 618.
  • Accordingly, computer software including instructions or code for performing the methodologies of the invention, as described herein, may be stored in one or more of the associated memory devices (for example, ROM, fixed or removable memory) and, when ready to be utilized, loaded in part or in whole (for example, into RAM) and executed by a CPU. Such software could include, but is not limited to, firmware, resident software, microcode, and the like.
  • Furthermore, the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium (for example, media 618) providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer usable or computer readable medium can be any apparatus for use by or in connection with the instruction execution system, apparatus, or device.
  • The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid-state memory (for example, memory 604), magnetic tape, a removable computer diskette (for example, media 618), a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read and/or write (CD-R/W) and DVD.
  • A data processing system suitable for storing and/or executing program code will include at least one processor 602 coupled directly or indirectly to memory elements 604 through a system bus 610. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • Input and/or output or I/O devices (including but not limited to keyboards 608, displays 606, pointing devices, and the like) can be coupled to the system either directly (such as via bus 610) or through intervening I/O controllers (omitted for clarity).
  • Network adapters such as network interface 614 may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
  • In any case, it should be understood that the components illustrated herein may be implemented in various forms of hardware, software, or combinations thereof, for example, application specific integrated circuit(s) (ASICs), functional circuitry, one or more appropriately programmed general purpose digital computers with associated memory, and the like. Given the teachings of the invention provided herein, one of ordinary skill in the related art will be able to contemplate other implementations of the components of the invention.
  • At least one embodiment of the invention may provide one or more beneficial effects, such as, for example, using multiple ROIs or sensors to cover all possible locations for events such that each event can be associated with at least one sensor, and applying multiple-instance learning to select one or more appropriate sensors for building event detection models.
  • Although illustrative embodiments of the present invention have been described herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various other changes and modifications may be made by one skilled in the art without departing from the scope or spirit of the invention.

Claims (14)

What is claimed is:
1. A method for detecting one or more events, comprising:
using multiple overlapping regions of interest on a video sequence to cover a respective location for two or more events at a point of sale, wherein the two or more events include a pickup event and a drop event, wherein each event is associated with at least one of the multiple overlapping regions of interest;
applying event models to the video sequence to detect the one or more events and to determine the one or more regions of interest that are associated with the one or more events; and
identifying missing correspondences using multiple overlapping regions of interest.
2. The method of claim 1, wherein the two or more events at the point of sale comprise the pickup event, a scan event, and the drop event, wherein the pickup event comprises a cashier picking up an item, the scan event comprises a cashier at least one of reading the barcode on an item via a scanner and weighing an item, and the drop event comprises a cashier placing an item onto a take-away belt area.
3. The method of claim 1, wherein the multiple overlapping regions of interest comprise one or more shapes.
4. The method of claim 1, wherein the multiple overlapping regions of interest comprise one or more sizes.
5. The method of claim 1, further comprising extracting one or more features from each region of interest.
6. The method of claim 5, wherein the one or more features comprise at least one of color, edge and motion.
7. A computer program product comprising a tangible computer readable recordable storage medium having computer readable program code for detecting one or more events, said computer program product including:
computer readable program code for using multiple overlapping regions of interest on a video sequence to cover a respective location for two or more events at a point of sale, wherein the two or more events include a pickup event and a drop event, wherein each event is associated with at least one of the multiple overlapping regions of interest;
computer readable program code for applying event models to the video sequence to detect the one or more events and to determine the one or more regions of interest that are associated with the one or more events; and
computer readable program code for identifying missing correspondences using multiple overlapping regions of interest.
8. The computer program product of claim 7, wherein the two or more events at the point of sale comprise the pickup event, a scan event, and the drop event, wherein the pickup event comprises a cashier picking up an item, the scan event comprises a cashier at least one of reading the barcode on an item via a scanner and weighing an item, and the drop event comprises a cashier placing an item onto a take-away belt area.
9. The computer program product of claim 7, further comprising computer readable program code for extracting one or more features from each region of interest.
10. The computer program product of claim 9, wherein the one or more features comprise at least one of color, edge and motion.
11. The computer program product of claim 7, wherein the multiple overlapping regions of interest comprise one or more shapes and one or more sizes.
12. A system for detecting one or more events, comprising:
a memory; and
at least one processor coupled to said memory and operative to:
use multiple overlapping regions of interest on a video sequence to cover a respective location for two or more events at a point of sale, wherein the two or more events include a pickup event and a drop event, wherein each event is associated with at least one of the multiple overlapping regions of interest;
apply event models to the video sequence to detect the one or more events and to determine the one or more regions of interest that are associated with the one or more events; and
identify missing correspondences using multiple overlapping regions of interest.
13. The system of claim 12, wherein the two or more events at the point of sale comprise the pickup event, a scan event, and the drop event, wherein the pickup event comprises a cashier picking up an item, the scan event comprises a cashier at least one of reading the barcode on an item via a scanner and weighing an item, and the drop event comprises a cashier placing an item onto a take-away belt area.
14. The system of claim 12, wherein the at least one processor coupled to said memory is further operative to extract one or more features from each region of interest.
US14/133,206 2008-11-29 2013-12-18 Location-aware event detection Abandoned US20140105459A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/133,206 US20140105459A1 (en) 2008-11-29 2013-12-18 Location-aware event detection

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12/325,178 US8253831B2 (en) 2008-11-29 2008-11-29 Location-aware event detection
US13/464,328 US8638380B2 (en) 2008-11-29 2012-05-04 Location-aware event detection
US14/133,206 US20140105459A1 (en) 2008-11-29 2013-12-18 Location-aware event detection

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/464,328 Continuation US8638380B2 (en) 2008-11-29 2012-05-04 Location-aware event detection

Publications (1)

Publication Number Publication Date
US20140105459A1 true US20140105459A1 (en) 2014-04-17

Family

ID=42222471

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/325,178 Expired - Fee Related US8253831B2 (en) 2008-11-29 2008-11-29 Location-aware event detection
US13/464,328 Active US8638380B2 (en) 2008-11-29 2012-05-04 Location-aware event detection
US14/133,206 Abandoned US20140105459A1 (en) 2008-11-29 2013-12-18 Location-aware event detection

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US12/325,178 Expired - Fee Related US8253831B2 (en) 2008-11-29 2008-11-29 Location-aware event detection
US13/464,328 Active US8638380B2 (en) 2008-11-29 2012-05-04 Location-aware event detection

Country Status (1)

Country Link
US (3) US8253831B2 (en)


Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9299229B2 (en) * 2008-10-31 2016-03-29 Toshiba Global Commerce Solutions Holdings Corporation Detecting primitive events at checkout
US8553989B1 (en) * 2010-04-27 2013-10-08 Hrl Laboratories, Llc Three-dimensional (3D) object recognition system using region of interest geometric features
US9665767B2 (en) * 2011-02-28 2017-05-30 Aic Innovations Group, Inc. Method and apparatus for pattern tracking
US8682032B2 (en) * 2011-08-19 2014-03-25 International Business Machines Corporation Event detection through pattern discovery
IES86318B2 (en) 2012-08-15 2013-12-04 Everseen Intelligent retail manager
US11170331B2 (en) 2012-08-15 2021-11-09 Everseen Limited Virtual management system data processing unit and method with rules and alerts
US10839227B2 (en) * 2012-08-29 2020-11-17 Conduent Business Services, Llc Queue group leader identification
DE102013012285A1 (en) * 2013-07-24 2015-01-29 Giesecke & Devrient Gmbh Method and device for value document processing
US10313597B2 (en) 2015-08-11 2019-06-04 Magna Electronics Inc. Vehicle vision system camera with adjustable focus
JP6907079B2 (en) * 2017-09-14 2021-07-21 キヤノン株式会社 Imaging device, control method and program of imaging device
US10521704B2 (en) * 2017-11-28 2019-12-31 Motorola Solutions, Inc. Method and apparatus for distributed edge learning
CN111191078A (en) * 2020-01-08 2020-05-22 腾讯科技(深圳)有限公司 Video information processing method and device based on video information processing model
US11800222B2 (en) 2021-01-11 2023-10-24 Magna Electronics Inc. Vehicular camera with focus drift mitigation system
US11800056B2 (en) 2021-02-11 2023-10-24 Logitech Europe S.A. Smart webcam system
US11800048B2 (en) 2021-02-24 2023-10-24 Logitech Europe S.A. Image generating system with background replacement or modification capabilities

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4991008A (en) * 1988-12-01 1991-02-05 Intec Video Systems, Inc. Automatic transaction surveillance system
IL113434A0 (en) * 1994-04-25 1995-07-31 Katz Barry Surveillance system and method for asynchronously recording digital data with respect to video data
DE69635101T2 (en) * 1995-11-01 2006-06-01 Canon K.K. Method for extracting objects and image recording apparatus using this method
US5745036A (en) * 1996-09-12 1998-04-28 Checkpoint Systems, Inc. Electronic article security system for store which uses intelligent security tags and transaction data
US7319479B1 (en) * 2000-09-22 2008-01-15 Brickstream Corporation System and method for multi-camera linking and analysis
US20050162515A1 (en) * 2000-10-24 2005-07-28 Objectvideo, Inc. Video surveillance system
US20050146605A1 (en) * 2000-10-24 2005-07-07 Lipton Alan J. Video surveillance system employing video primitives
US7594609B2 (en) * 2003-11-13 2009-09-29 Metrologic Instruments, Inc. Automatic digital video image capture and processing system supporting image-processing based code symbol reading during a pass-through mode of system operation at a retail point of sale (POS) station
WO2002045434A1 (en) 2000-12-01 2002-06-06 Vigilos, Inc. System and method for processing video data utilizing motion detection and subdivided video fields
US6823011B2 (en) 2001-11-19 2004-11-23 Mitsubishi Electric Research Laboratories, Inc. Unusual event detection using motion activity descriptors
US7688349B2 (en) * 2001-12-07 2010-03-30 International Business Machines Corporation Method of detecting and tracking groups of people
US20030174869A1 (en) * 2002-03-12 2003-09-18 Suarez Anthony P. Image processing apparatus, image processing method, program and recording medium
US6847393B2 (en) * 2002-04-19 2005-01-25 Wren Technology Group Method and system for monitoring point of sale exceptions
CA2436319C (en) * 2002-08-02 2014-05-13 Calin A. Sandru Payment validation network
US7194114B2 (en) * 2002-10-07 2007-03-20 Carnegie Mellon University Object finder for two-dimensional images, and system for determining a set of sub-classifiers composing an object finder
EP1563686B1 (en) * 2002-11-12 2010-01-06 Intellivid Corporation Method and system for tracking and behavioral monitoring of multiple objects moving through multiple fields-of-view
US7480412B2 (en) * 2003-12-16 2009-01-20 Siemens Medical Solutions Usa, Inc. Toboggan-based shape characterization
US7246745B2 (en) * 2004-02-27 2007-07-24 Evolution Robotics Retail, Inc. Method of merchandising for checkout lanes
US7100824B2 (en) * 2004-02-27 2006-09-05 Evolution Robotics, Inc. System and methods for merchandise checkout
US7080778B1 (en) * 2004-07-26 2006-07-25 Advermotion, Inc. Moveable object accountability system
US7756342B2 (en) * 2004-09-20 2010-07-13 The United States Of America As Represented By The Secretary Of The Navy Method for image data processing
RU2323475C2 (en) * 2004-11-12 2008-04-27 Общество с ограниченной ответственностью "Центр Нейросетевых Технологий - Интеллектуальные Системы Безопасности" (ООО "ИСС") Method (variants) and device (variants) for automated detection of intentional or incidental disruptions of technological procedure by operator
JP5054670B2 (en) 2005-03-29 2012-10-24 ストップリフト インコーポレイテッド Method and apparatus for detecting suspicious behavior using video analysis
US9036028B2 (en) * 2005-09-02 2015-05-19 Sensormatic Electronics, LLC Object tracking and alerts
US7925536B2 (en) * 2006-05-25 2011-04-12 Objectvideo, Inc. Intelligent video verification of point of sale (POS) transactions
US7646745B2 (en) * 2006-06-30 2010-01-12 T-Mobile Usa, Inc. System and method for operating a mobile device, such as providing an out of box connection system for UMA type mobile devices
US7822252B2 (en) * 2006-11-28 2010-10-26 Siemens Medical Solutions Usa, Inc. Method of multiple instance learning and classification with correlations in object detection
US8244012B2 (en) * 2007-02-05 2012-08-14 Siemens Medical Solutions Usa, Inc. Computer aided detection of pulmonary embolism with local characteristic features in CT angiography
US7957565B1 (en) * 2007-04-05 2011-06-07 Videomining Corporation Method and system for recognizing employees in a physical space based on automatic behavior analysis
EP2217982A4 (en) * 2007-11-26 2011-05-04 Proiam Llc Enrollment apparatus, system, and method
US7448542B1 (en) * 2008-05-05 2008-11-11 International Business Machines Corporation Method for detecting a non-scan at a retail checkout station
US20090290802A1 (en) * 2008-05-22 2009-11-26 Microsoft Corporation Concurrent multiple-instance learning for image categorization
US8478111B2 (en) * 2008-10-03 2013-07-02 3M Innovative Properties Company Systems and methods for optimizing a scene
US7962365B2 (en) * 2008-10-31 2011-06-14 International Business Machines Corporation Using detailed process information at a point of sale
US8459545B1 (en) * 2012-03-29 2013-06-11 Cisco Technology, Inc. Image-based point-of-sale mobile settlement system

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5494136A (en) * 1993-08-05 1996-02-27 Humble; David R. Integrated automated retail checkout terminal
US5609223A (en) * 1994-05-30 1997-03-11 Kabushiki Kaisha Tec Checkout system with automatic registration of articles by bar code or physical feature recognition
US5965841A (en) * 1996-03-19 1999-10-12 Ngk Insulators, Ltd. Thermoelectric conversion material and a process for producing the same
US6236736B1 (en) * 1997-02-07 2001-05-22 Ncr Corporation Method and apparatus for detecting movement patterns at a self-service checkout terminal
US5965861A (en) * 1997-02-07 1999-10-12 Ncr Corporation Method and apparatus for enhancing security in a self-service checkout terminal
US6105866A (en) * 1997-12-15 2000-08-22 Ncr Corporation Method and apparatus for reducing shrinkage during operation of a self-service checkout terminal
US6201473B1 (en) * 1999-04-23 2001-03-13 Sensormatic Electronics Corporation Surveillance system for observing shopping carts
US7416118B2 (en) * 2004-05-14 2008-08-26 Digital Site Management, Llc Point-of-sale transaction recording system
US20060243798A1 (en) * 2004-06-21 2006-11-02 Malay Kundu Method and apparatus for detecting suspicious activity using video analysis
US20120127316A1 (en) * 2004-06-21 2012-05-24 Malay Kundu Method and apparatus for detecting suspicious activity using video analysis
US20070058040A1 (en) * 2005-09-09 2007-03-15 Objectvideo, Inc. Video surveillance using spatial-temporal motion analysis
US20080218591A1 (en) * 2007-03-06 2008-09-11 Kurt Heier Event detection based on video metadata
US20110050848A1 (en) * 2007-06-29 2011-03-03 Janos Rohaly Synchronized views of video data and three-dimensional model data

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018036454A1 (en) * 2016-08-26 2018-03-01 Huawei Technologies Co., Ltd. Method and apparatus for annotating a video stream comprising a sequence of frames
US10140508B2 (en) 2016-08-26 2018-11-27 Huawei Technologies Co. Ltd. Method and apparatus for annotating a video stream comprising a sequence of frames
CN109644255A (en) * 2016-08-26 2019-04-16 华为技术有限公司 Mark includes the method and apparatus of the video flowing of a framing

Also Published As

Publication number Publication date
US8638380B2 (en) 2014-01-28
US20100134625A1 (en) 2010-06-03
US8253831B2 (en) 2012-08-28
US20120218414A1 (en) 2012-08-30

Similar Documents

Publication Publication Date Title
US8638380B2 (en) Location-aware event detection
US9299229B2 (en) Detecting primitive events at checkout
US8345101B2 (en) Automatically calibrating regions of interest for video surveillance
Santra et al. A comprehensive survey on computer vision based approaches for automatic identification of products in retail store
US9262832B2 (en) Cart inspection for suspicious items
US11494573B2 (en) Self-checkout device to which hybrid product recognition technology is applied
US7962365B2 (en) Using detailed process information at a point of sale
US8165349B2 (en) Analyzing repetitive sequential events
US8681232B2 (en) Visual content-aware automatic camera adjustment
Trinh et al. Detecting human activities in retail surveillance using hierarchical finite state machine
Tonioni et al. Product recognition in store shelves as a sub-graph isomorphism problem
Rosado et al. Supervised learning for out-of-stock detection in panoramas of retail shelves
US11354549B2 (en) Method and system for region proposal based object recognition for estimating planogram compliance
US8612286B2 (en) Creating a training tool
WO2021072699A1 (en) Irregular scan detection for retail systems
CN109948515B (en) Object class identification method and device
US11100303B2 (en) Method, system and apparatus for auxiliary label detection and association
Bartl et al. PersonGONE: Image inpainting for automated checkout solution
US20210166028A1 (en) Automated product recognition, analysis and management
US20230130674A1 (en) Computer-readable recording medium storing learning program, learning method, and information processing apparatus
US20230297990A1 (en) Bi-optic object classification system
Jurj et al. Mobile application for receipt fraud detection based on optical character recognition
US20230169452A1 (en) System Configuration for Learning and Recognizing Packaging Associated with a Product
Pan et al. Soft margin keyframe comparison: Enhancing precision of fraud detection in retail surveillance
US20240020857A1 (en) System and method for identifying a second item based on an association with a first item

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION