EP2763116A1 - Fall detection system and method for detecting a fall of a monitored person - Google Patents

Fall detection system and method for detecting a fall of a monitored person

Info

Publication number
EP2763116A1
EP2763116A1 (application EP14152990.9A)
Authority
EP
European Patent Office
Prior art keywords
fall
image
detection system
monitored person
processing unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP14152990.9A
Other languages
German (de)
French (fr)
Other versions
EP2763116B1 (en)
Inventor
Sylvie De Smet
Natasha Dewaele
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Supair Cvba
Original Assignee
FamilyEye bvba
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FamilyEye bvba filed Critical FamilyEye bvba
Publication of EP2763116A1 publication Critical patent/EP2763116A1/en
Application granted granted Critical
Publication of EP2763116B1 publication Critical patent/EP2763116B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02: Alarms for ensuring the safety of persons
    • G08B21/04: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438: Sensor means for detecting
    • G08B21/0476: Cameras to detect unsafe condition, e.g. video cameras

Definitions

  • the invention generally relates to a fall detection system and a method for detecting a fall of a monitored person.
  • the invention relates more specifically to a fall detection system for detecting a fall of a monitored person, wherein said fall detection system comprises at least one infrared source and at least one depth infrared sensor, wherein said infrared source is provided to irradiate said monitored person with a predetermined infrared dot pattern which is detectable by said depth infrared sensor in order to generate an image of said infrared dot pattern.
  • the invention furthermore relates more specifically to a method for detecting a fall of a monitored person, wherein said method comprises the following steps:
  • Fall detection systems are mainly used in the care of the elderly. Fall incidents occur particularly often in the case of elderly people. In home care, one in three elderly people falls at least once a year; in rest homes, around fifty to seventy per cent of the elderly people fall at least once a year. This can result in serious physical injuries in elderly people, such as hip fractures. Moreover, if an elderly person falls, it is crucial that this is quickly noticed, given that elderly people who remain recumbent for too long, sometimes even for an entire night, become cold or dehydrated, which is highly traumatising for this elderly person. Above all, elderly people living alone face a greater risk of falling due to health disorders.
  • Fall detection systems of this type also enable elderly people to live independently for a longer period.
  • a first type of fall detection systems concerns portable sensor-based appliances. Examples thereof are the Zenio system from Vitaltronics or the monitoring and alarm system as described in CN 201853320. Most of these systems use accelerometers and gyroscopes which detect abnormal accelerations and trigger an alarm.
  • a first major disadvantage of systems of this type is that a high risk exists that elderly people forget to don these appliances or forget to recharge the batteries or replace flat batteries in the appliances.
  • a second major disadvantage of portable sensor-based appliances of this type is that these appliances are uncomfortable to wear, which discourages elderly people from using these appliances.
  • a second type of fall detection systems concerns video-based fall detection systems.
  • in video-based fall detection, human activity is recorded in a video which is further analysed using image processing and computer image analysis systems in order to detect a fall and generate an alarm.
  • a diversity of video-based fall detection systems of this type which use a diversity of fall detection technologies have already been described in both the technical literature and the patent literature.
  • the most frequently used and simplest technology for detecting a fall is the aspect ratio of the enclosing rectangle which is drawn around the person who falls, also referred to as the 'bounding box'.
  • the aspect ratio is the ratio of the width and the height of the enclosing rectangle around the object.
  • a standing person has a low value, whereas a recumbent person provides a wide, flat rectangle, which corresponds to a high aspect ratio.
  • a fall detection method which uses this fall detection technology is described for detecting persons with a restricted autonomy.
  • Image-recording means such as a camera are provided here to capture consecutive images of the same surface.
  • a movement is detected which represents the shape of a person.
  • a rectangular contour is localized around the shape which represents the movement.
  • the reduction in the height of the contour is detected, said reduction being representative of a fall of the person.
  • a first disadvantage of the fall detection technology which is described in this French patent is that a person is detected only at the time when he moves in his environment.
  • a further disadvantage of the fall detection technology as described therein is that, given that the reduction in the height of a rectangular contour which is localized around a person therein is representative of a fall, it is not suitable for detecting every type of fall.
  • a fall detection technology of this type is, for example, not always accurate for detecting a fall of a person who falls over partially somewhere, for example over a chair, or for detecting a fall of a wheelchair patient.
  • a different fall detection technology uses an enclosing ellipse which is drawn around a part of the falling person or the complete falling person. If this ellipse moves at a specific speed, a fall is registered.
  • the disadvantage of the fall detection technology as described in the aforementioned article is that no automatic tracking of the head can take place when a person to be monitored enters a room in which the fall detection system has been installed. This is still always carried out manually. The further tracking can take place only at the time when the ellipse has been placed around the head of the person.
  • a major disadvantage of using the fall angle is that this fall detection algorithm does not work when the person falls in the direction of the camera.
  • HMM: Hidden Markov Model
  • IR: infrared
  • a fall detection system is described in WO 2012/119903 which uses passive optical sensors, including an infrared camera. Images of a monitored person are captured continuously using a number of passive optical sensors. A server containing logic decomposes the images originating from these passive optical sensors into geometric primitives and thus determines by means of algorithms whether or not a fall or a different specific emergency situation occurs.
  • the fall detection algorithms are defined in advance and form part of a library of activated fall detection algorithms.
  • a person is removed from the background of the image and is vectorized and then disassembled into an image of a person in geometric primitives. The position of this person in space is then determined by means of the server or in the passive optical sensor.
  • the fall detection uses the dynamics of a fall.
  • This Kinect comprises three types of sensors, i.e. a standard camera, a depth infrared camera and a microphone.
  • the depth infrared camera detects points projected by an infrared source, more specifically a laser, and converts these automatically into a depth map.
  • the cameras are calibrated in such a way that the pixels of the depth map match the pixels in the images of the standard camera.
  • the Kinect SDK software automatically detects the 3D location of 21 joints of two persons. Moreover, the floor surface is automatically detected. A fall of a monitored person is detected using two algorithms, i.e.
  • the disadvantage of this fall detection system is that the floor surface is used as a reference during the position algorithm, as a result of which, for example, a fall of a person sitting on a bed or on a table is not detected, given that, in most cases, this person will not fall to the floor.
  • a further disadvantage is that voice recognition is used in order to validate the fall. Consequently, the person who has fallen must remain capable of responding audibly, which is not always possible, for example after suffering a stroke.
  • a fall detection system for detecting a fall of a person, wherein said fall detection system comprises at least one infrared source and a depth infrared sensor, wherein said infrared source is provided to irradiate said person with a predetermined infrared dot pattern which is detectable by said depth infrared sensor in order to generate an image of said infrared dot pattern, and wherein said fall detection system is provided with a control unit which:
  • a depth infrared sensor is an infrared sensor which is provided with a transmitter and sensors which can measure the distance between the infrared sensor and objects according to the 'LightCoding' or 'Structured Light' principle.
  • the objects are built up in layers on the Z-axis and a depth map of the objects is produced.
  • in 'LightCoding', an infrared dot pattern is converted into a depth map through pattern recognition on the infrared dots and triangulation between the infrared source and the depth infrared sensor.
  • 'Structured Light' is the process of projecting a known pattern of pixels (usually grids or horizontal bars) onto a scene. The manner in which these patterns deform when they strike surfaces allows an image analysis system to calculate the depth and the surface information of the objects in the image.
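As a rough illustration of the triangulation step, the depth of each matched dot can be recovered from its disparity using the standard structured-light relation; the focal length and baseline values below are hypothetical, not taken from the text:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate the depth (in metres) of one projected infrared dot.

    focal_px:     sensor focal length expressed in pixels (hypothetical value)
    baseline_m:   distance between the infrared source and the depth sensor
    disparity_px: observed pixel shift of the dot relative to a stored
                  reference pattern at a known depth
    """
    if disparity_px <= 0:
        # A dot that cannot be matched yields no depth measurement.
        raise ValueError("dot not matched against the reference pattern")
    return focal_px * baseline_m / disparity_px

# Example: with an assumed 580 px focal length and 7.5 cm baseline,
# a dot shifted by 29 px lies 1.5 m from the sensor.
depth_m = depth_from_disparity(580.0, 0.075, 29.0)
```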
  • a fall detection system of this type according to the invention has the major advantage that the fall of a person from any position of this person can be detected, from a standing, seated or recumbent position. Moreover, given that identification points which are generated in the images and which are located in the area of specific parts of the body of a person are used in this fall detection system, no ghost images are generated in this fall detection system. A fall detection system of this type according to the invention is thus very reliable.
  • a further advantage of the fall detection system according to the invention is that this is a relatively simple system and complex calculation formulae, comparative tables and the like do not need to be used herein.
  • said image-processing unit is configured to generate at least one identification point which is located in the area of the hip centre of said monitored person on the basis of said infrared dot pattern and to add it to said image received from said infrared sensor, wherein said image-processing unit is configured to detect said monitored person in said image from the depth infrared sensor if said identification points which are located in the area of the head and the hip centre of said monitored person have been generated and have been added to said image received from said depth infrared sensor.
  • a person who enters a room in which the fall detection system according to the invention has been installed is detected as being a person if an identification point in the area of the head of the person and in the area of the hip centre of the person is added to the image received from the infrared sensor by the image-processing unit.
  • said fall-processing unit is configured to adjust said predetermined minimum threshold value according to the distance between the head of said monitored person and said depth infrared sensor.
  • said minimum threshold value is linearly proportional to said distance between the head of said monitored person and said depth infrared sensor.
  • said extent of change in said Y-position between said first and second image is expressed in the number of pixels per second.
  • said images comprise a well-defined number of pixels on the horizontal X-axis and the vertical Y-axis, and a well-defined resolution on the vertical Y-axis, and said minimum threshold value is linearly proportional to:
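The distance-dependent minimum threshold can be sketched as a simple linear relation; the calibration constant k below is a hypothetical value, chosen so that a head 2 m from the sensor yields the 600 pixels/s figure used in the worked example later in the text:

```python
def min_threshold_px_per_s(distance_m, k=300.0):
    """Minimum Y-drop threshold in pixels per second.

    Per the text, the threshold is linearly proportional to the
    distance d between the head of the monitored person and the depth
    infrared sensor. k is a hypothetical calibration constant that
    would in practice also absorb the image's pixel counts on the X-
    and Y-axes and the vertical resolution.
    """
    return k * distance_m

# Example: at an assumed 2 m head-to-sensor distance, the threshold
# is 600 pixels/s, i.e. 20 pixels between two consecutive 30 fps images.
threshold = min_threshold_px_per_s(2.0)
```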
  • said control unit comprises an alarm-processing unit which is configured to automatically alert one or more external persons at the time when a said fall of said monitored person is detected.
  • the fall is registered by the fall-processing unit and the alarm-processing unit ensures that the necessary warnings are automatically sent to one or more external persons.
  • said alarm-processing unit is configured to automatically set up a connection to an external appliance which is located with said one or more external persons at the time when a said fall of said monitored person is detected.
  • said fall detection system comprises a colour camera which is configured to capture colour images of said monitored person at different times and said external appliance comprises a screen, and said alarm-processing unit is configured to send a said colour image of said monitored person to said external appliance at the time of said fall.
  • said alarm-processing unit is configured to allow said external person to visually observe said monitored person via livestream on said screen.
  • Said fall detection system preferably comprises a microphone, wherein said alarm-processing unit is configured to allow said one or more external persons to make audible contact with said monitored person.
  • said fall detection system comprises a social contact unit which is configured to allow said monitored person to have visual and/or audible contact with said one or more external persons.
  • said social contact unit is configured to allow said monitored person and said one or more external persons to have visual and/or audible contact with one another via a screen which is located, on the one hand, with said monitored person and, on the other hand, with said one or more external persons.
  • a method for detecting a fall of a monitored person comprising the following steps:
  • said method uses a fall detection system according to the invention as described above.
  • a fall detection system (1) for detecting a fall of a monitored person (10), as shown schematically in Figure 1, comprises at least one infrared source (not shown in the figures), preferably an infrared laser light source, which has the advantage that it emits invisible light.
  • This infrared source irradiates the environment with a known infrared dot pattern (not shown in the figures).
  • This infrared dot pattern is detectable by at least one depth infrared sensor (not shown in the figures), preferably a 3D depth infrared sensor.
  • This infrared dot pattern preferably consists of a semi-random, constant pattern of small dots which are projected in front of the depth infrared sensor onto the environment.
  • This infrared dot pattern is analysed by the depth infrared sensor and converted into a depth map. In this way, the depth infrared sensor generates an image of the infrared dot pattern.
  • This infrared source and the depth infrared sensor are preferably accommodated in the same housing (2).
  • the fall detection system (1) further comprises a control unit which comprises an image-processing unit which is configured to generate identification points (10a-10t) on the monitored person (10) on the basis of the infrared dot pattern and to add them to the image received from the depth infrared sensor.
  • the image-processing unit is configured to generate at least one identification point (10a) in the area of the head of the monitored person (10) (see Figure 2 ).
  • this image-processing unit is also configured to generate at least one identification point (10b) in the area of the hip centre of the monitored person (10) (see Figure 2 ) and to add it to the image received from the depth infrared sensor.
  • the image-processing unit is furthermore preferably designed to generate identification points in the area of a number of other bones and joints of the monitored person (10).
  • identification points (10a,10b) in the area of the head and the hip centre of the monitored person (10) are preferably generated:
  • the aforementioned identification points (10a-10t) are preferably generated using a software program developed specifically for this purpose.
  • a typical example thereof is the Kinect SDK software program which was developed by Microsoft and in which the identification points (10a-10t) described above are formed by "skeleton tracking", as described, inter alia, in US 2010/0199228.
  • the control unit of the fall detection system (1) furthermore comprises a fall-processing unit (not shown in the figures) which is responsible for the fall detection.
  • the fall-processing unit is configured to receive at least a first image (3a) and a second image (3b) of the infrared dot pattern from the image-processing unit, each captured at a different time within a predefined time interval. Thirty images per second are preferably captured by the depth infrared sensor and are processed by the image-processing unit, wherein the aforementioned identification points are generated by this image-processing unit and are added to these images.
  • each of the identification points (10a-10t) on the images has a vertical Y-position.
  • the fall-processing unit is furthermore provided to compare the vertical Y-positions of the identification point (10a) which is located in the area of the head of the monitored person (10) across the different images received from the image-processing unit.
  • the infrared source which is installed in the housing (2) will irradiate the monitored person (10) with the known infrared dot pattern.
  • This infrared dot pattern will then be detected by the depth infrared sensor which, as described above, will convert this infrared dot pattern into a depth map.
  • This image-processing unit will then generate the aforementioned identification points (10a-10t) and add them to the image received from the depth infrared sensor.
  • once the identification points (10a, 10b) which are located in the area of the head and the hip centre of the monitored person (10) have been generated, the monitored person (10) is detected.
  • the aforementioned fall-processing unit will compare the images received from the image-processing unit with one another.
  • An image has a well-defined number of pixels on the horizontal X-axis, a well-defined number of pixels on the vertical Y-axis and a well-defined resolution on the vertical Y-axis. More specifically, the different vertical Y-positions of the identification point (10a) which is located in the area of the head of the monitored person (10) are compared with one another in the different images by the fall-processing unit.
  • this minimum threshold value is linearly proportional to the distance (d) between the head of the monitored person (10) and the depth infrared sensor (see Figure 1 a) .
  • the fall-processing unit is preferably configured to adjust this predetermined minimum threshold value according to this distance (d).
  • this predetermined minimum threshold value is linearly proportional to the number of pixels on the horizontal X-axis and the vertical Y-axis and the resolution on the vertical Y-axis.
  • the minimum threshold value is therefore 20 pixels between the two images (3a, 3b) or 600 pixels/s (see Figure 4 ).
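The comparison of head Y-positions described above can be sketched as follows, assuming an image origin at the top-left (so that a fall increases the Y-coordinate) and the 30 images per second and 600 pixels/s threshold mentioned in the text:

```python
FPS = 30  # images per second captured by the depth infrared sensor

def detect_fall(head_y_positions, threshold_px_per_s=600.0):
    """Return True if the head identification point (10a) drops faster
    than the minimum threshold between any two consecutive images.

    head_y_positions: vertical Y-position (in pixels, origin at the
    top of the image) of the head identification point in consecutive
    images received from the image-processing unit.
    """
    for y_prev, y_next in zip(head_y_positions, head_y_positions[1:]):
        # Convert the per-frame drop into pixels per second.
        drop_px_per_s = (y_next - y_prev) * FPS
        if drop_px_per_s >= threshold_px_per_s:
            return True
    return False

# A drop of 25 pixels between two consecutive images (750 pixels/s)
# exceeds the 600 pixels/s threshold and registers as a fall; a slow
# 5 pixels-per-frame descent (150 pixels/s) does not.
```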
  • the control unit of the fall detection system (1) furthermore preferably comprises an alarm-processing unit (not shown in the figures). The fall detection system (1) is extremely suitable for being installed in one or more rooms in which a person in need of care, usually an elderly person, moves.
  • One or more infrared sources and infrared sensors as described above are installed in these rooms.
  • This alarm-processing unit is configured to automatically alert one or more external persons, such as a family member, neighbour, informal care person, nurse/carer or person in an emergency services centre (not shown in the figures), at the time when a fall of the monitored person (10) is detected, without any manual intervention on the part of the monitored person (10).
  • This alarm-processing unit is configured to automatically set up a connection to one or more external appliances, for example a smartphone, TV, computer, etc., which are located with the one or more external persons and which are provided with an app on a platform such as iOS, Microsoft Windows or Android.
  • This automatic signalling preferably consists of a photo or an image of the situation in the event of the fall, which is forwarded to one or more external appliances provided with a screen.
  • the fall detection system (1) is preferably provided with one or more colour cameras, more preferably RGB colour cameras.
  • the fall detection system (1) may comprise a loudspeaker in order to allow one or more external persons to make audible contact with the monitored person (10).
  • a microphone is preferably present alongside the monitored person (10).
  • the alarm-processing unit is furthermore preferably configured to allow one or more external persons to observe the monitored person (10) via livestream on an external appliance provided with a screen, using the aforementioned colour camera.
  • the fall detection system (1) preferably further comprises a social contact unit which is configured to allow the one or more external persons to make visual and audible contact with the monitored person (10).
  • the aforementioned one or more colour cameras themselves must be connected to a screen (computer, tablet, TV, etc.).
  • This social contact unit can be configured in such a way that the monitored person (10) can make visual and/or audible external contact by pressing once on the photo of the one or more external persons, who are preferably predetermined, resulting in a cost saving in terms of telephony.
  • the method according to the invention will typically be computer-implemented on a system or platform with a client-server architecture.
  • the images, reports and tasks are retained centrally or are distributed among one or more servers. Users will typically have access to their task lists and will consult images which are stored in the system via a client device.
  • the control unit or data processing unit or a computer device which is controlled according to a method according to the invention may comprise a workstation, a server, a laptop, a desktop, a portable control system such as a handheld device or mobile device, a tablet computer or any other computer device as known to the person skilled in the art.
  • the control unit or data processing unit may comprise a bus or a network for connectivity, directly or indirectly, between different components, a memory or database, one or more processors, input/output ports, a power supply and the like.
  • the bus or the network may comprise one or more buses, such as an address bus, a data bus or a combination thereof, or one or more network links. It should furthermore be clear to a person skilled in the art, depending on the intended application and the use of a specific embodiment, that a multiplicity of these components can be implemented in a single device.
  • the control unit or data processing unit may comprise a variety of computer-readable media or may interact therewith.
  • computer-readable media may comprise a Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or any other memory technology, CD-ROM, Digital Versatile Disks (DVD) or any other type of optical or holographic media, magnetic cassettes, magnetic tapes, magnetic disk storage or any other magnetic storage devices which may be used to encode information and which can be accessed by the control unit or data processing unit of the computer device.
  • the memory may comprise computer storage media in the form of volatile and/or non-volatile memories.
  • the memory may be removable, non-removable or any combination thereof.
  • Examples of hardware devices are devices such as HDDs (hard disk drives), SSDs (solid-state drives), ODDs (optical disk drives), and the like.
  • the control unit or data processing unit or the computer device may comprise one or more processors which read data from components such as memories, the different I/O components and the like.
  • the I/O ports may allow the control unit or data processing unit of the computer device to be logically coupled to other appliances such as I/O components.
  • Some of the I/O components may be built into a computer device. Examples of I/O components of this type comprise a microphone, a joystick, a recording device, a game pad, a dish antenna, a scanner, a printer, a wireless device, a network device and the like.

Abstract

The invention relates to a fall detection system (1) for detecting a fall of a monitored person (10), wherein an infrared source is provided to irradiate the person (10) with an infrared dot pattern which is detectable by a depth infrared sensor in order to generate an image of the infrared dot pattern, wherein an image-processing unit is provided to generate at least one identification point (10a) which is located in the area of the head on the basis of the infrared dot pattern and add it to the image received from the infrared sensor, and a fall-processing unit is provided to detect the fall. The invention furthermore relates to a method for detecting a fall of this type.

Description

  • The invention generally relates to a fall detection system and a method for detecting a fall of a monitored person.
  • The invention relates more specifically to a fall detection system for detecting a fall of a monitored person, wherein said fall detection system comprises at least one infrared source and at least one depth infrared sensor, wherein said infrared source is provided to irradiate said monitored person with a predetermined infrared dot pattern which is detectable by said depth infrared sensor in order to generate an image of said infrared dot pattern.
  • The invention furthermore relates more specifically to a method for detecting a fall of a monitored person, wherein said method comprises the following steps:
    • the irradiation of said monitored person with a predetermined infrared dot pattern which is detectable by at least one depth infrared sensor using at least one infrared source; and
    • the generation of an image of said infrared dot pattern.
  • Fall detection systems are mainly used in the care of the elderly. Fall incidents occur particularly often in the case of elderly people. In home care, one in three elderly people falls at least once a year; in rest homes, around fifty to seventy per cent of the elderly people fall at least once a year. This can result in serious physical injuries in elderly people, such as hip fractures. Moreover, if an elderly person falls, it is crucial that this is quickly noticed, given that elderly people who remain recumbent for too long, sometimes even for an entire night, become cold or dehydrated, which is highly traumatising for this elderly person. Above all, elderly people living alone face a greater risk of falling due to health disorders.
  • Consequently, a need exists for fall protection systems which are provided to detect and signal a fall of a person. Fall detection systems of this type also enable elderly people to live independently for a longer period.
  • Various types of fall detection systems are already known to date.
  • A first type of fall detection systems concerns portable sensor-based appliances. Examples thereof are the Zenio system from Vitaltronics or the monitoring and alarm system as described in CN 201853320 . Most of these systems use accelerometers and gyroscopes which detect abnormal accelerations and trigger an alarm. However, a first major disadvantage of systems of this type is that a high risk exists that elderly people forget to don these appliances or forget to recharge the batteries or replace flat batteries in the appliances. A second major disadvantage of portable sensor-based appliances of this type is that these appliances are uncomfortable to wear, which discourages elderly people from using these appliances.
  • A second type of fall detection systems concerns video-based fall detection systems. In video-based fall detection, human activity is recorded in a video which is further analysed using image processing and computer image analysis systems in order to detect a fall and generate an alarm. A diversity of video-based fall detection systems of this type which use a diversity of fall detection technologies have already been described in both the technical literature and the patent literature.
  • The most frequently used and simplest technology for detecting a fall is the aspect ratio of the enclosing rectangle which is drawn around the person who falls, also referred to as the 'bounding box'. The aspect ratio is the ratio of the width and the height of the enclosing rectangle around the object. A standing person has a low value, whereas a recumbent person provides a wide, flat rectangle, which corresponds to a high aspect ratio.
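A minimal sketch of this bounding-box test; the decision threshold of 1.0 is an illustrative assumption, not a value taken from the text:

```python
def aspect_ratio(width_px, height_px):
    """Width divided by height of the bounding box drawn around the person."""
    return width_px / height_px

def is_fall_posture(width_px, height_px, threshold=1.0):
    # A standing person yields a tall, narrow box (low aspect ratio);
    # a recumbent person yields a wide, flat box (high aspect ratio).
    # The 1.0 threshold is hypothetical; real systems tune it.
    return aspect_ratio(width_px, height_px) > threshold

# A 180 px wide, 60 px tall box (ratio 3.0) suggests a recumbent
# person; a 60 px wide, 180 px tall box (ratio ~0.33) suggests a
# standing person.
```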
  • In FR 2 870 378, a fall detection method which uses this fall detection technology is described for detecting persons with a restricted autonomy. Image-recording means such as a camera are provided here to capture consecutive images of the same surface. During this image capture, a movement is detected which represents the shape of a person. A rectangular contour is localized around the shape which represents the movement. The reduction in the height of the contour is detected, said reduction being representative of a fall of the person.
  • A first disadvantage of the fall detection technology which is described in this French patent is that a person is detected only at the time when he moves in his environment. A further disadvantage of the fall detection technology as described therein is that, given that the reduction in the height of a rectangular contour which is localized around a person therein is representative of a fall, it is not suitable for detecting every type of fall. A fall detection technology of this type is, for example, not always accurate for detecting a fall of a person who falls over partially somewhere, for example over a chair, or for detecting a fall of a wheelchair patient.
  • A different fall detection technology uses an enclosing ellipse which is drawn around a part of the falling person or the complete falling person. If this ellipse moves at a specific speed, a fall is registered.
  • Thus, for example, in the article 'Fall detection using 3D head trajectory extracted from a single camera video sequence', C Rougier, J Meunier, Journal of Telemedicine and Telecare, 2005, a fall detection method is described which uses this technology. This fall detection method is based on three steps, i.e.:
    1. head tracking: the head of the person is followed, given that the head is usually visible in the image and undergoes a substantial movement during a fall. The head is modelled as a 3D ellipsoid which is projected as an ellipse onto the 2D image plane. In order to localize the head, the De Menthon algorithm is used, which takes three input arguments, i.e. the 3D points of the head model, the corresponding 2D points projected onto the image plane, and the intrinsic parameters of the camera obtained from a camera calibration.
    2. 3D tracking: the head is tracked by means of a particle filter in order to extract a 3D trajectory; and
    3. fall detection: a fall is detected using 3D velocities which are calculated from the 3D trajectory of the head. The vertical and horizontal velocities in the world coordinate system are used to distinguish a fall from normal activities.
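The third step — distinguishing a fall from normal activity via the head's vertical and horizontal world-coordinate velocities — can be sketched roughly as below; the velocity thresholds are illustrative assumptions and are not values taken from the cited article.

```python
def velocities(trajectory, dt):
    """Finite-difference 3D velocities from a head trajectory given as
    (x, y, z) world coordinates (metres, y up) sampled every dt seconds."""
    return [
        tuple((b - a) / dt for a, b in zip(p0, p1))
        for p0, p1 in zip(trajectory, trajectory[1:])
    ]

def is_fall(trajectory, dt, v_vertical=1.0, v_horizontal=2.0):
    """Flag a fall when the head drops faster than v_vertical (m/s) while
    the horizontal speed stays below v_horizontal; both limits are
    illustrative only."""
    for vx, vy, vz in velocities(trajectory, dt):
        horizontal = (vx ** 2 + vz ** 2) ** 0.5
        if -vy > v_vertical and horizontal < v_horizontal:
            return True
    return False
```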
  • However, the disadvantage of the fall detection technology described in the aforementioned article is that no automatic tracking of the head takes place when a person to be monitored enters a room in which the fall detection system has been installed: this initialization is still carried out manually. Tracking can begin only once the ellipse has been placed around the head of the person.
  • Further different fall detection algorithms use the following:
    • the fall angle: the fall angle is defined as the angle between the floor surface and the person beyond which a fall is certain. Although this fall angle may differ from person to person, a good estimate for it is 45°;
    • the centroid of the falling person: the centroid is the geometric centre of a flat figure. During a fall, the centroid changes significantly and quickly;
    • the vertical projection histogram: the vertical projection histogram counts pixel values per column of a silhouette of a person. If a vertical projection histogram of a person is plotted in a graph, three peaks will be visible herein which correspond from left to right to the head, the hands and the hip of the person respectively. This vertical projection histogram of a person changes significantly when a person falls;
    • the horizontal and vertical gradient: when a fall starts, a significant change takes place in either the X-direction or the Y-direction. The gradient value is calculated both horizontally and vertically for each pixel. In the event of a fall, the vertical gradient is smaller than the horizontal gradient.
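The vertical projection histogram mentioned above can be sketched as follows for a binary silhouette mask; the row-of-lists mask layout is an illustrative assumption.

```python
def vertical_projection_histogram(mask):
    """Count foreground pixels per column of a binary silhouette mask
    (a list of rows of equal length, 1 = person, 0 = background).
    For an upright person the resulting profile shows distinct peaks
    (head, hands, hip); the profile flattens and widens markedly when
    the person falls."""
    if not mask:
        return []
    return [sum(row[col] for row in mask) for col in range(len(mask[0]))]
```

For a small cross-shaped silhouette, for example, the column counts peak in the centre column where most foreground pixels are stacked.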
  • A major disadvantage of using the fall angle is that this fall detection algorithm does not work when the person falls in the direction of the camera.
  • The disadvantage of all known fall detection algorithms summarized above is that they work in specific circumstances only.
  • A different, more advanced fall detection algorithm uses the Hidden Markov Model (HMM). Thus, for example, a fall detection system is described in CN 102289911 , wherein two pyroelectric infrared sensors are used to carry out signal acquisition, and wherein HMM pattern matching is used for the fall detection. The use of HMM pattern matching for the fall detection is subdivided into two phases, i.e. an HMM training phase and an HMM recognition phase.
  • The major disadvantage of fall detection algorithms which use HMM is the need to obtain training data which are necessary for the training phase (also referred to as the learning phase) of this algorithm.
  • Different types of cameras and sensors can also be used in these video-based fall detection systems. A frequently used type of cameras/sensors concerns infrared (IR) cameras or sensors.
  • Thus, for example, a fall detection system is described in WO 2012/119903 which uses passive optical sensors, including an infrared camera. Images of a monitored person are captured continuously using a number of passive optical sensors. A server containing logic decomposes the images originating from these passive optical sensors into geometric primitives and thus determines by means of algorithms whether or not a fall or another specific emergency situation occurs. The fall detection algorithms are defined in advance and form part of a library of activated fall detection algorithms. In the recognition algorithms, the person is separated from the background of the image, vectorized and then decomposed into an image of a person in geometric primitives. The position of this person in space is then determined by the server or in the passive optical sensor. Here, the fall detection uses the dynamics of a fall; in other words, it analyses how quickly the location of the geometric primitives changes within an image sequence. If a fall is detected, an alarm is generated. The monitored person can invalidate this alarm and thus classify it as a false alarm. If this has not happened after a specific time, an image of the monitored person, or a sequence of images, is transferred to an authorized group of persons.
  • However, the major disadvantage of this fall detection system is that ghost images are created. The reason for this is that the image is split into a foreground and a background, wherein the person to be followed is separated from the background of the image. If something in the background then changes, for example a chair or a cupboard is moved, the background information is no longer valid, resulting in a ghost image and an incorrect detection of a fall.
  • In the article entitled 'Development of a fall detection system with Microsoft Kinect', Christopher Kawatsu, Jiaxing Li and CJ Chung, Department of Mathematics and Computer Science, Lawrence Technological University, 2012, a fall detection system which uses the Microsoft Kinect is described. This Kinect comprises three types of sensors, i.e. a standard camera, a depth infrared camera and a microphone. The depth infrared camera detects points projected by an infrared source, more specifically a laser, and converts these automatically into a depth map. The cameras are calibrated in such a way that the pixels of the depth map match the pixels in the images of the standard camera. The Kinect SDK software automatically detects the 3D location of 21 joints of two persons. Moreover, the floor surface is automatically detected. A fall of a monitored person is detected using two algorithms, i.e.
    • a position algorithm which uses the position data of the joints. The distance from the floor surface to each joint of the monitored person is calculated. If the maximum distance is less than a determined limit value, a fall is detected; and
    • a speed algorithm which determines the speed of each joint in the normal direction in relation to the floor surface. The speeds are averaged over all joints and a number of frames. If this averaged speed is lower (speeds in the downward direction are defined as negative) than a determined limit value, a fall is detected.
    Voice recognition is used in order to rule out false-positive fall reports. After a fall has been detected, the microphone and the Microsoft Kinect voice recognition system are used to validate the fall.
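The two Kinect-based algorithms above can be sketched as follows; the joint-data layout, the sampling interval and both limit values are illustrative assumptions, not figures from the cited paper.

```python
def position_algorithm(joint_heights, max_height_limit=0.30):
    """joint_heights: distance (m) from the floor plane to each tracked
    joint in one frame. A fall is flagged when even the highest joint
    lies below the limit value (illustrative: 0.30 m)."""
    return max(joint_heights) < max_height_limit

def speed_algorithm(frames, dt, speed_limit=-1.0):
    """frames: per-frame lists of joint heights above the floor (m),
    sampled every dt seconds. The normal-direction speed of every joint
    is averaged over all joints and frames; downward speeds are negative,
    so a fall is flagged when the average drops below the (negative) limit."""
    speeds = []
    for prev, cur in zip(frames, frames[1:]):
        speeds.extend((c - p) / dt for p, c in zip(prev, cur))
    return bool(speeds) and sum(speeds) / len(speeds) < speed_limit
```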
  • However, the disadvantage of this fall detection system is that the floor surface is used as a reference during the position algorithm, as a result of which, for example, a fall of a person sitting on a bed or on a table is not detected, given that, in most cases, this person will not fall to the floor. A further disadvantage is that voice recognition is used in order to validate the fall. Consequently, the person who has fallen must remain capable of responding audibly, which is not always possible, for example after suffering a stroke.
  • Consequently, a need exists for a simple and reliable fall detection system wherein no ghost images and as few false-positive fall detections as possible are generated.
  • According to a first aspect of the invention, a fall detection system is provided for detecting a fall of a person, wherein said fall detection system comprises at least one infrared source and a depth infrared sensor, wherein said infrared source is provided to irradiate said person with a predetermined infrared dot pattern which is detectable by said depth infrared sensor in order to generate an image of said infrared dot pattern, and wherein said fall detection system is provided with a control unit which:
    • comprises an image-processing unit which is configured to generate at least one identification point which is located in the area of the head of said person on the basis of said infrared dot pattern and to add it to said image received from said depth infrared sensor; and
    • comprises a fall-processing unit which is configured to:
      • receive at least a first and a second said image of said infrared dot pattern from said image-processing unit at a different time within a predetermined time interval, wherein said identification points on said images comprise a vertical Y-position; and
      • detect said fall if, for said identification point which is located in the area of the head, the extent of change in said Y-position between said first and said second image exceeds a predetermined minimum threshold value.
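A minimal sketch of the fall-processing logic defined above — comparing the head identification point's vertical Y-position between a first and a second image taken within the time interval — assuming image-space pixel coordinates and an illustrative threshold value.

```python
def detect_fall(head_y_first, head_y_second, dt, min_threshold_px_per_s):
    """Compare the vertical Y-position (in pixels) of the head
    identification point between a first and a second image taken dt
    seconds apart. A fall is detected when the extent of change per
    second exceeds the predetermined minimum threshold value."""
    rate = abs(head_y_second - head_y_first) / dt
    return rate > min_threshold_px_per_s
```

At 30 images per second (dt = 1/30 s), a head displacement of 12 pixels between two consecutive images corresponds to 360 pixels/s and would exceed an illustrative 300 pixels/s threshold, whereas a 3-pixel displacement (90 pixels/s) would not.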
  • A depth infrared sensor is an infrared sensor which is provided with a transmitter and sensors which can measure the distance between the infrared sensor and objects according to the 'LightCoding' or 'Structured Light' principle. With the received data, the objects are built up in layers along the Z-axis and a depth map of the objects is produced. In 'LightCoding', an infrared dot pattern is converted into a depth map through pattern recognition on the infrared dots and triangulation between the infrared source and the depth infrared sensor. 'Structured Light' is the process of projecting a known pattern of pixels (usually grids or horizontal bars) onto a scene. The manner in which these patterns deform when they strike surfaces allows an image analysis system to calculate the depth and the surface information of the objects in the image.
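Depth recovery by triangulation between the infrared source and the depth sensor can be illustrated with the classic disparity relation; the pinhole model, focal length and baseline values below are generic assumptions, not parameters of any particular sensor.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Pinhole-camera triangulation: a projected dot observed with a
    horizontal disparity d (pixels) between its expected and measured
    position lies at depth z = f * b / d, where f is the focal length
    in pixels and b the source-to-sensor baseline in metres."""
    return focal_length_px * baseline_m / disparity_px
```

For example, with an assumed focal length of 600 pixels and a 0.5 m baseline, a dot shifted by 100 pixels lies at a depth of 3 m; smaller disparities correspond to greater depths.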
  • A fall detection system of this type according to the invention has the major advantage that the fall of a person from any position of this person can be detected, from a standing, seated or recumbent position. Moreover, given that identification points which are generated in the images and which are located in the area of specific parts of the body of a person are used in this fall detection system, no ghost images are generated in this fall detection system. A fall detection system of this type according to the invention is thus very reliable.
  • A further advantage of the fall detection system according to the invention is that this is a relatively simple system and complex calculation formulae, comparative tables and the like do not need to be used herein.
  • In a preferred embodiment of a fall detection system according to the invention, said image-processing unit is configured to generate at least one identification point which is located in the area of the hip centre of said monitored person on the basis of said infrared dot pattern and to add it to said image received from said infrared sensor, wherein said image-processing unit is configured to detect said monitored person in said image from the depth infrared sensor if said identification points which are located in the area of the head and the hip centre of said monitored person have been generated and have been added to said image received from said depth infrared sensor.
  • Consequently, a person who enters a room in which the fall detection system according to the invention has been installed is detected as being a person if an identification point in the area of the head of the person and in the area of the hip centre of the person is added to the image received from the infrared sensor by the image-processing unit.
  • In one advantageous embodiment of a fall detection system according to the invention, said fall-processing unit is configured to adjust said predetermined minimum threshold value according to the distance between the head of said monitored person and said depth infrared sensor.
  • More specifically, said minimum threshold value is linearly proportional to said distance between the head of said monitored person and said depth infrared sensor.
  • In an advantageous embodiment of a fall detection system according to the invention, said extent of change in said Y-position between said first and second image is expressed in the number of pixels per second.
  • In a more specific embodiment of a fall detection system according to the invention, said images comprise a well-defined number of pixels on the horizontal X-axis and the vertical Y-axis, and a well-defined resolution on the vertical Y-axis, and said minimum threshold value is linearly proportional to:
    • said number of pixels on the horizontal X-axis and the vertical Y-axis; and
    • said resolution on the vertical Y-axis.
  • In a preferred embodiment of a fall detection system according to the invention, said control unit comprises an alarm-processing unit which is configured to alert automatically one or more external persons at the time when a said fall of said monitored person is detected.
  • This has the advantage that the monitored person does not have to carry out any manual action at the time when a fall occurs. The fall is registered by the fall-processing unit and the alarm-processing unit ensures that the necessary warnings are automatically sent to one or more external persons.
  • In a preferred embodiment of a fall detection system according to the invention, said alarm-processing unit is configured to set up automatically a connection to an external appliance which is located with said one or more external persons at the time when a said fall of said monitored person is detected.
  • In a more advantageous embodiment of a fall detection system according to the invention, said fall detection system comprises a colour camera which is configured to capture colour images of said monitored person at different times and said external appliance comprises a screen, and said alarm-processing unit is configured to send a said colour image of said monitored person to said external appliance at the time of said fall.
  • In an even more advantageous embodiment of a fall detection system according to the invention, said alarm-processing unit is configured to allow said external person to visually observe said monitored person via livestream on said screen.
  • Said fall detection system preferably comprises a microphone, wherein said alarm-processing unit is configured to allow said one or more external persons to make audible contact with said monitored person.
  • In a preferred embodiment of a fall detection system according to the invention, said fall detection system comprises a social contact unit which is configured to allow said monitored person to have visual and/or audible contact with said one or more external persons.
  • In a more preferred embodiment of a fall detection system according to the invention, said social contact unit is configured to allow said monitored person and said one or more external persons to have visual and/or audible contact with one another via a screen which is located, on the one hand, with said monitored person and, on the other hand, with said one or more external persons.
  • According to a second aspect of the invention, a method is provided for detecting a fall of a monitored person, wherein said method comprises the following steps:
    • the irradiation of said monitored person with a predetermined infrared dot pattern which is detectable by a depth infrared sensor using an infrared source;
    • the generation of an image of said infrared dot pattern;
    wherein said method further comprises the following steps:
    • the generation of at least one identification point which is located in the area of the head of said monitored person on the basis of said infrared dot pattern using an image-processing unit;
    • the addition of said identification point to said image received from said depth infrared sensor;
    • the reception of at least a first and a second said image of said infrared dot pattern from said image-processing unit at a different time within a predefined time interval using a fall-processing unit, wherein said identification points on said images comprise a vertical Y-position; and
    • the detection of said fall by means of said fall-processing unit if, for said identification point which is located in the area of the head, the extent of change in said vertical Y-position between said first and said second image exceeds a predetermined minimum threshold value.
  • In a preferred embodiment of a method according to the invention, said method uses a fall detection system according to the invention as described above.
  • The invention will now be described in detail with reference to the drawings, wherein:
    • Fig. 1a-1c are a schematic representation of the different steps which a preferred fall detection system according to the invention carries out in order to detect a fall and following detection of a fall;
    • Fig. 2 is a schematic representation of 20 identification points which are generated in the area of the bones and the joints of the monitored person on the basis of the infrared dot pattern with which the monitored person is irradiated by the infrared source of the preferred fall detection system according to the invention;
    • Fig. 3 is a graphical representation of an example of the extent of change in the vertical Y-position of the identification point which is located in the area of the head of the monitored person between consecutive images 1 to 4; and
    • Fig. 4 is a graphical representation of the connection between the minimum threshold value of the extent of change in the vertical Y-position between different images received by the fall-processing unit, expressed in the number of pixels/s, in relation to the distance d between the head of the monitored person and the depth infrared sensor.
  • A fall detection system (1) according to the invention for detecting a fall of a monitored person (10), as shown schematically in Figures 1a-1c, comprises at least one infrared source (not shown in the figures), preferably an infrared laser light source, which has the advantage that it emits invisible light. This infrared source irradiates the environment with a known infrared dot pattern (not shown in the figures). This infrared dot pattern is detectable by at least one depth infrared sensor (not shown in the figures), preferably a 3D depth infrared sensor. This infrared dot pattern preferably consists of a semi-random, constant pattern of small dots which are projected in front of the depth infrared sensor onto the environment. This infrared dot pattern is analysed by the depth infrared sensor and converted into a depth map. In this way, the depth infrared sensor generates an image of the infrared dot pattern.
  • This infrared source and the depth infrared sensor are preferably accommodated in the same housing (2).
  • The fall detection system (1) further comprises a control unit which comprises an image-processing unit which is configured to generate identification points (10a-10t) on the monitored person (10) on the basis of the infrared dot pattern and to add them to the image received from the depth infrared sensor. The image-processing unit is configured to generate at least one identification point (10a) in the area of the head of the monitored person (10) (see Figure 2). Furthermore, this image-processing unit is also configured to generate at least one identification point (10b) in the area of the hip centre of the monitored person (10) (see Figure 2) and to add it to the image received from the depth infrared sensor.
  • The image-processing unit is furthermore preferably designed to generate identification points in the area of a number of other bones and joints of the monitored person (10). In addition to the aforementioned identification points (10a,10b) in the area of the head and the hip centre of the monitored person (10), the following identification points specified below are preferably generated:
    • the right hand (10c);
    • the right wrist (10d);
    • the right elbow (10e);
    • the right shoulder (10f);
    • the shoulder centre (10g);
    • the left shoulder (10h);
    • the left elbow (10i);
    • the left wrist (10j);
    • the left hand (10k);
    • the spinal column (10l);
    • the right hip (10m);
    • the left hip (10n);
    • the right knee (10o);
    • the left knee (10p);
    • the right ankle (10q);
    • the left ankle (10r);
    • the right foot (10s); and
    • the left foot (10t).
    Figure 1 also shows the monitored person (10) on the basis of the aforementioned identification points (10a-10t).
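The twenty identification points (10a-10t) listed above can be represented as a simple lookup table; the reference labels are those of the description, while the dictionary form itself is merely an illustrative choice.

```python
# Reference labels (10a-10t) mapped to the body parts they identify,
# as enumerated in the description.
IDENTIFICATION_POINTS = {
    "10a": "head",            "10b": "hip centre",
    "10c": "right hand",      "10d": "right wrist",
    "10e": "right elbow",     "10f": "right shoulder",
    "10g": "shoulder centre", "10h": "left shoulder",
    "10i": "left elbow",      "10j": "left wrist",
    "10k": "left hand",       "10l": "spinal column",
    "10m": "right hip",       "10n": "left hip",
    "10o": "right knee",      "10p": "left knee",
    "10q": "right ankle",     "10r": "left ankle",
    "10s": "right foot",      "10t": "left foot",
}
```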
  • The aforementioned identification points (10a-10t) are preferably generated using a software program developed specifically for this purpose. A typical example thereof is the Kinect SDK software developed by Microsoft, in which the identification points (10a-10t) described above are obtained by "skeleton tracking", as described, inter alia, in US 2010/0199228 .
  • The control unit of the fall detection system (1) furthermore comprises a fall-processing unit (not shown in the figures) which is responsible for the fall detection. As shown in Figure 1b, the fall-processing unit is configured to receive at least a first image (3a) and a second image (3b) of the infrared dot pattern from the image-processing unit at a different time within a predefined time interval. Thirty images per second are preferably captured by the infrared sensor and processed by the image-processing unit, wherein the aforementioned identification points are generated by this image-processing unit and added to these images. As shown in Figure 3, each of the identification points (10a-10t) on the images has a vertical Y-position. The fall-processing unit is furthermore provided to compare with one another the vertical Y-positions of the identification point (10a) which is located in the area of the head of the monitored person (10) on the different images received from the image-processing unit.
  • The detection of a fall of a monitored person (10) is illustrated in Figures 1a to 1c.
  • As shown in Figure 1a, when the monitored person (10) enters a room in which the fall detection system (1) according to the invention has been installed, the infrared source which is installed in the housing (2) will irradiate the monitored person (10) with the known infrared dot pattern. This infrared dot pattern will then be detected by the depth infrared sensor which, as described above, will convert this infrared dot pattern into a depth map. The image-processing unit will then generate the aforementioned identification points (10a-10t) and add them to the image received from the depth infrared sensor. At the time when the identification points (10a, 10b) which are located in the area of the head and the hip centre of the monitored person (10) have been generated, the monitored person (10) is detected.
  • As shown in Figure 1b, the aforementioned fall-processing unit will compare the images received from the image-processing unit with one another. An image has a well-defined number of pixels on the horizontal X-axis, a well-defined number of pixels on the vertical Y-axis and a well-defined resolution on the vertical Y-axis. More specifically, the different vertical Y-positions of the identification point (10a) which is located in the area of the head of the monitored person (10) are compared with one another in the different images by the fall-processing unit. If the extent of change in the vertical Y-position between a first and a second image (3a, 3b), for example between image 2 (3a) and image 3 (3b), as shown in Figure 3, then exceeds a predetermined minimum threshold value, a fall is detected. This extent of change in the vertical Y-position is expressed in the number of pixels per second.
  • As shown in Figure 4, this minimum threshold value is linearly proportional to the distance (d) between the head of the monitored person (10) and the depth infrared sensor (see Figure 1a). The fall-processing unit is preferably configured to adjust this predetermined minimum threshold value according to this distance (d).
  • Furthermore, this predetermined minimum threshold value is linearly proportional to the number of pixels on the horizontal X-axis and the vertical Y-axis and the resolution on the vertical Y-axis.
  • Example
  • For a person who is located at a distance of 1.5 metres from the infrared sensor:
    • Number of pixels on the vertical Y-axis: 1024 pixels
    • Number of pixels on the horizontal X-axis: 768 pixels
    • Resolution on the vertical Y-axis: 320 x 240 pixels
    • Minimum threshold value of the extent of change in the vertical Y-position between two images (3a, 3b) received by the fall-processing unit at a different time: 10 pixels between the two images (3a, 3b) or 300 pixels/s (given that 30 images are captured per second) (see Figure 4).
  • If this person is located at a distance of 3 metres from the infrared sensor, the minimum threshold value is therefore 20 pixels between the two images (3a, 3b) or 600 pixels/s (see Figure 4).
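The linear relation illustrated by this worked example — 10 pixels/image (300 pixels/s) at 1.5 metres and 20 pixels/image (600 pixels/s) at 3 metres — can be sketched as follows; the function applies only to the pixel counts and Y-resolution of the example, since, as noted above, the threshold also scales with those values.

```python
def minimum_threshold(distance_m, frames_per_second=30):
    """Minimum threshold value (pixels/s) as a linear function of the
    distance d between the head of the monitored person and the depth
    infrared sensor, reproducing the worked example above: 10 px/image
    at 1.5 m and 20 px/image at 3 m, with 30 images captured per second."""
    pixels_per_image = 20 * distance_m / 3  # 10 px at 1.5 m, 20 px at 3 m
    return pixels_per_image * frames_per_second
```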
  • The control unit of the fall detection system (1) according to the invention furthermore preferably comprises an alarm-processing unit (not shown in the figures). The fall detection system is extremely suitable for installation in one or more rooms in which a person in need of care, usually an elderly person, moves; one or more infrared sources and depth infrared sensors as described above are installed in these rooms. This alarm-processing unit is configured to alert automatically one or more external persons, such as a family member, neighbour, informal carer, nurse or person in an emergency services centre (not shown in the figures), at the time when a fall of the monitored person (10) is detected, without any manual intervention on the part of the monitored person (10). The alarm-processing unit is configured to set up automatically a connection to one or more external appliances, for example a smartphone, TV or computer, which are located with the one or more external persons and which are provided with an app on a platform such as iOS, Windows or Android. This automatic signalling preferably consists of a photo or an image of the situation at the time of the fall, which is forwarded to one or more external appliances provided with a screen. To capture these images, the fall detection system (1) is preferably provided with one or more colour cameras, more preferably RGB colour cameras.
  • Furthermore, the fall detection system (1) according to the invention may comprise a loudspeaker in order to allow one or more external persons to make audible contact with the monitored person (10). A microphone is preferably present alongside the monitored person (10).
  • The alarm-processing unit is furthermore preferably configured to allow one or more external persons to observe the monitored person (10) via livestream on an external appliance provided with a screen, using the aforementioned colour camera.
  • The fall detection system (1) according to the invention preferably further comprises a social contact unit which is configured to allow the one or more external persons to make visual and audible contact with the monitored person (10). To do this, the aforementioned one or more colour cameras themselves must be connected to a screen (computer, tablet, TV, etc.). This social contact unit can be configured in such a way that the monitored person (10) can make visual and/or audible external contact by pressing once on the photo of the one or more external persons, who are preferably predetermined, resulting in a cost saving in terms of telephony.
  • The method according to the invention will typically be computer-implemented on a system or platform with a client-server architecture. The images, reports and tasks are retained centrally or are distributed among one or more servers. Users will typically have access to their task lists and will consult images which are stored in the system via a client device. The control unit or data processing unit or a computer device which is controlled according to a method according to the invention may comprise a workstation, a server, a laptop, a desktop, a portable control system such as a handheld device or mobile device, a tablet computer or any other computer device as known to the person skilled in the art.
  • The control unit or data processing unit may comprise a bus or a network for connectivity, directly or indirectly, between different components, a memory or database, one or more processors, input/output ports, a power supply and the like. It should be clear to a person skilled in the art that the bus or the network may comprise one or more buses, such as an address bus, a data bus or a combination thereof, or one or more network links. It should furthermore be clear to a person skilled in the art, depending on the intended application and the use of a specific embodiment, that a multiplicity of these components can be implemented in a single device.
  • The control unit or data processing unit may comprise a variety of computer-readable media or may interact therewith. Thus, for example, computer-readable media may comprise a Random Access Memory (RAM), Read Only Memory (ROM), Electronically Erasable Programmable Read Only Memory (EEPROM), flash memory or any other memory technology, CD-ROM, Digital Versatile Disks (DVD) or any other type of optical or holographic media, magnetic cassettes, magnetic tapes, magnetic disk storage or any other magnetic storage devices which may be used to encode information and which can be accessed by the control unit or data processing unit of the computer device.
  • The memory may comprise computer storage media in the form of volatile and/or non-volatile memories. The memory may be removable, non-removable or any combination thereof. Examples of hardware devices are devices such as HDD (hard drive disks), SSD (solid-state disks), ODD (optical disk drives), and the like. The control unit or data processing unit or the computer device may comprise one or more processors which read data from components such as memories, the different I/O components and the like.
  • The I/O ports may allow the control unit or data processing unit of the computer device to be logically coupled to other appliances such as I/O components. Some of the I/O components may be built into a computer device. Examples of I/O components of this type comprise a microphone, a joystick, a recording device, a game pad, a dish antenna, a scanner, a printer, a wireless device, a network device and the like.
  • Although the present invention has been illustrated with reference to specific embodiments, it will be clear to the person skilled in the art that the invention is not restricted to the details of the preceding illustrative example embodiments, and that the present invention can be designed with different modifications and adaptations without departing from the field of application of the invention. The present embodiments must therefore be regarded in all respects as illustrative and non-restrictive, wherein the field of application of the invention is defined by the attached claims and not by the preceding description; all modifications which fall within the meaning and the scope of the claims are therefore also incorporated therein. In other words, all modifications, variations or equivalents which fall within the field of application of the present basic principles and whose essential attributes are claimed in this patent application are assumed to be included herein.
    Moreover, the reader of this patent application will understand that the words "comprising" or "comprise" do not exclude other elements or steps, that the word "a/one" does not exclude a plurality, and that a single element, such as a computer system, a processor or other integrated unit, can perform the functions of different appliances which are specified in the claims. Any references in the claims must not be construed as a restriction of the claims in question. The terms "first", "second", "third", "a", "b", "c" and the like, if used in the description or in the claims, are used to distinguish between elements or steps of the same type and do not necessarily describe a sequential or chronological order. Similarly, the terms "upper edge", "lower edge", "above", "below" and the like are used for the purpose of the description and do not necessarily refer to relative positions. It must be understood that these terms are interchangeable under the appropriate circumstances and that embodiments of the invention are capable of functioning according to the present invention in sequences or orientations other than those described or illustrated above.

Claims (15)

  1. Fall detection system (1) for detecting a fall of a monitored person (10), wherein said fall detection system (1) comprises at least one infrared source and at least one depth infrared sensor, wherein said infrared source is provided to irradiate said monitored person (10) with a predetermined infrared dot pattern which is detectable by said depth infrared sensor in order to generate an image of said infrared dot pattern,
    CHARACTERIZED IN THAT said fall detection system (1) is provided with a control unit which:
    - comprises an image-processing unit which is configured to generate at least one identification point (10a) which is located in the area of the head of said monitored person (10) on the basis of said infrared dot pattern and to add it to said image received from said depth infrared sensor; and
    - comprises a fall-processing unit which is configured to:
    • receive at least a first and a second said image (3a, 3b) of said infrared dot pattern from said image-processing unit at different times within a predetermined time interval, wherein said identification points (10a-10t) on said images comprise a vertical Y-position; and
    • detect said fall if, for said identification point (10a) which is located in the area of the head, the extent of change in said Y-position between said first and said second image (3a, 3b) exceeds a predetermined minimum threshold value.
  2. Fall detection system according to Claim 1, CHARACTERIZED IN THAT said image-processing unit is configured to generate at least one identification point (10b) which is located in the area of the hip centre of said monitored person (10) on the basis of said infrared dot pattern and to add it to said image (3a, 3b) received from said infrared sensor, wherein said image-processing unit is configured to detect said monitored person in said image (3a, 3b) from the depth infrared sensor if said identification points (10a, 10b) which are located in the area of the head and the hip centre of said monitored person (10) have been generated and have been added to said image (3a, 3b) received from said depth infrared sensor.
  3. Fall detection system according to Claim 1 or 2, CHARACTERIZED IN THAT said fall-processing unit is configured to adjust said predetermined minimum threshold value according to the distance (d) between the head of said monitored person (10) and said depth infrared sensor.
  4. Fall detection system according to Claim 3, CHARACTERIZED IN THAT said minimum threshold value is linearly proportional to said distance (d) between the head of said monitored person (10) and said depth infrared sensor.
  5. Fall detection system according to one of Claims 1 to 4, CHARACTERIZED IN THAT said extent of change in said Y-position between said first and second image (3a, 3b) is expressed in the number of pixels per second.
  6. Fall detection system according to one of Claims 1 to 5, CHARACTERIZED IN THAT said images (3a, 3b) comprise a well-defined number of pixels on the horizontal X-axis and the vertical Y-axis, and a well-defined resolution on the vertical Y-axis, and that said minimum threshold value is linearly proportional to:
    - said number of pixels on the horizontal X-axis and the vertical Y-axis; and
    - said resolution on the vertical Y-axis.
  7. Fall detection system according to one of Claims 1 to 6, CHARACTERIZED IN THAT said control unit comprises an alarm-processing unit which is configured to automatically alert one or more external persons at the time when a said fall of said monitored person (10) is detected.
  8. Fall detection system according to Claim 7, CHARACTERIZED IN THAT said alarm-processing unit is configured to automatically set up a connection to one or more external appliances which are located with said one or more external persons at the time when a said fall of said monitored person (10) is detected.
  9. Fall detection system according to Claim 8, CHARACTERIZED IN THAT said fall detection system comprises a colour camera which is configured to capture colour images of said monitored person (10) at different times and said external appliance comprises a screen, and that said alarm-processing unit is configured to send a said colour image of said monitored person (10) to said external appliance at the time of said fall.
  10. Fall detection system according to Claim 9, CHARACTERIZED IN THAT said alarm-processing unit is configured to allow said one or more external persons to visually observe said monitored person (10) via a live stream on said screen.
  11. Fall detection system according to Claim 10, CHARACTERIZED IN THAT said fall detection system comprises a microphone and that said alarm-processing unit is configured to allow said one or more external persons to make audible contact with said monitored person (10).
  12. Fall detection system according to one of Claims 7 to 11, CHARACTERIZED IN THAT said fall detection system comprises a social contact unit which is configured to allow said monitored person to have visual and/or audible contact with said one or more external persons.
  13. Fall detection system according to Claim 12, CHARACTERIZED IN THAT said social contact unit is configured to allow said monitored person and said one or more external persons to have visual and/or audible contact with one another via a screen which is located, on the one hand, with said monitored person (10) and, on the other hand, with said one or more external persons.
  14. Method for detecting a fall of a monitored person (10), wherein said method comprises the following steps:
    - the irradiation of said monitored person (10) with a predetermined infrared dot pattern which is detectable by at least one depth infrared sensor using at least one infrared source;
    - the generation of an image (3a, 3b) of said infrared dot pattern;
    CHARACTERIZED IN THAT said method further comprises the following steps:
    - the generation of at least one identification point (10a) which is located in the area of the head of said monitored person (10) on the basis of said infrared dot pattern using an image-processing unit;
    - the addition of said identification point to said image (3a, 3b) received from said depth infrared sensor;
    - the reception of at least a first and a second said image (3a, 3b) of said infrared dot pattern from said image-processing unit at different times within a predetermined time interval using a fall-processing unit, wherein said identification points (10a-10t) on said images (3a, 3b) comprise a vertical Y-position; and
    - the detection of said fall by means of said fall-processing unit if, for said identification point (10a) which is located in the area of the head, the extent of change in said vertical Y-position between said first and said second image (3a, 3b) exceeds a predetermined minimum threshold value.
  15. Method according to Claim 14, CHARACTERIZED IN THAT said method uses a fall detection system (1) according to one of Claims 1 to 13.
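The detection rule of Claims 1 and 14, with the threshold scaling of Claims 3 to 6, can be sketched as follows. This is an illustrative reconstruction only, not the patented implementation: the function names, the calibration constant `k`, the reference image size of 640x480 and the exact form of the linear scaling are all assumptions made for the sketch.

```python
# Illustrative sketch of the claimed rule: a fall is flagged when the head
# identification point's vertical (Y) position changes, between two depth
# images taken within a time interval, faster than a minimum threshold
# expressed in pixels per second (Claim 5). The threshold scales linearly
# with the head-to-sensor distance (Claims 3-4) and with the image size and
# vertical resolution (Claim 6). Constants and scaling form are assumptions.
from dataclasses import dataclass

@dataclass
class Frame:
    head_y: float      # vertical pixel position of the head identification point
    timestamp: float   # capture time in seconds

def min_threshold(distance_m: float, width_px: int, height_px: int,
                  y_resolution: float, k: float = 40.0) -> float:
    """Minimum threshold in pixels/second; k is an assumed calibration factor.
    Linear in distance, image size (relative to an assumed 640x480 reference)
    and vertical resolution."""
    return k * distance_m * (width_px * height_px) / (640 * 480) * y_resolution

def fall_detected(first: Frame, second: Frame, distance_m: float,
                  width_px: int = 640, height_px: int = 480,
                  y_resolution: float = 1.0) -> bool:
    """True if the head's Y-position change rate exceeds the threshold."""
    dt = second.timestamp - first.timestamp
    if dt <= 0:
        return False
    rate = abs(second.head_y - first.head_y) / dt  # pixels per second
    return rate > min_threshold(distance_m, width_px, height_px, y_resolution)
```

For example, with an assumed head-to-sensor distance of 2 m and default image parameters, a head point dropping 200 pixels in half a second (400 px/s) exceeds the sketched 80 px/s threshold and is flagged, while a 10-pixel shift over the same interval is not.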
EP14152990.9A 2013-02-01 2014-01-29 Fall detection system and method for detecting a fall of a monitored person Active EP2763116B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
BE2013/0065A BE1021528B1 (en) 2013-02-01 2013-02-01 FALL DETECTION SYSTEM AND METHOD FOR DETECTING A FALL OF A MONITORED PERSON

Publications (2)

Publication Number Publication Date
EP2763116A1 true EP2763116A1 (en) 2014-08-06
EP2763116B1 EP2763116B1 (en) 2022-08-24

Family

ID=48092631

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14152990.9A Active EP2763116B1 (en) 2013-02-01 2014-01-29 Fall detection system and method for detecting a fall of a monitored person

Country Status (2)

Country Link
EP (1) EP2763116B1 (en)
BE (1) BE1021528B1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2870378A1 (en) 2004-05-17 2005-11-18 Electricite De France Person e.g. aged person, fall detecting method for use in e.g. home, involves providing exposing unit for successive exposures of same plane, and detecting reduction in height of outline form beyond chosen threshold interval
US20100199228A1 (en) 2009-01-30 2010-08-05 Microsoft Corporation Gesture Keyboarding
CN201853320U (en) 2010-11-23 2011-06-01 南通大学 Monitoring and alarm system for the old
CN102289911A (en) 2011-07-19 2011-12-21 中山大学深圳研究院 Falling detection system based on pyroelectric infrared rays
WO2012119903A1 (en) 2011-03-04 2012-09-13 Deutsche Telekom Ag Method and system for detecting a fall and issuing an alarm

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
C. ROUGIER; J. MEUNIER: "Fall detection using 3D head trajectory extracted from a single camera video sequence", JOURNAL OF TELEMEDICINE AND TELECARE, 2005
CHRISTOPHER KAWATSU ET AL: "Development of a fall detection system with Microsoft Kinect", ROBOT INTELLIGENCE TECHNOLOGY AND APPLICATIONS 2012, vol. 133, 1 January 2013 (2013-01-01), pages 623-630, XP008165645, ISBN: 978-3-642-37373-2, DOI: 10.1007/978-3-642-37374-9_59 *
CHRISTOPHER KAWATSU; JIAXING LI; C.J. CHUNG: "Development of a fall detection system with Microsoft Kinect", DEPARTMENT OF MATHEMATICS AND COMPUTER SCIENCE, 2012

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016350A (en) * 2017-04-26 2017-08-04 中科唯实科技(北京)有限公司 A kind of Falls Among Old People detection method based on depth camera
CN108229421A (en) * 2018-01-24 2018-06-29 华中科技大学 A kind of falling from bed behavior real-time detection method based on deep video information
DE102019125049A1 (en) * 2019-09-17 2021-03-18 3Dvisionlabs Gmbh SENSOR UNIT AND METHOD FOR EVALUATING THE POSITION OF PERSONS
CN110688969A (en) * 2019-09-30 2020-01-14 上海依图网络科技有限公司 Video frame human behavior identification method
CN111460908A (en) * 2020-03-05 2020-07-28 中国地质大学(武汉) Human body tumbling identification method and system based on OpenPose
CN111460908B (en) * 2020-03-05 2023-09-01 中国地质大学(武汉) Human body fall recognition method and system based on OpenPose
CN112949503A (en) * 2021-03-05 2021-06-11 齐齐哈尔大学 Site monitoring management method and system for ice and snow sports
GB2605647A (en) * 2021-04-09 2022-10-12 Secure Sensor Innovative Design Ltd Method and device
CN113382275A (en) * 2021-06-07 2021-09-10 广州博冠信息科技有限公司 Live broadcast data generation method and device, storage medium and electronic equipment
CN113382275B (en) * 2021-06-07 2023-03-07 广州博冠信息科技有限公司 Live broadcast data generation method and device, storage medium and electronic equipment
FR3136094A1 (en) * 2022-05-25 2023-12-01 Inetum Fall detection method by image analysis

Also Published As

Publication number Publication date
BE1021528B1 (en) 2015-12-08
EP2763116B1 (en) 2022-08-24

Similar Documents

Publication Publication Date Title
EP2763116B1 (en) Fall detection system and method for detecting a fall of a monitored person
US11544953B2 (en) Methods and systems for identifying the crossing of a virtual barrier
US10368039B2 (en) Video monitoring system
Zhang et al. A survey on vision-based fall detection
JP6720961B2 (en) Attitude detection device and attitude detection method
Stone et al. Fall detection in homes of older adults using the Microsoft Kinect
JP7138931B2 (en) Posture analysis device, posture analysis method, and program
US8427324B2 (en) Method and system for detecting a fallen person using a range imaging device
JP6150207B2 (en) Monitoring system
Li et al. Detection of patient's bed statuses in 3D using a Microsoft Kinect
WO2019003859A1 (en) Monitoring system, control method therefor, and program
Kepski et al. Human fall detection using Kinect sensor
JP2016157170A (en) Abnormal condition notification system, abnormal condition notification program, abnormal condition notification method, and abnormal condition notification equipment
JP2017228042A (en) Monitoring device, monitoring system, monitoring method and monitoring program
WO2012002904A1 (en) Device and method for detection of abnormal spatial states of a human body
CN116243306A (en) Method and system for monitoring object
JP5870230B1 (en) Watch device, watch method and watch program
Su et al. A smart ward with a fall detection system
Mecocci et al. Automatic falls detection in hospital-room context
JP2022010581A (en) Detection device, detection method, image processing method and program
Pathak et al. Fall detection for elderly people in homes using Kinect sensor
Baha et al. Towards a real-time fall detection system using kinect sensor
US20240049991A1 (en) Systems and methods for bed exit and fall detection
KR101046163B1 (en) Real time multi-object tracking method
JP7205540B2 (en) Computer Executed Programs, Information Processing Devices, and Computer Executed Methods

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20140129

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

R17P Request for examination filed (corrected)

Effective date: 20150206

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

17Q First examination report despatched

Effective date: 20160318

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: FAMILYEYE BVBA

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: FAMILYEYE BVBA

RIN1 Information on inventor provided before grant (corrected)

Inventor name: DE SMET, SYLVIE

Inventor name: DEWAELE, NATASHA

APBK Appeal reference recorded

Free format text: ORIGINAL CODE: EPIDOSNREFNE

APBN Date of receipt of notice of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA2E

APBR Date of receipt of statement of grounds of appeal recorded

Free format text: ORIGINAL CODE: EPIDOSNNOA3E

APAF Appeal reference modified

Free format text: ORIGINAL CODE: EPIDOSCREFNE

APBT Appeal procedure closed

Free format text: ORIGINAL CODE: EPIDOSNNOA9E

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20210916

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: FAMILYEYE BVBA

19U Interruption of proceedings before grant

Effective date: 20201027

19W Proceedings resumed before grant after interruption of proceedings

Effective date: 20220301

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: SUPAIR CVBA

INTG Intention to grant announced

Effective date: 20220414

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602014084693

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1514199

Country of ref document: AT

Kind code of ref document: T

Effective date: 20220915

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20220824

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20221226

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20221124

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1514199

Country of ref document: AT

Kind code of ref document: T

Effective date: 20220824

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20221224

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20221125

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602014084693

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602014084693

Country of ref document: DE

26N No opposition filed

Effective date: 20230525

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220824

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20230129

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230129

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20230131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230131

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230129

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230801

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230131

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230129