WO1997023096A1 - Systems and methods employing video combining for intelligent transportation applications - Google Patents


Info

Publication number
WO1997023096A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
bridge
inputs
combining
graphical image
Application number
PCT/US1996/019639
Other languages
French (fr)
Inventor
David Gray Boyer
Lanny Starkes Smoot
Original Assignee
Bell Communications Research, Inc.
Application filed by Bell Communications Research, Inc. filed Critical Bell Communications Research, Inc.
Priority to EP96942940A priority Critical patent/EP0867088A4/en
Publication of WO1997023096A1 publication Critical patent/WO1997023096A1/en


Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 13/00: Burglar, theft or intruder alarms
    • G08B 13/18: Actuation by interference with heat, light, or radiation of shorter wavelength; actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B 13/189: Actuation using passive radiation detection systems
    • G08B 13/194: Actuation using image scanning and comparing systems
    • G08B 13/196: Actuation using television cameras
    • G08B 13/19639: Details of the system layout
    • G08B 13/19645: Multiple cameras, each having a view on one of a plurality of scenes, e.g. multiple cameras for multi-room surveillance or for tracking an object by view hand-over
    • G08B 13/19678: User interface
    • G08B 13/19682: Graphic User Interface [GUI] presenting system data to the user, e.g. information on a screen helping a user interacting with an alarm system
    • G08B 13/19689: Remote control of cameras, e.g. remote orientation or image zooming control for a PTZ camera
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources


Abstract

A multicamera surveillance system (300) connects video surveillance cameras (104) to a video bridge (310). One or more users (108) in the same or different locations may obtain video from one or more video cameras from this bridge, without having a direct connection to the video cameras. The invention may use graphic combining techniques so that the video inputs may be superimposed on a graphic image, such as a map. The video surveillance cameras may be pannable electronic cameras which provide a user selectable panorama of the scene under surveillance. The inventive system provides a video surveillance system which is simple to understand and use. It also provides a centralized video bridge which may be used to allow a number of monitoring stations to independently view video inputs of interest. Thus, state and local police, EMS dispatch, fire stations, trucking companies, commuters, or any other interested party may be able to view locations of interest using relatively inexpensive equipment, such as a computer and joystick.

Description

SYSTEMS AND METHODS EMPLOYING VIDEO COMBINING FOR INTELLIGENT TRANSPORTATION APPLICATIONS
RELATED APPLICATIONS
This patent application claims the benefit of provisional patent application serial number 60/008,927 filed on December 15, 1995 and entitled "Systems and Method Employing Multimedia Combining for Intelligent Transportation and Other Communications Applications". The provisional application names the same inventors as named herein and discloses the subject matter claimed herein. The content of this provisional application is incorporated herein by reference.
Patents and pending patent applications containing information related to this patent application are:
1. U.S. Patent No. 5,187,571 entitled "Television System For Displaying Multiple Views of a Remote Location" which issued on February 16, 1993 to D. A. Braun, W. A. E. Nilson, III, T. J. Nelson, and L. S. Smoot;
2. U.S. Patent No. 5,532,737 entitled "Camera Arrangement With Wide Field of View" which issued on July 2, 1996 to D. A. Braun;
3. U.S. Patent No. 4,890,314 entitled "Teleconference Facility With High Resolution Video Display" which issued on December 26, 1989 to T. H. Judd and L. S. Smoot;
4. U.S. Patent Application Serial Number 08/434,081 entitled "System and Method For Associating Multimedia Object" filed on May 3, 1995 for D. G. Boyer;
5. U.S. Patent Application Serial Number 08/434,259 entitled "Video Conferencing Service Having Centralized Multimedia Bridge" filed on May 3, 1995 for D. G. Boyer, M. E. Lukacs, and P. E. Fleischer; and
6. U.S. Patent Application Serial No. 08/434,082 entitled "Infinitely Expandable Real-Time Video Conferencing System", filed on May 3, 1995 for M. E. Lukacs.
All of these patents and patent applications are assigned to the assignee of the present invention. The contents of all of these related documents are incorporated herein by reference.
BACKGROUND OF THE INVENTION Field of the Invention
The present invention is directed to video bridging applications and, more particularly, to multicamera surveillance applications, such as an Intelligent Transportation Systems video camera access platform. Discussion of Related Art
The United States Federal government and several states have proposed implementing Intelligent Transportation Systems (ITS). The goal of ITS is to improve highway safety and reduce highway congestion. Ultimately, ITS may include computerized automobile navigation, "hands-off, feet-up" driving by computer-controlled cars, and highways which automatically provide safety and road condition information to the computer-controlled cars.
Currently, several ITS systems have been implemented. These systems perform one or more of the following: (1) monitor roads at bottlenecks and other problem areas to measure traffic flow and to report backups or delays;
(2) provide improved dispatch systems to aid local governments in detecting accidents; and
(3) provide electronic signs which instruct drivers to change commutation paths to divert traffic away from dangerous or congested areas.
The goal of ITS is to allow drivers to travel quickly and safely, thus lowering congestion and increasing travel safety. ITS may also decrease transportation costs. The cost of implementing ITS has been found to be significantly less expensive than reducing congestion by building new roads or bridges. One way ITS accomplishes its goal is through video camera surveillance of traffic flows. This surveillance allows the appropriate authorities (i.e., state or local police, highway patrol, department of transportation) to monitor highway traffic and direct traffic flows to decrease congestion and avoid traffic jams and safety hazards.
As seen in Fig. 1, current highway video surveillance facilities 100 have a number of surveillance cameras 104 connected to a video router 105 at a monitoring station. The monitoring station may have a number of video display monitors 102 connected to the router 105. Typically, there are fewer monitors than surveillance cameras 104, so that a single monitor 102' connected to the router 105 must include displays for more than one surveillance location. The areas under surveillance are typically displayed on a monitor in one of the following ways. Each location may be displayed on the full monitor (or part of the monitor) for a time period (i.e., several seconds), and then other locations are displayed. Alternatively, a monitor 102' may be divided into a number of blocks (such as 4 or 16) 106, each block continuously displaying one location. In either case, the video combining is performed at the video router 105.
Either of these display methods requires the human operator 108 viewing a monitor 102 to keep track of the several locations displayed on the monitor(s). This may be difficult and confusing. Special training may be needed before one may effectively monitor such a screen. For example, the screen does not indicate the relationship between locations. Thus, an operator may need to be familiar with the monitored locations to determine, for example, that one viewed location that is uncongested is a suitable alternative route to another viewed location that is congested.
Fig. 2 illustrates a conventional ITS network architecture 200. Each of a number of surveillance cameras 104 is directly connected to each ITS monitoring station. If, for example, one hundred surveillance cameras are monitored by four different monitoring stations (i.e., state police 202, local police 204, the fire department 206, and EMS dispatchers 208), four hundred separate, dedicated video connections are required. Some ITS systems may have one thousand or more surveillance cameras 104, further exacerbating the number of direct connections. Each monitoring station has its own video router 105.
Therefore, it is an object of the present invention to provide a video monitoring system which allows an operator to interpret the video information in the context of surrounding conditions.
It is another object of the present invention to provide a video camera access platform which does not require extensive training in order to interpret the video displays. It is yet a further object of the present invention to provide an ITS network architecture having a central video server for connecting a number of video cameras to a number of monitoring stations without having a dedicated wire between each camera and monitoring station. It is yet an even further object of the present invention to provide a video monitoring system wherein each monitoring station does not need its own video router, thus reducing the amount of duplicative equipment in the system.
SUMMARY OF THE INVENTION
These and other objects of the present invention are provided by a multicamera surveillance system, such as an ITS camera access platform, using video combining. The inventive system connects video surveillance cameras to a video bridge. One or more users in the same or a different location may obtain video images from one or more video cameras from this bridge, without having a direct connection to the video cameras. The invention may use graphics combining techniques so that the video inputs may be superimposed on a graphical image, such as a map. Preferably, the combination of the graphical image and video inputs lends clarity to the video
inputs, such as showing a geographical relationship between surveillance locations.
A preferred embodiment of the present invention includes a graphics server configured to provide a graphical image, a video bridge configured to receive a number of video inputs, and a means for combining the video images and graphical image so that the video images appear at desired locations on the graphical image. In one embodiment, the graphics server provides a graphical image to the video bridge, and the video/graphics combining is performed by the bridge. In another embodiment, the graphics server provides the graphical image to a user's terminal and the video bridge combines the video inputs into a single composite video signal. The composite video signal is sent to the user terminal where it is combined with the graphical image. In a preferred embodiment, a user may select a number of video inputs to be combined from a greater number of available video inputs. Also, the video surveillance cameras are preferably (but not necessarily) pannable electronic cameras which provide an individually selectable view of the scene under surveillance without affecting other users' views of the same scene.
The inventive system provides a video surveillance system which is simple to understand and use. It also provides a centralized video bridge which may be used to allow a number of monitoring stations to independently view video inputs of interest. Thus, state and local police, EMS dispatch, fire stations, trucking companies, television stations, radio stations, commuters, or any other interested party may be able to view locations of interest using relatively inexpensive equipment, such as a computer and joystick.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is described with reference to the following figures:
Fig. 1 is a block diagram of a prior art highway video surveillance facility;
Fig. 2 is a block diagram of a conventional ITS network architecture; Fig. 3 illustrates an end-to-end system according to a first preferred embodiment according to the present invention;
Fig. 4A is a block diagram of a multimedia bridge such as may be used in the present invention;
Fig. 4B illustrates a number of multimedia bridge video composing modules combined into a video composing chain;
Fig. 5 illustrates a monitor screen according to a preferred embodiment of the present invention;
Figs. 6 - 8 illustrate alternative configurations for resource sharing using the present invention; and Fig. 9 is a software hierarchy of a preferred embodiment according to the present invention.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
The invention is described in the following sections: I. Overview and Components of the Present Invention: An overview and components of the present invention are described with reference to Figs. 3, 4A, and 4B, including the video bridge, electronic panning capability, and video graphics combining. II. Video Bridging Enabled ITS Monitoring: Video bridging enabled ITS monitoring is described with reference to Fig. 5. III. System Architecture: The system architecture is described with reference to Figs. 6 - 9. IV. Conclusion: A conclusion is provided.
I. Overview and Components of the Present Invention
The components of the present invention include a video bridge, such as the Bellcore proprietary Personal Presence System, electronic panning capability, and video graphics combining.
A. Overview of the Invention
A multicamera surveillance system according to a preferred embodiment of the present invention provides any number of users with independently controllable views from video cameras stationed at desired locations. These views may be combined with a graphical image, such as a map, to lend clarity to the video input, such as showing a geographical relationship between the surveillance locations. Preferably, the video surveillance cameras are pannable electronic cameras, such as the electronic panning camera described in U.S. Patents Nos. 5,187,571 and 5,532,737, discussed above.
As seen in Fig. 3, the multicamera surveillance network architecture 300 features a video bridge 310 as the center of the system. A number of video surveillance cameras 104 are connected to the video bridge 310. In this embodiment, a bridge controller 320 and a graphics server 330 are also provided. The bridge controller 320 is responsive to commands received from a user's terminal 102. The bridge controller 320 then instructs the video bridge 310 accordingly. The graphics server 330 provides graphical images, such as a map, to be combined with the video images from the cameras. The graphical images may be combined with the video images at the bridge 310 or at the user terminal 102. The bridge outputs are delivered to one or more monitoring stations, such as a state police station, a local police station, a local fire department, or EMS dispatcher. Note that each monitor on the network has a single connection to the bridge 310. There is no need for a fully interconnected network as seen in Fig. 2.
Each end-user display 340 is a combination of one or more video images and a graphical image. Preferably, the graphical image lends clarity to the video image, such as providing geographical information about the video images. The display 340 may be controlled by the user 108 at that monitoring station. Thus, each user could send real-time requests from a pannable and zoomable graphical map, which in turn may be controlled by consumers wanting to avail themselves of the ITS traffic information (for example). The zooming and panning may be performed in a conventional manner. Each user may, therefore, view the locations in a desired manner without affecting another user's view of the same location.
B. The Video Bridge
Bellcore's proprietary Personal Presence System multimedia bridge (described in patent applications serial numbers 08/434,081, 08/434,259, and 08/434,082, as discussed above) is advantageously used in a preferred embodiment of the multicamera surveillance system according to the present invention. The Personal Presence System (PPS) is described in these patent applications. A brief description of the PPS multimedia bridge is provided as background. (Note that the PPS multimedia bridge performs audio and video combining. The present invention is directed to video applications. A video bridge 310 is part of the multimedia bridge 400.)
Fig. 4A is a block diagram of a multimedia bridge 400. The multimedia bridge is preferably connected between network interfaces 402, 404. Video inputs, such as inputs i, j, and k, are received from a first network interface 402 by video decoders 406a, 406b, 406c. In the present invention, this first network interface 402 may receive video surveillance camera 104 inputs. Note that this interface may be connected to any suitable network, such as an ISDN, Internet, ATM, or other network. The outputs of these decoders are sent to a baseband signal router 408, which routes the individual video signals to a video bridge 310, such as a multipoint connection unit. The video bridge 310 includes a number of video composing chains (VCC) 412 (described below), which each receive a number of the video inputs. The output of each VCC 412 is sent to a video coder 414 for particular video service customers via a second network interface. In the present invention, the second network interface 404 may be connected to a video network at an ITS monitoring station, if the combined image is to be viewed locally, or may be connected to the Internet, ISDN, or other network if the image is to be viewed at a remotely located monitoring station. A control 416 is provided for controlling the video bridge 310 and an audio bridge 418. The video bridge 310 combines video streams in a flexible manner. The video bridge
310 comprises a plurality of video composing chains (VCC) 412. Fig. 4B is a block diagram of one VCC 412. Each VCC 412 comprises a plurality of small modules called Video Composing Modules (VCM) 450. Each VCM 450 receives a video A input 452, a priority A input 454, a sync input 456, a video B input 458, a priority B input 460, and a B external sync 462. The video A input 452 and priority A input 454 are sent to a multiplexor 462. The video B input 458 and priority B input 460 are received by a priority generator 464. The video B input is also received at a frame memory 466. The video B input and priority B input are sent to the multiplexor 462 to be combined with the video A and priority A inputs. The sync input 456 is provided to the frame memory 466 and a delay 468. The combined videos and priorities are output and received by the next VCM as the video A and priority A inputs; the delayed sync is sent to the next VCM as sync in.
As seen in Fig. 4B, the VCMs 450 are connected into Video Composing Chains 412.
The length of a VCC 412 is determined by the number of video streams to be viewed simultaneously. The VCMs 450 add and change data fields associated with each picture element (Pel) processed. This new data field is called priority, which may be associated with the stacking order on a video screen. This produces a visual effect similar to that of a windowed computer screen, with images stacked one on top of another, partially obscuring those beneath. Because the VCM deals with priority on a Pel by Pel basis, the images are not restricted to rectangular pages as a computer window is, but rather may be objects in areas of arbitrary shape combined from different video streams. This sort of hardware and display is called a Pel rate priority multiplexer and is described in U.S. Patent Application Serial No. 08/434,082.
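The per-Pel priority rule described above can be modeled compactly in software. The following Python sketch is only an illustration of the combining principle, not the patented hardware: it assumes each video stream is represented as a NumPy pixel array with a parallel priority array, and the function and variable names are invented for this example.

```python
import numpy as np

def vcm_combine(video_a, prio_a, video_b, prio_b):
    """One VCM stage, modeled in software: for every Pel, keep the pixel whose
    priority is higher, so arbitrarily shaped objects from stream B can overlay
    stream A (the stacking behavior of a Pel rate priority multiplexer)."""
    mask = prio_b > prio_a                          # per-Pel stacking decision
    video_out = np.where(mask[..., None], video_b, video_a)
    prio_out = np.where(mask, prio_b, prio_a)       # winning priority is carried forward
    return video_out, prio_out

def vcc_chain(streams):
    """Cascade VCM stages into a Video Composing Chain: each stage overlays one
    more (video, priority) pair onto the running composite."""
    video, prio = streams[0]
    for next_video, next_prio in streams[1:]:
        video, prio = vcm_combine(video, prio, next_video, next_prio)
    return video, prio

if __name__ == "__main__":
    h, w = 120, 160
    background = (np.zeros((h, w, 3), dtype=np.uint8), np.zeros((h, w), dtype=np.uint8))
    inset_video = np.zeros((h, w, 3), dtype=np.uint8)
    inset_prio = np.zeros((h, w), dtype=np.uint8)
    inset_video[20:60, 30:90] = 255    # the region could just as easily be non-rectangular
    inset_prio[20:60, 30:90] = 1       # only these Pels win over the background
    composite, _ = vcc_chain([background, (inset_video, inset_prio)])
```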
As noted above, with reference to Fig. 3, a user 108 may issue a command to the bridge controller 320 via terminal 102. The bridge controller 320 receives the command and instructs the video bridge 310 to respond accordingly. For example, a VCC 412 may be altered to include or delete a particular video image.
C. Electronic Panning Capability
Bellcore's proprietary Electronic Panning Camera, described in U.S. Patents Nos. 5,187,571 and 5,532,737 is an inventive device and method for producing a widely pannable video signal from a composite camera with no moving parts. A composite camera is composed of several miniature, standard, video cameras whose fields of view are optically, and seamlessly, merged to form a broad panoramic field of view. This panoramic view is provided to a virtually unlimited number of user-controlled panning circuits. Each panning circuit provides a separate electronically pannable view. Each user may perform the electronic equivalent of sliding a viewing window through the panoramic image to extract the piece of the view in which the viewer is interested.
One advantage is the multi-user aspect of this innovative camera system. Using a video bridge, such as the one included in Bellcore's Personal Presence System, any number of independently controllable camera views may be extracted from a single composite electronic panning system. This may be done, for example, by the user sending a control signal to a panning circuit contained in the multimedia bridge 400. This enables viewers to act as directors, choosing from a wide panorama whatever piece of the action they want to see — without interfering with other viewers as they choose their own perspectives.
Electronic panning capability may optionally and advantageously be provided in the multicamera surveillance system 300 according to the present invention. Referring again to Fig. 3, if the cameras 104 provided by the system are electronic panning cameras, each user 108 may individually select a portion of the panoramic view provided by the electronic panning cameras that the user desires to view, without interfering with the view of other users. Thus, an unlimited number of viewers may view a scene from any number of angles without affecting any other users' views. A user may use a joystick or other command to instruct the bridge controller 320 to have the video bridge 310 extract a desired portion of the panoramic view provided by the camera.
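A rough software analogue of the "sliding viewing window" idea may help fix the concept; it assumes the optically merged panorama is available as one wide image array, and the class name, window size, and pan interface are illustrative only. Because each user's panning state lives in a separate object and the shared frame is only read, one panorama can serve any number of independent views.

```python
import numpy as np

class PanningCircuit:
    """Illustrative per-user panning circuit: extracts a viewing window from a
    shared panoramic frame without disturbing any other user's window."""
    def __init__(self, view_width=640, view_height=480):
        self.view_width = view_width
        self.view_height = view_height
        self.pan_offset = 0                     # horizontal position of this user's window

    def pan(self, delta_pixels, panorama_width):
        """Slide the window left or right, clamped to the panorama edges."""
        limit = panorama_width - self.view_width
        self.pan_offset = max(0, min(limit, self.pan_offset + delta_pixels))

    def extract(self, panorama):
        """Cut this user's view out of the (read-only) panoramic frame."""
        x0 = self.pan_offset
        return panorama[: self.view_height, x0 : x0 + self.view_width]

# Two users view the same panoramic frame from different angles.
panorama = np.random.randint(0, 256, size=(480, 3000, 3), dtype=np.uint8)
user_a, user_b = PanningCircuit(), PanningCircuit()
user_a.pan(+1200, panorama.shape[1])    # user A slides far to the right
view_a = user_a.extract(panorama)
view_b = user_b.extract(panorama)       # user B's default view is unaffected
```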
Of course, stationary cameras or conventional electro-mechanically panning and tilting cameras may also be provided for the inventive system.
D. Video-Graphics Combining
As described below, the present invention preferably combines real time video with graphics. As seen in Fig. 5, an illustrative monitor screen 340 according to the present invention comprises a graphical image, such as a digitally generated street map combined with real time video images 502 at certain locations to show what is happening at those locations. This type of video graphics combining may be made possible by the video bridge described above. As seen in Fig. 3, the graphical image may be provided by a graphics server 330. In a minimal configuration of the invention, the graphics server may be nothing more than another camera 104 pointed at a map (or other image to be combined). This map is then supplied to the video bridge 310 and is combined with other video images to provide the composite image 340 seen in Fig. 5. Preferably, however, the graphics server provides the graphical image to either the video bridge 310 or the user terminal 102.
Where the graphical image is provided to the video bridge 310, the graphics server 330 provides a graphical image to the video bridge 310 in response to, for example, a user 108 request relayed as a command from the bridge controller 320. The graphical image provided by the graphics server 330 may include designated areas for the insertion of certain video images 502 from particular locations. This information may be provided to the video bridge 310 and combined appropriately by a VCC 412 for the requesting user. As seen in Fig. 3, where the graphics server 330 provides a digitally generated map to the user terminal 102, the terminal preferably includes a video overlay card 350 which supports chroma keying, and a combined, single video stream having a monochromatic background (such as a blue background) is provided from the video bridge 310. The video may be combined with the digitally generated map by removing the monochrome background from the video signal received from the video bridge 310 and replacing it with the digitally generated map received from the graphics server 330.
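For the terminal-side case, the chroma-key combining can be sketched as follows. This is only a software illustration of the overlay card's behavior: the blue key color, the tolerance value, and the function name are assumptions, since the description specifies only that a monochromatic background in the bridge's composite stream is replaced by the map.

```python
import numpy as np

def chroma_key_composite(bridge_video, map_image, key_color=(0, 0, 255), tolerance=40):
    """Replace the monochromatic background of the composite stream from the
    video bridge with the digitally generated map from the graphics server.
    key_color and tolerance are illustrative values, not taken from the patent."""
    diff = bridge_video.astype(int) - np.array(key_color, dtype=int)
    is_background = np.abs(diff).sum(axis=-1) < tolerance
    out = bridge_video.copy()
    out[is_background] = map_image[is_background]   # the map shows through the keyed areas
    return out

# Usage: both frames share the same resolution at the overlay card.
frame_from_bridge = np.full((480, 640, 3), (0, 0, 255), dtype=np.uint8)  # all blue except live insets
frame_from_bridge[100:200, 100:250] = 128                                # one live video inset
digital_map = np.full((480, 640, 3), 220, dtype=np.uint8)                # stand-in for the street map
screen_image = chroma_key_composite(frame_from_bridge, digital_map)
```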
II. Video Bridging Enabled ITS Monitoring
The present invention is a union of a number of visual services technologies to support integrated access to an unlimited number of remote camera inputs while supporting a strong contextual sense for the overall picture. A preferred end-user interface is illustrated in Fig. 5. The screen 340 comprises a video graphics map of the monitored area displayed on an ordinary NTSC or other video monitor. Integrated with the graphical image of a map are video images (preferably live, real time images) 502 of the traffic conditions at various locations on the map. The map can be zoomed in or out, or panned vertically or horizontally by the operator by using a conventional mouse, joystick, or touch-screen interface.
Corresponding to each of the real-world monitored points, the map displays a live video inset 502 of the image captured by the camera mounted at that location. If the map is zoomed in for detail, and the subject portion of the map contains a live video image, it too zooms up in size (occupying the full screen if the chosen view were centered on the camera image). If the map is zoomed out, many small video images could simultaneously be visible on the screen. Finally, if the map is zoomed out to the point where the images at each of the map monitor points are too small to be fully interpreted, then the video views preferably become icons. In this manner, the operators of such a system are able to keep track of the "big picture" in traffic situations as well as the detailed local view. To see where this may be critical, consider an emergency condition caused by an auto accident along some portion of a major thoroughfare. The operator is faced with getting emergency vehicles to the location. It is not only important to see what is happening at the accident site, but also to see how traffic is behaving at all the tributaries and attributes of that location so that emergency vehicles can be directed to and from the scene as quickly as possible.
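The zoom behavior described above amounts to scaling and repositioning each inset with the map and falling back to an icon when the inset becomes too small to read. The short sketch below assumes a simple linear map scale and an arbitrary pixel threshold for the icon switch; the data structure and names are invented for illustration.

```python
from dataclasses import dataclass

ICON_THRESHOLD_PX = 48    # assumed cutoff; below this an inset is drawn as an icon

@dataclass
class VideoInset:
    camera_id: int
    map_x: float           # camera position in map coordinates
    map_y: float
    base_size_px: int      # inset size at map scale 1.0

def layout_inset(inset, map_scale, map_origin_x, map_origin_y):
    """Scale and position one live video inset analogously with the current map
    scale; degrade it to an icon when it is too small to be interpreted."""
    screen_x = (inset.map_x - map_origin_x) * map_scale
    screen_y = (inset.map_y - map_origin_y) * map_scale
    size = inset.base_size_px * map_scale
    mode = "video" if size >= ICON_THRESHOLD_PX else "icon"
    return {"camera": inset.camera_id, "x": screen_x, "y": screen_y,
            "size": size, "mode": mode}

# Zooming out shrinks every inset together; far enough out, they all become icons.
insets = [VideoInset(1, 10.0, 4.0, 160), VideoInset(2, 42.0, 18.0, 160)]
zoomed_in = [layout_inset(i, map_scale=2.0, map_origin_x=0, map_origin_y=0) for i in insets]
zoomed_out = [layout_inset(i, map_scale=0.2, map_origin_x=0, map_origin_y=0) for i in insets]
```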
III. System Architecture
The video bridging capabilities include the ability to display each of the bridge's video inputs in a customized fashion for a virtually unlimited number of end-users. This capability is usefully applied here. Each displayed camera view is scaled analogously with the current map scale and properly positioned on the end-user's viewing screen 340.
If the video-graphics combining is performed at the video bridge 310, graphical images and video images are combined into a "group" and treated as a single image. The video bridge may feature such group operations; e.g., performing simultaneous "scale" or "move" operations on all, or a subset of, displayed images enables efficient "mass scaling." Using grouping, video images may be "attached" to the map, and the map and video images are treated as a single image.
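One way the "group" idea could be modeled is shown below. The class names and fields are illustrative, not the bridge's actual interface; the point is simply that once the map and its video insets belong to one group, a single scale or move operation touches every member.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DisplayedImage:
    name: str
    x: float
    y: float
    scale: float = 1.0

@dataclass
class ImageGroup:
    """Illustrative 'group': the map plus its attached video insets, treated as
    a single image so that group operations apply to every member at once."""
    members: List[DisplayedImage] = field(default_factory=list)

    def attach(self, image: DisplayedImage) -> None:
        self.members.append(image)

    def scale(self, factor: float) -> None:
        for m in self.members:             # mass scaling: one operation, all images
            m.x *= factor
            m.y *= factor
            m.scale *= factor

    def move(self, dx: float, dy: float) -> None:
        for m in self.members:
            m.x += dx
            m.y += dy

group = ImageGroup()
group.attach(DisplayedImage("street_map", 0.0, 0.0))
group.attach(DisplayedImage("camera_17_inset", 120.0, 80.0))
group.scale(1.5)        # zooming the map scales the attached insets with it
group.move(-40.0, 0.0)  # panning the map carries the insets along
```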
If the video-graphics combining is performed at the user terminal, or if different graphical images are supplied to the bridge (e.g., a more detailed map is sent to a VCC if an area is zoomed on), the video image locations may be defined by designating a particular camera 104. That is, a graphical image of a map may have an area designated to receive a video image from a particular camera. This designation may be sent to the video bridge, which selects the proper video feed. In a preferred embodiment, in response to a user's request (such as a joystick movement), software in a user's terminal may send information to the bridge controller instructing a panning circuit in the video bridge 310 to "slide" to another portion of the panoramic view or to zoom on a particular image, thus changing the output of the user's VCC 412. Images can be overlapped and there is no limit to the number of inputs that can be displayed.
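The control path just described (a joystick movement at the terminal, relayed to the bridge controller, which adjusts a panning circuit and the user's VCC) might look roughly like the following. The message fields, method names, and JSON encoding are assumptions; the patent does not specify a wire protocol.

```python
import json

def make_pan_request(user_id, camera_id, pan_delta=0, zoom_delta=0):
    """Illustrative control message a user's terminal might send to the bridge
    controller; the field names and JSON format are assumed for this sketch."""
    return json.dumps({
        "user": user_id,
        "camera": camera_id,
        "pan": pan_delta,        # slide the panning circuit's viewing window
        "zoom": zoom_delta,      # rescale this camera's inset in the user's VCC
    })

class VideoBridgeStub:
    """Stand-in for the video bridge 310 that simply records what it was told."""
    def __init__(self):
        self.log = []

    def pan_camera(self, camera_id, delta):
        self.log.append(("pan", camera_id, delta))

    def rescale_inset(self, user_id, camera_id, delta):
        self.log.append(("rescale", user_id, camera_id, delta))

class BridgeController:
    """Receives terminal commands and instructs the video bridge accordingly."""
    def __init__(self, video_bridge):
        self.video_bridge = video_bridge

    def handle(self, message):
        cmd = json.loads(message)
        self.video_bridge.pan_camera(cmd["camera"], cmd["pan"])
        self.video_bridge.rescale_inset(cmd["user"], cmd["camera"], cmd["zoom"])

controller = BridgeController(VideoBridgeStub())
controller.handle(make_pan_request(user_id="state_police_1", camera_id=17, pan_delta=+30))
```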
Although as many as a thousand camera signals form (switched) bridge inputs, practically only a limited number (30, for example) may appear on a screen. A human operator 108 may select the video inputs to be combined by a particular VCC 412 connected to that operator's video coder 414. The video bridge 310 may send the selected video and graphics images to the user's monitor 102.
An important advantage of this system is that it presents ITS monitoring customers with a much lower technology "bar" to vault. Traffic monitoring hubs can be moved down from regional size to local-jurisdiction size. For instance, small municipal divisions (local police forces, ambulance services, trucking companies, radio stations, etc.) may access all of the visual traffic information along a thoroughfare (or across a state). The only skills necessary for successful operation of the system are the ability to read a map and to point and click a mouse (or joystick or touch screen). The terminal equipment drops from a myriad of monitors and a complex local switching system to a single monitor or display with upstream signalling.
Note that the present invention may be configured in several ways to provide video surveillance to several locations. Fig. 6 illustrates a first type of resource sharing 600 wherein a multimedia bridge 310 (for simplicity Figs. 6, 7, and 8 do not show the bridge controller 320 or graphics server 330) is maintained by one ITS monitor station (for example, the state police), and remote access may be provided to other monitor stations, such as the local police and EMS dispatchers.
Fig. 7 illustrates a second type of resource sharing 700 wherein a centralized video bridge 310 is provided (perhaps by a video services provider). Several ITS monitor stations obtain remote access to the bridge. With this type of resource sharing, it may be possible for a service provider to provide ITS video surveillance services to any interested subscriber. For example, a commuter may connect a personal computer to the video bridge 310 to determine the traffic conditions before going to work. Note that the centralized video bridge permits the end user to have a minimum of equipment, such as a computer and a joystick. (This is particularly true if the graphics combining is performed in the video bridge 310). Fig. 8 illustrates a third type of resource sharing 800 wherein video inputs are provided to a conventional video router and to a video bridge. The video output from the bridge may be shared by a number of ITS monitors.
Fig. 9 is a diagram illustrating the software hierarchy 900 of a preferred embodiment according to the present invention. As seen in Fig. 9, the software is divided into client (user) software 902 and server (video bridge) software 904. The client software 902 includes a graphical user interface 906 and a video bridge client 908. The applications layer of the present invention is the graphical user interface (GUI) 906 that appears on the monitor, display, or screen. The video bridge client 908 provides an API for the underlying service software for the GUI 906 and relays the GUI commands to the service session 910.
The multimedia bridge client 908 keeps track of the grouping information (how the video insets are attached to the map). This software is separated from the GUI (whether the GUI runs in a set-top box to control an output displayed on a television or runs in a PC) so that tracking of the map and the attached videos can be done in a uniform way, regardless of the display device used.
The server software 904 includes a service session manager 910, a bridge manager 912, a VCM agent 914, a driver 916, and a hardware VCM 918. The service session manager 910 keeps track of the active camera inputs and of which users are currently receiving camera information. The service session manager 910 also keeps track of the access permissions of each camera; some cameras may have restricted access, so that not all potential users are permitted to receive the information from them. The bridge manager 912 keeps track of the state of the video bridge. The bridge is reconfigured when a new user requests the views of the ITS application and when a user adds or drops camera views by panning or zooming to a new part of the map that has more or fewer video insets. The VCM agent 914 and driver 916 control and track the state of a single combining module: the VCM agent 914 receives commands from the bridge manager 912, and the driver 916 sends commands to the address of the hardware (or, if a software-based bridge is used, software) VCM 918 controlled by the VCM agent 914.
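The bookkeeping performed by the service session manager and its interaction with the bridge manager might be sketched as follows; the class and method names, and the group-based permission model, are assumptions made for illustration only.

```python
# Hypothetical sketch of server-side bookkeeping: the service session manager
# tracks which users receive which cameras and enforces per-camera access
# permissions before asking the bridge manager to reconfigure the bridge.

class ServiceSessionManager:
    def __init__(self, camera_permissions: dict[str, set[str]]):
        # camera id -> set of user groups allowed to view it ("*" = unrestricted)
        self.camera_permissions = camera_permissions
        self.active_views: dict[str, set[str]] = {}   # user id -> camera ids in view

    def may_view(self, user_group: str, camera_id: str) -> bool:
        allowed = self.camera_permissions.get(camera_id, set())
        return "*" in allowed or user_group in allowed

    def add_view(self, user_id: str, user_group: str, camera_id: str,
                 bridge_manager) -> bool:
        """Add a camera to a user's view; bridge_manager is assumed to expose reconfigure()."""
        if not self.may_view(user_group, camera_id):
            return False                               # restricted camera
        self.active_views.setdefault(user_id, set()).add(camera_id)
        bridge_manager.reconfigure(user_id, self.active_views[user_id])
        return True
```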
In another preferred embodiment, electronic panning cameras (or the Personal Presence System bridge's inherent panning capability) are used to allow each user to individually control the pan, tilt, and zoom of each of the camera views.
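Per-user view state for such individually controllable views might be kept as in the brief sketch below; the field names are assumptions, and the point is only that each user's pan, tilt, and zoom settings are independent of every other user's.

```python
# Illustrative per-user pan/tilt/zoom state: two users viewing the same
# electronically pannable input hold independent settings.

from dataclasses import dataclass, field

@dataclass
class PTZState:
    pan: float = 0.0
    tilt: float = 0.0
    zoom: float = 1.0

@dataclass
class UserViews:
    views: dict[str, PTZState] = field(default_factory=dict)  # camera id -> state

    def pan_camera(self, camera_id: str, d_pan: float) -> None:
        self.views.setdefault(camera_id, PTZState()).pan += d_pan

alice, bob = UserViews(), UserViews()
alice.pan_camera("camera_104", 15.0)   # Alice pans; Bob's view is unaffected
```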
IV. Conclusion
An improved system and method employing video bridging for Intelligent Transport Systems and video camera access platforms are shown and described. The above-described embodiments of the invention are intended to be illustrative only. Numerous alternative embodiments may be devised by those skilled in the art without departing from the spirit and scope of the following claims. For example, the present invention has been illustrated in the context of an intelligent transport system; however, a person skilled in the art will readily recognize that the present invention may be used in many types of multicamera surveillance systems.

Claims

We claim:
1. A method for providing a combined graphical image and video display, comprising the steps of: a. receiving a number of video inputs; b. receiving a graphical image; and c. combining the video inputs with a graphical image so that the video images appear at desired locations on the graphical image.
2. The method of claim 1, wherein the step of receiving video inputs further comprises receiving a number of live video inputs.
3. The method of claim 1, wherein the step of combining further comprises a user selecting a number of video inputs to be combined with the graphical image from a greater number of available video inputs.
4. The method of claim 1, wherein the step of combining further comprises combining the video inputs with the graphical image in a manner which lends clarity to the video inputs.
5. The method of claim 1, wherein the step of receiving a number of video inputs further comprises receiving at least one electronically pannable video input.
6. The method of claim 5, further comprising the step of an end user individually selecting a view from the electronically pannable camera.
7. The method of claim 1, further comprising the step of performing the step of combining in a video bridge.
8. The method of claim 1, further comprising the step of performing the step of combining in a user terminal.
9. The method of claim 1, wherein the step of combining further comprises: a. combining the received video inputs into a single composite video image having a monochrome background; and b. using chroma keying, combining the single composite video image with the graphical image.
10. The method of claim 1, wherein the step of combining further comprises the steps of: a. selecting the graphical image to be a street map; b. selecting the video inputs to be video inputs viewing traffic at various locations; and c. combining the graphical image of the street map and the video input of the traffic conditions so that the video images are located on the street map in locations corresponding to the location from which the video input is being taken.
11. A system for combining a number of video inputs, comprising: a. a graphics server configured to provide a graphical image; b. a video bridge configured to receive a number of video inputs; c. video-graphics combining means for combining the video inputs and the graphical image so that the video images appear at desired locations on the graphical image.
12. The system of claim 11, wherein the video bridge is further configured to output a number of video outputs, each output being a combination of at least some of the video inputs.
13. The system of claim 11, further comprising a number of video cameras connected to the video bridge.
14. The system of claim 13, wherein at least one of the video cameras is an electronic panning camera.
15. The system of claim 11, wherein the video bridge has an output connected to a number of monitoring stations.
16. The system of claim 15, wherein at least one monitoring station is remotely located from the video bridge.
17. The system of claim 11, wherein the video-graphics combining means is configured to combine the video inputs and graphical image in a manner so that the video inputs are located on the graphical image in a manner which lends clarity to the video inputs.
18. The system of claim 17, wherein the graphical image is a street map and the video inputs are located on the street map in a location corresponding to a location from which the video input is being taken.
19. The system of claim 11, wherein the video-graphics combining means is located in the video bridge and the video bridge is configured to output the combined video inputs and the graphical image.
20. The system of claim 11, wherein the video-graphics combining means is located in a user terminal and: a. the video bridge is configured to combine the video inputs into a single composite video image having a monochromatic background; and b. the user terminal includes a video overlay card configured to receive the graphical image and the single composite video image and to replace the monochromatic background with corresponding portions of the graphical image.
21. The system of claim 11, wherein the graphics server is a video camera configured to provide a video image of a desired graphical image.
22. The system of claim 11, wherein the system further includes a video bridge controller responsive to user commands and configured to control the video bridge.
23. The system of claim 11, wherein the video bridge is configured to combine the video inputs on a picture element by picture element basis.
24. The system of claim 11, wherein the video bridge is configured to combine the video inputs so that the video images are not restricted to rectangular windows.
PCT/US1996/019639 1995-12-15 1996-12-12 Systems and methods employing video combining for intelligent transportation applications WO1997023096A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP96942940A EP0867088A4 (en) 1995-12-15 1996-12-12 Systems and methods employing video combining for intelligent transportation applications

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US892795P 1995-12-15 1995-12-15
US60/008,927 1995-12-15

Publications (1)

Publication Number Publication Date
WO1997023096A1 true WO1997023096A1 (en) 1997-06-26

Family

ID=21734534

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/019639 WO1997023096A1 (en) 1995-12-15 1996-12-12 Systems and methods employing video combining for intelligent transportation applications

Country Status (3)

Country Link
EP (1) EP0867088A4 (en)
CA (1) CA2242844A1 (en)
WO (1) WO1997023096A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4958297A (en) * 1986-07-17 1990-09-18 Honeywell Inc. Apparatus for interfacing video information and computer-generated graphics on a visual display terminal
JPH0756622A (en) * 1993-08-11 1995-03-03 Toshiba Corp Monitoring control system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4511886A (en) * 1983-06-01 1985-04-16 Micron International, Ltd. Electronic security and surveillance system
US5168451A (en) * 1987-10-21 1992-12-01 Bolger John G User responsive transit system
US4992866A (en) * 1989-06-29 1991-02-12 Morgan Jack B Camera selection and positioning system and method
US5130794A (en) * 1990-03-29 1992-07-14 Ritchey Kurtis J Panoramic display system
US5374952A (en) * 1993-06-03 1994-12-20 Target Technologies, Inc. Videoconferencing system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0867088A4 *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0913800A1 (en) * 1997-10-29 1999-05-06 Endusis Limited Monitoring system
EP0927979A2 (en) * 1997-12-29 1999-07-07 François Dipl.-Ing. Sodji Device for monitoring a building and/or a room
EP0927979A3 (en) * 1997-12-29 2000-06-28 François Dipl.-Ing. Sodji Device for monitoring a building and/or a room
EP0944260A2 (en) * 1998-03-18 1999-09-22 Kabushiki Kaisha Toshiba Remote image monitoring method & system & recording medium used for executing image monitoring
EP0944260A3 (en) * 1998-03-18 2000-04-05 Kabushiki Kaisha Toshiba Remote image monitoring method & system & recording medium used for executing image monitoring
US6239833B1 (en) 1998-03-18 2001-05-29 Kabushiki Kaisha Toshiba Remote image monitoring method and system, and recording medium used for executing image monitoring
US10473465B2 (en) 2000-10-06 2019-11-12 Vederi, Llc System and method for creating, storing and utilizing images of a geographical location
US9644968B2 (en) 2000-10-06 2017-05-09 Vederi, Llc System and method for creating, storing and utilizing images of a geographical location
GB2384128A (en) * 2001-12-13 2003-07-16 Invideo Ltd Schematic mapping of surveillance area
EP1472863A2 (en) * 2002-02-04 2004-11-03 Polycom, Inc. Apparatus and method for providing electronic image manipulation in video conferencing applications
EP1472863A4 (en) * 2002-02-04 2006-09-20 Polycom Inc Apparatus and method for providing electronic image manipulation in video conferencing applications
EP1535157A4 (en) * 2002-07-08 2010-09-08 Precache Inc Packet routing via payload inspection for alert services, for digital content delivery and for quality of service management and caching with selective multicasting in a publish-subscribe network
EP1535157A2 (en) * 2002-07-08 2005-06-01 Precache Inc. Packet routing via payload inspection for alert services, for digital content delivery and for quality of service management and caching with selective multicasting in a publish-subscribe network
WO2006006465A1 (en) 2004-07-12 2006-01-19 Matsushita Electric Industrial Co., Ltd. Camera control device
EP1768412A1 (en) * 2004-07-12 2007-03-28 Matsushita Electric Industrial Co., Ltd. Camera control device
EP1768412A4 (en) * 2004-07-12 2011-10-12 Panasonic Corp Camera control device
US8085298B2 (en) 2004-07-12 2011-12-27 Panasonic Corporation Camera control device
EP1798979A4 (en) * 2004-10-06 2011-10-19 Panasonic Corp Monitoring device
EP1798979A1 (en) * 2004-10-06 2007-06-20 Matsushita Electric Industrial Co., Ltd. Monitoring device
US7840032B2 (en) 2005-10-04 2010-11-23 Microsoft Corporation Street-side maps and paths
WO2008028720A1 (en) * 2006-09-08 2008-03-13 Robert Bosch Gmbh Method for operating at least one camera
DE102006042318B4 (en) 2006-09-08 2018-10-11 Robert Bosch Gmbh Method for operating at least one camera
GB2457707A (en) * 2008-02-22 2009-08-26 Crockford Christopher Neil Joh Integration of video information
EP2093999A1 (en) 2008-02-22 2009-08-26 Christopher Neil John Crockford Integration of video information
US20170188113A1 (en) * 2010-12-03 2017-06-29 At&T Intellectual Property I, L.P. Systems and methods to test media devices
US9635428B2 (en) * 2010-12-03 2017-04-25 At&T Intellectual Property I, L.P. System and methods to test media devices
US10187702B2 (en) 2010-12-03 2019-01-22 At&T Intellectual Property I, L.P. Systems and methods to test media devices
US20120143359A1 (en) * 2010-12-03 2012-06-07 At&T Intellectual Property I, L.P. System and Methods to Test Media Devices
WO2014187789A1 (en) * 2013-05-23 2014-11-27 Alcatel Lucent Method and apparatus for improved network optimization for providing video from a plurality of sources to a plurality of clients
EP2806633A1 (en) * 2013-05-23 2014-11-26 Alcatel Lucent Method and apparatus for improved network optimization for providing video from a plurality of sources to a plurality of clients
US10070197B2 (en) 2013-05-23 2018-09-04 Alcatel Lucent Method and apparatus for improved network optimization for providing video from a plurality of sources to a plurality of clients
EP2806632A1 (en) * 2013-05-23 2014-11-26 Alcatel Lucent Method and apparatus for optimizing video quality of experience in end-to-end video applications
US20170339436A1 (en) * 2016-05-17 2017-11-23 SpectraRep, LLC Method and system for datacasting and content management
US10021435B2 (en) * 2016-05-17 2018-07-10 SpectraRep, LLC Method and system for datacasting and content management

Also Published As

Publication number Publication date
EP0867088A4 (en) 2000-04-05
CA2242844A1 (en) 1997-06-26
EP0867088A1 (en) 1998-09-30

Similar Documents

Publication Publication Date Title
WO1997023096A1 (en) Systems and methods employing video combining for intelligent transportation applications
JP4167777B2 (en) VIDEO DISPLAY DEVICE, VIDEO DISPLAY METHOD, AND RECORDING MEDIUM CONTAINING PROGRAM FOR DISPLAYING VIDEO
US6356664B1 (en) Selective reduction of video data using variable sampling rates based on importance within the image
JP6456358B2 (en) Monitoring system and monitoring method
JP2006033793A (en) Tracking video reproducing apparatus
JP2007081553A (en) Camera system and display method thereof
KR100835085B1 (en) A real time processing system and method of a traffic accident using an unmanned recording device
WO2005019837A2 (en) Spherical surveillance system architecture
US7631261B2 (en) Efficient method for creating a visual telepresence for large numbers of simultaneous users
CN112449160A (en) Video monitoring method and device and readable storage medium
JP4206297B2 (en) Camera image monitoring apparatus and camera image monitoring method
KR101696730B1 (en) Method for intelligent city integrated monitoring and controlling system buildup
JP4977489B2 (en) Surveillance camera system
JP2006333319A (en) Video accumulation and distribution system
JP2003244683A (en) Remote monitor system and program
EP2725489A1 (en) Method of operating a video processing apparatus
JP5790788B2 (en) Monitoring system
JP2003134045A5 (en)
JP2003134045A (en) System and method of broadcast streaming distribution
JP2005175701A (en) Remote video image monitoring system
JP2001339710A (en) Selection controlling system for monitoring image
JPH11243508A (en) Image display device
Volner et al. Main security system of the first highway tunnel on Slovakia
Hall et al. A novel interactivity environment for integrated intelligent transportation and telematic systems
CN113938653A (en) Multi-video monitoring display method based on AR echelon cascade

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1996942940

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2242844

Country of ref document: CA

Ref country code: CA

Ref document number: 2242844

Kind code of ref document: A

Format of ref document f/p: F

NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 97522878

Format of ref document f/p: F

WWP Wipo information: published in national office

Ref document number: 1996942940

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1996942940

Country of ref document: EP