WO2010127418A1 - Systems and methods for the autonomous production of videos from multi-sensored data - Google Patents


Info

Publication number
WO2010127418A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
view
viewpoint
objects
cameras
Prior art date
Application number
PCT/BE2010/000039
Other languages
French (fr)
Inventor
Christophe De Vleeschouwer
Fan Chen
Original Assignee
Universite Catholique De Louvain
Priority date
Filing date
Publication date
Application filed by Universite Catholique De Louvain filed Critical Universite Catholique De Louvain
Priority to BRPI1011189-1A priority Critical patent/BRPI1011189B1/en
Priority to PL10737234T priority patent/PL2428036T3/en
Priority to MX2011011799A priority patent/MX2011011799A/en
Priority to US13/319,202 priority patent/US8854457B2/en
Priority to EP10737234.4A priority patent/EP2428036B1/en
Priority to ES10737234.4T priority patent/ES2556601T3/en
Priority to CA2761187A priority patent/CA2761187C/en
Publication of WO2010127418A1 publication Critical patent/WO2010127418A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/268Signal distribution or switching

Definitions

  • the present invention relates to the integration of information from multiple cameras in a video system, e.g. a television production or intelligent surveillance system and to automatic production of video content, e.g. to render an action involving one or several persons and/or objects of interest.
  • the APIDIS project (Autonomous Production of Images based on Distributed and Intelligent Sensing) tries to provide a solution to generate personalized contents for improved and low-cost visual representation of controlled scenarios such as sports television, where image quality and perceptual comfort are as essential as efficient integration of contextual information [1].
  • Production artefacts consist of both visual artefacts, mainly flickering effects due to shaking or fast zoom in/out of viewpoints, and story-telling artefacts, such as story discontinuity caused by fast camera switching and dramatic viewpoint movements.
  • the first category selects one view (i.e. one video) among the ones covered by a predefined set of cameras, based on some activity detection mechanism.
  • each camera is activated based on some external device, which triggers the video acquisition each time a particular event is detected (e.g. an object entering the field of view).
  • audio sensors are used to identify the direction in which the video should be captured.
  • the second category captures a rich visual signal, either based on omnidirectional cameras or on wide-angle multi-camera setting, so as to offer some flexibility in the way the scene is rendered at the receiver-end.
  • the systems in [17] and [18] respectively consider multi-camera and omnidirectional viewing systems to capture and broadcast wide-angle video streams.
  • an interface allows the viewer to monitor the wide-angle video stream(s) to select which portion of the video to unwrap in real time. Further, the operator can stop the playback and control pan-tilt-zoom effects in a particular frame.
  • the interface is improved based on the automatic detection of the video areas in which an event participant is present. Hence, the viewer gets the opportunity to choose interactively which event participant (s)he would like to look at.
  • [19-21] detect people of interest in a scene (typically a lecturer or a videoconference participant).
  • the improvement over [18] is twofold. Firstly, in [19-21], methods are proposed to define automatically a set of candidate shots based on automatic analysis of the scene. Secondly, mechanisms are defined to select automatically a shot among the candidate shots.
  • the shot definition relies on detection and tracking of the lecturer, and probabilistic rules are used to pseudo- randomly switch from the audience to the lecturer camera during a lecture.
  • a list of candidate shots is also defined based on the detection of some particular object of interest (typically a face), but more sophisticated editing effects are considered to create a dynamic (videoconference) rendering.
  • one shot can pan from one person to another, or several faces can be pasted next to each other in a single shot.
  • the edited output video is then constructed by selecting a best shot among the candidate shots for each scene (in [20] and [21], a scene corresponds to a particular period of time).
  • the best shot is selected based on a pre-defined set of cinematic rules, e.g. to avoid too many of the same shot in a row.
  • the shot parameters (i.e. the cropping parameters in the view at hand) stay fixed until the camera is switched.
  • a shot is directly associated to an object, so that, ultimately, the shot selection ends up selecting the object(s) to render, which might be difficult and irrelevant in contexts that are more complex than a videoconference or a lecture.
  • [19-21] do not select the shot based on the joint processing of the positions of the multiple objects.
  • the third and last category of semi-automatic video production systems differentiates the cameras that are dedicated to scene analysis from the ones that are used to capture the video sequences.
  • a grid of cameras is used for sport scene analysis purposes.
  • pan-tilt-zoom (PTZ) cameras that collect videos of players of interest (typically the one that holds the puck or the ball).
  • [22] must implement all scene analysis algorithms in real time, since it aims at controlling the PTZ parameters of the camera instantaneously, as a function of the action observed in the scene. More importantly and fundamentally, [22] selects the PTZ parameters to capture a specific detected object and not to offer appropriate rendering of a team action, potentially composed of multiple objects-of-interest. In this it is similar to [19-21]. Also, when multiple videos are collected, [22] does not provide any solution to select one of them. It just forwards all the videos to an interface that presents them in an integrated manner to a human operator. This is the source of a bottleneck when many source cameras are considered.
  • US2008/0129825 discloses control of motorized camera to capture images of an individual tracked object, e.g. for individual sports like athletics competitions.
  • the user selects the camera through a user interface.
  • the location units are attached to the object. Hence they are intrusive.
  • GB2402011 discloses an automated camera control using event parameters. Based on player tracking and a set of trigger rules, the field of view of cameras is adapted and switched between close, mid and far views. A camera is selected based on trigger events.
  • a trigger event typically corresponds to specific movements or actions of sports(wo)men, e.g. the service of a tennis player, or to scoreboard information updates.
  • US2004/0105004A1 relates to rendering talks or meetings. Tracking cameras are exploited to render the presenter or a member of the audience who asks a question. The presenter and the audience members are tracked based on sound source localization, using an array of microphones. Given the position of the tracking camera target, the PTZ parameters of the motorized camera are controlled so as to provide a smooth edited video of the target. The described method and system is only suited to follow a single individual person. With respect to the selection of the camera, switching is disclosed between a set of very distinct views (one overview of the room, one view of the slides, one close view on the presenter, and one close view on a speaking audience member). The camera selection process is controlled based on event detection (e.g. a new slide appearing, or a member of the audience speaking) and videography rules defined by professionals, to emulate a human video production team.
  • EP1289282 (A1) discloses a video sequence automatic production method and system. Inventors: AYER SERGE [CH]; MOREAUX MICHEL [CH] (+1); Applicant: DARTFISH SA [CH]; EC: H04N5/232; IPC: H04N5/232.
  • An object of the present invention is to provide computer based methods and systems for the autonomous production of an edited video, composed based on the multiple video streams captured by a network of cameras, distributed around a scene of interest.
  • the present invention provides an autonomous computer based method and system for the personalized production of videos, such as team sport videos (e.g. basketball videos), from multi-sensored data under limited display resolution.
  • Embodiments of the present invention relate to the selection of a view to display from among the multiple video streams captured by the camera network.
  • Technical solutions are provided to provide perceptual comfort as well as an efficient integration of contextual information, which is implemented, for example, by smoothing generated viewpoint/camera sequences to alleviate flickering visual artefacts and discontinuous story-telling artefacts.
  • a design and implementation of the viewpoint selection process is disclosed that has been verified by experiments, which shows that the method and system of the present invention efficiently distribute the processing load across cameras, and effectively selects viewpoints that cover the team action at hand while avoiding major perceptual artefacts.
  • the present invention provides a computer based method for autonomous production of an edited video from multiple video streams captured by a plurality of cameras distributed around a scene of interest, the method comprising:
  • the field of view parameters refer, for example, to the cropping window in a static camera, and/or to the pan-tilt-zoom and position parameters in a motorized and moving camera.
  • the concept of action following can be quantified by measuring the amount of pixels associated to each object/person of interest in the displayed image. Accurate following of the action results from complete and close rendering, where completeness counts the number of objects/persons in the displayed image, while closeness measures the amount of pixels available to describe each object.
  • building the edited video by selecting and concatenating video segments provided by one or more individual cameras, in a way that maximizes completeness and closeness metrics along the time, while smoothing out the sequence of rendering parameters associated to concatenated segments.
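The completeness and closeness metrics just described can be sketched as follows. This is a minimal illustration, not code from the patent: the function names, the object representation (centre coordinates plus a pixel count), and the exact scoring are assumptions.

```python
# Sketch of the completeness/closeness metrics: completeness counts the
# objects-of-interest that fall inside a candidate cropping window, while
# closeness measures the pixels available to describe each rendered object.
# All names and the exact scoring are illustrative assumptions.

def completeness(objects, window):
    """Number of objects whose centre lies inside the cropping window."""
    x0, y0, w, h = window
    return sum(1 for (x, y, pixels) in objects
               if x0 <= x < x0 + w and y0 <= y < y0 + h)

def closeness(objects, window, out_res):
    """Average pixels per visible object, after scaling the crop to out_res."""
    x0, y0, w, h = window
    scale = min(1.0, out_res / w)      # sub-sampling when the crop is too wide
    inside = [pixels for (x, y, pixels) in objects
              if x0 <= x < x0 + w and y0 <= y < y0 + h]
    return (scale * scale) * sum(inside) / len(inside) if inside else 0.0
```

A wide crop raises completeness but lowers closeness once it exceeds the output resolution, which is exactly the trade-off the production method optimizes.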
  • the selecting of rendering parameters can be for all objects or objects-of-interest simultaneously.
  • the knowledge about the position of the objects in the images can be exploited to decide how to render the captured action.
  • the method can include selecting field of view parameters for the camera that renders action as a function of time based on an optimal balance between closeness and completeness metrics.
  • the field of view parameters refer to the crop in camera view of static cameras and/or to the pan-tilt-zoom or displacement parameters for dynamic and potentially moving cameras.
  • the closeness and completeness metrics can be adapted according to user preferences and/or resources.
  • a user resource can be encoding resolution.
  • a user preference can be at least one of preferred object, or preferred camera.
  • Images from all views of all cameras can be mapped to the same absolute temporal coordinates based on a common unique temporal reference for all camera views.
  • field of view parameters are selected that optimize the trade-off between completeness and closeness.
  • the viewpoint selected in each camera view can be rated according to the quality of its completeness/closeness trade-off, and to its degree of occlusions.
  • the parameters of an optimal virtual camera that pans, zooms and switches across views can be computed to preserve high ratings of selected viewpoints while minimizing the amount of virtual camera movements.
  • the method can include selecting the optimal field of view in each camera, at a given time instant.
  • a field of view V_k in the k-th camera view is defined by the size S_k and the center C_k of the window that is cropped in the k-th view for actual display. It is selected to include the objects of interest and to provide a high resolution description of the objects, and an optimal field of view V_k* is selected to maximize a weighted sum of object interests, i.e. V_k* = argmax over (C_k, S_k) of Σ_n I_n · m(x_{n,k} | C_k, S_k) · α(S_k, u), where, in the above equation:
  • the function m(.) modulates the weights of the n-th object according to its distance to the center of the viewpoint window, compared to the size of this window.
  • the vector u reflects the user preferences, in particular, its component u res defines the resolution of the output stream, which is generally constrained by the transmission bandwidth or end-user device resolution.
  • α(.) reflects the penalty induced by the fact that the native signal captured by the k-th camera has to be sub-sampled once the size of the viewpoint becomes larger than the maximal resolution u_res allowed by the user.
  • α(.) is equal to one when S_k ≤ u_res, and decreases with S_k afterwards.
  • the method includes rating the viewpoint associated to each camera according to the quality of its completeness/closeness trade-off, and to its degree of occlusions.
  • the highest rate should correspond to a view that (1) makes most objects of interest visible, and (2) is close to the action, meaning that it presents important objects with lots of details, i.e. at a high resolution.
  • the rate I_k(V_k*, u) associated to the k-th camera view is defined as follows, where:
  • I_n denotes the level of interest assigned to the n-th object detected in the scene.
  • x_n denotes the position of the n-th object in the 3D space;
  • o_k(x_n) measures the occlusion ratio of the n-th object in camera view k, knowing the position of all other objects, the occlusion ratio of an object being defined to be the fraction of pixels of the object that are hidden by other objects when projected on the camera sensor;
  • the height h_k(x_n) is defined to be the height in pixels of the projection in view k of a reference height of a reference object located in x_n.
  • the value of h_k(x_n) is directly computed based on camera calibration, or, when calibration is not available, it can be estimated based on the height of the object detected in view k.
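As a rough illustration of this rating, the rate of a camera view can be computed as a weighted sum over the detected objects, discounting each object by its occlusion ratio and rewarding a large rendered height in pixels. The function below is a hedged sketch; the names and the exact combination of terms are assumptions consistent with the definitions above.

```python
def view_rate(interests, occlusions, heights):
    """Rate one camera view: each object contributes its level of interest,
    discounted by its occlusion ratio (fraction of hidden pixels) and scaled
    by its projected height in pixels in that view."""
    return sum(i * (1.0 - o) * h
               for i, o, h in zip(interests, occlusions, heights))
```

A view where the important objects are unoccluded and rendered with many pixels scores highest, matching criteria (1) and (2) above.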
  • the method may comprise smoothing the sequence of camera indices and corresponding viewpoint parameters, wherein the smoothing process is for example implemented based on two Markov Random Fields, linear or non-linear low-pass filtering mechanism, or via a graph model formalism, solved based on conventional Viterbi algorithm.
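The Viterbi-based smoothing mentioned above can be sketched as a dynamic program over per-frame camera rates, with a fixed penalty each time the selected camera index changes. The cost model, names, and data layout are illustrative assumptions, not the patent's exact formulation.

```python
def smooth_camera_sequence(rates, switch_cost):
    """Pick one camera per frame so as to maximize the summed view rates
    minus a penalty each time the selected camera index changes
    (a simple Viterbi-style dynamic program).
    rates[t][c] is the rate of camera c at frame t."""
    n_frames, n_cams = len(rates), len(rates[0])
    score = list(rates[0])                 # best score ending in each camera
    back = []                              # back-pointers, one list per frame
    for t in range(1, n_frames):
        best_prev = max(range(n_cams), key=lambda c: score[c])
        new_score, choices = [], []
        for c in range(n_cams):
            stay, switch = score[c], score[best_prev] - switch_cost
            choices.append(c if stay >= switch else best_prev)
            new_score.append(max(stay, switch) + rates[t][c])
        score = new_score
        back.append(choices)
    cam = max(range(n_cams), key=lambda c: score[c])
    seq = [cam]
    for choices in reversed(back):         # backtrack to recover the path
        cam = choices[cam]
        seq.append(cam)
    return seq[::-1]
```

A large switch_cost keeps the virtual camera steady even when another view is momentarily better, which is how flickering and discontinuous story-telling artefacts are avoided.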
  • the capturing of the multiple video streams may be by static or dynamic cameras.
  • the present invention also includes a computer based system comprising a processing engine and memory for autonomous production of an edited video from multiple video streams captured by a plurality of cameras distributed around a scene of interest, adapted to carry out any of the methods of the present invention.
  • the system can comprise: a detector for detecting objects in the images of the video streams, first means for selecting one or more camera viewpoints based on joint processing of positions of multiple objects that have been detected, second means for selecting rendering parameters that maximize and smooth out closeness and completeness metrics by concatenating segments in the video streams provided by one or more individual cameras.
  • the computer based system can have
  • means for selecting for each camera the field of view that renders the scene of interest in a way that (allows the viewer to) follows the action carried out by the multiple and interacting objects/persons that have been detected.
  • the field of view parameters refer, for example, to the cropping window in a static camera, and/or to the pan-tilt-zoom and position parameters in a motorized and moving camera.
  • the concept of action following can be quantified by measuring the amount of pixels associated to each object/person of interest in the displayed image. Accurate following of the action results from complete and close rendering, where completeness counts the number of objects/persons in the displayed image, while closeness measures the amount of pixels available to describe each object.
  • the present invention also provides a computer program product that comprises code segments which when executed on a processing engine execute any of the methods of the invention or implement any system according to the invention.
  • the present invention also includes a non-transitory machine readable signal storage medium storing the computer program product.
  • the present invention can deal with scenes involving several interacting moving persons/objects of interest.
  • those scenes are denoted as team actions, and typically correspond to the scenes encountered in team sports context.
  • An aim of the present invention is to target the production of semantically meaningful, i.e. showing the action of interest, and perceptually comfortable contents from raw multi-sensored data.
  • the system according to the present invention is computer based including memory and a processing engine and is a computationally efficient production system, e.g. based on a divide-and-conquer paradigm (see Fig. 15).
  • the best field of view is first computed for each individual camera, and then the best camera to render the scene is selected. Together the camera index and its field of view define the viewpoint to render the action.
  • field of view definition is limited to a crop of the image captured by the camera.
  • the field of view directly results from the pan-tilt-zoom parameters of the camera, and can thus capture an arbitrary rectangular portion of the light field reaching the centre of the camera.
  • Completeness stands for the integrity of action rendering.
  • closeness defines the fineness of detail description (typically the average amount of pixels that are available to render the persons/objects of interest), and smoothness is a term referring to the continuity of viewpoint selection .
  • the present invention is completely autonomous and self-governing, in the sense that it can select the pixels to display without any human intervention, based on a default set of production parameters and on the outcomes of people detection systems. But the invention can also deal with user preferences, such as the user's narrative profile, and device capabilities. Narrative preferences can be summarized into four descriptors, i.e., user preferred group of objects or "team", user preferred object or "player", user preferred "view type" (e.g. close zoom-in or far zoom-out views), and user preferred "camera". All device constraints, such as display resolution, network speed, and decoder performance, are abstracted as the output resolution parameter, which denotes the resolution at which the output video is encoded to be conveyed and displayed at the end-host.
  • a set of cameras that (partly) cover the same area are considered, which are likely to be activated simultaneously based on any activity detection mechanism which is another important advantage of the present invention over the prior art.
  • the purpose of the invention is thus not to select a camera view based on the fact that some activity was detected in the view. Rather, the objective is to select along the time the camera view and its corresponding variations in parameters such as cropping or PTZ parameters, to best render the action occurring in the covered area.
  • quality of rendering refers to the optimization of a trade-off between measures of closeness, completeness, and smoothness.
  • the present invention has an advantage of dynamically adapting and smoothing out viewpoint parameters with time, which is an improvement over prior art systems in which the shot parameters (e.g. the cropping parameters in the view at hand) stay fixed until the camera is switched.
  • a choice between one object or another is not made, but rather a selection is made of the viewpoint based on the joint processing of the positions of the multiple objects that have been detected.
  • a selection is made of the viewpoints sequence that is optimal in the way it maximizes and smoothes out closeness and completeness metrics e.g. for all objects simultaneously.
  • the methods and systems of the present invention capture and produce content automatically, without the need for costly handmade processes (no technical team or cameraman is needed).
  • the present invention aims at keeping the production of content profitable even for small- or medium-size targeted audiences. Thereby, it promotes the emergence of novel markets, offering a large choice of contents that are of interest for a relatively small number of users (e.g. the summary of a regional sport event, a university lecture, or a day at the nursery).
  • An aim of the present invention is to produce a video report of an event based on the concatenation of video (and optionally corresponding audio) segments captured by a set of cameras.
  • both static and dynamic cameras can be manipulated by the present invention: o Using static sensors adds to cost-effectiveness because it permits storing all relevant content and processing it off-line, to select the fragments of streams that are worth being presented to the viewer.
  • the autonomous production principles described below could as well be used to control a (set of) dynamic PTZ camera(s).
  • the information about the location of objects-of-interest has to be provided in real-time, e.g. based on the real-time analysis of the signal captured by some audio-visual sensors (as done in [ref]), or based on information collected from embedded transmitters.
  • the space of candidate fields of view is defined by the position and control parameters of the PTZ camera, and not by the cropped image within the view angle covered by the static camera.
  • the main assumption underlying the networked acquisition setting is the existence of a common unique temporal reference for all camera views, so that the images from all cameras can be mapped to the same absolute temporal co-ordinates of the scene at hand.
  • the cameras are thus assumed to be loosely, but not necessarily tightly, synchronized.
  • the loose synchronization refers to a set of cameras that capture images independently, and that relies on timestamps to associate the images that have been captured at similar, but not necessarily identical, time instants.
  • a tight synchronization would refer to synchronized capture of the images by the cameras, as done when acquisition is controlled by a common trigger signal.
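Loose synchronization as described above can be sketched by matching each reference time instant to the camera frame with the nearest timestamp. The helper below is a hypothetical illustration, not part of the patent.

```python
import bisect

def associate_frames(ref_times, cam_times):
    """For each reference instant, return the index of the camera frame whose
    timestamp is closest - i.e. captured at a similar, but not necessarily
    identical, time. cam_times must be sorted in increasing order."""
    matches = []
    for t in ref_times:
        i = bisect.bisect_left(cam_times, t)
        # the nearest timestamp is either just before or just after t
        candidates = [j for j in (i - 1, i) if 0 <= j < len(cam_times)]
        matches.append(min(candidates, key=lambda j: abs(cam_times[j] - t)))
    return matches
```

Under loose synchronization, each camera captures independently and this nearest-timestamp association replaces the common trigger signal of a tightly synchronized setup.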
  • the invention has to know the position of objects-of-interest in the scene.
  • This knowledge might be an (error-prone) estimate, and can refer either to the position of objects in the 3D scene, or to the position of objects in each one of the camera views.
  • This information can be provided based on transmitters that are carried by the objects to be tracked in the scene of interest.
  • This knowledge can also be provided by a non- intrusive alternative, e.g. by exploitation of a set of video signals captured by a network of static cameras, e.g. the ones used for video report production, to detect and track the objects-of-interest.
  • the method is described in "Detection and Recognition of Sports(wo)men from Multiple Views, D. Delannay, N. Danhier, and C. De Vleeschouwer, Third ACM/IEEE International Conference on Distributed Smart Cameras, Como, Italy, September 2009" which is incorporated herein by reference in its entirety. It builds on a background reference model to identify the pixels that change in each view.
  • the change detection masks that are collected in each view can be merged, e.g. in a ground occupancy mask, to identify the position of objects-of-interest in the 3D space (see for example the approach depicted in Figure 16).
  • Particle filters or graph-based techniques can then be used to link occurrences of the same object along the time line. Note that such detection and tracking techniques are well known to those skilled in the art, and will not be described in detail herein. The embodiment of these algorithms that has been implemented is described in the reference above, and offers the advantage of handling occlusions in a computationally efficient way.
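The merging of per-view change-detection masks into a ground occupancy mask can be sketched as a voting scheme: every foreground pixel votes for the ground cell its view projects it to. The pixel-to-ground projection is abstracted as a callable here; all names and the dictionary-based mask representation are assumptions for illustration.

```python
def ground_occupancy(masks, projections, grid_shape):
    """Merge change-detection masks from several views into one ground
    occupancy map: every foreground pixel votes for the ground cell that
    its view's projection maps it to."""
    rows, cols = grid_shape
    occupancy = [[0.0] * cols for _ in range(rows)]
    for mask, project in zip(masks, projections):
        for (px, py), is_foreground in mask.items():
            if is_foreground:
                gx, gy = project(px, py)      # image pixel -> ground cell
                if 0 <= gx < rows and 0 <= gy < cols:
                    occupancy[gx][gy] += 1.0
    return occupancy
```

Peaks in the resulting map indicate likely object positions in the 3D scene; the tracking stage then links those peaks along the time line.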
  • the approach is generic in the sense that it can integrate a large range of user preferences including transmission or display resources, semantic interest (like preferred player), or narrative preferences (dealing with the preferred way to visualize the story, e.g. preferred camera or zoom-in factor).
  • the present invention aims at selecting the sequence of viewpoints that optimizes scene rendering along the time, with respect to the detected persons/objects-of-interest.
  • a viewpoint refers to a camera index and to the window that is cropped in that particular camera view, for actual display.
  • the optimization of the rendering has to: Maximize the notion of completeness, which measures to which extent the (pixels of the) objects-of-interest are included and visible within the displayed viewpoint.
  • this involves minimizing the degree of object occlusion, which measures the fraction of an object that is present in the scene, but is (e.g. at least partly) hidden by other objects;
  • Maximize the notion of closeness which refers to the fineness of details, i.e. the density of pixels or resolution, when rendering the objects-of-interest.
  • methods and systems according to embodiments of the present invention propose to balance completeness and closeness, optionally as a function of individual user preferences (in terms of viewpoint resolution, or preferred camera or players for example).
  • smoothness of transitions between the rendering parameters of consecutive frames of the edited video has also to be taken into account when considering the production of a temporal segment.
  • Step 1 At each time instant, and for each camera view, select the variations in parameters such as cropping parameters that optimize the trade-off between completeness and closeness.
  • the completeness/closeness trade-off is measured as a function of the user preferences. For example, depending on the resolution at which (s)he accesses the produced content, a user might prefer a small (zoom-in) or a large (zoom-out) viewpoint.
  • Step 2 Rate the field of view selected in each camera view according to the quality (in terms of user preferences) of its completeness/closeness trade-off, and to its degree of occlusions.
  • Step 3 For the temporal segment at hand, compute the parameters of an optimal virtual
  • the first step consists in selecting the optimal field of view for each camera, at a given time instant. To simplify notations, in the following, we omit the time index t.
  • a field of view V_k in the k-th static camera is defined by the size S_k and the center C_k of the window that is cropped in the k-th view for actual display.
  • the optimal field of view V_k* is selected, preferably according to user preferences, to maximize a weighted sum of object interests, i.e. V_k* = argmax over (C_k, S_k) of Σ_n I_n · m(x_{n,k} | C_k, S_k) · α(S_k, u), where:
  • o I_n denotes the level of interest assigned to the n-th object recognized in the scene. This assignment can be done by any suitable method and the present invention assumes that this assignment has been completed and the results can be used by the present invention. These levels of interest can be defined by the user, e.g. once for the entire event, and made available to the present invention. In application scenarios for which objects are detected but not labelled, the weight is omitted, i.e. replaced by a constant unitary value.
  • o x_{n,k} denotes the position of the n-th object in camera view k.
  • the function m(.) modulates the weights of the n-th object according to its distance to the center of the viewing window, compared to the size of this window.
  • the weight should be high and positive when the object-of-interest is located in the center of the display window, and should be negative or zero when the object lies outside the window. Hence, m(.) should be positive between 0 and 0.5 (in distance relative to the window size), and lower than or equal to zero beyond 0.5.
  • Many functions are appropriate, and the choice of a particular instance could for example be driven based on computational issues. Examples of functions are the well-known Mexican hat or Gaussian functions. Another example is provided in detail in a particular embodiment of the invention described in appendix 1 of this application.
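As a concrete instance, a Gaussian-based modulation function with the required sign behaviour (positive below a relative distance of 0.5, zero at the window border, negative beyond) could look like the sketch below; the shape parameter sigma is an assumption, not a value from the patent.

```python
import math

def m(distance, window_size, sigma=0.3):
    """Weight of an object at a given distance from the window centre:
    positive inside the window (relative distance < 0.5), exactly zero on
    the border, negative outside - a shifted Gaussian, one of the many
    admissible choices (Mexican hat functions also qualify)."""
    r = distance / window_size            # relative distance to the centre
    return math.exp(-(r / sigma) ** 2) - math.exp(-(0.5 / sigma) ** 2)
```

Objects near the centre of the candidate window thus pull the score up, while objects cut off by the window border pull it down.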
  • the vector u reflects the user constraints or preferences in terms of viewing window resolution and camera index.
  • its component u_res defines the resolution of the output stream, which is generally constrained by the transmission bandwidth or end-user device resolution.
  • Its component u_close is set to a value larger than 1; increasing it favours close viewpoints over large zoom-out views.
  • the other components of u deal with camera preferences, and are defined below, while describing the second step of the invention.
  • α(.) reflects the penalty induced by the fact that the native signal captured by the k-th camera has to be down-sampled once the size of the viewpoint becomes larger than the maximal resolution u_res allowed by the user.
  • This function typically decreases with S_k.
  • An appropriate choice consists in setting the function equal to one when S_k ≤ u_res, and in making it decrease afterwards.
  • An example of α(.) is defined by
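The patent's own example formula is given as an image in the original; one plausible instance consistent with the stated properties (equal to one up to u_res, decreasing afterwards) is sketched below, where the linear-ratio decay is an assumption:

```python
def downsampling_penalty(viewpoint_size, u_res):
    """Sketch of alpha(.): no penalty while the cropped window fits
    within the output resolution u_res, then a decay proportional to
    the down-sampling ratio (one plausible choice among many)."""
    return 1.0 if viewpoint_size <= u_res else u_res / viewpoint_size
```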
  • the rate I_k(v_k, u) associated with the k-th camera view is defined as follows:
  • x_n denotes the position of the n-th object in the 3D space
  • o_k(x_n) measures the occlusion ratio of the n-th object in camera view k, knowing the position of all other objects.
  • the occlusion ratio of an object is defined to be the fraction of pixels of the object that are hidden by other objects when projected on the camera sensor.
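This definition translates directly into a pixel-set computation; representing projected silhouettes as sets of pixel coordinates is an implementation assumption for this sketch:

```python
def occlusion_ratio(object_pixels, occluder_pixels):
    """Fraction of the object's projected pixels hidden by other
    objects, following the definition above.  Both arguments are sets
    of (row, col) pixel coordinates; occluder_pixels gathers the
    pixels of all nearer objects."""
    if not object_pixels:
        return 0.0
    return len(object_pixels & occluder_pixels) / len(object_pixels)
```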
  • the height h_k(x_n) is defined to be the height in pixels of the projection in view k of a six-foot-tall vertical object located in x_n. Six feet is the average height of the players.
  • the value of h_k(x_n) is directly computed based on camera calibration. When calibration is not available, it can be estimated based on the height of the object detected in view k.
  • β_k(.) reflects the impact of the user preferences in terms of camera view and display resolution.
  • β_k(.) can be defined as
  • the embodiment of the invention that is described in the appendix 1 defines the function to maximize based on the product of a closeness factor with a completeness factor, each factor measuring a weighted sum of individual object display resolution and visibility. Hence, it replaces the sum of products by a product of sums, but still follows the same basic idea of taking user preferences into account while trading off two antagonistic terms, reflecting the concepts of closeness and completeness, respectively.
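Combining the ingredients of the first two steps, the camera-wise viewpoint selection can be sketched as a brute-force search over candidate crop windows. The scoring functions below are illustrative stand-ins for the patent's formulas (which appear as images in the original), and the candidate list is assumed to be supplied by the caller:

```python
def select_viewpoint(objects, candidates, u_res, u_close=1.2):
    """Brute-force sketch of camera-wise viewpoint selection.

    objects: list of (interest, x, y); candidates: list of
    (cx, cy, size) crop windows.  Each candidate is scored by a
    weighted sum of object interests, modulated by the distance to the
    window center relative to the window size, and penalized once the
    window exceeds the output resolution u_res; u_close > 1 biases the
    penalty toward closer views.  All weighting choices here are
    illustrative, not the patent's exact formulas.
    """
    def modulation(d, size):            # positive inside, negative outside
        r = d / size
        return 1.0 - (2.0 * r) ** 2     # zero crossing at r = 0.5

    def penalty(size):                  # down-sampling penalty
        return 1.0 if size <= u_res else (u_res / size) ** u_close

    def score(cand):
        cx, cy, size = cand
        total = 0.0
        for interest, x, y in objects:
            d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            total += interest * modulation(d, size)
        return total * penalty(size)

    return max(candidates, key=score)
```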
  • the third and last step consists in smoothing the sequence of camera indices and corresponding viewpoint parameters.
  • the smoothing process is implemented based on the definition of two Markov Random Fields (see Figure 5, and the description of the embodiment below).
  • Other embodiments can equally build on any linear or nonlinear low-pass filtering mechanism to smooth out the sequence of camera indices and viewpoint parameters.
  • the smoothing could also be done through a graph model formalism, solved with the conventional Viterbi algorithm. In that case, graph vertices would correspond to candidate rendering parameters for a given frame, while edges would connect candidate rendering states over time. The cost assigned to each edge would reflect the disturbance induced by a change of rendering parameters between two consecutive frames.
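A minimal sketch of such a Viterbi-style smoothing over camera indices, assuming per-frame camera ratings are already available and using a constant switching cost as the edge disturbance (both are illustrative assumptions):

```python
def smooth_camera_sequence(ratings, switch_cost=1.0):
    """Viterbi-style smoothing of the per-frame camera choice.

    ratings[t][k] is the rating of camera k at frame t (assumed given);
    an edge cost penalizes a change of camera between consecutive
    frames.  Returns the camera index sequence maximizing total rating
    minus switching penalties.
    """
    n_frames, n_cams = len(ratings), len(ratings[0])
    score = list(ratings[0])
    back = []
    for t in range(1, n_frames):
        ptr, new_score = [], []
        for k in range(n_cams):
            best_prev, best_val = 0, float("-inf")
            for j in range(n_cams):
                val = score[j] - (switch_cost if j != k else 0.0)
                if val > best_val:
                    best_prev, best_val = j, val
            new_score.append(best_val + ratings[t][k])
            ptr.append(best_prev)
        score = new_score
        back.append(ptr)
    # backtrack from the best final state
    k = max(range(n_cams), key=lambda i: score[i])
    path = [k]
    for ptr in reversed(back):
        k = ptr[k]
        path.append(k)
    return list(reversed(path))
```

A brief, high-rating spike on another camera is ignored when the switching penalty outweighs the gain, which is exactly the smoothing behaviour described above.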
  • the automated video production system and method also includes a virtual director, e.g. a virtual director module, for selecting and determining which of the multiple camera video streams is the current camera stream to be viewed.
  • the virtual director at each time instant, and for each camera view, selects the variations in parameters, e.g. in cropping parameters that optimize the trade-off between completeness and closeness.
  • the completeness/closeness trade-off is measured as a function of user preferences. For example, depending on the resolution at which a user accesses the produced content, a user might prefer a small (zoom-in) or a large (zoom-out) viewpoint.
  • the virtual director module also rates the viewpoint selected in each camera view according to the quality (in terms of user preferences) of its completeness/closeness trade-off, and to its degree of occlusions. Finally the virtual director module computes the parameters of an optimal virtual camera that pans, zooms and switches across views for the temporal segment at hand, to preserve high ratings of selected viewpoints while minimizing the amount of virtual camera movements.
  • the viewpoints selected by the virtual director match end-user expectations. Even more, subjective tests reveal that viewers generally prefer the viewpoints selected by the automatic system over the ones selected by a human producer. This is partly explained by the severe load imposed on the human operator when the number of cameras increases. Hence, the present invention also alleviates the bottleneck experienced by a human operator when jointly and simultaneously processing a large number of source cameras.
  • Fig.1 hierarchical working flow
  • Fig. 2 hierarchical structure
  • Fig. 7 sample views from cameras
  • Fig. 8 short video clip
  • Fig. 11 comparison of camera and viewpoint sequences
  • Fig. 12 frames in generated sequences
  • Fig. 14 3-step embodiment of the present invention
  • Fig. 15 divide and conquer embodiment of the present invention
  • the present invention provides computer based methods and systems for cost-effective and autonomous generation of video contents from multi-sensored data including automatic extraction of intelligent contents from a network of sensors distributed around the scene at hand.
  • intelligent contents refers to the identification of salient segments within the audiovisual content, using distributed scene analysis algorithms. This knowledge can be exploited to automate the production and personalize the summarization of video contents.
  • One input is the positions of objects of interest.
  • multi-camera analysis is considered, whereby relevant object detection such as people detection methods relying on the fusion of the foreground likelihood information computed in each view can be used.
  • Multi-view analysis can overcome traditional hurdles such as occlusions, shadows and changing illumination. This is in contrast with single sensor signal analysis, which is often subject to interpretation ambiguities, due to the lack of accurate model of the scene, and to coincidental adverse scene configurations.
  • the positions of the objects of interest are assumed to be (at least partially) known as a function of the time.
  • embodiments of the present invention infer this knowledge from the analysis of the light fields captured by a distributed set of static cameras.
  • a ground occupancy mask can be computed by merging the foreground likelihood measured in each view. Actual player positions can then be derived through an iterative and occlusion-aware greedy process.
  • Multi view analysis can be used to provide the required inputs to the autonomous team sport production method and system of the present invention, as described in the article "Detection and Recognition of Sports(wo)men from Multiple Views", D. Delannay, N. Danhier, and C. De Vleeschouwer, Third ACM/IEEE International Conference on Distributed Smart Cameras, Como, Italy, September 2009, which is incorporated herein by reference in its entirety as appendix 2.
  • Embodiments of the present invention then proceed in two stages.
  • the invention selects a set of so-called relevant parameters to render the scene of interest as a function of time, using a camera located at a point which can be any arbitrary 3D point around the action.
  • the rendering parameters define a field of view for the camera, and depend on the camera infrastructure that has been deployed to capture the images of the scene. For example, embodiments of the present invention make use of a fixed camera, and the rendering parameters define how to crop sub-images within the camera view. In other embodiments an articulated and motorized camera can be used, and the rendering parameters may then refer to the pan, tilt, and zoom parameters of the camera.
  • the notion of relevant parameters has to do with the definition of informative, i.e. displaying the persons and objects of interest, and perceptually pleasant images.
  • embodiments of the present invention assume that multiple (PTZ) cameras are distributed around the scene, and how to select the right camera to render the action at a given time is then determined. This is done by selecting or promoting informative cameras, and avoiding perceptually inopportune switching between cameras.
  • the present invention introduces three fundamental concepts, i.e. "completeness", "smoothness" and "closeness" (or "fineness"), to abstract the semantic and narrative requirements of video contents. Based on those concepts, the selection of camera viewpoints and that of temporal segments in the summary can be determined, these two being independent optimization problems.
  • Completeness stands for both the integrity of view rendering in camera/viewpoint selection, and that of story-telling in summarization.
  • a viewpoint of high completeness includes more salient objects, while a story of high completeness consists of more key actions.
  • Smoothness refers to the graceful displacement of the virtual camera viewpoint, and to the continuous story-telling resulting from the selection of contiguous temporal segments. Preserving smoothness is important to avoid distracting the viewer from the story by abrupt changes of viewpoints or constant temporal jumps (Owen, 2007).
  • Closeness or Fineness refers to the amount of details provided about the rendered action. Spatially, it favours close views. Temporally, it implies redundant storytelling, including replays. Increasing the fineness of a video does not only improve the viewing experience, but is also essential in guiding the emotional involvement of viewers by close-up shots.
  • these three concepts are optimised, e.g. maximized to produce a meaningful and visually pleasant content.
  • maximization of the three concepts can result in conflicting decisions, under some limited resource constraints, typically expressed in terms of the spatial resolution and temporal duration of the produced content. For example, at fixed output video resolution, increasing completeness generally induces larger viewpoints, which in turn decreases fineness of salient objects. Similarly, increased smoothness of viewpoint movement prevents accurate pursuit of actions of interest along the time. The same observations hold regarding the selection of segments and the organization of stories along the time, under some global duration constraints.
  • embodiments of the present invention relating to computer based methods and systems provide a good balance between the three major factors. For example, quantitative metrics are defined to reflect completeness and fineness/closeness. Constrained optimization can then be used to balance those concepts. In addition, for improved computational efficiency, both production and summarization are envisioned in the divide-and-conquer paradigm (see fig. 15). This especially makes sense since video contents intrinsically have a hierarchical structure, starting from individual frames, through shots (sets of consecutive frames created by similar camerawork) and semantic segments (consecutive shots logically related to the same action), and ending with the overall sequence.
  • an event timeframe can be first cut into semantically meaningful temporal segments, such as an offense/defense round of team sports, or an entry in news.
  • several narrative options are considered.
  • Each option defines a local story, which consists of multiple shots with different camera coverage.
  • a local story not only includes shots to render the global action at hand, but also shots for explanative and decorative purposes, e.g., replays and close-up views in sports or graph data in news.
  • the camerawork associated to each shot is planned automatically, taking into account the knowledge inferred about the scene by video analysis modules.
  • Benefits and costs are then assigned to each local story.
  • the cost can simply correspond to the duration of the summary.
  • the benefit reflects user satisfaction (under some individual preferences), and measures how some general requirements, e.g., the continuity and completeness of the story, are fulfilled.
  • These pairs of benefits and costs are then fed into a summarization engine, which solves a resource allocation problem to find the organization of local stories that achieves the highest benefit under the constrained summary length.
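The resource allocation described above can be sketched as a dynamic program that picks one narrative option per segment under a total-duration budget. The benefit/cost values and integer-duration discretization are illustrative assumptions, not part of the patent:

```python
def plan_summary(options, max_duration):
    """Resource-allocation sketch for organizing local stories.

    options[s] lists (benefit, cost) pairs for the narrative options of
    segment s (hypothetical inputs); a dynamic program over integer
    durations picks one option per segment so that the total benefit is
    maximized under the summary-length constraint.  Returns the chosen
    option index per segment and the achieved total benefit.
    """
    NEG = float("-inf")
    best = [0.0] + [NEG] * max_duration
    choice = [[None] * (max_duration + 1) for _ in options]
    for s, opts in enumerate(options):
        new = [NEG] * (max_duration + 1)
        for used in range(max_duration + 1):
            if best[used] == NEG:
                continue
            for i, (benefit, cost) in enumerate(opts):
                total = used + cost
                if total <= max_duration and best[used] + benefit > new[total]:
                    new[total] = best[used] + benefit
                    choice[s][total] = (i, used)
        best = new
    end = max(range(max_duration + 1), key=lambda d: best[d])
    picks, d = [], end
    for s in range(len(options) - 1, -1, -1):
        i, d = choice[s][d]
        picks.append(i)
    return list(reversed(picks)), best[end]
```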
  • Camerawork Planning will be described with reference to an example, e.g. basketball video production for team sport videos. Whilst extendable to other contexts (e.g. PTZ camera control), the process has been designed to select which fraction of which camera view should be cropped in a distributed set of still cameras to render the scene at hand in a semantically meaningful and visually pleasant way, assuming knowledge of the players' positions.
  • Step 1 Camera-wise Viewpoint Selection.
  • a viewpoint v_ki in the k-th camera view of the i-th frame is defined by the size s_ki and the center c_ki of the window that is cropped in the k-th view for actual display. It has to be selected to include the objects of interest, and provide a fine, i.e. high-resolution, description of those objects. If there are N salient objects in this frame, and the location of the n-th object in the k-th view is denoted by x_nki, we select the optimal viewpoint v*_ki by maximizing a weighted sum of object interests as follows:
  • I_n denotes the level of interest assigned to the n-th object detected in the scene. Note that assigning distinct weights to team sport players allows focusing on a preferred player, but also implies recognition of each player. A unit weight can be assigned to all players, thereby producing a video that renders the global team sport action.
  • the other components of u deal with camera preferences, and are defined in the second step below.
  • the function α(.) modulates the weights of the objects according to their distance to the center of the viewpoint, compared to the size of this window. Intuitively, the weight should be high and positive when the object-of-interest is located in the center of the display window, and should be negative or zero when the object lies outside the viewing area. Many instances are appropriate, e.g. the well-known Mexican Hat function.
  • the function β(.) reflects the penalty induced by the fact that the native signal captured by the k-th camera has to be sub-sampled once the size of the viewpoint becomes larger than the maximal resolution u_res allowed by the user. This function typically decreases with s_ki. An appropriate choice consists in setting the function equal to one when s_ki ≤ u_res, and in making it decrease afterwards.
  • An example of β(.) is defined by:
  • Step 2 Frame-wise Camera Selection
  • the viewpoint selected in each view is rated according to the quality of its completeness/closeness trade-off, and to its degree of occlusions.
  • the highest rate should correspond to a view that (1) makes most objects of interest visible, and (2) is close to the action, meaning that it presents important objects with lots of details, i.e. at high resolution.
  • the rate I_k(v_ki, u) associated with each camera view is defined as follows:
  • u_k denotes the weight assigned to the k-th camera, while α and β are defined as in the first step above.
  • o_k(x_nki) measures the occlusion ratio of the n-th object in camera view k, knowing the position of all other objects.
  • the occlusion ratio of an object is defined to be the fraction of pixels of the object that are hidden by other objects when projected on the camera sensor.
  • the value of h_k(x_nki) is directly computed based on camera calibration. When calibration is not available, it can be estimated based on the height of the object detected in view k.
  • Step 3: Smoothing of Camera/Viewpoint Sequences.
  • the parameters of an optimal virtual camera that pans, zooms and switches across views are computed to preserve high ratings of selected viewpoints while minimizing the amount of virtual camera movements.
  • the smoothing process can be implemented based on the definition of two Markov Random Fields. At first, the selected viewpoints v*_ki are taken as observed data on the i-th image, and assumed to be noise-distorted outputs of some underlying smooth results v_ki. Given the smooth viewpoint sequence recovered for each camera, the camera-gains I_k(v_ki, u) of those derived viewpoints are computed, and a smooth camera sequence is inferred from the second Markov field by making the probabilities P(c_k | v_ki, u) account for both these gains and the smoothness of camera transitions.
  • Multi-view player detection and recognition: the autonomous production of visual content relies on the detection (and recognition) of the objects-of-interest in the scene.
  • the foreground likelihood is computed independently on each view, using standard background modelling techniques. These likelihoods are then fused by projecting them on the ground plane, thereby defining a set of so-called ground occupancy masks. The computation of the ground occupancy mask associated to each view is efficient, and these masks are combined and processed to infer the actual position of players.
  • the computation of the ground occupancy mask G_k associated with the k-th view is described as follows.
  • the k-th view is the source of a foreground likelihood image F_k ∈ [0,1]^(M_k), where M_k is the number of pixels of camera k, 0 ≤ k < C.
  • G_k in x is defined to be the integration of the forward-projection of F_k on a vertical segment anchored in x.
  • this integration can equivalently be computed in F_k, along the back-projection of the vertical segment anchored in x.
  • This is in contrast with methods which compute the mask by aggregating the projections of the foreground likelihood on a set of planes that are parallel to the ground.
  • these two properties can be achieved by virtually moving (through homography transforms) the camera viewing direction (principal axis) so as to bring the vertical vanishing point at infinity and to ensure that the horizon line is horizontal.
  • the principal axis is set perpendicular to the ground and a polar mapping is performed to achieve the same properties. Note that in some geometrical configurations, these transformations can induce severe skewing of the views.
  • each iteration is then run in two steps.
  • the first step searches for the most likely position of the n' h player, knowing the position of the (n-1) players located in previous iterations.
  • the second step updates the ground occupancy masks of all views to remove the contribution of the newly located player.
  • the first step of iteration n aggregates the ground occupancy masks from all views into G^n, and then searches for the densest cluster in this mask. Hence, it computes:
  • x_n = argmax_y ⟨ G^n, φ(y) ⟩
  • φ(y) denotes a Gaussian kernel centered in y, and whose spatial support corresponds to the typical width of a player.
  • the ground occupancy mask of each view is updated to account for the presence of the n-th player.
  • the typical support of a player silhouette in view k is a rectangular box of width W and height H; observe that the part of the silhouette that occludes or is occluded by the newly detected player does not bring any information about the potential presence of a player in position x.
  • the fraction δ_k(x, x_n) of the silhouette in ground position x that becomes non-informative in the k-th view is estimated, as a consequence of the presence of a player in x_n. It is then proposed to update the ground occupancy mask and aggregation weight of the k-th camera in position x as follows:
  • the positions x investigated in the refined approach are limited to the 30 local maxima that have been detected by the naive approach.
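The iterative, occlusion-aware greedy detection described above can be sketched as follows. The box-shaped kernel and the hard zeroing of a neighbourhood are crude stand-ins for the Gaussian kernel φ(y) and for the information-removal update of the patent, and the mask representation as 2D lists is an implementation assumption:

```python
def greedy_player_detection(ground_masks, n_players, radius=1):
    """Occlusion-aware greedy localization sketch (hypothetical API).

    ground_masks: list of 2D lists, one ground-occupancy mask per view.
    Each iteration (1) aggregates the masks and picks the position with
    the densest aggregated occupancy, a box sum standing in for the
    Gaussian kernel phi; (2) zeroes out a neighbourhood of that
    position in every mask, a crude stand-in for removing the located
    player's silhouette contribution from each view.
    """
    rows, cols = len(ground_masks[0]), len(ground_masks[0][0])
    masks = [[row[:] for row in m] for m in ground_masks]
    positions = []
    for _ in range(n_players):
        best, best_val = (0, 0), float("-inf")
        for i in range(rows):
            for j in range(cols):
                val = sum(
                    masks[k][a][b]
                    for k in range(len(masks))
                    for a in range(max(0, i - radius), min(rows, i + radius + 1))
                    for b in range(max(0, j - radius), min(cols, j + radius + 1))
                )
                if val > best_val:
                    best, best_val = (i, j), val
        positions.append(best)
        for m in masks:
            for a in range(max(0, best[0] - radius), min(rows, best[0] + radius + 1)):
                for b in range(max(0, best[1] - radius), min(cols, best[1] + radius + 1)):
                    m[a][b] = 0.0
    return positions
```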
  • the main technical benefits of the present invention include at least one or a combination of:
  • the present invention includes within its scope further improvements.
  • the present invention includes other criteria for computationally efficient and/or analytically solvable selection of viewpoints. It also includes better representations of salient objects, such as using moving particles or flexible body models instead of simple bounding boxes. Furthermore, splitting the selection and smoothing of viewpoints and cameras into four sub-steps in the current version simplifies the formulation. However, they can be solved as a unified estimation because their results affect each other.
  • the present invention also includes other selection criteria of viewpoint and cameras independent of subjective evaluations. Exploitation of a distributed network of cameras to approximate the images that would be captured by a virtual sensor located in an arbitrary position, with arbitrary viewpoint coverage can be used with any of the embodiments of the present invention.
  • the present invention can be used with these works, because in accordance with the present invention a selection is made of the most appropriate viewpoint within a set/space of candidate viewpoints. Hence, the addition of free- viewpoint rendering algorithms to embodiments of the present invention just contributes to enlarge the set of potential candidates.
  • a computer may include a video display terminal, a data input means such as a keyboard, and a graphic user interface indicating means such as a mouse.
  • Computer may be implemented as a general purpose computer, e.g. a UNIX workstation or a personal computer.
  • the computer typically includes a Central Processing Unit (“CPU”), such as a conventional microprocessor of which a Pentium processor supplied by Intel Corp. USA is only an example, and a number of other units interconnected via bus system.
  • the bus system may be any suitable bus system.
  • the computer includes at least one memory.
  • Memory may include any of a variety of data storage devices known to the skilled person such as random-access memory (“RAM”), read-only memory (“ROM”), non-volatile read/write memory such as a hard disc as known to the skilled person.
  • computer may further include random-access memory (“RAM”), read-only memory (“ROM”), as well as a display adapter for connecting system bus to a video display terminal, and an optional input/output (I/O) adapter for connecting peripheral devices (e.g., disk and tape drives) to system bus.
  • the video display terminal can be the visual output of computer, which can be any suitable display device such as a CRT- based video display well-known in the art of computer hardware.
  • video display terminal can be replaced with a LCD-based or a gas plasma-based flat-panel display.
  • Computer further includes a user interface adapter for connecting a keyboard, a mouse, and an optional speaker.
  • the relevant video required may be input directly into the computer via a video or graphics interface or from storage devices, after which a processor carries out a method in accordance with the present invention.
  • the relevant video data may be provided on a suitable signal storage medium such as a diskette, a replaceable hard disc, an optical storage device such as a CD-ROM or DVD-ROM, a magnetic tape or similar.
  • a communications adapter may connect the computer to a data network such as the Internet, an Intranet, a Local or Wide Area Network (LAN or WAN) or a CAN.
  • the computer also includes a graphical user interface that resides within machine- readable media to direct the operation of the computer.
  • Any suitable machine-readable media may retain the graphical user interface, such as a random access memory (RAM), a read-only memory (ROM), a magnetic diskette, magnetic tape, or optical disk (the last three being located in disk and tape drives).
  • Any suitable operating system and associated graphical user interface may be used, e.g., Microsoft Windows or Linux.
  • the computer includes a control program that resides within computer memory storage. The control program contains instructions that when executed on the CPU allow the computer to carry out the operations described with respect to any of the methods of the present invention.
  • the present invention also provides a computer program product for carrying out the method of the present invention and this can reside in any suitable memory.
  • Examples of computer readable signal bearing media include: recordable type media such as floppy disks and CD ROMs and transmission type media such as digital and analogue communication links.
  • the present invention also includes a software product which when executed on a suitable computing device carries out any of the methods of the present invention.
  • Suitable software can be obtained by programming in a suitable high level language such as C and compiling on a suitable compiler for the target computer processor or in an interpreted language such as Java and then compiled on a suitable compiler for implementation with the Java Virtual Machine.
  • the present invention provides software, e.g. a computer program having code segments that provide a program that, when executed on a processing engine, provides a virtual director module.
  • the software may include code segments that provide, when executed on the processing engine: any of the methods of the present invention or implement any of the system means of the present invention.
  • the first trade-off arises from the personalization of the production. Specifically, it originates from the conflict between preserving general production rules of sports videos and maximizing satisfaction of user preferences. Some basic rules of video production for basketball games could not be sacrificed for better satisfaction of user preferences, e.g., the scene must always include the ball, and well balanced weighting should be taken between the dominant player and the user-preferred player when rendering an event.
  • the second trade-off is the balance between completeness and closeness of the rendered scene.
  • the intrinsic interest of basketball games partially comes from the complexity of team working, whose clear description requires spatial completeness in camera coverage.
  • many highlighted activities usually happen in a specific and bounded playing area.
  • a close view emphasizing those areas increases the emotional involvement of the audience with the play, by moving the audience closer to the scene. Closeness is also required to generate a view of the game with sufficient spatial resolution under a situation with limited resources, such as small display size or limited bandwidth resources of handheld devices.
  • Fig. 1 The whole story is first divided into several segments. Optimal viewpoints and cameras are determined locally within each segment by trading off between benefits and costs under specified user preferences. Furthermore, the estimation of optimal cameras or viewpoints is performed in a hierarchical structure. The estimation phase takes bottom-up steps from all individual frames to the whole story.
  • Intrinsic hierarchical structure of basketball games provides reasonable grounds for the above vision, and also gives clues on segment separation.
  • a game is divided by rules into a sequence of non-overlapping ball-possession periods.
  • a ball-possession period is the period of game when the same team holds the ball and makes several trials of scoring.
  • several events might occur during the offence/defence process.
  • events in a basketball game could be classified as clock-events and non-clock-events. Clock-events will not overlap with each other, while non-clock-events might overlap with both clock-/non-clock- events.
  • one ball possession period is a rather fluent period and requires the period-level continuity of viewpoint movement.
  • Input data fed into our system include video data, associated meta-data, and user preferences.
  • Let us assume that we have gathered a database of basketball video sequences, which are captured simultaneously by K different cameras. All cameras are loosely synchronized and produce the same number of frames, i.e., N frames, for each camera.
  • M_i different salient objects, denoted by {o_im | m = 1, ..., M_i}, are detected in total from all camera views.
  • the first class includes regions for players, referees, and the ball, which are used for scene understanding.
  • the second class includes the basket, coach bench, and some landmarks of the court, which are used in both scene understanding and camera calibration. Objects of the first class are automatically extracted from the scene, typically based on a background subtraction algorithm, while those of the second class are manually labeled because their positions are constant on fixed cameras.
  • a region r is a set of pixel coordinates belonging to this region. If o_im does not appear in the k-th camera view, we set r_kim to the empty set ∅. With r_1 and r_2 being two arbitrary regions, we first define several elemental functions on one or two regions.
  • a parameter set u which includes both narrative and restrictive preferences, such as favorites and device capabilities.
  • viewpoint selection is a highly subjective task that still lacks an objective rule, but we have some basic requirements for our viewpoint selection. It should be computationally efficient, and should be adaptable to different device resolutions. For a device with a high display resolution, we usually prefer a complete view of the whole scene. When the resolution is limited due to device or channel constraints, we have to sacrifice part of the scene for an improved representation of local details. An object just next to the viewpoint border should be included, to improve the overall completeness of story-telling, if it shows high relevance to the current event in later frames; and it should be excluded, to prevent the viewpoint sequence from oscillating, if it always appears around the border. In order to keep a safe area to deal with this kind of object, we prefer that visible salient objects inside the determined viewpoint are closer to the center, while invisible objects should be driven away from the border of the viewpoint, as far as possible.
  • let the viewpoint for scene construction in the i-th frame of the k-th camera be v_ki.
  • Viewpoint v_ki is defined as a rectangular region.
  • having fixed the aspect ratio of all viewpoints, we have only three free parameters to tune, i.e., the horizontal center v_kix, the vertical center v_kiy, and the width v_kiw.
  • the interest of viewpoint v_ki is defined as a weighted sum of attentional interests from all visible salient objects in that frame, i.e.,
  • the weight w_kim(v_ki, u) consists of three major parts: the exponential part, which controls the concentrating strength of salient objects around the center according to the pixel resolution of the device display; the zero-crossing part V(x_kim | v_ki), which separates positive interests from negative interests at the border of the viewpoint; and the appended fraction part ln A(v_ki), which calculates the density of interests to evaluate the closeness and is set as a logarithm function.
•	A smooth camera sequence will be generated from the determined viewpoints.
•	In defining p_ki ∝ log P(c_k | v_ki*, u), we have to trade off between minimizing camera switching and maximizing the overall gain of the cameras.
•	The smoothness of the camera sequence is modelled by a Gibbs canonical distribution.
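One minimal way to realize this trade-off is a Viterbi-style dynamic program over camera indices. The patent models smoothness with a Gibbs canonical distribution; the sketch below is a hypothetical reduction (not the patent's formulation) that recovers the most likely sequence when the distribution's energy is a per-frame camera gain minus a constant switching penalty:

```python
def smooth_camera_sequence(gains, switch_cost):
    """Pick one camera per frame, trading per-frame gain against the number
    of camera switches. gains[t][k] is the rating of camera k at frame t;
    switch_cost is a hypothetical constant penalty per camera change.
    """
    n_frames, n_cams = len(gains), len(gains[0])
    best = list(gains[0])                      # best cumulative score ending in camera k
    back = [[0] * n_cams for _ in range(n_frames)]
    for t in range(1, n_frames):
        new_best = [0.0] * n_cams
        for k in range(n_cams):
            # Staying on the same camera is free; switching pays switch_cost.
            prev = [best[j] - (0.0 if j == k else switch_cost) for j in range(n_cams)]
            j_star = max(range(n_cams), key=lambda j: prev[j])
            back[t][k] = j_star
            new_best[k] = prev[j_star] + gains[t][k]
        best = new_best
    # Backtrack the optimal camera index for every frame.
    k = max(range(n_cams), key=lambda j: best[j])
    seq = [k]
    for t in range(n_frames - 1, 0, -1):
        k = back[t][k]
        seq.append(k)
    return seq[::-1]
```

A large `switch_cost` keeps the sequence on one camera through a briefly better alternative, which is exactly the story-telling smoothness the text describes; a zero cost degenerates to per-frame greedy selection.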
•	A short video clip of about 1200 frames is used to demonstrate behavioral characteristics of our system, especially its adaptivity under limited device resolutions.
•	The 3-rd camera, i.e., the top view with a wide-angle lens.
•	Two device resolutions are compared: u_DEV = 640 and u_DEV = 160.
•	Their viewpoints are also broader, which confirms that a larger resolution prefers a wider view.
•	This camera is selected because it provides a side view of the right court, with salient objects gathered closer than in other camera views due to projective geometry.
•	Viewpoints are not continuous under different resolutions.
•	The methods presented in this paper aim at detecting and recognizing players on a sport-field, based on a distributed set of loosely synchronized cameras. Detection assumes player verticality, and sums the cumulative projection of the multiple views' foreground activity masks on a set of planes that are parallel to the ground plane. After summation, large projection values indicate the position of the player on the ground plane. This position is used as an anchor for the player bounding box projected in each one of the views.
•	The activity mask is not only projected on the ground plane, as recommended in [9], but on a set of planes that are parallel to the ground. Moreover, an original heuristic is implemented to handle occlusions and alleviate the false detections occurring at the intersection of the masks projected from distinct players' silhouettes by distinct views. Our simulations demonstrate that those two contributions quite significantly improve the detection performance.
•	II. SYSTEM OVERVIEW. To demonstrate the concept of autonomous and personalized production, the European FP7 APIDIS research project has deployed acquisition sensors that, as supported by the provided dataset, cover a basket-ball court. Distributed analysis and interpretation of the scene is then exploited to decide what to show about an event, and how to show it, so as to produce a video composed of a valuable subset of the streams provided by each individual camera. In particular, the position of the players provides the required input to drive the autonomous selection of viewpoint parameters [5], whilst identification and tracking of the detected players supports personalization of the summary, e.g. through highlight and/or replay of a preferred player's actions [4].
•	Sections III, V and IV respectively focus on the detection, tracking, and recognition of the players.
•	Fig. 1. Players tracks computation and labeling pipeline. The dashed arrow reflects the optional inclusion of the digit recognition results within the appearance model considered for tracking.
•	The authors in [9], [10] adopt a bottom-up approach, and project the points of the foreground likelihood (background subtracted silhouettes) of each view to the ground plane. The change probability maps computed in each view are warped to the ground plane based on homographies that have been inferred off-line. The projected maps are then multiplied together and thresholded to define the patches of the ground plane for which the appearance has changed compared to the background model, according to the single-view change detection algorithm.
•	Fig. 2. Multi-view people detection. Foreground masks are projected on a set of planes that are parallel to the ground plane to define a ground plane occupancy map, from which players' positions are directly inferred.
•	The first category of methods has the advantage of being computationally efficient, since the decision about ground plane occupancy is directly taken from the observation of the projection of the change detection masks of the different views. The second category offers increased performance, since not only the feet but the entire object silhouette is considered to make a decision.
•	Our approach is an attempt to take the best out of both categories. It proposes a computationally efficient bottom-up approach that is able to exploit the entire a priori knowledge we have about the object silhouette. Specifically, the bottom-up computation of the ground occupancy mask described in Section III-B exploits the fact that the basis of the silhouette lies on the ground plane (similarly to previous bottom-up solutions), but also that the silhouette is a roughly rectangular vertical shape (which was previously reserved to top-down approaches). Moreover, Section III-C proposes a simple greedy heuristic to resolve the interference occurring between the silhouettes projected from distinct views by distinct objects. Our experimental results reveal that this interference was the source of many false detections while inferring the actual object positions from the ground occupancy mask. Until now, this phenomenon had only been taken into account by the top-down approach described in [7], through a complex iterative approximation of the joint posterior probabilities.
•	Ground plane occupancy mask computation: the i-th view is the source of a binary background-subtracted silhouette image B_i ∈ {0, 1}^(M_i), where M_i is the number of pixels of camera i, 1 ≤ i ≤ C. The computation of the ground occupancy mask G_i associated to the i-th view is described as follows. As L increases, the computation of G_i in a ground position x tends towards the integration of the projection of B_i on a vertical segment anchored in x. This integral can equivalently be computed in B_i along the back-projection of the vertical segment. To further speed up the computations, we observe that, through an appropriate transformation of B_i, it is possible to shape the back-projected integration domains so that they correspond to segments of vertical lines in the transformed view, thereby making the computation of the integrals particularly efficient through the principle of integral images.
•	Figure 3 illustrates that specific transformation for one particular view. The transformation has been designed to address a double objective. First, points of the 3D space located on the same vertical line have to be projected on the same column in the transformed view (vertical vanishing point at infinity); as a consequence, points belonging to the vertical line standing above a given point of the ground plane simply project on a column of the transformed view.
•	We now explain how to infer the position of the people standing on the ground, assuming that (i) each person induces a dense cluster on the sum of ground occupancy masks, and (ii) the number of people to detect is equal to a known value.
•	Fig. 3. Efficient computation of the ground occupancy mask: the original view (on the left) is mapped to a plane through a combination of homographies that are chosen so that (1) verticality is preserved during projection from the 3D scene to the transformed view, and (2) the ratio of heights between the 3D scene and the projected view is preserved for objects that lie on the same line in the transformed view.
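The integral-image idea can be demonstrated on a toy transformed mask: a single cumulative sum down each column lets any vertical-segment integral be evaluated in O(1) per ground position. The function and its `top`/`bottom` arguments are illustrative names, not from the source:

```python
import numpy as np

def ground_occupancy_from_transformed_mask(mask, top, bottom):
    """Sum each column of a transformed silhouette mask between two rows.

    After the transformation of Fig. 3, the back-projected integration
    domains are vertical segments, so each per-column integral is a
    difference of two cumulative sums (the integral-image principle).
    `mask` is a binary foreground image already warped to the transformed
    view; `top`/`bottom` are per-column row indices bounding the segment.
    """
    # cum[r, c] holds the sum of mask[:r, c]; one extra zero row on top.
    cum = np.vstack([np.zeros((1, mask.shape[1])), np.cumsum(mask, axis=0)])
    cols = np.arange(mask.shape[1])
    return cum[bottom, cols] - cum[top, cols]
```

Precomputing `cum` once makes the occupancy value of every ground position a constant-time lookup, which is what makes the bottom-up computation fast.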
•	Removing the contribution of located players from camera i produces a residual mask. The process iterates until sufficient players have been found.
•	Define G_i^0(x) to be the ground occupancy mask associated to camera i (see Section III-B), and set w_i(x) to 1 when x is covered by the i-th view, and to 0 otherwise. Each iteration is then run in two steps: the first step searches for the most likely position of the n-th player, knowing the positions of the (n − 1) players located in previous iterations; the second step updates the ground occupancy masks of all views to remove the contribution of the newly located player.
•	The first step of iteration n aggregates the ground occupancy masks from all views into the aggregated mask G^n, and then searches for the densest cluster in this mask, locating the n-th player at

	x_n = argmax_y ⟨G^n(x), k(y)⟩,	(2)

	where k(y) denotes a Gaussian kernel centered in y and whose spatial support corresponds to the typical width of a player.
•	In the second step, the ground occupancy mask of each view is updated to account for the presence of the n-th player. In the ground position x, we consider that the typical support of a player silhouette in view i is a rectangular box of width W and height H, and observe that the part of the silhouette that occludes or is occluded by the newly detected player does not bring any information about the potential presence of a player in position x. For this reason, the ground occupancy mask of a group of players is not equal to the sum of the ground occupancy masks projected by each individual player.
•	Let α_i(x, x_n) denote the fraction of the silhouette in ground position x that becomes non-informative in view i as a consequence of the presence of a player in x_n.
•	Figure 4 depicts a plane V_i that is orthogonal to the ground, while passing through the i-th camera and the player position x_n. It also shows b_i and f_i, which correspond to the points at which the rays originated in the i-th camera and passing through the head and feet of the player intersect the ground plane and the plane parallel to the ground at height H, respectively. We measure the distance between f_i (respectively b_i) and the vertical line supporting player n in V_i.
•	We use p_i(x) to denote the orthogonal projection of x on V_i, and let d_i(x) measure the distance between x and V_i. The first and second factors then reflect the misalignment of x and x_n in V_i and orthogonally to V_i, respectively.
•	Fig. 4. Impact of occlusions on the update of the ground occupancy mask associated to camera i. The dashed parts of the vertical silhouettes standing in p_i(x_1) and p_i(x_2) are known to be labeled as foreground, since a player is known to be standing in x_n. Hence they become useless to infer whether a player is located in x_1 or x_2, respectively.
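The two-step greedy loop can be sketched as follows. This is a simplification under stated assumptions: the Gaussian kernel k(y) is replaced by a plain argmax on the aggregated mask, and the occlusion-aware update via the α_i(x, x_n) fractions is reduced to erasing a neighbourhood around each detected position in every view:

```python
import numpy as np

def detect_players(masks, n_players, kernel_width=5):
    """Greedy multi-view detection sketch.

    Each iteration aggregates the per-view ground occupancy masks, takes
    the densest position as the next player, then zeroes out that player's
    footprint in every view before the next iteration (a crude stand-in
    for the occlusion-aware update described in the text).
    """
    masks = [m.astype(float).copy() for m in masks]
    h, w = masks[0].shape
    positions = []
    for _ in range(n_players):
        agg = sum(masks)                               # aggregated mask G^n
        peak = np.unravel_index(np.argmax(agg), agg.shape)
        positions.append(peak)
        # Remove the located player's contribution from every view.
        r0, r1 = max(0, peak[0] - kernel_width), min(h, peak[0] + kernel_width + 1)
        c0, c1 = max(0, peak[1] - kernel_width), min(w, peak[1] + kernel_width + 1)
        for m in masks:
            m[r0:r1, c0:c1] = 0.0
    return positions
```

A faithful implementation would convolve G^n with the Gaussian kernel before the argmax and scale each view's update by α_i(x, x_n); the control flow, however, matches the two-step iteration described above.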
•	Despite the known shortcomings of greedy algorithms, such as a tendency to get caught in bad local minima, we believe that it compares very favorably to any joint formulation of the problem, typically solved based on iterative proximal optimization techniques.
•	In mean-shift segmentation, the space of the lattice is known as the spatial domain, while the color information corresponds to the range domain. The location and range vectors are concatenated in a joint spatial-range domain, and a multivariate kernel is defined as the product of two radially symmetric kernels in each domain, with bandwidths h_s and h_r. h_r is set to 8 in our simulations; h_s has been set to 4, while M has been fixed to 20. The parameter h_s trades off the run time of segmentation against the subsequent filtering and classification stages.
•	Each bounding box image is segmented into regions. Candidate digit regions are composed of either a single region or a pair of regions that fulfill the criteria below. The height and width of valid regions range between two values that are defined relative to the bounding box size. Since the size of the bounding box is defined according to real-world metrics, the size criterion implicitly adapts the range of height and width values to the perspective effect resulting from the distance between the detected object and the camera. Selected regions are classified into '0–9' digits or a bin class, and the identity of the player is defined by majority vote, based on the results obtained in different views.
•	We link together partial tracks using shirt color estimation. In the future, graph matching techniques should be used to evaluate longer-horizon matching hypotheses.
•	Performance is assessed over 180 different and regularly spaced time instants in the interval from 18:47:00 to 18:50:00, which corresponds to a temporal segment for which a manual ground truth is available. This ground truth information consists in the positions of players and referees in the coordinate reference system of the court. We consider that two objects cannot be matched if the measured distance on the ground is larger than 30 cm. Three methods are compared, and for each of them we assess our proposed algorithm to mitigate false detections. Figure 6 presents the results.
•	Fig. 6. ROC analysis of player detection performance.
•	In conclusion, we infer the position and identity of athletes playing on a sport field, surrounded by a set of loosely synchronized cameras. Detection relies on the definition of a ground occupancy map, while player recognition builds on pre-filtering of segmented regions and on multi-class SVM classification. Experiments on the APIDIS real-life dataset demonstrate the relevance of the proposed approaches.

Abstract

An autonomous computer-based method and system is described for personalized production of videos, such as team sport videos (for example basketball videos), from multi-sensored data under limited display resolution. Embodiments of the present invention relate to the selection of a view to display from among the multiple video streams captured by the camera network. Technical solutions are provided to ensure perceptual comfort as well as an efficient integration of contextual information, which is implemented, for example, by smoothing generated viewpoint/camera sequences to alleviate flickering visual artefacts and discontinuous story-telling artefacts. A design and implementation of the viewpoint selection process is disclosed and has been verified by experiments, which show that the method and system of the present invention efficiently distribute the processing load across cameras, and effectively select viewpoints that cover the team action at hand while avoiding major perceptual artefacts.

Description

SYSTEMS AND METHODS FOR THE AUTONOMOUS PRODUCTION OF VIDEOS FROM MULTI-SENSORED DATA
Field of the Invention
The present invention relates to the integration of information from multiple cameras in a video system, e.g. a television production or intelligent surveillance system and to automatic production of video content, e.g. to render an action involving one or several persons and/or objects of interest.
Technical Background
The APIDIS (Autonomous Production of Images based on Distributed and Intelligent Sensing) project aims to provide a solution for generating personalized content for improved and low-cost visual representation of controlled scenarios such as sports television, where image quality and perceptual comfort are as essential as efficient integration of contextual information [1].
In the APIDIS context, multiple cameras are distributed around the action of interest, and the autonomous production of content involves three main technical questions regarding those cameras:
(i) how to select optimal viewpoints, i.e. cropping parameters in a given camera, so that they are tailored to limited display resolution,
(ii) how to select the right camera to render the action at a given time, and
(iii) how to smooth camera/viewpoint sequences to remove production artefacts.
Production artefacts consist of both visual artefacts, which mainly means flickering effects due to shaking or fast zoom in/out of viewpoints, and story-telling artefacts such as the discontinuity of story caused by fast camera switching and dramatic viewpoint movements.
Data fusion from multiple cameras has been widely discussed in the literature. These previous works can be roughly classified into three major categories according to their purposes. Methods in the first category deal with camera calibration and intelligent camera control by integrating contextual information of the multi-camera environment [4]. Reconstruction of 3D scenes [5] or arbitrary viewpoint video synthesis [2] from multiple cameras is also a hot topic. The third category uses multiple cameras to solve certain problems, such as occlusion, in various applications, e.g., people tracking [6]. All these works focus heavily on the extraction of important 3D contextual information, but pay little attention to the technical questions mentioned above about video production.
Regarding autonomous video production, some methods have been proposed in the literature for selecting the most representative area from a standalone image. Suh et al. [7] defined the optimal cropping region as the minimum rectangle containing saliency over a given threshold, where the saliency was computed by the visual attention model [8]. In Ref. [9], another attention-model-based method was proposed, which focused more on the optimal shifting path of attention than on the choice of viewpoint. It is also known to exploit a distributed network of cameras to approximate the images that would be captured by a virtual sensor located in an arbitrary position, with arbitrary viewpoint coverage. For few cameras with quite heterogeneous lenses and scene coverage, most state-of-the-art free-viewpoint synthesis methods produce blurred results [2][3].
In Ref. [10] an automatic production system for soccer sports videos is proposed, and viewpoint selection based on scene understanding is also discussed. However, this system only switches viewpoints among three fixed shot sizes according to several fixed rules, which leads to uncomfortable visual artefacts due to dramatic changes of shot size. Furthermore, only the single-camera case is discussed.
In addition to the above literature survey, several patent applications have considered (omnidirectional) multi-camera systems to produce and edit video content in a semiautomatic way. Three main categories of systems can be identified.
The first category selects one view (i.e. one video) among the ones covered by a predefined set of cameras, based on some activity detection mechanism. In [15], each camera is activated based on some external device, which triggers the video acquisition each time a particular event is detected (e.g. an object entering the field of view). In [16], audio sensors are used to identify the direction in which the video should be captured. The second category captures a rich visual signal, either based on omnidirectional cameras or on a wide-angle multi-camera setting, so as to offer some flexibility in the way the scene is rendered at the receiver end. For example, the systems in [17] and [18] respectively consider multi-camera and omnidirectional viewing systems to capture and broadcast wide-angle video streams. In [17], an interface allows the viewer to monitor the wide-angle video stream(s) and to select which portion of the video to unwrap in real time. Further, the operator can stop the playback and control pan-tilt-zoom effects in a particular frame. In [18], the interface is improved based on the automatic detection of the video areas in which an event participant is present. Hence, the viewer gets the opportunity to choose interactively which event participant (s)he would like to look at.
Similarly, [19-21] detect people of interest in a scene (typically a lecturer or a videoconference participant). However, the improvement over [18] is twofold. Firstly, in [19-21], methods are proposed to define automatically a set of candidate shots based on automatic analysis of the scene. Secondly, mechanisms are defined to select automatically a shot among the candidate shots. In [19], the shot definition relies on detection and tracking of the lecturer, and probabilistic rules are used to pseudo- randomly switch from the audience to the lecturer camera during a lecture. In [20] and [21], a list of candidate shots is also defined based on the detection of some particular object of interest (typically a face), but more sophisticated editing effects are considered to create a dynamic (videoconference) rendering. For example, one shot can pan from one person to another, or several faces can be pasted next to each other in a single shot. The edited output video is then constructed by selecting a best shot among the candidate shots for each scene (in [20] and [21], a scene corresponds to a particular period of time). The best shot is selected based on a pre-defined set of cinematic rules, e.g. to avoid too many of the same shot in a row.
It is worth noting that the shot parameters (i.e. the cropping parameters in the view at hand) stay fixed until the camera is switched. Moreover, in [19-21] a shot is directly associated to an object, so that, in the end, the shot selection amounts to selecting the object(s) to render, which might be difficult and irrelevant in contexts that are more complex than a videoconference or a lecture. Specifically, [19-21] do not select the shot based on the joint processing of the positions of the multiple objects.
The third and last category of semi-automatic video production systems differentiates the cameras that are dedicated to scene analysis from the ones that are used to capture the video sequences. In [22], a grid of cameras is used for sport scene analysis purposes. The outputs of the analysis module are then exploited to compute statistics about the game, but also to control pan-tilt-zoom (PTZ) cameras that collect videos of players of interest (typically the one that holds the puck or the ball). [22] must implement all scene analysis algorithms in real time, since it aims at controlling the PTZ parameters of the camera instantaneously, as a function of the action observed in the scene. More importantly and fundamentally, [22] selects the PTZ parameters to capture a specific detected object and not to offer appropriate rendering of a team action, potentially composed of multiple objects-of-interest. In this it is similar to [19-21]. Also, when multiple videos are collected, [22] does not provide any solution to select one of them. It just forwards all the videos to an interface that presents them in an integrated manner to a human operator. This is the source of a bottleneck when many source cameras are considered.
US2008/0129825 discloses control of motorized camera to capture images of an individual tracked object, e.g. for individual sports like athletics competitions. The user selects the camera through a user interface. The location units are attached to the object. Hence they are intrusive.
GB2402011 discloses an automated camera control using event parameters. Based on player tracking and a set of trigger rules, the field of view of cameras is adapted and switched between close, mid and far views. A camera is selected based on trigger events. A trigger event typically corresponds to specific movements or actions of sports(wo)men, e.g. the service of a tennis player, or to Scoreboard information updates.
US2004/0105004A1 relates to rendering talks or meetings. Tracking cameras are exploited to render the presenter or a member of the audience who asks a question. The presenter and the audience members are tracked based on sound source localization, using an array of microphones. Given the position of the tracking camera target, the PTZ parameters of the motorized camera are controlled so as to provide a smooth edited video of the target. The described method and system is only suited to following a single individual person. With respect to the selection of the camera, switching is disclosed between a set of very distinct views (one overview of the room, one view of the slides, one close view of the presenter, and one close view of a speaking audience member). The camera selection process is controlled based on event detection (e.g. a new slide appearing, or a member of the audience speaking) and videography rules defined by professionals, to emulate a human video production team.
References
[1] Homepage of the APIDIS project, http://www.apidis.org/
Demo videos related to this paper: http://www.apidis.org/Initial Results/APIDISP/olQInitialVolQResults.htm
[2] S. Yaguchi, and H. Saito, Arbitrary viewpoint video synthesis from multiple uncalibrated cameras, IEEE Trans. Syst. Man. Cybern. B, 34(2004)
430-439.
[3] N. Inamoto, and H. Saito, Free viewpoint video synthesis and presentation from multiple sporting videos, Electronics and Communications in
Japan (Part III: Fundamental Electronic Science), 90(2006) 40-49.
[4] LH. Chen, and SJ. Wang, An efficient approach for the calibration of multiple PTZ cameras, IEEE Trans. Automation Science and Engineering,
4(2007) 286-293.
[5] P. Eisert, E. Steinbach, and B. Girod, Automatic reconstruction of stationary
3-D objects from multiple uncalibrated camera views, IEEE
Trans. Circuits and Systems for Video Technology, Special Issue on 3D
Video Technology, 10(1999) 261-277.
[6] A. Tyagi, G. Potamianos, J. W. Davis, and S.M. Chu, Fusion of Multiple camera views for kernel-based 3D tracking, WMVC'07, 1(2007) 1-1.
[7] B. Suh, H. Ling, B. B. Bederson, and D.W. Jacobs, Automatic thumbnail cropping and its effectiveness, Proc. ACM UIST 2003, 1(2003) 95-104.
[8] L. Itti, C. Koch, and E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Trans. Pattern Analysis and Machine
Intelligence, 20(1998) 1254-1259.
[9] X. Xie, H. Liu, W. Y. Ma, H.J. Zhang, "Browsing large pictures under limited display sizes, IEEE Trans. Multimedia, 8(2006) 707-715.
[10] Y. Ariki, S. Kubota, and M. Kumano, Automatic production system of soccer sports video by digital camera work based on situation recognition, ISM'06, 1(2006) 851-860.
[11] J. Owens, Television sports production, 4th Edition, Focal Press, 2007. [12] J. W. Gibbs, Elementary principles in statistical mechanics, Ox Bow Press, 1981.
[13] D. Chandler, Introduction to modern statistical mechanics, Oxford University Press, 1987.
[14] C. De Vleeschouwer, F. Chen, D. Delannay, C. Parisot, C. Chaudy, E. Martrou, and A. Cavallaro, Distributed video acquisition and annotation for sport-event summarization, NEM summit, (2008).
[15] EP1289282 (Al) Video sequence automatic production method and system Inventor: AYER SERGE [CH] ; MOREAUX MICHEL [CH] (+1); Applicant: DARTFISH SA [CH]; EC: H04N5/232 IPC: H04N5/232; H04N5/232;; (IPC 1-7): H04N5/232
[16] US20020105598, EP1352521 AUTOMATIC MULTI-CAMERA VIDEO COMPOSITION; INTEL CORP
[17] US6741250 Method and system for generation of multiple viewpoints into a scene viewed by motionless cameras and for presentation of a view path; BE HERE CORP
[18] US20020191071 Automated online broadcasting system and method using an omni-directional camera system for viewing meetings over a computer network; MICROSOFT CORP
[19] US20020196327 Automated video production system and method using expert video production rules for online publishing of lectures; MICROSOFT CORP;Microsoft Corporation
[20] US20060251382 Al System and method for automatic video editing using object recognition MICROSOFT CORP
[21] US20060251384 Automatic video editing for real-time multi -point video conferencing; MICROSOFT CORP
[22] WO200599423 AUTOMATIC EVENT VIDEOING, TRACKING AND CONTENT GENERATION SYSTEM; AMAN JAMES A;BENNETT PAUL MICHAEL
Aspects of the Present Invention
An object of the present invention is to provide computer based methods and systems for the autonomous production of an edited video, composed based on the multiple video streams captured by a network of cameras, distributed around a scene of interest.
The present invention provides an autonomous computer-based method and system for personalized production of videos, such as team sport videos (for example basketball videos), from multi-sensored data under limited display resolution. However, the invention has a broader application range and is not limited to this example. Embodiments of the present invention relate to the selection of a view to display from among the multiple video streams captured by the camera network. Technical solutions are provided to ensure perceptual comfort as well as an efficient integration of contextual information, which is implemented, for example, by smoothing generated viewpoint/camera sequences to alleviate flickering visual artefacts and discontinuous story-telling artefacts. A design and implementation of the viewpoint selection process is disclosed and has been verified by experiments, which show that the method and system of the present invention efficiently distribute the processing load across cameras, and effectively select viewpoints that cover the team action at hand while avoiding major perceptual artefacts.
Accordingly the present invention provides a computer based method for autonomous production of an edited video from multiple video streams captured by a plurality of cameras distributed around a scene of interest, the method comprising:
• detecting objects/persons of interest in the images of the video streams, e.g. knowing their actual 3D world coordinates,
•	selecting for each camera the field of view that renders the scene of interest in a way that allows the viewer to follow the action carried out by the multiple, interacting objects/persons that have been detected. The field of view parameters refer, for example, to the cropping window in a static camera, and/or to the pan-tilt-zoom and position parameters in a motorized, moving camera. The concept of action following can be quantified by measuring the amount of pixels associated to each object/person of interest in the displayed image. Accurate following of the action results from complete and close rendering, where completeness counts the number of objects/persons in the displayed image, while closeness measures the amount of pixels available to describe each object.
•	building the edited video by selecting and concatenating video segments provided by one or more individual cameras, in a way that maximizes completeness and closeness metrics along time, while smoothing out the sequence of rendering parameters associated to the concatenated segments.
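The completeness and closeness notions can be sketched with a minimal example, assuming point-object detections and a rectangular display window. The inverse-area proxy used for closeness is an assumption standing in for true per-object pixel counts:

```python
def completeness_and_closeness(objects, window):
    """Quantify 'action following' for a candidate field of view.

    Per the definitions above: completeness counts the detected objects
    that fall inside the displayed window; closeness measures how many
    pixels are available per object, approximated here by included
    objects per unit window area (a hypothetical stand-in).
    """
    x0, y0, x1, y1 = window
    inside = [(x, y) for (x, y) in objects if x0 <= x <= x1 and y0 <= y <= y1]
    completeness = len(inside)
    area = (x1 - x0) * (y1 - y0)
    closeness = completeness / area if area else 0.0
    return completeness, closeness
```

A tight window and a wide window may include the same objects (equal completeness), but the tight one scores a higher closeness, capturing the trade-off the production system must balance.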
The selecting of rendering parameters can be for all objects or objects-of-interest simultaneously. The knowledge about the position of the objects in the images can be exploited to decide how to render the captured action. The method can include selecting field of view parameters for the camera that renders the action as a function of time, based on an optimal balance between closeness and completeness metrics. For example, the field of view parameters refer to the crop in the camera view of static cameras and/or to the pan-tilt-zoom or displacement parameters for dynamic and potentially moving cameras.
The closeness and completeness metrics can be adapted according to user preferences and/or resources. For example, a user resource can be encoding resolution. A user preference can be at least one of preferred object or preferred camera. Images from all views of all cameras can be mapped to the same absolute temporal coordinates based on a common unique temporal reference for all camera views. At each time instant, and for each camera view, field of view parameters are selected that optimize the trade-off between completeness and closeness. The viewpoint selected in each camera view can be rated according to the quality of its completeness/closeness trade-off, and to its degree of occlusions. For the temporal segment at hand, the parameters of an optimal virtual camera that pans, zooms and switches across views can be computed to preserve high ratings of selected viewpoints while minimizing the amount of virtual camera movements.
The method can include selecting the optimal field of view in each camera, at a given time instant.
A field of view v_k in the k-th camera view is defined by the size S_k and the center c_k of the window that is cropped in the k-th view for actual display. It is selected to include the objects of interest and to provide a high-resolution description of those objects, and an optimal field of view v_k* is selected to maximize a weighted sum of object interests as follows:

v_k* = argmax_{v_k} Σ_n I_n · m(x_{n,k} | v_k) · α(S_k, u)
where, in the above equation:
• I_n denotes the level of interest assigned to the n-th object detected in the scene.
• x_{n,k} denotes the position of the n-th object in camera view k.
• The function m(.) modulates the weight of the n-th object according to its distance to the center of the viewpoint window, compared to the size of this window.
• The vector u reflects the user preferences; in particular, its component u_res defines the resolution of the output stream, which is generally constrained by the transmission bandwidth or the end-user device resolution.
• The function α(.) reflects the penalty induced by the fact that the native signal captured by the k-th camera has to be sub-sampled once the size of the viewpoint becomes larger than the maximal resolution u_res allowed by the user.
Preferably α(·) decreases with S_k; it is equal to one when S_k ≤ u_res, and decreases afterwards. For example, α(·) is defined by:
α(S_k, u) = 1 if S_k ≤ u_res, and α(S_k, u) = (u_res / S_k)^u_close otherwise
where the exponent u_close is larger than 1, and increases as the user prefers full-resolution rendering of zoomed-in areas over large but sub-sampled viewpoints.
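As a minimal sketch of the penalty α(·) described above — the piecewise form and the function name are illustrative choices consistent with the stated properties (equal to one up to u_res, then decreasing), not taken verbatim from the invention:

```python
def alpha(s_k, u_res, u_close):
    """Sub-sampling penalty: 1 while the viewpoint fits the output
    resolution, then decays as the crop must be down-sampled.
    Piecewise form is an illustrative assumption."""
    if s_k <= u_res:
        return 1.0
    return (u_res / s_k) ** u_close

# A larger u_close penalises wide zoom-out views more strongly.
assert alpha(320, 640, 2.0) == 1.0    # fits the output resolution
assert alpha(1280, 640, 2.0) == 0.25  # 2x too wide, squared penalty
```

With u_close = 2, a viewpoint twice as wide as the output resolution is penalised by a factor of four, which biases the selection toward close views.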
The method includes rating the viewpoint associated with each camera according to the quality of its completeness/closeness trade-off, and to its degree of occlusions. The highest rate should correspond to a view that (1) makes most objects of interest visible, and (2) is close to the action, meaning that it presents important objects with lots of details, i.e. at a high resolution. Formally, given the interest I_n of each player, the rate I_k(v_k, u) associated with the kth camera view is defined as follows:
I_k(v_k*, u) = Σ_n I_n · (1 − o_k(x_n | x)) · h_k(x_n) · β_k(S_k*, u)
where, in the above equation:
I_n denotes the level of interest assigned to the nth object detected in the scene. x_n denotes the position of the nth object in the 3D space;
o_k(x_n | x) measures the occlusion ratio of the nth object in camera view k, knowing the position of all other objects, the occlusion ratio of an object being defined to be the fraction of pixels of the object that are hidden by other objects when projected on the camera sensor;
The height h_k(x_n) is defined to be the height in pixels of the projection in view k of a reference height of a reference object located in x_n. The value of h_k(x_n) is directly computed based on camera calibration, or, when calibration is not available, it can be estimated based on the height of the object detected in view k. The function β_k(·) reflects the impact of the user preferences in terms of camera view and display resolution. β_k(·) is defined as β_k(S, u) = u_k · α(S, u), where u_k denotes the weight assigned to the kth camera, and α(S, u) is defined above.
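The rating above can be sketched as follows; the occlusion and height terms are supplied as hypothetical callables, since their real computation depends on calibration and detection, and all names are illustrative:

```python
def rate_camera_view(objects, occlusion, height, beta):
    """Rate one camera view: sum of object interests, discounted by
    occlusion and weighted by rendered pixel height and the user
    preference factor beta (illustrative stand-ins throughout)."""
    return sum(i_n * (1.0 - occlusion(x_n)) * height(x_n) * beta
               for i_n, x_n in objects)

# Two objects of equal interest; the second is half occluded.
objects = [(1.0, (2.0, 3.0)), (1.0, (5.0, 1.0))]
occ = lambda x: 0.5 if x == (5.0, 1.0) else 0.0
h = lambda x: 40.0          # constant pixel height, for simplicity
print(rate_camera_view(objects, occ, h, beta=1.0))  # 60.0
```

A camera in which important objects occlude each other, or appear small, is rated lower and is therefore less likely to be selected by the virtual editing step.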
The method may comprise smoothing the sequence of camera indices and corresponding viewpoint parameters, wherein the smoothing process is for example implemented based on two Markov Random Fields, a linear or non-linear low-pass filtering mechanism, or via a graph model formalism, solved based on the conventional Viterbi algorithm.
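The graph-model variant solved with the Viterbi algorithm can be sketched as follows, assuming per-frame camera ratings are given and a fixed cost models the disturbance of a camera switch (both the cost model and the values are illustrative assumptions):

```python
def smooth_camera_sequence(ratings, switch_cost):
    """Viterbi over camera indices: maximise summed ratings minus a
    cost for each camera switch. ratings[t][k] is the rating of
    camera k at time t (illustrative graph-model sketch)."""
    n_cam = len(ratings[0])
    score = list(ratings[0])
    back = []
    for t in range(1, len(ratings)):
        prev, score, links = score, [], []
        for k in range(n_cam):
            best = max(range(n_cam),
                       key=lambda j: prev[j] - (switch_cost if j != k else 0.0))
            links.append(best)
            score.append(ratings[t][k] + prev[best]
                         - (switch_cost if best != k else 0.0))
        back.append(links)
    # Backtrack the best path.
    k = max(range(n_cam), key=lambda i: score[i])
    path = [k]
    for links in reversed(back):
        k = links[k]
        path.append(k)
    return path[::-1]

# A brief rating spike on camera 1 is ignored when switching is costly.
ratings = [[5, 1], [5, 1], [1, 5], [5, 1]]
print(smooth_camera_sequence(ratings, switch_cost=10.0))  # [0, 0, 0, 0]
```

With a zero switch cost the same ratings yield [0, 0, 1, 0]: the smoothing term is exactly what suppresses flickering between cameras.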
The capturing of the multiple video streams may be by static or dynamic cameras.
The present invention also includes a computer based system comprising a processing engine and memory for autonomous production of an edited video from multiple video streams captured by a plurality of cameras distributed around a scene of interest, adapted to carry out any of the methods of the present invention. The system can comprise: a detector for detecting objects in the images of the video streams, first means for selecting one or more camera viewpoints based on joint processing of positions of multiple objects that have been detected, second means for selecting rendering parameters that maximize and smooth out closeness and completeness metrics by concatenating segments in the video streams provided by one or more individual cameras.
The computer based system can have
• means for detecting objects/persons of interest in the images of the video streams, e.g. knowing their actual 3D world coordinates,
• means for selecting, for each camera, the field of view that renders the scene of interest in a way that (allows the viewer to) follow the action carried out by the multiple and interacting objects/persons that have been detected. The field of view parameters refer, for example, to the cropping window in a static camera, and/or to the pan-tilt-zoom and position parameters in a motorized and moving camera. The concept of action following can be quantified by measuring the amount of pixels associated to each object/person of interest in the displayed image. Accurate following of the action results from complete and close rendering, where completeness counts the number of objects/persons in the displayed image, while closeness measures the amount of pixels available to describe each object.
• means for building the edited video by selecting and concatenating video segments provided by one or more individual cameras, in a way that maximizes completeness and closeness metrics along the time, while smoothing out the sequence of rendering parameters associated to concatenated segments.
The present invention also provides a computer program product that comprises code segments which when executed on a processing engine execute any of the methods of the invention or implement any system according to the invention.
The present invention also includes a non-transitory machine readable signal storage medium storing the computer program product.
The present invention can deal with scenes involving several interacting moving persons/objects of interest. In the following, those scenes are denoted as team actions, and typically correspond to the scenes encountered in team sports context.
Automating the production process allows to:
• Reduce the production costs, by avoiding long and tedious hand-made processes, both for camera control and camera selection;
• Increase the production bandwidth and quality, by potentially handling an infinite number of cameras simultaneously;
• Create personalized content, by repeating the production process several times, with distinct parameters.
An aim of the present invention is to target the production of semantically meaningful, i.e. showing the action of interest, and perceptually comfortable contents from raw multi-sensored data. The system according to the present invention is computer based including memory and a processing engine and is a computationally efficient production system, e.g. based on a divide-and-conquer paradigm (see Fig. 15).
In embodiments, the best field of view is first computed for each individual camera, and then the best camera to render the scene is selected. Together the camera index and its field of view define the viewpoint to render the action. When the camera is fixed, field of view definition is limited to a crop of the image captured by the camera. When the camera is motorized, the field of view directly results from the pan-tilt-zoom parameters of the camera, and can thus capture an arbitrary rectangular portion of the light field reaching the centre of the camera.
To define in a quantitative manner the notion of best field of view or best camera index, the present invention introduces three important concepts, which are "completeness", "closeness" and "smoothness". Completeness stands for the integrity of action rendering. In the context of team action rendering, the completeness measures how well the objects/persons of interest in the scene (typically the players participating in a team sport) are included in the displayed image. Closeness defines the fineness of detail description (typically the average amount of pixels that are available to render the persons/objects of interest), and smoothness is a term referring to the continuity of viewpoint selection. By trading off among those factors, methods are provided for selecting (as a function of time) optimal viewpoints to fit the display resolution and other user preferences, and for smoothing these sequences for a continuous and graceful story-telling. The present invention is completely autonomous and self-governing, in the sense that it can select the pixels to display without any human intervention, based on a default set of production parameters and on the outcomes of people detection systems. But the invention can also deal with user preferences, such as the user's narrative profile, and device capabilities. Narrative preferences can be summarized into four descriptors, i.e., user preferred group of objects or "team", user preferred object or "player", user preferred "view type" (e.g. close zoom-in or far zoom-out views), and user preferred "camera". All device constraints, such as display resolution, network speed, and decoder performance, are abstracted as the output resolution parameter, which denotes the resolution at which the output video is encoded to be conveyed and displayed at the end-host.
The capability to take those preferences into account depends on the knowledge captured about the scene, e.g. through video analysis tools. For example, an embodiment of the present invention has been implemented in "Detection and Recognition of Sports(wo)men from Multiple Views", D. Delannay, N. Danhier, and C. De Vleeschouwer, Third ACM/IEEE International Conference on Distributed Smart Cameras, Como, Italy, September 2009 to automatically track and recognize the moving players in the scene of interest. This document is included as Appendix 2.
First, embodiments of the present invention consider a set of cameras that (partly) cover the same area, and that are therefore likely to be activated simultaneously by any activity detection mechanism; this is an important advantage of the present invention over the prior art. The purpose of the invention is thus not to select a camera view based on the fact that some activity was detected in that view. Rather, the objective is to select, along the time, the camera view and its corresponding parameters, such as cropping or PTZ parameters, to best render the action occurring in the covered area. Here, quality of rendering refers to the optimization of a trade-off between measures of closeness, completeness, and smoothness.
Second, the present invention has an advantage of dynamically adapting and smoothing out viewpoint parameters with time, which is an improvement over prior art systems in which the shot parameters (e.g. the cropping parameters in the view at hand) stay fixed until the camera is switched.
Third, in embodiments of the present invention a choice between one object or another is not made, but rather a selection is made of the viewpoint based on the joint processing of the positions of the multiple objects that have been detected. In accordance with embodiments of the present invention a selection is made of the viewpoints sequence that is optimal in the way it maximizes and smoothes out closeness and completeness metrics e.g. for all objects simultaneously.
These differences compared to the prior art bring significant benefits when addressing the content production problem, e.g. in a team sport context. They primarily allow following the action of moving and interacting players, which was not possible with prior art methods.
Preferably, the methods and systems of the present invention capture and produce content automatically, without the need for costly handmade processes (no technical team or cameraman is needed).
As a consequence of its cost-effectiveness, the present invention aims at keeping the production of content profitable even for small- or medium-size targeted audiences. Thereby, it promotes the emergence of novel markets, offering a large choice of contents that are of interest for a relatively small number of users (e.g. the summary of a regional sport event, a university lecture, or a day at the nursery).
In addition, automating the production enables content access personalisation. Generating a personalised video simply consists in (re-)running the production process with input parameters corresponding to the specific preferences or constraints expressed by the user.
An aim of the present invention is to produce a video report of an event based on the concatenation of video (and optionally corresponding audio) segments captured by a set of cameras. In practice, both static and dynamic cameras can be manipulated by the present invention: o Using static sensors adds to cost-effectiveness because it permits storing all relevant content and processing it off-line, to select the fragments of streams that are worth being presented to the viewer.
The autonomous production principles described below could as well be used to control a (set of) dynamic PTZ camera(s). In that case, the information about the location of the objects-of-interest has to be provided in real time, e.g. based on the real-time analysis of the signal captured by some audio-visual sensors (as done in [ref]), or based on information collected from embedded transmitters. Moreover, the space of candidate fields of view is defined by the position and control parameters of the PTZ camera, and not by the cropped image within the view angle covered by the static camera.
The main assumption underlying the networked acquisition setting is the existence of a common unique temporal reference for all camera views, so that the images from all cameras can be mapped to the same absolute temporal co-ordinates of the scene at hand. The cameras are thus assumed to be loosely, but not necessarily tightly, synchronized. Here, loose synchronization refers to a set of cameras that capture images independently, and that rely on timestamps to associate the images that have been captured at similar, but not necessarily identical, time instants. In contrast, a tight synchronization would refer to synchronized capture of the images by the cameras, as done when acquisition is controlled by a common trigger signal.
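Loose synchronization by timestamps can be sketched as a nearest-timestamp association between free-running cameras; the frame rates and timestamps below are illustrative:

```python
import bisect

def associate_frames(timestamps_a, timestamps_b):
    """Loose synchronization: for each frame of camera A, find the
    frame of camera B whose timestamp is closest. Both lists are
    assumed sorted (cameras emit frames in order)."""
    pairs = []
    for t in timestamps_a:
        i = bisect.bisect_left(timestamps_b, t)
        candidates = timestamps_b[max(0, i - 1):i + 1]
        pairs.append((t, min(candidates, key=lambda s: abs(s - t))))
    return pairs

# Two cameras free-running at the same rate but different phases.
a = [0.00, 0.04, 0.08]
b = [0.01, 0.05, 0.09]
print(associate_frames(a, b))  # [(0.0, 0.01), (0.04, 0.05), (0.08, 0.09)]
```

Under tight (trigger-driven) synchronization this association would be trivial, since corresponding frames would share identical capture instants.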
To decide about how to render the team action at hand, the invention has to know the position of objects-of-interest in the scene. This knowledge might be an (error-prone) estimate, and can refer either to the position of objects in the 3D scene, or to the position of objects in each one of the camera views.
This information can be provided based on transmitters that are carried by the objects to be tracked in the scene of interest. This knowledge can also be provided by a non- intrusive alternative, e.g. by exploitation of a set of video signals captured by a network of static cameras, e.g. the ones used for video report production, to detect and track the objects-of-interest. The method is described in "Detection and Recognition of Sports(wo)men from Multiple Views, D. Delannay, N. Danhier, and C. De Vleeschouwer, Third ACM/IEEE International Conference on Distributed Smart Cameras, Como, Italy, September 2009" which is incorporated herein by reference in its entirety. It builds on a background reference model to identify the pixels that change in each view. When the multiple views are calibrated, e.g. through an off-line process, the change detection masks that are collected in each view can be merged, e.g. in a ground occupancy mask, to identify the position of objects-of-interest in the 3D space (see for example the approach depicted in Figure 16). Particle filters or graph-based techniques can then be used to link occurrences of the same object along the time line. Note that such detection and tracking techniques are well known to those skilled in the art, and will not be described in detail herein. The embodiment of these algorithms that has been implemented is described in the reference above, and offers the advantage of handling occlusions in a computationally efficient way.
Once the positions of the objects-of-interest are known, the invention supports autonomous production (= selection of viewpoints along the time) of the content captured by the network of static cameras1. The approach is generic in the sense that it can integrate a large range of user preferences including transmission or display resources, semantic interest (like preferred player), or narrative preferences (dealing with the preferred way to visualize the story, e.g. preferred camera or zoom-in factor).
Over a given time period, the present invention aims at selecting the sequence of viewpoints that optimizes scene rendering along the time, with respect to the detected persons/objects-of-interest. Here, a viewpoint refers to a camera index and to the window that is cropped in that particular camera view, for actual display.
The optimization of the sequence of viewpoints builds on a number of notions and principles that can be described as follows.
At each time instant, the optimization of the rendering has to:
• Maximize the notion of completeness, which measures to which extent the (pixels of the) objects-of-interest are included and visible within the displayed viewpoint. Optionally this involves minimizing the degree of object occlusion, which measures the fraction of an object that is present in the scene, but is (e.g. at least partly) hidden by other objects;
• Maximize the notion of closeness, which refers to the fineness of details, i.e. the density of pixels or resolution, when rendering the objects-of-interest.
Those two objectives are often antagonistic. For this reason, methods and systems according to embodiments of the present invention propose to balance completeness and closeness, optionally as a function of individual user preferences (in terms of viewpoint resolution, or preferred camera or players, for example).
Finally, smoothness of transitions between the rendering parameters of consecutive frames of the edited video has also to be taken into account when considering the production of a temporal segment. In other words, it is important to preserve consistency between the camera and for example cropping parameters that are selected along the time line, to avoid distracting the viewer from the story by abrupt changes or constant flickering.
Based on those guiding principles, the three step process depicted in Figure 14 has been developed. It can be described as follows:
Step 1: At each time instant, and for each camera view, select the parameters, such as the cropping parameters, that optimize the trade-off between completeness and closeness. Optionally, the completeness/closeness trade-off is measured as a function of the user preferences. For example, depending on the resolution at which (s)he accesses the produced content, a user might prefer a small (zoom-in) or a large (zoom-out) viewpoint.
Step 2: Rate the field of view selected in each camera view according to the quality (in terms of user preferences) of its completeness/closeness trade-off, and to its degree of occlusions.
Step 3: For the temporal segment at hand, compute the parameters of an optimal virtual camera that pans, zooms and switches across cameras to preserve high ratings of selected viewpoints while minimizing the amount of virtual camera movements.¹
¹ It could as well control the parameters of a moving PTZ camera when the object positions are inferred in real time.
The first step consists in selecting the optimal field of view for each camera, at a given time instant. To simplify notations, in the following, we omit the time index t.
A field of view v_k in the kth static camera is defined by the size S_k and the center c_k of the window that is cropped in the kth view for actual display.
It has to be selected to: o Include the objects of interest; o Provide a fine, i.e. high resolution, description of those objects.
The optimal field of view v_k* is selected, preferably according to user preferences, to maximize a weighted sum of object interests as follows:
v_k* = arg max_{(c_k, S_k)} Σ_n I_n · m((x_{n,k} − c_k) / S_k) · α(S_k, u)
In the above equation: o I_n denotes the level of interest assigned to the nth object recognized in the scene. This assignment can be done by any suitable method; the present invention assumes that this assignment has been completed and that the results can be used by the present invention. These levels of interest can be defined by the user, e.g. once for the entire event, and made available to the present invention. In application scenarios for which objects are detected but not labelled, the weight is omitted, i.e. replaced by a constant unitary value. o x_{n,k} denotes the position of the nth object in camera view k. o The function m(·) modulates the weight of the nth object according to its distance to the center of the viewing window, compared to the size of this window. Intuitively, the weight should be high and positive when the object-of-interest is located in the center of the display window, and should be negative or zero when the object lies outside the viewing area. Hence, m(·) should be positive between 0 and 0.5, and lower than or equal to zero beyond 0.5. Many functions are appropriate, and the choice of a particular instance could for example be driven by computational issues. Examples of functions are the well-known Mexican hat or Gaussian functions. Another example is provided in detail in a particular embodiment of the invention described in appendix 1 of this application. o The vector u reflects the user constraints or preferences in terms of viewing window resolution and camera index. In particular, its component u_res defines the resolution of the output stream, which is generally constrained by the transmission bandwidth or end-user device resolution. Its component u_close is set to a value larger than 1 that increases to favour close viewpoints compared to large zoom-out views. The other components of u deal with camera preferences, and are defined below, while describing the second step of the invention.
o The function α(·) reflects the penalty induced by the fact that the native signal captured by the kth camera has to be down-sampled once the size of the viewpoint becomes larger than the maximal resolution u_res allowed by the user. This function typically decreases with S_k. An appropriate choice consists in setting the function equal to one when S_k ≤ u_res, and in making it decrease afterwards. An example of α(·) is defined by
α(S_k, u) = 1 if S_k ≤ u_res, and α(S_k, u) = (u_res / S_k)^u_close otherwise
where the exponent u_close is larger than 1, and increases to favour close viewpoints compared to large zoom-out fields of view.
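The first-step selection above can be sketched as an exhaustive search over candidate windows; the hat-shaped m(·), the candidate set, and all names are illustrative choices, not the invention's prescribed functions:

```python
def best_viewpoint(objects, candidates, u_res, u_close):
    """Exhaustive search for the field of view (c, S) maximising the
    weighted sum  sum_n I_n * m((x_n - c)/S) * alpha(S).  m() is a
    simple hat-like weight; shapes and names are illustrative."""
    def m(d):                      # positive near the centre, 0 outside
        return 1.0 - 2.0 * abs(d) if abs(d) <= 0.5 else 0.0
    def alpha(s):                  # sub-sampling penalty, as above
        return 1.0 if s <= u_res else (u_res / s) ** u_close
    def score(c, s):
        return sum(i_n * m((x - c) / s) * alpha(s) for i_n, x in objects)
    return max(candidates, key=lambda v: score(*v))

# Two objects of interest along one image axis; three candidate crops.
objects = [(1.0, 350.0), (1.0, 450.0)]
candidates = [(400.0, 640.0), (400.0, 1280.0), (300.0, 640.0)]
print(best_viewpoint(objects, candidates, u_res=640, u_close=2.0))
# → (400.0, 640.0)
```

The centred 640-pixel crop wins: the 1280-pixel crop covers both objects but pays the down-sampling penalty, and the off-centre crop pushes one object toward the window border where m(·) decays.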
It is worth noting that the trade-offs reflected in the above equation can be formulated in many different but equivalent ways. An example of an alternative, but equivalent, formulation has been implemented in the embodiment of the invention defined in appendix 1. In this formulation the sum of products has been replaced by a product of sums, without fundamentally affecting the key idea of the invention, which consists in trading off closeness and completeness according to user constraints (regarding output resolution) and preferences (regarding zoom-out or zoom-in viewpoints).
The second step rates the viewpoint associated with each camera according to the quality of its completeness/closeness trade-off, and to its degree of occlusions. The highest rate should correspond to a view that (1) makes most objects of interest visible, and (2) is close to the action, meaning that it presents important objects with lots of details, i.e. at a high resolution.
Formally, given the interest I_n of each player, the rate I_k(v_k, u) associated with the kth camera view is defined as follows:
I_k(v_k*, u) = Σ_n I_n · (1 − o_k(x_n | x)) · h_k(x_n) · β_k(S_k*, u)
In the above equation:
I_n denotes the level of interest assigned to the nth object detected in the scene.
x_n denotes the position of the nth object in the 3D space;
o_k(x_n | x) measures the occlusion ratio of the nth object in camera view k, knowing the position of all other objects. The occlusion ratio of an object is defined to be the fraction of pixels of the object that are hidden by other objects when projected on the camera sensor;
The height h_k(x_n) is defined to be the height in pixels of the projection in view k of a six feet tall vertical object located in x_n. Six feet is the average height of the players. The value of h_k(x_n) is directly computed based on camera calibration. When calibration is not available, it can be estimated based on the height of the object detected in view k.
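Under a simple pinhole approximation — an assumption; the text computes h_k(x_n) from full camera calibration — the pixel height of the reference object falls off with its depth from the camera:

```python
def object_pixel_height(focal_px, object_height_m, depth_m):
    """Pinhole approximation of h_k(x_n): a vertical object of
    height H at depth Z projects to roughly f*H/Z pixels.  A crude
    stand-in for full calibrated projection."""
    return focal_px * object_height_m / depth_m

# A ~1.83 m (six foot) player seen at 10 m with a 1000 px focal length.
print(object_pixel_height(1000.0, 1.83, 10.0))  # ~183 px
```

This is why h_k(x_n) acts as a closeness term in the rating: cameras nearer to the action render the same player over more pixels.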
The function β_k(·) reflects the impact of the user preferences in terms of camera view and display resolution. Formally, β_k(·) can be defined as
β_k(S, u) = u_k · α(S, u),
where u_k denotes the weight assigned to the kth camera, and α(S, u) is defined as above.
Similar to what has been said about the first step, it is worth mentioning that alternative formulations of the same basic idea can be imagined. For example, the embodiment of the invention that is described in appendix 1 defines the function to maximize based on the product of a closeness factor with a completeness factor, each factor measuring a weighted sum of individual object display resolution and visibility. Hence, it replaces the sum of products by a product of sums, but still follows the same basic idea of taking user preferences into account while trading off two antagonistic terms, reflecting the concepts of closeness and completeness, respectively.
Similarly, a formulation based on the weighted sum of two terms reflecting the closeness and the completeness concepts described above is also an embodiment of the present invention.
The third and last step consists in smoothing the sequence of camera indices and corresponding viewpoint parameters.
In the proposed embodiment of the invention, the smoothing process is implemented based on the definition of two Markov Random Fields (see Figure 5, and the description of the embodiment below). Other embodiments can as well build on any linear or non-linear low-pass filtering mechanism to smooth out the sequence of camera indices and viewpoint parameters. The smoothing could also be done through a graph model formalism, solved based on the conventional Viterbi algorithm. In that case, graph vertices would correspond to candidate rendering parameters for a given frame, while edges would connect candidate rendering states along the time. The cost assigned to each edge would reflect the disturbance induced by a change of rendering parameters between two consecutive frames.
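As a sketch of the linear low-pass alternative mentioned above, a first-order exponential filter over (center, size) viewpoint parameters could look like this; the smoothing factor is an illustrative choice:

```python
def smooth_viewpoints(params, a=0.8):
    """First-order low-pass (exponential) smoothing of viewpoint
    parameter tuples, one simple alternative to the MRF/Viterbi
    formulations. a close to 1 means heavier smoothing."""
    out = [params[0]]
    for p in params[1:]:
        out.append(tuple(a * prev + (1 - a) * cur
                         for prev, cur in zip(out[-1], p)))
    return out

# A jittery (center, size) trajectory gets gracefully damped.
raw = [(100.0, 640.0), (140.0, 640.0), (90.0, 640.0)]
print(smooth_viewpoints(raw))
# ≈ [(100.0, 640.0), (108.0, 640.0), (104.4, 640.0)]
```

Unlike the Viterbi formulation, this filter is causal and cheap, but it cannot anticipate future action; the patent's two-field MRF approach trades more computation for better planned camera movements.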
The automated video production system and method also includes a virtual director, e.g. a virtual director module for selecting and determining which of the multiple camera video streams is the current camera stream to be viewed. The virtual director, at each time instant, and for each camera view, selects the parameters, e.g. the cropping parameters, that optimize the trade-off between completeness and closeness. The completeness/closeness trade-off is measured as a function of user preferences. For example, depending on the resolution at which a user accesses the produced content, a user might prefer a small (zoom-in) or a large (zoom-out) viewpoint. The virtual director module also rates the viewpoint selected in each camera view according to the quality (in terms of user preferences) of its completeness/closeness trade-off, and to its degree of occlusions. Finally the virtual director module computes the parameters of an optimal virtual camera that pans, zooms and switches across views for the temporal segment at hand, to preserve high ratings of selected viewpoints while minimizing the amount of virtual camera movements.
Experiments show that the viewpoints selected by the virtual director, in accordance with embodiments of the present invention, based on the above functions, match end-user expectations. Even more, subjective tests reveal that viewers generally prefer the viewpoints selected by the automatic system to the ones selected by a human producer. This is partly explained by the severe load imposed on the human operator when the number of cameras increases. Hence, the present invention also alleviates the bottleneck experienced by a human operator when jointly and simultaneously processing a large number of source cameras.
Brief Description of the Figures
Fig.1 : hierarchical working flow
Fig. 2: hierarchical structure
Fig. 3: weighting function
Fig. 4: behaviour of viewpoint selection
Fig. 5 model of two-step estimation of viewpoint movement
Fig. 6: camera plans
Fig. 7: sample views from cameras
Fig. 8: short video clip
Fig. 9: viewpoint sequences
Fig. 10: behaviour of camera/viewpoint sequence
Fig. 11 : comparison of camera and viewpoint sequences
Fig. 12: frames in generated sequences
Fig. 13: comparison of generated camera sequences
Fig. 14: 3-step embodiment of the present invention
Fig. 15: divide and conquer embodiment of the present invention
Fig. 16: use of masks for detection
Further drawings are shown in appendix 2. These drawings refer to appendix 2 and the text of appendix 2 should be read in conjunction with these drawings and the references specific to this appendix.
Detailed Description of the present invention
The present invention provides computer based methods and systems for cost-effective and autonomous generation of video contents from multi-sensored data, including automatic extraction of intelligent contents from a network of sensors distributed around the scene at hand. Here, intelligent contents refer to the identification of salient segments within the audiovisual content, using distributed scene analysis algorithms. This knowledge can be exploited to automate the production and personalize the summarization of video contents.
Without loss of generality and without limiting the present invention, only static cameras will mainly be described as an illustrative embodiment.
One input is the positions of objects of interest. To identify salient segments in the raw video content, multi-camera analysis is considered, whereby relevant object detection such as people detection methods relying on the fusion of the foreground likelihood information computed in each view can be used. Multi-view analysis can overcome traditional hurdles such as occlusions, shadows and changing illumination. This is in contrast with single sensor signal analysis, which is often subject to interpretation ambiguities, due to the lack of accurate model of the scene, and to coincidental adverse scene configurations.
In accordance with some embodiments of the present invention, the positions of the objects of interest are assumed to be (at least partially) known as a function of the time. For example, embodiments of the present invention infer this knowledge from the analysis of the light fields captured by a distributed set of static cameras. In such an embodiment a ground occupancy mask can be computed by merging the foreground likelihood measured in each view. Actual player positions can then be derived through an iterative and occlusion-aware greedy process. Multi-view analysis can be used to provide the required inputs to the autonomous team sport production method and system of the present invention, and is described in the article "Detection and Recognition of Sports(wo)men from Multiple Views", D. Delannay, N. Danhier, and C. De Vleeschouwer, Third ACM/IEEE International Conference on Distributed Smart Cameras, Como, Italy, September 2009, which is incorporated herein by reference in its entirety as appendix 2.
Embodiments of the present invention then proceed in two stages.
In a first stage, given the positions of each object of interest with time, the invention selects a set of so-called relevant parameters to render the scene of interest as a function of time, using a camera located at a point which can be any arbitrary 3D point around the action.
Here, the rendering parameters define a field of view for the camera, and depend on the camera infrastructure that has been deployed to capture the images of the scene. For example, embodiments of the present invention make use of a fixed camera, and the rendering parameters define how to crop sub-images within the camera view. In other embodiments an articulated and motorized camera can be used, and the rendering parameters may then refer to the pan, tilt, and zoom parameters of the camera. The notion of relevant parameters has to do with the definition of informative (i.e. displaying the persons and objects of interest) and perceptually pleasant images.
In a second stage, embodiments of the present invention assume that multiple (PTZ) cameras are distributed around the scene, and how to select the right camera to render the action at a given time is then determined. This is done by selecting or promoting informative cameras, and avoiding perceptually inopportune switching between cameras.
Together the camera index and its field of view define the viewpoint to render the action.
To produce semantically meaningful and perceptually comfortable video summaries based on the extraction or interpolation of images from the raw content, the present invention introduces three fundamental concepts, i.e. "completeness", "smoothness" and "closeness" (or "fineness"), to abstract the semantic and narrative requirement of video contents. Based on those concepts, the selection of camera viewpoints and that of temporal segments in the summary can be determined, these two being independent optimization problems.
• Completeness stands for both the integrity of view rendering in camera/viewpoint selection, and that of story-telling in summarization. A viewpoint of high completeness includes more salient objects, while a story of high completeness consists of more key actions.
• Smoothness refers to the graceful displacement of the virtual camera viewpoint, and to the continuous story-telling resulting from the selection of contiguous temporal segments. Preserving smoothness is important to avoid distracting the viewer from the story by abrupt changes of viewpoints or constant temporal jumps (Owen, 2007).
• Closeness or Fineness refers to the amount of details provided about the rendered action. Spatially, it favours close views. Temporally, it implies redundant storytelling, including replays. Increasing the fineness of a video not only improves the viewing experience, but is also essential in guiding the emotional involvement of viewers by close-up shots.
In accordance with embodiments of the present invention these three concepts are optimised, e.g. maximized, to produce meaningful and visually pleasant content. In practice, maximization of the three concepts can result in conflicting decisions, under some limited resource constraints, typically expressed in terms of the spatial resolution and temporal duration of the produced content. For example, at fixed output video resolution, increasing completeness generally induces larger viewpoints, which in turn decreases the fineness of salient objects. Similarly, increased smoothness of viewpoint movement prevents accurate pursuit of actions of interest along the time. The same observations hold regarding the selection of segments and the organization of stories along the time, under some global duration constraints.
Accordingly, embodiments of the present invention relating to computer based methods and systems provide a good balance between the three major factors. For example, quantitative metrics are defined to reflect completeness, fineness/closeness. Constrained optimization can then be used to balance those concepts. In addition, for improved computational efficiency, both production and summarization are envisioned in the divide and conquer paradigm (see fig. 15). This especially makes sense since video contents intrinsically have a hierarchical structure, starting from each frame, shots (set of consecutive frames created by similar camerawork), to semantic segments (consecutive shots logically related to the identical action), and ending with the overall sequence.
For example an event timeframe can be first cut into semantically meaningful temporal segments, such as an offense/defense round of team sports, or an entry in news. For each segment, several narrative options are considered. Each option defines a local story, which consists of multiple shots with different camera coverage. A local story not only includes shots to render the global action at hand, but also shots for explanative and decorative purposes, e.g., replays and close-up views in sports or graph data in news. Given the timestamps and the production strategy (close-up view, replay, etc.) of the shots composing a narrative option, the camerawork associated to each shot is planned automatically, taking into account the knowledge inferred about the scene by video analysis modules.
Benefits and costs are then assigned to each local story. For example, the cost can simply correspond to the duration of the summary. The benefit reflects user satisfaction (under some individual preferences), and measures how some general requirements, e.g., the continuity and completeness of the story, are fulfilled. These pairs of benefits and costs are then fed into a summarization engine, which solves a resource allocation problem to find the organization of local stories that achieves the highest benefit under the constrained summary length.
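As a hedged sketch (the function names, the dynamic-programming formulation, and all numbers below are illustrative assumptions, not taken from the patent), the resource allocation described above can be modeled as a multiple-choice knapsack: pick exactly one narrative option per segment so that the total benefit is maximized under the summary-length constraint.

```python
# Illustrative multiple-choice knapsack for the summarization engine:
# each segment offers several narrative options (benefit, duration);
# exactly one option is chosen per segment, maximizing total benefit
# under a global duration budget, via exhaustive DP over used duration.

def plan_summary(segments, max_duration):
    """segments: list of lists of (benefit, duration) options.
    Returns (best_total_benefit, chosen option index per segment)."""
    dp = {0: (0.0, [])}  # used duration -> (benefit, choices so far)
    for options in segments:
        nxt = {}
        for used, (benefit, picks) in dp.items():
            for idx, (b, dur) in enumerate(options):
                d = used + dur
                if d > max_duration:
                    continue
                cand = (benefit + b, picks + [idx])
                if d not in nxt or cand[0] > nxt[d][0]:
                    nxt[d] = cand
        if not nxt:          # no feasible plan within the budget
            return 0.0, []
        dp = nxt
    return max(dp.values(), key=lambda t: t[0])

# two segments, each with a short and a long rendering option
segments = [[(5.0, 10), (8.0, 20)],   # action 1
            [(3.0, 5), (6.0, 15)]]    # action 2 (long option adds a replay)
benefit, choices = plan_summary(segments, max_duration=30)
```

With these numbers the budget forbids keeping both long renderings, so the engine keeps the long option for only one of the two actions.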
Camerawork planning will be described with reference to an example: basketball video production for team sports. Whilst extendable to other contexts (e.g. PTZ camera control), the process has been designed to select which fraction of which camera view should be cropped, in a distributed set of still cameras, to render the scene at hand in a semantically meaningful and visually pleasant way, assuming knowledge of the players' positions.
Step 1: Camera-wise Viewpoint Selection.
At each time instant and in each view, the players' supports are assumed to be known, and the cropping parameters are selected to optimize the trade-off between completeness and fineness.
Formally, a viewpoint v_k,i in the k-th camera view of the i-th frame is defined by the size S_k,i and the center c_k,i of the window that is cropped in the k-th view for actual display. It has to be selected to include the objects of interest, and provide a fine, i.e. high resolution, description of those objects. If there are N salient objects in this frame, and the location of the n-th object in the k-th view is denoted by x_n,k,i, we select the optimal viewpoint v*_k,i by maximizing a weighted sum of object interests as follows:
v*_k,i = arg max over v_k,i = (c_k,i, S_k,i) of Σ_{n=1..N} I_n · α(x_n,k,i − c_k,i, S_k,i) · β(S_k,i, u)
In the above equation: o I_n denotes the level of interest assigned to the n-th object detected in the scene. Note that assigning distinct weights to team sport players allows focusing on a preferred player, but also implies recognition of each player. A unit weight can be assigned to all players, thereby producing a video that renders the global team sport action. o The vector u reflects the user constraints and preferences in terms of viewpoint resolution and camera view, u = [u_close, u_res, {u_k}]. In particular, its component u_res defines the resolution of the output stream, which is generally constrained by the transmission bandwidth or the end-user device resolution. Its component u_close is set to a value larger than 1, and increases to favor close viewpoints compared to large zoom-out views. The other components of u deal with camera preferences, and are defined in the second step below. o The function α(.) modulates the weights of the objects according to their distance to the center of the viewpoint, compared to the size of this window. Intuitively, the weight should be high and positive when the object-of-interest is located in the center of the display window, and should be negative or zero when the object lies outside the viewing area. Many instances are appropriate, e.g. the well-known Mexican Hat function. o The function β(.) reflects the penalty induced by the fact that the native signal captured by the k-th camera has to be sub-sampled once the size of the viewpoint becomes larger than the maximal resolution u_res allowed by the user. This function typically decreases with S_k,i. An appropriate choice consists in setting the function equal to one when S_k,i ≤ u_res, and in making it decrease afterwards, e.g. β(S_k,i, u) = (u_res / S_k,i)^u_close when S_k,i > u_res, where a larger exponent u_close applies a stronger penalty to large zoom-out views.
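As a minimal sketch of Step 1 (the specific α and β instances below are simplified stand-ins for the functions defined above, and all numbers are illustrative assumptions), the optimal window can be found by exhaustive search over candidate centers and sizes along one image axis:

```python
# Hedged illustration of camera-wise viewpoint selection: maximize the
# weighted sum of object interests, modulated by alpha (distance of each
# object to the window center) and beta (sub-sampling penalty for windows
# wider than the output resolution u_res).

def alpha(dx, size):
    # crude stand-in for a Mexican-hat-like modulation: positive inside
    # the window, slightly negative just outside, zero far away
    r = abs(dx) / (size / 2.0)
    if r <= 1.0:
        return 1.0 - r ** 2
    return -0.2 if r <= 1.5 else 0.0

def beta(size, u_res, u_close=1.5):
    # no penalty until the window exceeds the output resolution
    return 1.0 if size <= u_res else (u_res / size) ** u_close

def best_viewpoint(objects, candidates, u_res):
    """objects: (interest I_n, position x_n) pairs along one image axis.
    candidates: (center, size) windows. Returns the highest-scoring one."""
    def score(c, s):
        return beta(s, u_res) * sum(i * alpha(x - c, s) for i, x in objects)
    return max(candidates, key=lambda cs: score(*cs))

objects = [(1.0, 100.0), (1.0, 140.0), (0.5, 400.0)]  # two players + one far
candidates = [(c, s) for c in range(0, 481, 40) for s in (120, 240, 480)]
vp = best_viewpoint(objects, candidates, u_res=240)
```

With these numbers the search settles on a medium window centered on the two nearby players, rather than a wide zoom-out that would also cover the distant one but pay the sub-sampling penalty.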
Step 2: Frame-wise Camera Selection
The viewpoint selected in each view is rated according to the quality of its completeness/closeness trade-off, and to its degree of occlusions. The highest rate should correspond to a view that (1) makes most objects of interest visible, and (2) is close to the action, meaning that it presents important objects with lots of details, i.e. at a high resolution.
Formally, given the interest I_n of each player, the rate h_k(v*_k,i, u) associated to each camera view is defined as follows:
h_k(v*_k,i, u) = u_k · Σ_{n=1..N} I_n · [1 − o_k(x_n,k,i | {x_m,k,i}_{m≠n})] · h_k(x_n,k,i) · α(x_n,k,i − c*_k,i, S*_k,i) · β(S*_k,i, u)
In the above equation:
u_k denotes the weight assigned to the k-th camera, while α and β are defined as in the first step above.
o_k(x_n,k,i | {x}) measures the occlusion ratio of the n-th object in camera view k, knowing the position of all other objects. The occlusion ratio of an object is defined to be the fraction of pixels of the object that are hidden by other objects when projected on the camera sensor.
The height h_k(x_n,k,i) is defined to be the height in pixels of the projection in view k of a six feet tall vertical object located in x_n,k,i. Six feet is the average height of the players. The value of h_k(x_n,k,i) is directly computed based on camera calibration. When calibration is not available, it can be estimated based on the height of the object detected in view k.
Step 3: Smoothing of Camera/Viewpoint Sequences.
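The camera gains of Step 2, which the smoothing below seeks to preserve, can be sketched as follows; the purely multiplicative combination of interest, visibility and projected height, and all sample values, are assumptions for illustration:

```python
# Hedged sketch of the frame-wise camera rate: each object contributes its
# interest, discounted by its occlusion ratio and scaled by its projected
# height in pixels (a proxy for closeness); beta penalizes windows larger
# than the output resolution, and u_k expresses the camera preference.

def beta(size, u_res, u_close=1.5):
    return 1.0 if size <= u_res else (u_res / size) ** u_close

def camera_rate(u_k, window_size, objects, u_res):
    """objects: (interest, occlusion_ratio, projected_height_px) triples."""
    visible = sum(i * (1.0 - occ) * h for i, occ, h in objects)
    return u_k * visible * beta(window_size, u_res)

# camera 1: close view, one player 40% occluded
rate1 = camera_rate(1.0, 200, [(1.0, 0.4, 80), (1.0, 0.0, 90)], u_res=240)
# camera 2: wide view, both players fully visible but rendered small
rate2 = camera_rate(1.0, 600, [(1.0, 0.0, 30), (1.0, 0.0, 35)], u_res=240)
```

Despite the occlusion, the close camera wins in this toy setting because its objects are rendered at a much higher resolution.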
For the temporal segment at hand, the parameters of an optimal virtual camera that pans, zooms and switches across views are computed to preserve high ratings of selected viewpoints while minimizing the amount of virtual camera movements.
The smoothing process can be implemented based on the definition of two Markov Random Fields. At first, the selected viewpoints v*_k,i are taken as observed data on the i-th image, and are assumed to be noise-distorted outputs of some underlying smooth results v̄_k,i. Given the smooth viewpoint sequence recovered for each camera, the camera-gains h_k(v̄_k,i, u) of those derived viewpoints are computed, and a smooth camera sequence from the second Markov field is inferred by making the probabilities P(k | v̄_k,i, u) of each camera proportional to the gains h_k(v̄_k,i, u).
Compared to simple Gaussian smoothing filters, this enables adaptive smoothing by setting a different smoothing strength on each individual frame. Furthermore, iterative slight smoothing in our method is able to achieve softer results than one-pass strong smoothing.
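A toy illustration of this iterative slight-smoothing idea (the neighbor-averaging update and the per-frame strengths below are assumptions, not the Markov-field formulation itself):

```python
# Several light smoothing passes over a 1-D viewpoint-center sequence,
# with an individual strength lam[i] per frame (0 keeps the observed
# value, 1 fully replaces it by the neighbor average).

def smooth_once(seq, lam):
    out = list(seq)
    for i in range(1, len(seq) - 1):
        neighbor_avg = 0.5 * (seq[i - 1] + seq[i + 1])
        out[i] = (1.0 - lam[i]) * seq[i] + lam[i] * neighbor_avg
    return out

def smooth(seq, lam, iters=10):
    for _ in range(iters):
        seq = smooth_once(seq, lam)
    return seq

centers = [0.0, 0.0, 10.0, 0.0, 0.0, 0.0]   # one jerky frame
lam = [0.0, 0.8, 0.8, 0.8, 0.8, 0.0]        # ends pinned, middle smoothed
smoothed = smooth(centers, lam)
```

The spike is progressively spread over its neighbors instead of being blurred in a single strong pass, and frames with lam = 0 are kept exactly.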
Multi-view player detection and recognition support the autonomous production of visual content, which relies on the detection (and recognition) of the objects-of-interest in the scene.
The foreground likelihood is computed independently on each view, using standard background modelling techniques. These likelihoods are then fused by projecting them on the ground plane, thereby defining a set of so-called ground occupancy masks. The computation of the ground occupancy mask associated to each view is efficient, and these masks are combined and processed to infer the actual position of players.
Formally, the computation of the ground occupancy mask G_k associated to the k-th view is described as follows. At a given time, the k-th view is the source of a foreground likelihood image F_k ∈ [0,1]^M_k, where M_k is the number of pixels of camera k, 0 < k < C. Due to the player verticality assumption, vertical line segments anchored in occupied positions on the ground plane support a part of the detected object, and thus back-project on foreground silhouettes in each camera view. Hence, to reflect ground occupancy in x, the value of G_k in x is defined to be the integration of the (forward-)projection of F_k on a vertical segment anchored in x. Obviously, this integration can equivalently be computed in F_k, along the back-projection of the vertical segment anchored in x. This is in contrast to methods which compute the mask by aggregating the projections of the foreground likelihood on a set of planes that are parallel to the ground.
To speed up the computations associated to our formulation, it is observed that, through an appropriate transformation of F_k, it is possible to shape the back-projected integration domain so that it also corresponds to a vertical segment in the transformed view, thereby making the computation of integrals particularly efficient through the principle of integral images. The transformation has been designed to address a double objective. First, points of the 3D space located on the same vertical line have to be projected on the same column in the transformed view (vertical vanishing point at infinity). Second, vertical objects that stand on the ground and whose feet are projected on the same horizontal line of the transformed view have to keep the same projected height ratios. Once the first property is met, the 3D points belonging to the vertical line standing above a given point from the ground plane simply project on the column of the transformed view that stands above the projection of the 3D ground plane point. Hence, G_k(x) is simply computed as the integral of the transformed view over this vertical back-projected segment. Preservation of height along the lines of the transformed view even further simplifies computations.
For side views, these two properties can be achieved by virtually moving (through homography transforms) the camera viewing direction (principal axis) so as to bring the vertical vanishing point at infinity and ensure horizon line is horizontal. For top views, the principal axis is set perpendicular to the ground and a polar mapping is performed to achieve the same properties. Note that in some geometrical configurations, these transformations can induce severe skewing of the views.
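The integral-image shortcut can be illustrated as follows, assuming the foreground-likelihood view has already been warped so that vertical world segments map to image columns (all function names are illustrative):

```python
# Per-column cumulative sums make the integral of the foreground
# likelihood over any vertical segment an O(1) lookup.

def column_cumsum(img):
    """img: rows x cols foreground likelihoods in [0, 1]. Returns the
    cumulative sum down each column, with a leading row of zeros."""
    rows, cols = len(img), len(img[0])
    cum = [[0.0] * cols for _ in range(rows + 1)]
    for r in range(rows):
        for c in range(cols):
            cum[r + 1][c] = cum[r][c] + img[r][c]
    return cum

def segment_sum(cum, col, r0, r1):
    """Sum of img[r0:r1] in the given column, in constant time."""
    return cum[r1][col] - cum[r0][col]

img = [[0.0, 1.0],
       [0.5, 1.0],
       [0.5, 0.0]]
cum = column_cumsum(img)
```

After the one-off cumulative pass, every vertical-segment integral needed by G_k costs a single subtraction, regardless of the segment's length.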
Given the ground occupancy masks G_k for all views, we now explain how to infer the position of the people standing on the ground. A priori, in a team sport context, we know that (i) each player induces a dense cluster on the sum of ground occupancy masks, and (ii) the number of people to detect is equal to a known value N, e.g. N = 12 for basketball (10 players + 2 referees).
For this reason, in each ground location x, we consider the sum of all projections, normalized by the number of views that actually cover x, and look for the highest-intensity spots in this aggregated ground occupancy mask. To locate those spots, we have first considered a naive greedy approach that is equivalent to an iterative matching pursuit procedure. At each step, the matching pursuit process maximizes the inner product between a translated Gaussian kernel and the aggregated ground occupancy mask. The position of the kernel which induces the largest inner product defines the player position. Before running the next iteration, the contribution of the Gaussian kernel is subtracted from the aggregated mask to produce a residual mask. The process iterates until sufficient players have been located.
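On a 1-D toy mask, this naive matching-pursuit procedure can be sketched as follows (the kernel width and sample data are illustrative assumptions):

```python
import math

# Greedy matching pursuit on an aggregated ground occupancy mask:
# repeatedly pick the position whose translated Gaussian kernel has the
# largest inner product with the mask, subtract that kernel's
# contribution, and continue on the residual.

def kernel(center, width, length):
    return [math.exp(-0.5 * ((x - center) / width) ** 2)
            for x in range(length)]

def greedy_detect(mask, n_players, width=2.0):
    mask = list(mask)
    found = []
    for _ in range(n_players):
        n = len(mask)
        best = max(range(n), key=lambda c: sum(
            k * m for k, m in zip(kernel(c, width, n), mask)))
        found.append(best)
        peak = mask[best]
        mask = [max(0.0, m - peak * k)    # residual mask
                for m, k in zip(mask, kernel(best, width, n))]
    return found

mask = [0, 1, 4, 1, 0, 0, 2, 6, 2, 0, 0, 0]   # two player-like clusters
positions = greedy_detect(mask, n_players=2)
```

The denser cluster is picked first, its contribution is removed, and the second pass then finds the weaker cluster in the residual.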
This approach is simple, but suffers from many false detections at the intersections of the projections of distinct players' silhouettes from different views. This is due to the fact that occlusions induce non-linearities in the definition of the ground occupancy mask. In other words, the ground occupancy mask of a group of players is not equal to the sum of the ground occupancy masks projected by each individual player. Knowledge about the presence of some people on the ground field affects the informative value of the foreground masks in these locations. In particular, if the vertical line associated to a position x is occluded by/occludes another player whose presence is very likely, this particular view should not be exploited to decide whether there is a player in x or not.
A refinement involves initializing the process by defining G_k^0(x) = G_k(x) to be the ground occupancy mask associated to the k-th view, and setting w_k^0(x) to 1 when x is covered by the k-th view, and to 0 otherwise.
Each iteration is then run in two steps. At iteration n, the first step searches for the most likely position of the n-th player, knowing the position of the (n-1) players located in previous iterations. The second step updates the ground occupancy masks of all views to remove the contribution of the newly located player. Formally, the first step of iteration n aggregates the ground occupancy mask from all views, and then searches for the denser cluster in this mask. Hence, it computes the aggregated mask as:
G^n(x) = ( Σ_k w_k^n(x) · G_k^n(x) ) / ( Σ_k w_k^n(x) )
and then defines the most likely position x_n for the n-th player by
x_n = arg max_y ⟨ G^n, φ(y) ⟩,
where φ(y) denotes a Gaussian kernel centered in y, and whose spatial support corresponds to the typical width of a player.
In the second step, the ground occupancy mask of each view is updated to account for the presence of the n-th player. In the ground position x, we consider that the typical support of a player silhouette in view k is a rectangular box of width W and height H, and observe that the part of the silhouette that occludes or is occluded by the newly detected player does not bring any information about the potential presence of a player in position x. The fraction φ_k(x, x_n) of the silhouette in ground position x that becomes non-informative in the k-th view is estimated, as a consequence of the presence of a player in x_n. It is then proposed to update the ground occupancy mask and aggregation weight of the k-th camera in position x as follows:
G_k^{n+1}(x) = max(0, G_k^n(x) − φ_k(x, x_n) · G_k^n(x_n)), w_k^{n+1}(x) = max(0, w_k^n(x) − φ_k(x, x_n)).
For improved computational efficiency, the positions x investigated in the refined approach are limited to the 30 local maxima that have been detected by the naive approach.
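The two-step refined iteration can be sketched on a 1-D toy example; the occlusion fraction phi below is a simplistic stand-in for the silhouette-overlap estimate described above, and all data are illustrative assumptions:

```python
# Occlusion-aware greedy detection: per-view occupancy masks G[k] and
# weights w[k] are updated after each detection, so that ground positions
# occluded (in a given view) by an already-found player stop contributing
# through that view.

def phi(x, x_n, reach=2.0):
    # toy non-informative fraction: 1 at the detected spot, linear decay
    return max(0.0, 1.0 - abs(x - x_n) / reach)

def detect_refined(G, w, n_players):
    """G, w: per-view lists (views x ground positions)."""
    G = [list(g) for g in G]
    w = [list(v) for v in w]
    n_views, length = len(G), len(G[0])
    found = []
    for _ in range(n_players):
        # step 1: aggregate, normalized by the views still covering x
        agg = [sum(G[k][x] for k in range(n_views)) /
               max(1e-9, sum(w[k][x] for k in range(n_views)))
               for x in range(length)]
        x_n = max(range(length), key=lambda x: agg[x])
        found.append(x_n)
        # step 2: discount what the new player hides in every view
        for k in range(n_views):
            g_at = G[k][x_n]
            for x in range(length):
                f = phi(x, x_n)
                G[k][x] = max(0.0, G[k][x] - f * g_at)
                w[k][x] = max(0.0, w[k][x] - f)
    return found

G = [[0.0, 3.0, 1.0, 0.0, 2.0, 0.0],
     [0.0, 3.0, 0.0, 0.0, 2.0, 0.0]]
w = [[1.0] * 6, [1.0] * 6]
players = detect_refined(G, w, n_players=2)
```

The weight update matters: positions rendered uninformative by the first detection are excluded from the normalization, so they cannot masquerade as a second player.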
For completeness, it is noted that the above described update procedure omits the potential interference between occlusions caused by distinct players in the same view. However, the consequence of this approximation is far from dramatic, since it ends up omitting part of the information that was meaningful to assess the occupancy in occluded positions, without affecting the information that is actually exploited. Taking those interferences into account would require back-projecting the player silhouettes in each view, thereby tending towards a computationally and memory expensive approach. The method and system of the present invention does not suffer from the usual weaknesses of greedy algorithms, such as a tendency to get caught in bad local minima.
The main technical benefits of the present invention include at least one or a combination of:
• The capability to crop appropriate pixels in the image memory and/or control a motorized PTZ, so as to render a team action, i.e. an action involving multiple moving objects/persons of interest, from an arbitrary 3D point.
• The ability to (i) control field of view selection by individual camera, and (ii) select the best camera within a set of cameras. Such ability makes it possible to handle a potentially very large number of cameras simultaneously. This is especially true since the rendering parameters selection for a particular camera can be computed independently of other cameras.
• The possibility to reproduce and thus technically personalize the viewpoint selection process according to individual user preferences. For example, in the context of a sport event, coaches (who prefer large viewpoints showing the entire game) have different expectations regarding viewpoint selection than common spectators (who prefer closer and emotionally richer images). These preferences are directly related to technical parameters of how the cameras are controlled. Automating the production process provides a technical solution to what amounts to answering individual requests.
The present invention includes within its scope further improvements. The present invention includes other criteria for computationally efficient and/or analytically solvable selection of viewpoints. It also includes better representations for salient objects, such as using moving particles or flexible body models instead of simple bounding boxes. Furthermore, splitting the selection and smoothing of viewpoints and cameras into four sub-steps in the current version simplifies the formulation; however, these sub-steps can be solved in a unified estimation because their results affect each other. The present invention also includes other selection criteria of viewpoints and cameras independent of subjective evaluations. Exploitation of a distributed network of cameras to approximate the images that would be captured by a virtual sensor located in an arbitrary position, with arbitrary viewpoint coverage, can be used with any of the embodiments of the present invention. The present invention can be used with these works, because in accordance with the present invention a selection is made of the most appropriate viewpoint within a set/space of candidate viewpoints. Hence, the addition of free-viewpoint rendering algorithms to embodiments of the present invention just contributes to enlarging the set of potential candidates.
The methods and systems of the present invention, including computer programs, can be implemented on a computing system. A computer may include a video display terminal, a data input means such as a keyboard, and a graphic user interface indicating means such as a mouse. The computer may be implemented as a general purpose computer, e.g. a UNIX workstation or a personal computer.
Typically, the computer includes a Central Processing Unit ("CPU"), such as a conventional microprocessor of which a Pentium processor supplied by Intel Corp. USA is only an example, and a number of other units interconnected via a bus system. The bus system may be any suitable bus system. The computer includes at least one memory. Memory may include any of a variety of data storage devices known to the skilled person, such as random-access memory ("RAM"), read-only memory ("ROM"), or non-volatile read/write memory such as a hard disc. For example, the computer may further include random-access memory ("RAM"), read-only memory ("ROM"), as well as a display adapter for connecting the system bus to a video display terminal, and an optional input/output (I/O) adapter for connecting peripheral devices (e.g., disk and tape drives) to the system bus. The video display terminal can be the visual output of the computer, which can be any suitable display device such as a CRT-based video display well-known in the art of computer hardware. However, with a desktop computer, a portable or a notebook-based computer, the video display terminal can be replaced with an LCD-based or a gas plasma-based flat-panel display. The computer further includes a user interface adapter for connecting a keyboard, a mouse, and an optional speaker. The relevant video may be input directly into the computer via a video or graphics interface or from storage devices, after which a processor carries out a method in accordance with the present invention. The relevant video data may be provided on a suitable signal storage medium such as a diskette, a replaceable hard disc, an optical storage device such as a CD-ROM or DVD-ROM, a magnetic tape or similar. The results of the method may be transmitted to a further near or remote location. A communications adapter may connect the computer to a data network such as the Internet, an Intranet, a Local or Wide Area Network (LAN or WAN) or a CAN.
The computer also includes a graphical user interface that resides within machine-readable media to direct the operation of the computer. Any suitable machine-readable media may retain the graphical user interface, such as a random access memory (RAM), a read-only memory (ROM), a magnetic diskette, magnetic tape, or optical disk (the last three being located in disk and tape drives). Any suitable operating system and associated graphical user interface (e.g., Microsoft Windows, Linux) may direct the CPU. In addition, the computer includes a control program that resides within computer memory storage. The control program contains instructions that when executed on the CPU allow the computer to carry out the operations described with respect to any of the methods of the present invention.
The present invention also provides a computer program product for carrying out the method of the present invention, and this can reside in any suitable memory. While the present invention has been, and will continue to be, described in the context of a fully functional computing system, those skilled in the art will appreciate that the mechanisms of the present invention are capable of being distributed as a computer program product in a variety of forms, and that the present invention applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of computer readable signal bearing media include: recordable type media such as floppy disks and CD ROMs and transmission type media such as digital and analogue communication links. Accordingly, the present invention also includes a software product which when executed on a suitable computing device carries out any of the methods of the present invention. Suitable software can be obtained by programming in a suitable high level language such as C and compiling on a suitable compiler for the target computer processor, or in an interpreted language such as Java and then compiled for implementation with the Java Virtual Machine. The present invention provides software, e.g. a computer program having code segments that provide a program that, when executed on a processing engine, provides a virtual director module. The software may include code segments that provide, when executed on the processing engine, any of the methods of the present invention or implement any of the system means of the present invention.
Other aspects and advantages of the present invention, as well as a more complete understanding thereof, will become apparent from the following description taken in conjunction with the embedded and accompanying figures, illustrating by way of example the principles of the invention. Moreover, it is intended that the scope of the invention be determined by the appended claims and not by the preceding summary or the following detailed description.
Appendix 1
1. Introduction
Targeting the production of semantically meaningful and perceptually comfortable contents from raw multi-sensored data, we propose a computationally efficient production system, based on the divide-and-conquer paradigm. We summarize major factors of our target by three keywords, which are "completeness", "closeness" and "smoothness". Completeness stands for the integrity of view rendering. Closeness defines the fineness of detail description, and smoothness is a term referring to the continuity of both viewpoint movement and story-telling. By trading off among those factors, we develop methods for selecting optimal viewpoints and cameras to fit the display resolution and other user preferences, and for smoothing these sequences for a continuous and graceful story-telling. There is a long list of possible user preferences, such as the user's profile, the user's browsing history, and device capabilities. We summarize narrative preferences into four descriptors, i.e., user preferred team, user preferred player, user preferred event, and user preferred camera. All device constraints, such as display resolution, network speed, and decoder's performance, are abstracted as the preferred display resolution. We thus mainly discuss user preferences with these five elements in the present work.
The capability to take those preferences into account obviously depends on the knowledge captured about the scene through video analysis tools, e.g., detecting which team is offending or defending. However and more importantly, it is worth mentioning that our framework is generic in that it can include any kind of user preferences.
In Section 2, we explain the estimation framework of both selection and smoothing of viewpoints and camera views, and give their detailed formulation and implementation. In Section 3, more technical details are given and experiments are made to verify the efficiency of our system. Finally, we conclude this work and list a number of possible paths for future research.
2. Autonomous Production of Personalized Basketball Videos from Multi-sensored Data
Although it is difficult to define an absolute rule to evaluate the performance of organized stories and determined viewpoints in presenting a generic scenario, production of sport videos has some general principles. [11] For basketball games, we summarize these rules into three major trade-offs.
The first trade-off arises from the personalization of the production. Specifically, it originates from the conflict between preserving general production rules of sports videos and maximizing satisfaction of user preferences. Some basic rules of video production for basketball games cannot be sacrificed for better satisfaction of user preferences, e.g., the scene must always include the ball, and a well-balanced weighting should be struck between the dominant player and the user-preferred player when rendering an event.
The second trade-off is the balance between completeness and closeness of the rendered scene. The intrinsic interest of basketball games partially comes from the complexity of team working, whose clear description requires spatial completeness in camera coverage. However, many highlighted activities usually happen in a specific and bounded playing area. A close view emphasizing those areas increases the emotional involvement of the audience with the play, by moving the audience closer to the scene. Closeness is also required to generate a view of the game with sufficient spatial resolution under a situation with limited resources, such as small display size or limited bandwidth resources of handheld devices.
The final trade-off balances accurate pursuit of actions of interest along the time, and smoothness of viewpoint movement. The need for the audience to know the general situation regarding the game throughout the contest is a primary requirement and main purpose of viewpoint switching. When we mix angles of different cameras for highlighting or other special effects, smoothness of camera switching should be kept in mind to help the audience to rapidly re-orient the play situation after viewpoint movements. [11]
Given the meta-data gathered from multi-sensor video data, we plan viewpoint coverage and camera switching by considering the above three tradeoffs. We give an overview of our production framework in section 2.1, and introduce some notations on meta-data in section 2.2. In section 2.3, we propose our criteria for selecting viewpoint and camera on an individual frame. Smoothing of viewpoint and camera sequences is explained in section 2.4.
2.1. Overview of the Production Framework
It is unavoidable to bring discontinuity to story-telling contents when switching camera views. In order to suppress the influence of this discontinuity, we usually locate dramatic viewpoint or camera switching during the gap between two highlighted events, to avoid possible distraction of the audience from the story. Hence, we can envision our personalized production in the divide and conquer paradigm, as shown in Fig. 1. The whole story is first divided into several segments. Optimal viewpoints and cameras are determined locally within each segment by trading off between benefits and costs under specified user preferences. Furthermore, estimation of optimal cameras or viewpoints is performed in a hierarchical structure. The estimation phase takes bottom-up steps from all individual frames to the whole story. Starting from a standalone frame, we optimize the viewpoint in each individual camera view, determine the best camera view from multiple candidate cameras under the selected viewpoints, and finally organize the whole story. When we need to render the story to the audience, top-down processing is taken, which first divides the video into non-overlapped segments. Corresponding frames for each segment are then picked up, and are displayed on the target device with the specified cameras and viewpoints.
Intrinsic hierarchical structure of basketball games provides reasonable grounds for the above vision, and also gives clues on segment separation. As shown in Fig. 2, a game is divided by rules into a sequence of non-overlapped ball-possession periods. A ball-possession period is the period of game when the same team holds the ball and makes several trials of scoring. Within each period, several events might occur during the offence/defence process. According to whether the event is related to the 24-second shot clock, events in a basketball game can be classified as clock-events and non-clock-events. Clock-events will not overlap with each other, while non-clock-events might overlap with both clock- and non-clock-events. In general, one ball-possession period is a rather fluent period and requires period-level continuity of viewpoint movement.
In this paper, we first define the criteria for evaluating viewpoints and cameras on each individual frame. Camera-wise smoothness of viewpoints is then applied to all frames within each ball possession period. Based on determined viewpoints, a camera sequence is selected and smoothed.
2.2. Meta-data and User Preference
The input data fed into our system include video data, associated meta-data, and user preferences. Let us assume that we have gathered a database of basketball video sequences, captured simultaneously by K different cameras. All cameras are loosely synchronized and produce the same number of frames, i.e., N frames per camera. In the i-th frame, captured at time t_i, M_i different salient objects, denoted by {o_im | m = 1, ..., M_i}, are detected in total from all camera views. We define two kinds of salient objects. The first class includes the regions of players, referees, and the ball, which are used for scene understanding. The second class includes the basket, the coach bench, and some landmarks of the court, which are used in both scene understanding and camera calibration. Objects of the first class are automatically extracted from the scene, typically based on a background subtraction algorithm, while those of the second class are manually labeled because their positions are constant for fixed cameras. We define the m-th salient object as o_im = {o_kim | k = 1, ..., K}, where o_kim is the m-th salient object in the k-th camera.
All salient objects are represented by regions of interest. A region r is a set of pixel coordinates belonging to that region. If o_im does not appear in the k-th camera view, we set o_kim to the empty set ∅. With r_1 and r_2 being two arbitrary regions, we first define several elemental functions on one or two regions as

Area: A(r_1) = |r_1| ; (1)

Center: C(r_1) = (1/A(r_1)) Σ_{x∈r_1} x ; (2)

Visibility: V(r_1|r_2) = +1 if r_1 is fully contained in r_2, and −1 otherwise ; (3)

Distance: D(r_1, r_2) = ||C(r_1) − C(r_2)|| ; (4)

which will be used in later sections.
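A minimal sketch of the four elemental functions (1)-(4), with a region represented as a Python set of (x, y) pixel coordinates. The ±1 convention for the visibility function is our assumption, consistent with the later remark that V(o_kim|v_ki) is positive only when the object is fully contained in the viewpoint.

```python
import math

def area(r):                       # (1)  A(r): number of pixels in r
    return len(r)

def center(r):                     # (2)  C(r): mean pixel coordinate
    n = len(r)
    return (sum(x for x, _ in r) / n, sum(y for _, y in r) / n)

def visibility(r1, r2):            # (3)  V(r1|r2): +1 iff r1 lies fully in r2
    return 1 if r1 <= r2 else -1   # set inclusion (assumed sign convention)

def distance(r1, r2):              # (4)  D(r1, r2): distance between centers
    (x1, y1), (x2, y2) = center(r1), center(r2)
    return math.hypot(x1 - x2, y1 - y2)
```

For example, a 2x2 patch inside a 4x4 patch has area 4, center (0.5, 0.5), and visibility +1 with respect to the larger patch.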
Furthermore, we define user preference by a parameter set u, which includes both narrative and restrictive preferences, such as favorites and device capabilities.
2.3. Selection of Camera and Viewpoints on Individual Frames
For simplicity, we put aside the smoothing problem in the first step, and start by considering the selection of a proper viewpoint on each standalone frame. We use the following two subsections to explain our solution to this problem from two aspects, i.e., evaluation of various viewpoints on the same camera view and evaluation of different camera views.
2.3.1. Computation of Optimal Viewpoints in Each Individual Camera
Although the evaluation of viewpoints is a highly subjective task that still lacks an objective rule, we have some basic requirements for our viewpoint selection. It should be computationally efficient, and should be adaptable to different device resolutions. For a device with a high display resolution, we usually prefer a complete view of the whole scene. When the resolution is limited due to device or channel constraints, we have to sacrifice part of the scene for an improved representation of local details. For an object just next to the viewpoint border, it should be included to improve the overall completeness of the story-telling if it shows high relevance to the current event in later frames, and it should be excluded to prevent the viewpoint sequence from oscillating if it always appears around the border. In order to keep a safe area to deal with this kind of object, we prefer that visible salient objects inside the determined viewpoint be closer to the center, while invisible objects should be driven as far away from the border of the viewpoint as possible.
We let the viewpoint for scene construction in the i-th frame of the k-th camera be v_ki. Viewpoint v_ki is defined as a rectangular region. For a natural representation of the scene, we constrain the aspect ratio of all viewpoints to equal the aspect ratio of the display device. Therefore, for each v_ki, we have only three free parameters to tune, i.e., the horizontal center v_kix, the vertical center v_kiy, and the width v_kiw. The individually optimal viewpoint is obtained by maximizing the interest gain of applying viewpoint v_ki to the i-th frame of the k-th camera, which is defined as a weighted sum of attentional interests from all visible salient objects in that frame, i.e.,

I_ki(v_ki|u) = Σ_m w_kim(v_ki, u) · I(o_kim|u), (5)

where I(o_kim|u) is the interest of a salient object o_kim under user preference u. In the present paper, the pre-defined interest function I(o_kim|u) assigns different weightings according to different values of u, which reflects narrative user preferences. For instance, a player specified by the audience is assigned a higher interest than a player not specified, and the ball is given the highest interest so that it is always included in the scene. We explain a practical setting of I(o_kim|u) in more detail in the next section.
We define w_kim(v_ki, u) to weight the attentional significance of a single object within a viewpoint. Mathematically, we take w_kim(v_ki, u) in the following form,

w_kim(v_ki, u) = [V(o_kim|v_ki) / ln A(v_ki)] · exp(−D(o_kim, v_ki)² / u_DEV²), (6)
where we use u_DEV to denote the limitation of the current device resolution in user preference u. Our definition of w_kim(v_ki, u) consists of three major parts: the exponential part, which controls the concentrating strength of salient objects around the center according to the pixel resolution of the device display; the zero-crossing part V(o_kim|v_ki), which separates positive interests from negative interests at the border of the viewpoint; and the appended fraction part 1/ln A(v_ki), which calculates the density of interests to evaluate the closeness and is set as a logarithm function. Note that V(o_kim|v_ki) is positive only when salient object o_kim is fully contained inside viewpoint v_ki, which shows the tendency of keeping a salient object intact in viewpoint selection. As shown in Fig.3, the basic idea of our definition is to change the relative importance of completeness and closeness by tuning the sharpness of the central peak and modifying the length of the tails. When u_DEV is small, the exponential part decays quite fast, which tends to emphasize objects closer to the center and ignore objects outside the viewpoint. When u_DEV gets larger, the penalties for invisible objects increase, which is an incentive to be complete and to display all salient objects. Therefore, I_ki(v_ki|u) describes the trade-off between completeness (displaying as many objects as possible) and fineness (rendering the objects at a higher resolution) of the scene description in individual frames.
A viewpoint that maximizes I_ki(v_ki|u) drives visible objects closer to the center and leads to greater separation of invisible objects from the center. We let v̂_ki be the optimal viewpoint computed individually for each frame, i.e.,

v̂_ki = arg max_{v_ki} I_ki(v_ki|u). (7)
Some examples of the optimal v̂_ki under different display resolutions are given in Fig.4.
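The per-frame optimum of Eq. (7) can be approximated by exhaustive search over a discrete candidate set. The sketch below paraphrases the gain: objects are reduced to points with scalar interests, and the weight combines a ±1 visibility term, a Gaussian fall-off scaled by the device resolution u_DEV, and a log-area normalization; the exact constants and the 4:3 aspect ratio are illustrative assumptions, not the system's values.

```python
import math

ASPECT = 4 / 3          # assumed display aspect ratio: height = width / ASPECT

def gain(vp, objects, u_dev):
    """Paraphrased interest gain I_ki(v_ki | u) of viewpoint vp = (cx, cy, w)."""
    cx, cy, w = vp
    h = w / ASPECT
    total = 0.0
    for ox, oy, interest in objects:             # point-like salient objects
        inside = abs(ox - cx) <= w / 2 and abs(oy - cy) <= h / 2
        visib = 1.0 if inside else -1.0          # zero-crossing part V(.|.)
        d2 = (ox - cx) ** 2 + (oy - cy) ** 2
        weight = visib * math.exp(-d2 / u_dev ** 2) / math.log(w * h)
        total += weight * interest
    return total

def best_viewpoint(objects, u_dev, widths, centers):
    """Exhaustive search over candidate centers and widths, as in Eq. (7)."""
    return max(((cx, cy, w) for cx, cy in centers for w in widths),
               key=lambda vp: gain(vp, objects, u_dev))
```

With a single object of interest, the search simply centers the viewpoint on it; with many objects and a small u_dev, distant objects contribute little and may be sacrificed.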
2.3.2. Selection of Camera Views for a Given Frame
Although we use data from multiple sensors, what really matters is not the number of sensors or their positioning, but the way we exploit those views to produce a unified virtual viewpoint that strikes a good balance between local emphasis of details and a global overview of the scenario. Since it is difficult to generate high-quality free-viewpoint videos with state-of-the-art methods, in the present work we only consider selecting a camera view from all available cameras, which makes our system more generic. We define c = {c_i} as a camera sequence, where c_i denotes the camera index for the i-th frame. A natural criterion in evaluating a camera view is that all salient objects should be clearly rendered, with few occlusions and high resolution. For the i-th frame in the k-th camera, we define the occlusion rate of salient objects from the ratio of the united area of the salient objects to the sum of their individual areas, i.e.,

R^occ_ki(v_ki) = [N_ki(v_ki) / (N_ki(v_ki) − 1)] · (1 − A[∪_m (o_kim ∩ v_ki)] / Σ_m A[o_kim ∩ v_ki]),

where N_ki(v_ki) is the number of salient objects visible in v_ki, and the factor N_ki(v_ki)/(N_ki(v_ki) − 1) normalizes the rate to the range from 0 to 1. We define the closeness of objects as the average pixel area used for rendering the objects, i.e.,

Π^cls_ki(v_ki) = log( (1/N_ki(v_ki)) Σ_m A[o_kim ∩ v_ki] ). (8)

We also define the completeness of this camera view as the percentage of included salient objects, i.e.,

Π^cpl_ki(v_ki, u) = N_ki(v_ki) / M_i. (9)

Accordingly, the interest gain of choosing the k-th camera for the i-th frame is evaluated by I_i(k|v_ki, u), which reads

I_i(k|v_ki, u) = w_k(u) · Π^cls_ki(v_ki) · Π^cpl_ki(v_ki, u) · exp[−(R^occ_ki(v_ki))² / σ²_occ]. (10)

We weight the support of the current user preference for camera k by w_k(u), which assigns a higher value to camera k if it is specified by the user and a lower value if it is not. We then define the probability of taking the k-th camera for the i-th frame under {v̂_ki} as

P(c_i = k | {v̂_ki}, u) = I_i(k|v̂_ki, u) / Σ_{k'} I_i(k'|v̂_k'i, u). (11)
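The camera-gain computation can be sketched as follows, with objects and the viewpoint represented as pixel sets; the [0,1] rescaling of the occlusion rate and the value of sigma_occ are assumptions made for illustration.

```python
import math

def camera_gain(objects, vp, m_total, w_k=1.0, sigma_occ=0.5):
    """Paraphrase of the per-frame camera gain of Eq. (10)."""
    clipped = [o & vp for o in objects if o & vp]   # objects visible in vp
    n = len(clipped)
    if n == 0:
        return 0.0
    union = len(set().union(*clipped))
    indiv = sum(len(c) for c in clipped)
    # occlusion rate: 0 when rendered objects do not overlap, 1 when they
    # coincide; the n/(n-1) factor rescales the union/sum ratio to [0, 1]
    occ = 0.0 if n == 1 else (1 - union / indiv) * n / (n - 1)
    closeness = math.log(indiv / n)                 # Eq. (8): avg object area
    completeness = n / m_total                      # Eq. (9): fraction shown
    return w_k * closeness * completeness * math.exp(-occ ** 2 / sigma_occ ** 2)

def camera_probs(gains):
    """Eq. (11): per-frame camera gains normalized into probabilities."""
    total = sum(gains)
    return [g / total for g in gains]
```

Two disjoint, fully visible objects yield zero occlusion rate and full completeness, so the gain reduces to the log of their average area.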
2.4. Generation of Smooth Viewpoint/Camera Sequences
A video sequence with individually optimized viewpoints will have obvious fluctuations, which lead to uncomfortable visual artifacts. We solve this problem by generating a smooth moving sequence of both cameras and viewpoints based on their individual optima. We use the graph in Fig.5 to explain this estimation procedure, which covers two steps of the whole system, i.e., camera-wise smoothing of viewpoint movements and generation of a smooth camera sequence based on the determined viewpoints. At first, we take v̂_ki as observed data and assume that they are noise-distorted outputs of some underlying smooth results v_ki. We use statistical inference to recover one smooth viewpoint sequence for each camera. Taking the camera gains of those derived viewpoints into consideration, we then generate a smooth camera sequence.
2.4.1. Camera-wise Smoothing of Viewpoint Movement
We start from the smoothness of the viewpoint movement in the video from a single camera. Two contradictory forces drive the optimization of the viewpoint movement: on the one hand, the optimized viewpoints should stay close to the optimal viewpoint of each individual frame; on the other hand, inter-frame smoothness of the viewpoints prevents dramatic switching from occurring. Accordingly, we model smooth viewpoint movement as a Gaussian Markov Random Field (MRF), where the camera-wise smoothness is modeled as the prior of the viewpoint configuration, i.e.,

P({v_ki}|u) = (1/Z_1) exp(−E_1({v_ki})), (12)

with

E_1({v_ki}) = Σ_i Σ_{j∈N_i} ||v_ki − v_kj||² / (2σ_1²), (13)

where N_i is the neighborhood of the i-th frame, while a conditional distribution

P({v̂_ki}|u, {v_ki}) = Π_i P(v̂_ki|u, v_ki), (14)

with

P(v̂_ki|u, v_ki) = (1/Z_2) exp(−||v̂_ki − v_ki||² / (2β_ki σ_2²)), (15)
describes the noise that produces the final results. We add a parameter β_ki to control the flexibility of the current frame in smoothing. A smaller β_ki can be set to increase the tendency of the current frame to approach its locally optimal viewpoint. Estimation of the optimal viewpoints {v_ki} is done by maximizing the posterior probability of {v_ki} given the observed {v̂_ki}, i.e., P({v_ki}|u, {v̂_ki}), which is expressed by a Gibbs canonical distribution [12], i.e.,

P({v_ki}|u, {v̂_ki}) = exp(−E^VP({v_ki})) / Σ_{{v_ki}} exp(−E^VP({v_ki})), (16)

with

E^VP({v_ki}) = Σ_i Σ_{j∈N_i} ||v_ki − v_kj||² / (2σ_1²) + Σ_i ||v̂_ki − v_ki||² / (2β_ki σ_2²). (17)
In statistical physics, the optimal configuration with the largest posterior probability is determined by minimizing the following free energy [13]:

F^VP = ⟨E^VP⟩ + ⟨ln P({v_ki}|u, {v̂_ki})⟩, (18)

where ⟨x⟩ is the expectation value of a quantity x. We then minimize this free energy under the normalization constraint of the posterior distribution,

F̃^VP = F^VP + η (1 − Σ_{{v_ki}} P({v_ki}|u, {v̂_ki})), (19)

where η is a Lagrangian multiplier. We use the mean-field approximation [13], which assumes that P({v_ki}|u, {v̂_ki}) ≈ Π_i P(v_ki|u, {v̂_ki}), to decouple two-body correlations. By taking the differential of F̃^VP with respect to P(v_kix|u, {v̂_ki}) and setting it to zero,

0 = ∂F̃^VP / ∂P(v_kix|u, {v̂_ki}), (20)

we obtain the optimal estimate of the posterior probability

P(v_kix|u, {v̂_ki}) ∝ exp( −Σ_{j∈N_i} (v_kix − ⟨v_kjx⟩)² / (2σ_1x²) − (v_kix − v̂_kix)² / (2β_ki σ_2x²) ). (21)
Since this is a Gaussian distribution, whose mean value has the maximal probability, the optimal value of v_kix is solved as

v*_kix = ⟨v_kix⟩ = ( σ_2x² β_ki Σ_{j∈N_i} ⟨v_kjx⟩ + v̂_kix σ_1x² ) / ( σ_2x² β_ki |N_i| + σ_1x² ), (22)

with the optimal results for v*_kiy (23) and v*_kiw (24) given by similar derivations. We use v*_ki in the following sections to denote the optimal viewpoint represented by v*_kix, v*_kiy, and v*_kiw.
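A sketch of the resulting fixed-point iteration for one viewpoint coordinate: each frame's estimate is a precision-weighted average of its neighbors' current means and its locally optimal value, so that a larger sigma2/sigma1 yields stronger smoothing and a smaller beta pulls the frame towards its local optimum. The ±1-frame neighborhood and the iteration count are illustrative choices.

```python
def smooth_sequence(v_hat, sigma1=1.0, sigma2=2.0, beta=1.0, iters=200):
    """Mean-field smoothing of one viewpoint coordinate (cf. Eq. 22)."""
    v = list(v_hat)                               # initialize at local optima
    for _ in range(iters):
        new = []
        for i, target in enumerate(v_hat):
            nbrs = [v[j] for j in (i - 1, i + 1) if 0 <= j < len(v)]
            # precision-weighted average of neighbours and the local optimum
            num = sum(nbrs) / sigma1 ** 2 + target / (beta * sigma2 ** 2)
            den = len(nbrs) / sigma1 ** 2 + 1 / (beta * sigma2 ** 2)
            new.append(num / den)
        v = new
    return v
```

A constant input is a fixed point of the update, while an isolated spike in the locally optimal widths is pulled towards its neighbors.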
2.4.2. Smoothing of Camera Sequence
A smooth camera sequence is then generated from the determined viewpoints. For brevity, we use p_ki ≡ log P(c_i = k|v*_ki, u), computed using Eq. (11), to shorten the formulation. We have to trade off between minimizing camera switching and maximizing the overall gain of the cameras. We use another MRF to model these two forces. The smoothness of the camera sequence is modeled by a Gibbs canonical distribution, which reads,
P({c_i}|{v*_ki}, u) = exp(−E^C({c_i})) / Σ_{{c_i}} exp(−E^C({c_i})), (25)

with

E^C({c_i}) = −γ Σ_{i,k} δ_{c_i,k} p_ki − Σ_i Σ_{j∈N_i} α_ij δ_{c_i,c_j}, (26)

where α_ij is a parameter that normalizes the relative strength of smoothing with respect to the size of the neighborhood, which reads

α_ij = 1/|N_i|. (27)
γ is a hyper-parameter for controlling the smoothing strength. We again use the mean-field approximation, which assumes that P({c_i}|{v*_ki}, u) ≈ Π_i P(c_i|{v*_ki}, u), to achieve the optimal estimation. We omit the detailed derivation and only show the final result, which derives the marginal probability of taking camera k for the i-th frame as

⟨δ_{c_i,k}⟩ = exp( γ p_ki + Σ_{j∈N_i} α_ij ⟨δ_{c_j,k}⟩ ) / Σ_{k'} exp( γ p_{k'i} + Σ_{j∈N_i} α_ij ⟨δ_{c_j,k'}⟩ ), (28)

which is iterated as a fixed-point updating rule until reaching convergence,

⟨δ_{c_i,k}⟩^(t+1) = exp( γ p_ki + Σ_{j∈N_i} α_ij ⟨δ_{c_j,k}⟩^(t) ) / Σ_{k'} exp( γ p_{k'i} + Σ_{j∈N_i} α_ij ⟨δ_{c_j,k'}⟩^(t) ). (29)

After convergence, we select the camera which maximizes ⟨δ_{c_i,k}⟩, i.e.,

c*_i = arg max_k ⟨δ_{c_i,k}⟩. (30)
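A sketch of this fixed-point update on the camera marginals ⟨δ_{c_i,k}⟩: gamma weights the per-frame log gains p_ki and alpha = 1/|N_i| normalizes the neighborhood term, as in the text, while the neighborhood radius and iteration count are illustrative.

```python
import math

def smooth_cameras(log_p, gamma=0.8, radius=2, iters=50):
    """log_p[i][k]: per-frame log camera gains; returns argmax camera path."""
    n, k = len(log_p), len(log_p[0])
    q = [[1.0 / k] * k for _ in range(n)]          # uniform initial marginals
    for _ in range(iters):
        new_q = []
        for i in range(n):
            nbrs = [j for j in range(max(0, i - radius), min(n, i + radius + 1))
                    if j != i]
            alpha = 1.0 / len(nbrs)
            score = [gamma * log_p[i][c] +
                     alpha * sum(q[j][c] for j in nbrs) for c in range(k)]
            z = [math.exp(s - max(score)) for s in score]  # Eq. (29) update
            new_q.append([x / sum(z) for x in z])
        q = new_q
    return [max(range(k), key=lambda c: q[i][c]) for i in range(n)]
```

A single frame that mildly prefers a different camera than its neighbors gets pulled back to the dominant camera, suppressing an isolated switch.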
3. Experimental Results and Discussions
We organized a data acquisition in the city of Namur, Belgium, under a real game environment, where seven cameras were used to record four games. All those videos are publicly distributed on the website of the APIDIS project [1], and a more detailed explanation of the acquisition settings can be found in Ref. [14]. Briefly, the cameras are all Arecont Vision AV2100M IP cameras, whose positions around the basketball court are shown in Fig.6. The fish-eye lenses used for the top-view cameras are Fujinon FE185C086HA-1 lenses. Frames from the seven cameras were all sent to a server, where the arrival time of each frame was used to synchronize the different cameras. In Fig.7, sample images from all seven cameras are given. Due to the limited number of cameras, we set most of the cameras to cover the left court. As a result, we mainly focus on the left court to investigate the performance of our system in personalized production of sports videos.
Since video production still lacks an objective rule for performance evaluation, many parameters are heuristically determined based on subjective evaluation. We defined several salient objects, and the relationship between object type and interest is given in Table 1. If the user shows special interest in one salient object, its weight is multiplied by a factor of 1.2. For camera-wise viewpoint smoothing, we set all β_ki to 1 in the following experiments. We also let σ_1x = σ_1y = σ_1w = σ_1 and σ_2x = σ_2y = σ_2w = σ_2.
A short video clip of about 1200 frames is used to demonstrate the behavioral characteristics of our system, especially its adaptivity under limited display resolution.

Table 1: Weighting of Different Salient Objects

The clip covers three ball-possession periods and includes five events in total. In Fig.8, we show the time spans of all events, whose most highlighted moments are also marked by red solid lines. In the final version of this project, meta-data should be generated by automatic understanding of the scene. In the present paper, which focuses on personalized production, we first evaluate our methods on manually collected meta-data. We will explore the efficiency of each individual processing step of our method, and then make an overall evaluation based on the finally generated outputs. Due to the page limitation, numerical results are given and depicted by graphs in the present paper, while their corresponding videos are only available on the website of the APIDIS project [1]. Reviewers are invited to download video samples produced based on different user preferences to subjectively evaluate the efficiency and relevance of the proposed approach.
We start by investigating the performance of our method for the individual selection of viewpoints. Camera-wise sequences of viewpoints automatically determined by our method are collected in Fig.9, where the widths of the optimal viewpoints under three display resolutions, i.e., 160x120, 320x240, and 640x480, are displayed for all seven cameras. Weak viewpoint smoothing has been applied to improve the readability of the generated videos, with the smoothing strength set to σ_2/σ_1 = 4. From the comparison of results under the three display resolutions, the most obvious finding is that a higher display resolution leads to a larger viewpoint width, while a lower display resolution prefers a smaller viewpoint size, just as we expected from our selection criterion. Since cameras 1, 6, and 7 only cover half the court, their viewpoint sizes are fixed when all players are in the other half of the court, which explains the flat segments in the corresponding sub-graphs. From the video data, we can further confirm that even when the display resolution is very low, our system extracts a viewpoint of a reasonable size, in which the ball is scaled to a visible size. Although in some frames only the ball is displayed at the lowest display resolution, this does not cause a problem, because those frames will be filtered out by the subsequent camera selection.
Viewpoint sizes of smoothed sequences under different smoothing strengths are compared in Fig.10(a). With all other parameters kept the same, the ratio of σ_2 to σ_1 is tuned for all five cases. A higher ratio of σ_2 to σ_1 corresponds to a stronger smoothing process, while a smaller ratio means weaker smoothing. When σ_2/σ_1 = 1, where very weak smoothing is applied, we obtain a rather jagged sequence, which results in a flickering video with many dramatic viewpoint movements. As the σ_2/σ_1 ratio increases, the curve of viewpoint movement exhibits fewer sharp peaks, which provides perceptually more comfortable content. Another important observation is that the generated sequences can differ considerably from our initial saliency-based selection if too strong a smoothing is performed with a very large σ_2/σ_1. This causes problems such as the favorite player or the ball falling outside the smoothed viewpoint. The ratio σ_2/σ_1 should therefore be determined by considering the trade-off between locally optimized viewpoints and globally smoothed viewpoint sequences. By visually checking the generated videos, we consider that results with weak smoothing, such as σ_2/σ_1 = 4, are already perceptually acceptable, as can be seen in the demo video.
We then verify our smoothing algorithm for the camera sequence. Smoothed camera sequences under various smoothing strengths γ are depicted in Fig.10(b). The smoothing process takes the probability defined in Eq. (11) as initial values, and iterates the fixed-point updating rule with a neighborhood of size thirty until convergence. A camera sequence without smoothing corresponds to the topmost sub-graph in Fig.10(b), while the sequence with the strongest smoothing is plotted in the bottom sub-graph. It is clear that there are many dramatic camera switches in an unsmoothed sequence, which leads to even more annoying visual artifacts than fluctuating viewpoint positions, as can be seen in the generated videos. Therefore, we prefer strong smoothing of camera sequences and use γ = 0.8 in the following experiments.
In Fig.11(a) and (b), we compare the viewpoints and cameras of the generated sequences with respect to different display resolutions, respectively. From top to bottom, we show results for display resolutions u_DEV = 160, 320, and 640 in three sub-graphs. When the same camera is selected, we observe that a larger viewpoint is preferred at a higher display resolution. When different cameras are selected, we need to consider both the position of the selected camera and the position of the determined viewpoint in evaluating the coverage of the output scene. Again, we confirm that the sizes of viewpoints increase when the display resolution becomes larger. Before the 400-th frame, the event occurs in the right court. We find that the 3-rd camera, i.e., the top view with the wide-angle lens, appears more often in the sequence for u_DEV = 640 than in that for u_DEV = 160, and its viewpoints are also broader, which shows that a larger resolution prefers a wider view. Although the 2-nd camera appears quite often for u_DEV = 160, its corresponding viewpoints are much smaller in width. This camera is selected because it provides a side view of the right court with the salient objects gathered closer together than in other camera views, due to the projective geometry. For the same reason, the 3-rd camera appears more often for u_DEV = 160 when the game moves to the left court, from the 450-th frame to the 950-th frame. This conclusion is further confirmed by the thumbnails in Fig.12, where frames from index 100 to 900 are arranged into a table for the above three display resolutions.
Due to the fact that different cameras were selected, the viewpoints determined under u_DEV = 640 appear closer than those under u_DEV = 320 in the last five columns of Fig.12. This reflects an inconsistency in the relative importance of completeness and closeness in viewpoint selection.
Since only the center points of salient objects are considered in the criteria for viewpoint selection, the resulting viewpoints are not continuous across different resolutions. Although camera 7 is similar to camera 1 with linear zooming-in, their optimal viewpoints might place different emphases on completeness and closeness. This inconsistency also exists in the separated selection of cameras and viewpoints. If viewpoint selection focuses more on closeness and camera selection focuses more on completeness, a small cropping area on camera 7 will first be selected during viewpoint selection for u_DEV = 320, and then be rejected in the subsequent camera selection due to insufficient completeness. Subjective tests will help us tune the relative weighting of completeness and closeness. It is more important to implement simultaneous selection of viewpoints and cameras, which requires both the inclusion of positional information of the cameras, such as using homography, and an analytically solvable criterion for viewpoint selection. These issues are our major work in the near future.
In all the above experiments, no narrative user preferences were included. If the user has special interest in a certain camera view, we can assign a higher weighting w_k(u) to the specified camera. In our case, we set w_k(u) = 1.0 for non-specified cameras and w_k(u) = 1.2 for a user-specified camera. We compare the camera sequences under different preferences in Fig.13. As can easily be seen from the graph, a camera appears more often when it is specified, which reflects the user preference on camera views. As for user preferences on teams or players, the difference between viewpoints with and without user preferences is difficult to tell without a well-defined evaluation rule, because all players are always clustered together during the game. In fact, we are more interested in reflecting user preferences on players or teams by extracting their relevant frames. We thus omit the results on player or team selection, and will explore them later along with results from our future work on video summarization.
4. Concluding Remarks
An autonomous system for producing personalized videos from multiple camera views has been proposed. We discussed the automatic adaptation of viewpoints with respect to display resolution and scenario contents, data fusion within multiple camera views, and the smoothness of viewpoint and camera sequences for fluent story-telling. There are four major advantages to our methods: 1) Semantic orientation. Rather than using low-level features such as edges or the appearance of frames, our production is based on semantic understanding of the scenario, which can deal with more complex semantic user preferences. 2) Computational efficiency. We take a divide-and-conquer strategy and consider hierarchical processing, which is efficient in dealing with long video contents because its overall time is almost linearly proportional to the number of events included. 3) Genericity. Since our sub-methods in each individual step are all independent of the definition of salient objects and interests, this framework is not limited to basketball videos, but can also be applied to other controlled scenarios. 4) Unsupervised operation. Although there are some parameters left for users to set, the system is unsupervised.
Appendix 2
The methods presented in this paper aim at detecting and recognizing players on a sport field, based on a distributed set of loosely synchronized cameras. Detection assumes player verticality, and sums the cumulative projection of the multiple views' foreground activity masks on a set of planes that are parallel to the ground plane. After summation, large projection values indicate the position of the player on the ground plane. This position is used as an anchor for the player bounding box projected in each one of the views. Within this bounding box, the regions provided by mean-shift segmentation are sorted out based on contextual features, e.g. relative size and position, to select the ones that are likely to correspond to a digit. Normalization and classification of the selected regions then provides the number and identity of the player. Since the player number can only be read when it faces towards the camera, graph-based tracking is considered to propagate the identity of a player along its trajectory.

I. INTRODUCTION

In today's society, content production and content consumption are confronted with a fundamental mutation. Two complementary trends are observed. On the one hand, individuals become more and more heterogeneous in the way they access the content. They want to access dedicated content through a personalized service, able to provide what they are interested in, when they want it and through the communication channel of their choice. On the other hand, individuals and organizations get easier access to the technical facilities required to be involved in the content creation and diffusion process.

In this paper, we describe video analysis tools that participate to the future evolutions of the content production industry towards automated infrastructures allowing content to be produced, stored, and accessed at low cost and in a personalized and dedicated manner. More specifically, our targeted application considers the autonomous and personalized summarization of sport events, without the need for costly handmade processes. In the application scenario supported by the provided dataset, the acquisition sensors cover a basket-ball court. Distributed analysis and interpretation of the scene is then exploited to decide what to show about an event, and how to show it, so as to produce a video composed of a valuable subset of the streams provided by each individual camera. In particular, the position of the players provides the required input to drive the autonomous selection of viewpoint parameters [5], whilst identification and tracking of the detected players supports personalization of the summary, e.g. through highlight and/or replay of a preferred player's actions [4].

II. SYSTEM OVERVIEW

To demonstrate the concept of autonomous and personalized production, the European FP7 APIDIS research project (www.apidis.org) has deployed a multi-camera acquisition system around a basket-ball court. The acquisition setting consists in a set of 7 calibrated IP cameras, each one collecting 2 Mpixel frames at a rate higher than 20 frames/sec. After an approximate temporal synchronization of the video streams, this paper investigates how to augment the video dataset based on the detection, tracking, and recognition of players.

Figure 1 surveys our proposed approach to compute and label players tracks. After joint multiview detection of people standing on the ground field at each time instant, a graph-based tracking algorithm matches positions that are sufficiently close -in position and appearance- between successive frames, thereby defining a set of potentially interrupted disjoint tracks, also named partial tracks. In parallel, as depicted in Figure 5, image analysis and classification is considered for each frame of each view, to recognize the digits that potentially appear on the shirts of detected objects. This information is then aggregated over time to label the partial tracks.

The major contributions of this paper have to be found in the proposed people detection solution, which is depicted in Figure 2. In short, the detection process follows a bottom-up approach to extract denser clusters in a ground plane occupancy map that is computed based on the projection of foreground activity masks. Two fundamental improvements are proposed compared to the state-of-the-art. First, the foreground activity mask is not only projected on the ground plane, as recommended in [9], but on a set of planes that are parallel to the ground. Second, an original heuristic is implemented to handle occlusions, and alleviate the false detections occurring at the intersection of the masks projected from distinct players' silhouettes by distinct views. Our simulations demonstrate that those two contributions quite significantly improve the detection performance.

The rest of the paper is organized as follows. Sections III, V, and IV respectively focus on the detection, tracking, and recognition problems. Experimental results are presented in Section VI to validate our approach. Section VII concludes.

Part of this work has been funded by the FP7 European project APIDIS, and by the Belgian NSF.
Fig. 1. Players tracks computation and labeling pipeline. The dashed arrow reflects the optional inclusion of the digit recognition results within the appearance model considered for tracking.
III. MULTI-VIEW PEOPLE DETECTION
Keeping track of people who occlude each other using a set of C widely spaced, calibrated, stationary, and (loosely) synchronized cameras is an important question, because this kind of setup is common to applications ranging from (sport) event reporting to surveillance in public spaces. In this section, we consider a change detection approach to infer the position of players on the ground field at each time instant.
A. Related work
Detection of people from the foreground activity masks computed in multiple views has been investigated in detail in the past few years. We differentiate two classes of approaches.
On the one hand, the authors in [9], [10] adopt a bottom-up approach, and project the points of the foreground likelihood (background subtracted silhouettes) of each view to the ground plane. Specifically, the change probability maps computed in each view are warped to the ground plane based on homographies that have been inferred off-line. The projected maps are then multiplied together and thresholded to define the patches of the ground plane for which the appearance has changed compared to the background model and according to the single-view change detection algorithm.

Fig. 2. Multi-view people detection. Foreground masks are projected on a set of planes that are parallel to the ground plane to define a ground plane occupancy map, from which players' position is directly inferred.
On the other hand, the works in [2], [7], [1] adopt a top-down approach. They consider a grid of points on the ground plane, and estimate the probability of occupancy of each point in the grid based on the back-projection of some kind of generative model in each one of the calibrated multiple views. Hence, they all start from the ground plane, and validate occupancy hypotheses based on an associated appearance model in each one of the views. The approaches proposed in this second category mainly differ in the kind of generative model they consider (rectangle or learned dictionary), and in the way they decide about occupancy in each point of the grid (combination of multiple view-based classifiers in [2], probabilistic occupancy grid inferred from background subtraction masks in [7], and sparsity constrained binary occupancy map for [1]).

The first category of methods has the advantage of being computationally efficient, since the decision about ground plane occupancy is directly taken from the observation of the projection of the change detection masks of the different views. In contrast, the complexity of the second category of algorithms depends on the number of ground plane points to be investigated (chosen to limit the area to be monitored), and on the computational load associated with the validation of each occupancy hypothesis. This validation process generally involves back-projection of a 3D-world template in each one of the views. In that respect, we note that, due to lens and projection distortions, even the warping of a simple 3D rectangular template generally results in non-rectangular patterns in each one of the views, thereby preventing the use of computationally efficient integral image techniques. Hence, in most practical cases, the second kind of approach is significantly more complex than the first one. In return, it offers increased performance, since not only the feet but the entire object silhouette is considered to make a decision.

Our approach is an attempt to take the best out of both categories. It proposes a computationally efficient bottom-up approach that is able to exploit the entire a priori knowledge we have about the object silhouette. Specifically, the bottom-up computation of the ground occupancy mask described in Section III-B exploits the fact that the basis of the silhouette lies on the ground plane (similarly to previous bottom-up solutions), but also that the silhouette is a roughly rectangular vertical shape (which was previously reserved to top-down approaches). As a second contribution, Section III-C proposes a simple greedy heuristic to resolve the interference occurring between the silhouettes projected from distinct views by distinct objects. Our experimental results reveal that this interference was the source of many false detections while inferring the actual object positions from the ground occupancy mask. Until now, this phenomenon had only been taken into account by the top-down approach described in [7], through a complex iterative approximation of the joint posterior probabilities of occupancy. In contrast, whilst approximate, our approach appears to be both efficient and effective.

B. Proposed approach: ground plane occupancy mask computation

Similar to [9], [10], [7], [1], our approach carries out single-view change detection independently on each view to compute a change probability map. To this purpose, a conventional background subtraction algorithm based on mixture of Gaussians modeling is implemented. To fuse the resulting binary foreground silhouettes, our method projects them to build a ground occupancy mask. However, in contrast to previous bottom-up approaches [9], [10], we do not consider projection on the ground plane only, but on a set of planes that are parallel to the ground plane, and cut the object to detect at different heights. Under the assumption that the object of interest stands roughly vertically, the accumulation of all those projections on a virtual top view plane actually reflects ground plane occupancy. This section explains how the mask associated to each view is computed. The next section investigates how to merge the information provided by the multiple views to detect people.

Formally, the computation of the ground occupancy mask G_i associated to the ith view is described as follows. At a given time, the ith view is the source of a binary background subtracted silhouette image B_i ∈ {0,1}^{M_i}, where M_i is the number of pixels of camera i, 1 ≤ i ≤ C. As explained above, B_i is projected on a set of L reference planes that are defined to be parallel to the ground plane, at regular height intervals, and up to the typical height of a player. Hence, for each view i, we define G_i^j to be the projection of the ith binary mask on the jth plane. G_i^j is computed by applying the homography warping each pixel from camera i to its corresponding position on the jth reference plane, with 0 ≤ j < L. By construction, points from B_i that are labeled to 1 because of the presence of a player in the jth reference plane project to the corresponding top view position in G_i^j. Hence, the summation G_i of the projections obtained at different heights and from different views is expected to highlight top view positions of vertically standing players.

As L increases, the computation of G_i in a ground position x tends towards the integration of the projection of B_i on a vertical segment anchored in x. This integration can equivalently be computed in B_i along the back-projection of the vertical segment. To further speed up the computations, we observe that, through an appropriate transformation of B_i, it is possible to shape the back-projected integration domains so that they correspond to segments of vertical lines in the transformed view, thereby making the computation of integrals particularly efficient through the principle of integral images. Figure 3 illustrates that specific transformation for one particular view. The transformation has been designed to address a double objective. First, points of the 3D space located on the same vertical line have to be projected on the same column in the transformed view (vertical vanishing point at infinity). Second, vertical objects that stand on the ground and whose feet are projected on the same horizontal line of the transformed view have to keep the same projected height ratios.

Once the first property is met, the 3D points belonging to the vertical line standing above a given point from the ground plane simply project on the column of the transformed view that stands above the projection of the 3D ground plane point. Hence, G_i(x) is simply computed as the integral of the transformed view over this vertical back-projected segment. Preservation of heights along the lines of the transformed view even further simplifies computations. For side views, these two properties can be achieved by virtually moving -through homography transforms- the camera viewing direction (principal axis) so as to bring the vertical vanishing point at infinity and ensure the horizon line is horizontal. For top views, the principal axis is set perpendicular to the ground and a polar mapping is performed to achieve the same properties. Note that in some geometrical configurations, these transformations can induce severe skewing of the views.

C. Proposed approach: people detection from ground occupancy

Given the ground occupancy masks G_i for all views, we now explain how to infer the position of the people standing on the ground. A priori, we know that (i) each player induces a dense cluster on the sum of ground occupancy masks, and (ii) the number of people to detect is equal to a known value K, e.g. K = 12 for basket-ball (players + referees). For this reason, in each ground location x, we consider the sum of all projections -normalized by the number of views that actually cover x-, and look for the highest intensity spots in this aggregated ground occupancy mask (see Figure 2 for an example of an aggregated ground occupancy mask). To locate those spots, we have first considered a naive greedy approach that is equivalent to an iterative matching pursuit procedure. At each step, the matching pursuit process maximizes the inner product between a translated Gaussian kernel and the aggregated ground occupancy mask. The position of the kernel which induces the larger inner product defines the player position. Before running the next iteration, the contribution of the Gaussian kernel is subtracted from the aggregated mask to
Fig. 3. Efficient computation of the ground occupancy mask: the original view (on the left) is mapped to a plane through a combination of homographies that are chosen so that (1) verticality is preserved during projection from the 3D scene to the transformed view, and (2) the ratio of heights between the 3D scene and the projected view is preserved for objects that lie on the same line in the transformed view.

produce a residual mask. The process iterates until sufficient players have been found.
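The naive matching pursuit procedure described above can be sketched as follows. This is an illustrative numpy version, not the authors' code; the kernel radius and standard deviation are assumptions, and the aggregated mask is treated as a plain 2D array:

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """Unit-norm Gaussian atom whose support matches a player's width."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / np.linalg.norm(k)

def detect_players(aggregated, n_players, radius=2, sigma=1.0):
    """Greedy matching pursuit on the aggregated ground occupancy mask.

    At each step, the kernel position maximizing the inner product with
    the residual mask is taken as a player position, and the kernel's
    contribution is subtracted before the next iteration.
    """
    residual = np.pad(aggregated.astype(float), radius)  # avoid border checks
    kernel = gaussian_kernel(radius, sigma)
    positions = []
    for _ in range(n_players):
        best, best_pos = -np.inf, None
        for r in range(radius, residual.shape[0] - radius):
            for c in range(radius, residual.shape[1] - radius):
                win = residual[r - radius:r + radius + 1,
                               c - radius:c + radius + 1]
                score = float((win * kernel).sum())
                if score > best:
                    best, best_pos = score, (r, c)
        r, c = best_pos
        positions.append((r - radius, c - radius))  # back to unpadded coords
        # subtract the matched atom's contribution to get the residual mask
        residual[r - radius:r + radius + 1, c - radius:c + radius + 1] -= best * kernel
    return positions
```

Because the atom has unit norm, the subtraction makes the residual orthogonal to the selected atom, so the same spot is not picked twice.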
This approach is simple, but suffers from many false detections at the intersection of the projections of distinct players' silhouettes from different views. This is due to the fact that occlusions induce non-linearities¹ in the definition of the ground occupancy mask. Hence, once some people are known to be present on the ground field, they affect the information that can be retrieved from the binary change masks in each view. In particular, if the vertical line associated to a position x is occluded by, or occludes, another player whose presence is very likely, this particular view should not be exploited to decide whether there is a player in x or not.
For this reason, we propose to refine our naive approach as follows.
To initialize the process, we define G_i^0(x) to be the ground occupancy mask G_i associated to the ith view (see Section III-B), and set ω_i^0(x) to 1 when x is covered by the ith view, and to 0 otherwise. Each iteration is then run in two steps. At iteration n, the first step searches for the most likely position of the nth player, knowing the positions of the (n-1) players located in previous iterations. The second step updates the ground occupancy masks of all views to remove the contribution of the newly located player.

Formally, the first step of iteration n aggregates the ground occupancy masks from all views, and then searches for the densest cluster in this mask. Hence, it computes the aggregated mask G^n at iteration n as

G^n(x) = [ Σ_{i=1}^{C} ω_i^n(x) G_i^n(x) ] / [ Σ_{i=1}^{C} ω_i^n(x) ],    (1)

and then defines the most likely position x_n for the nth player by

x_n = argmax_y < G^n(x), k(y) >,    (2)

where k(y) denotes a Gaussian kernel centered in y whose spatial support corresponds to the typical width of a player.

In the second step, the ground occupancy mask of each view is updated to account for the presence of the nth player. In the ground position x, we consider that the typical support of a player silhouette in view i is a rectangular box of width W and height H, and observe that the part of the silhouette that occludes, or is occluded by, the newly detected player does not bring any information about the potential presence of a player in position x. Let α_i(x, x_n) denote the fraction of the silhouette in ground position x that becomes non-informative in view i as a consequence of the presence of a player in x_n. To estimate this ratio, we consider the geometry of the problem. Figure 4 depicts a plane V_i that is orthogonal to the ground, while passing through the ith camera and the player position x_n. In V_i, we consider two points of interest, namely b_i and f_i, which correspond to the points at which the rays originated in the ith camera and passing through the head and feet of the player intersect the ground plane and the plane parallel to the ground at height H, respectively. We denote f_i (b_i) to be the distance between f_i (b_i) and the vertical line supporting player n in V_i. We also consider p_i(x) to denote the orthogonal projection of x on V_i, and let d_i(x) measure the distance between x and V_i. Based on those definitions, the ratio α_i(x, x_n) is estimated by

α_i(x, x_n) = [ (δ − min(||p_i(x) − x_n||, δ)) / δ ] · [ 1 − min(d_i(x)/W, 1) ],    (3)

with δ being equal to f_i or b_i, depending on whether p_i(x) lies ahead of or behind x_n with respect to the camera. In (3), the first and second factors reflect the misalignment of x and x_n within V_i and orthogonally to V_i, respectively.

Fig. 4. Impact of occlusions on the update of the ground occupancy mask associated to camera i. The dashed parts of the vertical silhouettes standing in p_i(x_1) and p_i(x_2) are known to be labeled as foreground since a player is known to be standing in x_n. Hence they become useless to infer whether a player is located in x_1 and x_2, respectively.

¹ In other words, the ground occupancy mask of a group of players is not equal to the sum of the ground occupancy masks projected by each individual player.
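The geometry behind Eq. (3) can be sketched as follows. This is an illustrative helper only: the distances f_i and b_i are assumed to be precomputed from the camera calibration, positions are 2D ground coordinates, and the function name is ours:

```python
import numpy as np

def occlusion_ratio(x, x_n, cam, f_i, b_i, W):
    """Fraction of the silhouette at ground position x made non-informative
    in one view by a player detected at x_n (sketch of Eq. (3)).

    cam      : ground-plane position of the camera
    f_i, b_i : distances from x_n to the ground intersections of the rays
               through the player's head and feet (see Fig. 4)
    W        : typical silhouette width
    """
    x, x_n, cam = map(np.asarray, (x, x_n, cam))
    axis = x_n - cam
    axis = axis / np.linalg.norm(axis)            # direction of the plane V_i
    along = np.dot(x - cam, axis)                 # abscissa of p_i(x) along V_i
    d = np.linalg.norm((x - cam) - along * axis)  # distance d_i(x) to V_i
    ahead = along > np.dot(x_n - cam, axis)       # p_i(x) beyond x_n?
    delta = f_i if ahead else b_i
    misalign = np.linalg.norm(along * axis + cam - x_n)  # ||p_i(x) - x_n||
    first = (delta - min(misalign, delta)) / delta
    second = 1.0 - min(d / W, 1.0)
    return first * second
```

The ratio is 1 when x coincides with x_n, and falls to 0 once x drifts farther than δ along the camera ray, or farther than W sideways.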
Given α_i(x, x_n), the ground occupancy mask and the aggregation weight of the ith camera in position x are updated as follows:

G_i^{n+1}(x) = (1 − α_i(x, x_n)) · G_i^n(x),    (4)

ω_i^{n+1}(x) = max(ω_i^n(x) − α_i(x, x_n), 0).    (5)

For improved computational efficiency, we limit the positions x investigated in the refined approach to the 30 local maxima that have been detected by the naive approach.
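One pass of the refined detector can then be sketched as below. This is illustrative only: Eq. (2)'s kernel matching is reduced to a plain argmax over the candidate positions (which are assumed to already be the naive approach's local maxima), and the occlusion ratio of Eq. (3) is supplied as a callable:

```python
import numpy as np

def refined_detection(G0, coverage, candidates, alpha_fn, K):
    """Refined greedy detection, sketching Eqs. (1), (2), (4), (5).

    G0        : per-view occupancy values G_i^0 at the candidate positions
    coverage  : per-view 0/1 arrays, 1 where the view covers the position
    candidates: investigated ground positions (e.g. 30 naive local maxima)
    alpha_fn  : alpha_fn(i, x, x_n) -> Eq. (3) ratio for view i
    K         : number of people to detect
    """
    G = [g.astype(float).copy() for g in G0]
    w = [c.astype(float).copy() for c in coverage]
    players = []
    for _ in range(K):
        num = sum(wi * gi for wi, gi in zip(w, G))
        den = sum(w)
        agg = np.where(den > 0, num / np.maximum(den, 1e-9), 0.0)  # Eq. (1)
        best = int(np.argmax(agg))            # Eq. (2), kernel matching omitted
        x_n = candidates[best]
        players.append(x_n)
        for i in range(len(G)):               # Eqs. (4)-(5)
            for j, x in enumerate(candidates):
                a = alpha_fn(i, x, x_n)
                G[i][j] *= (1.0 - a)
                w[i][j] = max(w[i][j] - a, 0.0)
    return players
```

Views whose silhouette at a position is explained by an already-detected player stop contributing to the aggregation at that position, which is exactly what suppresses the ghost detections of the naive approach.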
For completeness, we note that the above described update procedure omits the potential interference between occlusions caused by distinct players in the same view. The consequence of this approximation is however far from being dramatic, since at worst it discards some valid observations, without affecting the information that is actually exploited. Taking those interferences into account would require back-projecting the player silhouettes in each view, thereby tending towards a computationally and memory expensive top-down approach such as the one presented in [7].

Moreover, it is worth mentioning that, in a top-down context, the authors in [1] or in [7] propose formulations that simultaneously search for the K positions that best explain the multiple foreground mask observations. However, jointly considering all positions increases the dimensionality of the problem, and dramatically impacts the computational load. Since our experimental results show that our proposed method does not suffer from the usual weaknesses of greedy algorithms, such as a tendency to get caught in bad local minima, we believe that it compares very favorably to any joint formulation of the problem, typically solved based on iterative proximal optimization techniques.

Fig. 5. Recognition of digits printed on players' shirts through segmentation, selection, and classification of regions that are likely to represent digits.

IV. PLAYERS DIGIT RECOGNITION

This section considers the recognition of the digit characters printed on the sport shirts of athletes. The proposed approach is depicted in Figure 5. For each detected position on the ground plane, a 0.8 m x 2 m conservative bounding box is projected in each one of the views. Each box is then processed according to an approach that is similar to the coarse-to-fine method introduced in [12]. In the initial step, the bounding box image is segmented into regions. Digit candidate regions are then filtered based on contextual attributes. Eventually, selected regions are classified into '0-9' digit or bin classes, and the identity of the player is defined by majority vote, based on the results obtained in the different views. Our proposed approach differs from [12] in the way each one of those steps is implemented.

Our segmentation step is based on the mean-shift algorithm [6], which is a pattern recognition technique that is particularly well suited to delineate dense regions in an arbitrarily structured feature space. In mean-shift image segmentation, the image is typically represented as a two-dimensional lattice of 3-dimensional L*u*v pixels. The space of the lattice is known as the spatial domain, while the color information corresponds to the range domain. The location and range vectors are concatenated in a joint spatial-range domain, and a multivariate kernel is defined as the product of two radially symmetric kernels in each domain, which allows for the independent definition of the bandwidth parameters h_s and h_r for the spatial and range domains, respectively [6]. Local maxima of the joint domain density are then computed, and modes that are closer than h_s in the spatial domain and h_r in the range domain are pruned into significant modes. Each pixel is then associated with a significant mode of the joint domain density located in its neighborhood. Eventually, spatial regions that contain less than M pixels are eliminated. In our case, since there is a strong contrast between digit and shirt, we can afford a high value for h_r, which is set to 8 in our simulations. The parameter h_s trades off the run time of the segmentation and of the subsequent filtering and classification stages. Indeed, a small h_s value defines a smaller kernel, which makes the segmentation faster, but also results in a larger number of regions to process in subsequent stages. In our simulations, h_s has been set to 4, while M has been fixed to 20.

To filter out regions that obviously do not correspond to digits, we rely on the following observations:

• Valid digit regions never touch the border of the (conservative) bounding box;

• Valid digit regions are surrounded by a single homogeneously colored region. In practice, our algorithm selects the regions for which the neighbors of the 4 extreme (top/bottom, left/right) points of the region belong to the same region;

• The height and width of valid regions range between two values that are defined relative to the bounding box size. Since the size of the bounding box is defined according to real-world metrics, the size criterion implicitly adapts the range of height and width values to the perspective effect resulting from the distance between the detected object and the camera.

For completeness, it is worth mentioning that some particular fonts split some digits in two distinct regions. For this reason, candidate digit regions are composed of either a single region or a pair of regions that fulfill the above criteria.

The (pairs of) regions that have been selected as eligible for subsequent processing are then normalized and classified. Normalization implies horizontal alignment of the major principal axis, as derived through computation of moments of inertia, and conversion to a 24 x 24 binary mask. Classification is based on the 'one-against-one' multi-class SVM strategy [8], as recommended and implemented by the LIBSVM library [3]. A two-class SVM is trained for each pair of classes, and a majority vote strategy is exploited to infer the class (0 to 9 digit or bin class) from the set of binary classification decisions. In practice, to feed the classifier, each region sample is described by a 30-dimensional feature vector, namely:

• 1 value to define the number of holes in the region;

• 3 values corresponding to the second order moments m02, m20, and m22;

• 2 values to define the center of mass of the region;

• 2 x 12 values to define the histograms of the region along the vertical and horizontal axes.

Numbers with two digits are reconstructed based on the detection of two adjacent digits. To run our simulations, we have trained the SVM classifier based on more than 200 manually segmented samples of each digit, and on 1200 samples of the bin class. The bin class samples correspond to non-digit regions that are automatically segmented in one of the views, and whose size is consistent with the one of a digit.

V. DETECTED PLAYERS TRACKING

To track detected players, we have implemented a rudimentary whilst effective algorithm. The track propagation is currently done over a 1-frame horizon, based on the Munkres general assignment algorithm [11]. Gating is used to prevent unlikely matches, and a high level analysis module is used to link together partial tracks using shirt color estimation. In the future, graph matching techniques should be used to evaluate longer horizon matching hypotheses. More sophisticated high level analysis should also be implemented, e.g. to exploit the available player recognition information, or to duplicate the partial tracks that follow two players that are very close to each other.

VI. EXPERIMENTAL VALIDATION

A. Player detection and tracking

To evaluate our player detection algorithm, we have measured the average missed detection and false detection rates over 180 different and regularly spaced time instants in the interval from 18:47:00 to 18:50:00, which corresponds to a temporal segment for which a manual ground truth is available. This ground truth information consists in the positions of players and referees in the coordinate reference system of the court. We consider that two objects cannot be matched if the measured distance on the ground is larger than 30 cm. Figure 6 presents several ROC curves, each curve being obtained by varying the detection threshold for a given detection method. Three methods are compared, and for each of them we assess our proposed false detection mitigation algorithm. As a first and reference method, we consider the approach followed by [9], [10], which projects the foreground masks of all views only on the ground plane. The poor performance of this latter approach is mainly due to the shadows of the players, and to the small contribution of players' feet to the foreground masks. To validate this interpretation, in the second method, we have projected the foreground masks on a single plane located one meter above the ground plane. Doing this, the shadows' influence is drastically attenuated, whilst the main contribution now originates from the body center parts, which are usually well represented in the foreground masks. We observe significant improvements compared to [9], [10]. The third and last detection method presented in Figure 6 is our proposed method. We observe that the benefit obtained from our ground occupancy integration is striking. The improvement brought by our false alarm detector is also quite obvious. In addition, the cross in Figure 6 presents an operating point achieved after rudimentary tracking of the detected positions. We observe that taking into account temporal consistency can still further improve the detection results.

In the APIDIS setup, all areas of the basket court are not covered by the same number of cameras. Figure 7 shows the influence of the camera coverage on the missed and false detection rates. It also shows that in the areas with high coverage, most of the missed detections are due to players standing very close to one another.

B. Player recognition

To validate the player recognition pipeline, we have selected 190 bounding boxes of players from side view cameras. In each selected bounding box, the digit was visible and could be read by a human viewer, despite possibly significant appearance distortions.
Table I summarizes our recognition results. The recognition rate is above 73%. More interestingly, we observe that when the digit was not recognized, it was most often assigned to the bin class, or did not pass the contextual analysis due to a segmentation error. Moreover, the remaining 4% of false positives do not include any real mismatch between two digits. Actually, 75% of the false positives were due to the miss of one digit in a two-digit number. In the other cases, two digits have been recognized, the correct one and a falsely detected one.

Besides, a more detailed analysis has revealed that most of the non-recognized players were standing on the opposite side of the field, compared to the camera view from which the bounding box was extracted. In this case, the height of the digit decreases to less than 15 pixels, which explains the poor recognition performance, below 50%. In contrast, a camera located on the same side of the field as the player achieves a close to 90% correct recognition rate.

Based on those observations, we are reasonably confident that the recognition performance of our system will be sufficiently good to assign a correct label to short segments of player trajectories, thereby providing a valuable tool both to raise tracking ambiguities and to favor a preferred player during video summary production.

TABLE I. PLAYER RECOGNITION PERFORMANCE.

Fig. 6. ROC analysis of player detection performance.

Fig. 7. Player detection performance wrt camera coverage.

VII. CONCLUSION

We have presented video processing algorithms to define the position and identity of athletes playing on a sport field surrounded by a set of loosely synchronized cameras. Detection relies on the definition of a ground occupancy map, while player recognition builds on pre-filtering of segmented regions and on multi-class SVM classification. Experiments on the APIDIS real-life dataset demonstrate the relevance of the proposed approaches.

REFERENCES

[1] A. Alahi, Y. Boursier, L. Jacques, and P. Vandergheynst, "A sparsity constrained inverse problem to locate people in a network of cameras," in Proceedings of the 16th International Conference on Digital Signal Processing (DSP), Santorini, Greece, July 2009.

[2] J. Berclaz, F. Fleuret, and P. Fua, "Principled detection-by-classification from multiple views," in Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), vol. 2, Funchal, Madeira, Portugal, January 2008, pp. 375-382.

[3] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf.

[4] F. Chen and C. De Vleeschouwer, "A resource allocation framework for summarizing team sport videos," in IEEE International Conference on Image Processing, Cairo, Egypt, November 2009.

[5] F. Chen and C. De Vleeschouwer, "Autonomous production of basket-ball videos from multi-sensored data with personalized viewpoints," in Proceedings of the 10th International Workshop on Image Analysis for Multimedia Interactive Services, London, UK, May 2009.

[6] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 603-619, May 2002.

[7] F. Fleuret, J. Berclaz, R. Lengagne, and P. Fua, "Multi-camera people tracking with a probabilistic occupancy map," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 267-282, February 2008.

[8] C.-W. Hsu and C.-J. Lin, "A comparison of methods for multiclass support vector machines," IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 415-425, March 2002.

[9] S. Khan and M. Shah, "A multiview approach to tracking people in crowded scenes using a planar homography constraint," in Proceedings of the 9th European Conference on Computer Vision (ECCV), vol. 4, Graz, Austria, May 2006, pp. 133-146.

[10] A. Lanza, L. Di Stefano, J. Berclaz, F. Fleuret, and P. Fua, "Robust multiview change detection," in British Machine Vision Conference (BMVC), Warwick, UK, September 2007.

[11] J. Munkres, "Algorithms for the assignment and transportation problems," Journal of the SIAM, vol. 5, 1957, pp. 32-38.

[12] Q. Ye, Q. Huang, S. Jiang, Y. Liu, and W. Gao, "Jersey number detection in sports video for athlete identification," in Proceedings of the SPIE, Visual Communications and Image Processing, vol. 5960, Beijing, China, July 2005, pp. 1599-1606.

Claims
1. A computer based method for autonomous production of an edited video from multiple video streams captured by a plurality of cameras distributed around a scene of interest, the method comprising:
• detecting objects in the images of the video streams,
• selecting, for each camera, a field of view based on joint processing of the positions of the multiple objects that have been detected,
• building the edited video by selecting and concatenating video segments provided by one or more individual cameras.
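For illustration only (a patent claim defines scope, not an implementation), the three steps of claim 1 can be sketched as a toy orchestration loop; all names and the frame-by-frame camera switching are hypothetical simplifications:

```python
def autonomous_production(streams, detector, fov_selector, segment_rater):
    """Toy skeleton of the claimed pipeline (illustrative names).

    streams       : {camera_id: list of frames}
    detector      : frames-at-t -> list of detected object positions
    fov_selector  : (camera_id, positions) -> field-of-view parameters
    segment_rater : (camera_id, fov) -> scalar rating of the rendered view
    Returns the list of (camera_id, fov) pairs concatenated over time.
    """
    n_frames = min(len(frames) for frames in streams.values())
    edited = []
    for t in range(n_frames):
        frames_t = {cid: frames[t] for cid, frames in streams.items()}
        positions = detector(frames_t)                                  # step 1: detect objects
        fovs = {cid: fov_selector(cid, positions) for cid in streams}   # step 2: per-camera field of view
        best = max(streams, key=lambda cid: segment_rater(cid, fovs[cid]))
        edited.append((best, fovs[best]))                               # step 3: concatenate segments
    return edited
```

A real system would smooth the camera/viewpoint sequence over temporal segments (claims 2 and 22) rather than switch freely at every frame.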
2. The method of claim 1, wherein the building is done in a way that maximizes completeness and closeness metrics over time, while smoothing out the sequence of rendering parameters associated with the concatenated segments.
3. The method of claim 1 or 2, further comprising selecting rendering parameters for all objects or objects-of-interest simultaneously.
4. The method of claim 1 or 2 or 3, wherein knowledge about the position of the objects in the images is exploited to decide how to render the captured action.
5. The method of any previous claim further comprising selecting field of view parameters for the camera that renders action as a function of time based on an optimal balance between closeness and completeness metrics.
6. The method of claim 5, wherein the field of view parameters refer to the crop in camera view of static cameras or to the pan-tilt-zoom or displacement parameters for dynamic and potentially moving cameras.
7. The method of claim 5 or 6, wherein the closeness and completeness metrics are adapted according to user preferences and/or resources.
8. The method of claim 7, wherein a user resource is encoding resolution.
9. The method of claim 7 or 8 wherein a user preference is at least one of preferred object, or preferred camera.
10. The method of any previous claim wherein images from all views of all cameras are mapped to the same absolute temporal coordinates based on a common unique temporal reference for all camera views.
11. The method of any previous claim further comprising at each time instant, and for each camera view, the selection of field of view parameters that optimize the trade-off between completeness and closeness.
12. The method of claim 11, further comprising rating the viewpoint selected in each camera view according to the quality of its completeness/closeness trade-off, and to its degree of occlusions.
13. The method of claim 12, further comprising, for the temporal segment at hand, computing the parameters of an optimal virtual camera that pans, zooms and switches across views to preserve high ratings of selected viewpoints while minimizing the amount of virtual camera movements.
14. The method of any of claims 9 to 12, further comprising selecting the optimal field of view in each camera, at a given time instant.
15. The method of claim 14, wherein a field of view v_k in the kth camera view is defined by the size S_k and the center c_k of the window that is cropped in the kth view for actual display, and is selected to include the objects of interest and to provide a high resolution description of the objects, and an optimal field of view v_k* is selected to maximize a weighted sum of object interests as follows:

v_k* = argmax_{(c_k, S_k)} Σ_n I_n · m(x_{n,k} | c_k, S_k) · α(S_k, u)

where, in the above equation:

• I_n denotes the level of interest assigned to the nth object detected in the scene.

• x_{n,k} denotes the position of the nth object in camera view k.

• The function m(.) modulates the weight of the nth object according to its distance to the center of the viewpoint window, compared to the size of this window.

• The vector u reflects the user preferences; in particular, its component u_res defines the resolution of the output stream, which is generally constrained by the transmission bandwidth or the end-user device resolution.

• The function α(.) reflects the penalty induced by the fact that the native signal captured by the kth camera has to be sub-sampled once the size of the viewpoint becomes larger than the maximal resolution u_res allowed by the user.
16. The method of claim 15, wherein α(.) decreases with S_k: the function α(.) is equal to one when S_k < u_res, and decreases afterwards.

17. The method of claim 16, wherein α(.) is defined by:

α(S_k, u) = min( 1, (u_res / S_k)^{u_close} )

where the exponent u_close is larger than 1, and increases as the user prefers full-resolution rendering of zoomed-in areas, compared to large but sub-sampled viewpoints.
18. The method of any of the claims 12 to 17, further comprising rating the viewpoint associated with each camera according to the quality of its completeness/closeness trade-off, and to its degree of occlusions.
19. The method of claim 18, wherein the highest rating corresponds to a view that makes most objects of interest visible, and is close to the action.
20. The method of claim 18 or 19, wherein, given the interest In of each player, the rating h(vk, u) associated with the kth camera view is defined as follows:
h(vk, u) = Σn In · (1 − ok(xn | x)) · hk(xn) · βk(Sk, u)
where, in the above equation:
■ In denotes the level of interest assigned to the nth object detected in the scene.
■ xn denotes the position of the nth object in the 3D space;
■ ok(xn | x) measures the occlusion ratio of the nth object in camera view k, knowing the position of all other objects, the occlusion ratio of an object being defined to be the fraction of pixels of the object that are hidden by other objects when projected on the camera sensor;
■ The height hk(xn) is defined to be the height in pixels of the projection in view k of a reference height of a reference object located in xn. The value of hk(xn) is directly computed from camera calibration or, when calibration is not available, it can be estimated from the height of the object detected in view k.
■ The function βk(.) reflects the impact of the user preferences in terms of camera view and display resolution.
21. The method of claim 20, wherein βk(.) is defined as
βk(S, u) = uk · α(S, u),
where uk denotes the weight assigned to the kth camera, and α(S, u) is defined as in claim 16 or 17.
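The per-camera rating of claims 20 and 21, and the camera selection it feeds, can be sketched as follows: each object's interest is discounted by its occlusion ratio, scaled by its projected height in pixels, and weighted by βk = uk · α. Since the equation itself is an image in the source, treating the occlusion discount as the factor (1 − ok) is an assumption.

```python
def rate_camera(interests, occlusions, heights_px, u_k, alpha_k):
    """Rating h(v_k, u) of one camera view (claims 20-21, sketch).

    interests[n]  : level of interest I_n of object n
    occlusions[n] : fraction of object n's pixels hidden in this view
    heights_px[n] : projected height h_k(x_n) of object n, in pixels
    u_k, alpha_k  : camera weight and sub-sampling penalty (beta_k = u_k * alpha_k)
    """
    visible = sum(
        i * (1.0 - o) * h for i, o, h in zip(interests, occlusions, heights_px)
    )
    return u_k * alpha_k * visible


def select_camera(ratings):
    """Pick the camera index with the highest rating (cf. claim 39)."""
    return max(range(len(ratings)), key=ratings.__getitem__)
```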
22. The method of any of the claims 13 to 21, further comprising smoothing the sequence of camera indices and corresponding viewpoint parameters, wherein the smoothing process is for example implemented based on two Markov Random Fields, on a linear or non-linear low-pass filtering mechanism, or via a graph model formalism solved with the conventional Viterbi algorithm.
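The smoothing of claim 22 can be illustrated with a small Viterbi-style dynamic program: maximize the accumulated per-frame camera ratings while charging a penalty for each camera switch. The fixed additive switching cost below is an assumed stand-in for the graph-model costs the claim mentions.

```python
def smooth_camera_sequence(ratings, switch_cost=1.0):
    """Viterbi-style smoothing of the per-frame camera choice (claim 22).

    ratings[t][k] is the rating of camera k at time t.  The dynamic
    program maximizes the summed ratings of the selected cameras minus a
    fixed penalty charged whenever the camera index changes; this
    additive switching cost is an assumed simplification.
    """
    n_cams = len(ratings[0])
    score = list(ratings[0])  # best accumulated score ending in camera k
    back = []                 # back-pointers for path recovery
    for frame in ratings[1:]:
        ptr, new_score = [], []
        for k in range(n_cams):
            # best predecessor camera, charging switch_cost for a change
            prev = max(
                range(n_cams),
                key=lambda j: score[j] - (switch_cost if j != k else 0.0),
            )
            ptr.append(prev)
            new_score.append(
                frame[k] + score[prev] - (switch_cost if prev != k else 0.0)
            )
        back.append(ptr)
        score = new_score
    k = max(range(n_cams), key=score.__getitem__)
    path = [k]
    for ptr in reversed(back):
        k = ptr[k]
        path.append(k)
    return path[::-1]
```

With a high switching cost the virtual director stays on one camera through a brief rating dip; with no cost it greedily follows the best-rated camera frame by frame.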
23. The method according to any of the previous claims, wherein the capturing of the multiple video streams is by static or dynamic cameras.
24. Computer based system comprising a processing engine and memory for autonomous production of an edited video from multiple video streams captured by a plurality of cameras distributed around a scene of interest, the system comprising: a detector for detecting objects in the images of the video streams; first means for selecting one or more camera viewpoints based on joint processing of the positions of multiple objects that have been detected; and second means for selecting rendering parameters that maximize and smooth out closeness and completeness metrics by concatenating segments of the video streams provided by one or more individual cameras.
25. The system of claim 24, wherein the second means for selecting rendering parameters operates on all objects simultaneously.
26. The system of claim 24 or 25, further comprising third means for selecting camera and image parameter variations for the camera view that render action as a function of time for a set of joint closeness and completeness metrics.
27. The system of claim 26, wherein the third means for selecting camera and image parameter variations is adapted to crop in the camera view of a static camera or to control the control parameters of a dynamic camera.
28. The system of claim 26 or 27, wherein the set of joint closeness and completeness metrics is personalizable according to user preferences and/or resources.
29. The system of claim 28, wherein a user resource is encoding, or wherein a user preference is at least one of a preferred object, a preferred 'kind of view', and a preferred camera.
30. The system of any of the claims 24 to 29, further comprising means for mapping images from all views of all cameras to the same absolute temporal coordinates based on a common unique temporal reference for all camera views.
31. The system of any of the claims 24 to 30 further comprising fourth means for selecting the variations of parameters that optimize the trade-off between completeness and closeness at each time instant, and for each camera view.
32. The system of claim 31, wherein the completeness/closeness trade-off is measured as a function of the user preferences.
33. The system of claim 31 or 32, further comprising means for rating the viewpoint selected in each camera view according to the quality of its completeness/closeness trade-off, and to its degree of occlusions.
34. The system of claim 33, further comprising means for computing, for the temporal segment at hand, the parameters of an optimal virtual camera that pans, zooms and switches across views to preserve high ratings of selected viewpoints while minimizing the amount of virtual camera movements.
35. The system of any of claims 31 to 34, further comprising fifth means for selecting the optimal viewpoint in each camera view, at a given time instant.
36. The system of claim 35, wherein a viewpoint Vk in the kth camera view is defined by the size Sk and the center Ck of the window that is cropped in the kth view for actual display, and wherein the fifth means is adapted to select the viewpoint to include the objects of interest and to provide a high-resolution description of the objects, an optimal viewpoint Vk* being selected to maximize a weighted sum of object interests as follows:
Vk* = arg max(Sk, Ck) Σn In · m(xn,k | Ck, Sk) · α(Sk, u)
where, in the above equation:
o In denotes the level of interest assigned to the nth object detected in the scene.
o xn,k denotes the position of the nth object in camera view k.
o The function m(.) modulates the weight of the nth object according to its distance to the center of the viewpoint window, compared to the size of this window.
o The vector u reflects the user preferences; in particular, its component ures defines the resolution of the output stream, which is generally constrained by the transmission bandwidth or the end-user device resolution.
o The function α(.) reflects the penalty induced by the fact that the native signal captured by the kth camera has to be sub-sampled once the size of the viewpoint becomes larger than the maximal resolution ures allowed by the user.
37. The system of claim 36, wherein α(.) decreases with Sk: the function α(.) is equal to one when Sk < ures, and decreases afterwards.
38. The system of claim 37, wherein α(.) is defined by:
α(Sk, u) = [min(1, ures / Sk)]^uclose
where the exponent uclose is larger than 1, and increases as the user prefers full-resolution rendering of zoomed-in areas over large but sub-sampled viewpoints.
39. The system of any of the claims 33 to 38, further comprising sixth means for selecting, at a given time instant, the camera that makes most objects of interest visible and is close to the action, whereby an optimal camera index k* is selected according to an equation that is similar or equivalent to:
k* = arg maxk Σn In · (1 − ok(xn | x)) · hk(xn) · βk(Sk*, u)
where, in the above equation:
■ In denotes the level of interest assigned to the nth object detected in the scene.
■ xn denotes the position of the nth object in the 3D space;
■ ok(xn | x) measures the occlusion ratio of the nth object in camera view k, knowing the position of all other objects, the occlusion ratio of an object being defined to be the fraction of pixels of the object that are hidden by other objects when projected on the camera sensor;
■ The height hk(xn) is defined to be the height in pixels of the projection in view k of a reference height of a reference object located in xn. The value of hk(xn) is directly computed from camera calibration or, when calibration is not available, it can be estimated from the height of the object detected in view k.
■ The function βk(.) reflects the impact of the user preferences in terms of camera view and display resolution.
40. The system of claim 39, wherein βk(.) is defined as
βk(S, u) = uk · α(S, u), where uk denotes the weight assigned to the kth camera, and α(S, u) is defined as in claim 37 or 38.
41. The system of any of the claims 37 to 40, further comprising means for smoothing the sequence of camera indices and corresponding viewpoint parameters, wherein the means for smoothing is adapted to smooth based on two Markov Random Fields, by a linear or non-linear low-pass filtering mechanism, or by a graph model formalism solved with the conventional Viterbi algorithm.
42. A computer program product that comprises code segments which when executed on a processing engine execute any of the methods of claims 1 to 20 or implement the system according to any of the claims 24 to 41.
43. A machine readable signal storage medium storing the computer program product of claim 42.
PCT/BE2010/000039 2009-05-07 2010-05-07 Systems and methods for the autonomous production of videos from multi-sensored data WO2010127418A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
BRPI1011189-1A BRPI1011189B1 (en) 2009-05-07 2010-05-07 COMPUTER-BASED SYSTEM FOR SELECTING OPTIMUM VIEWING POINTS AND NON TRANSIENT MACHINE-READABLE SIGNAL STORAGE MEANS
PL10737234T PL2428036T3 (en) 2009-05-07 2010-05-07 Systems and methods for the autonomous production of videos from multi-sensored data
MX2011011799A MX2011011799A (en) 2009-05-07 2010-05-07 Systems and methods for the autonomous production of videos from multi-sensored data.
US13/319,202 US8854457B2 (en) 2009-05-07 2010-05-07 Systems and methods for the autonomous production of videos from multi-sensored data
EP10737234.4A EP2428036B1 (en) 2009-05-07 2010-05-07 Systems and methods for the autonomous production of videos from multi-sensored data
ES10737234.4T ES2556601T3 (en) 2009-05-07 2010-05-07 Systems and methods for the autonomous production of videos from multiple data detected
CA2761187A CA2761187C (en) 2009-05-07 2010-05-07 Systems and methods for the autonomous production of videos from multi-sensored data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0907870.0A GB0907870D0 (en) 2009-05-07 2009-05-07 Systems and methods for the autonomous production of videos from multi-sensored data
GB0907870.0 2009-05-07

Publications (1)

Publication Number Publication Date
WO2010127418A1 true WO2010127418A1 (en) 2010-11-11

Family

ID=40833634

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/BE2010/000039 WO2010127418A1 (en) 2009-05-07 2010-05-07 Systems and methods for the autonomous production of videos from multi-sensored data

Country Status (9)

Country Link
US (1) US8854457B2 (en)
EP (1) EP2428036B1 (en)
BR (1) BRPI1011189B1 (en)
CA (1) CA2761187C (en)
ES (1) ES2556601T3 (en)
GB (1) GB0907870D0 (en)
MX (1) MX2011011799A (en)
PL (1) PL2428036T3 (en)
WO (1) WO2010127418A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012092790A1 (en) * 2011-01-07 2012-07-12 Lu Yong Method and system for collecting, transmitting, editing and integrating, broadcasting, and receiving signal
CN104216941A (en) * 2013-05-31 2014-12-17 三星Sds株式会社 Data analysis apparatus and method
EP2851900A1 (en) * 2013-09-18 2015-03-25 Nxp B.V. Media content real time analysis and automated semantic summarization
EP3142116A1 (en) * 2015-09-14 2017-03-15 Thomson Licensing Method and device for capturing a video in a communal acquisition
US10027954B2 (en) 2016-05-23 2018-07-17 Microsoft Technology Licensing, Llc Registering cameras in a multi-camera imager
CN108401167A (en) * 2017-02-08 2018-08-14 三星电子株式会社 Electronic equipment and server for video playback
RU2665045C2 (en) * 2015-10-29 2018-08-27 Акционерное общество "Российская корпорация ракетно-космического приборостроения и информационных систем" (АО "Российские космические системы") System for modeling situations relating to conflicts and/or competition
EP3249651B1 (en) * 2016-05-23 2018-08-29 Axis AB Generating a summary video sequence from a source video sequence
KR20190010650A (en) * 2016-05-25 2019-01-30 캐논 가부시끼가이샤 Information processing apparatus, image generation method, control method,
US10326979B2 (en) 2016-05-23 2019-06-18 Microsoft Technology Licensing, Llc Imaging system comprising real-time image registration
US10339662B2 (en) 2016-05-23 2019-07-02 Microsoft Technology Licensing, Llc Registering cameras with virtual fiducials
WO2021207569A1 (en) * 2020-04-10 2021-10-14 Stats Llc End-to-end camera calibration for broadcast video
CN115119050A (en) * 2022-06-30 2022-09-27 北京奇艺世纪科技有限公司 Video clipping method and device, electronic equipment and storage medium
ES2957182A1 (en) * 2022-05-31 2024-01-12 Pronoide S L SYSTEM AND METHOD FOR CREATION OF INSTRUCTIONAL VIDEOS THROUGH THE USE OF COMMAND SEQUENCES (Machine-translation by Google Translate, not legally binding)

Families Citing this family (133)

Publication number Priority date Publication date Assignee Title
JP5627498B2 (en) * 2010-07-08 2014-11-19 株式会社東芝 Stereo image generating apparatus and method
CA2772206C (en) 2011-03-24 2016-06-21 Kabushiki Kaisha Topcon Omnidirectional camera
US9557885B2 (en) 2011-08-09 2017-01-31 Gopro, Inc. Digital media editing
US9723223B1 (en) 2011-12-02 2017-08-01 Amazon Technologies, Inc. Apparatus and method for panoramic video hosting with directional audio
US9516225B2 (en) 2011-12-02 2016-12-06 Amazon Technologies, Inc. Apparatus and method for panoramic video hosting
US9838687B1 (en) 2011-12-02 2017-12-05 Amazon Technologies, Inc. Apparatus and method for panoramic video hosting with reduced bandwidth streaming
US9202526B2 (en) 2012-05-14 2015-12-01 Sstatzz Oy System and method for viewing videos and statistics of sports events
US20130300832A1 (en) * 2012-05-14 2013-11-14 Sstatzz Oy System and method for automatic video filming and broadcasting of sports events
US20140125806A1 (en) * 2012-05-14 2014-05-08 Sstatzz Oy Sports Apparatus and Method
JP6201991B2 (en) * 2012-06-28 2017-09-27 日本電気株式会社 Camera position / orientation evaluation apparatus, camera position / orientation evaluation method, and camera position / orientation evaluation program
US20140071349A1 (en) * 2012-09-12 2014-03-13 Nokia Corporation Method, apparatus, and computer program product for changing a viewing angle of a video image
US9265991B2 (en) 2012-10-25 2016-02-23 Sstatzz Oy Method and system for monitoring movement of a sport projectile
US8874139B2 (en) 2012-10-25 2014-10-28 Sstatzz Oy Position location system and method
US8968100B2 (en) 2013-02-14 2015-03-03 Sstatzz Oy Sports training apparatus and method
US9462301B2 (en) 2013-03-15 2016-10-04 Google Inc. Generating videos with multiple viewpoints
US9881206B2 (en) 2013-04-09 2018-01-30 Sstatzz Oy Sports monitoring system and method
US9843623B2 (en) 2013-05-28 2017-12-12 Qualcomm Incorporated Systems and methods for selecting media items
GB2517730A (en) * 2013-08-29 2015-03-04 Mediaproduccion S L A method and system for producing a video production
US20150139601A1 (en) * 2013-11-15 2015-05-21 Nokia Corporation Method, apparatus, and computer program product for automatic remix and summary creation using crowd-sourced intelligence
US10878770B2 (en) * 2013-12-02 2020-12-29 Nvidia Corporation Method and system for customizing optimal settings using end-user preferences
US9918110B2 (en) 2013-12-13 2018-03-13 Fieldcast Llc Point of view multimedia platform
US11250886B2 (en) 2013-12-13 2022-02-15 FieldCast, LLC Point of view video processing and curation platform
US10622020B2 (en) 2014-10-03 2020-04-14 FieldCast, LLC Point of view video processing and curation platform
US10015527B1 (en) 2013-12-16 2018-07-03 Amazon Technologies, Inc. Panoramic video distribution and viewing
US9912743B2 (en) 2014-02-28 2018-03-06 Skycapital Investors, Llc Real-time collection and distribution of information for an event organized according to sub-events
US9652667B2 (en) 2014-03-04 2017-05-16 Gopro, Inc. Automatic generation of video from spherical content using audio/visual analysis
US9870621B1 (en) 2014-03-10 2018-01-16 Google Llc Motion-based feature correspondence
NL2012399B1 (en) * 2014-03-11 2015-11-26 De Vroome Poort B V Autonomous camera system for capturing sporting events.
US10108254B1 (en) 2014-03-21 2018-10-23 Google Llc Apparatus and method for temporal synchronization of multiple signals
ES2845933T3 (en) 2014-04-03 2021-07-28 Pixellot Ltd Automatic television production method and system
US9600723B1 (en) 2014-07-03 2017-03-21 Google Inc. Systems and methods for attention localization using a first-person point-of-view device
US9449229B1 (en) 2014-07-07 2016-09-20 Google Inc. Systems and methods for categorizing motion event candidates
US9158974B1 (en) 2014-07-07 2015-10-13 Google Inc. Method and system for motion vector-based video monitoring and event categorization
US10127783B2 (en) 2014-07-07 2018-11-13 Google Llc Method and device for processing motion events
US10140827B2 (en) 2014-07-07 2018-11-27 Google Llc Method and system for processing motion event notifications
US9170707B1 (en) 2014-09-30 2015-10-27 Google Inc. Method and system for generating a smart time-lapse video clip
US9501915B1 (en) 2014-07-07 2016-11-22 Google Inc. Systems and methods for analyzing a video stream
US9685194B2 (en) 2014-07-23 2017-06-20 Gopro, Inc. Voice-based video tagging
WO2016014724A1 (en) * 2014-07-23 2016-01-28 Gopro, Inc. Scene and activity identification in video summary generation
US9792502B2 (en) 2014-07-23 2017-10-17 Gopro, Inc. Generating video summaries for a video using video summary templates
USD782495S1 (en) 2014-10-07 2017-03-28 Google Inc. Display screen or portion thereof with graphical user interface
US9734870B2 (en) 2015-01-05 2017-08-15 Gopro, Inc. Media identifier generation for camera-captured media
JP6529267B2 (en) * 2015-01-23 2019-06-12 キヤノン株式会社 INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREOF, PROGRAM, AND STORAGE MEDIUM
US9679605B2 (en) 2015-01-29 2017-06-13 Gopro, Inc. Variable playback speed template for video editing application
WO2016138121A1 (en) 2015-02-24 2016-09-01 Plaay, Llc System and method for creating a sports video
US10186012B2 (en) 2015-05-20 2019-01-22 Gopro, Inc. Virtual lens simulation for video and photo cropping
US9361011B1 (en) 2015-06-14 2016-06-07 Google Inc. Methods and systems for presenting multiple live video feeds in a user interface
US10291845B2 (en) 2015-08-17 2019-05-14 Nokia Technologies Oy Method, apparatus, and computer program product for personalized depth of field omnidirectional video
US10104286B1 (en) 2015-08-27 2018-10-16 Amazon Technologies, Inc. Motion de-blurring for panoramic frames
US9894393B2 (en) 2015-08-31 2018-02-13 Gopro, Inc. Video encoding for reduced streaming latency
US10609379B1 (en) 2015-09-01 2020-03-31 Amazon Technologies, Inc. Video compression across continuous frame edges
JP6566799B2 (en) * 2015-09-07 2019-08-28 キヤノン株式会社 Providing device, providing method, and program
US9843724B1 (en) 2015-09-21 2017-12-12 Amazon Technologies, Inc. Stabilization of panoramic video
US9721611B2 (en) 2015-10-20 2017-08-01 Gopro, Inc. System and method of generating video from video clips based on moments of interest within the video clips
US10204273B2 (en) 2015-10-20 2019-02-12 Gopro, Inc. System and method of providing recommendations of moments of interest within video clips post capture
US10356456B2 (en) * 2015-11-05 2019-07-16 Adobe Inc. Generating customized video previews
US10095696B1 (en) 2016-01-04 2018-10-09 Gopro, Inc. Systems and methods for generating recommendations of post-capture users to edit digital media content field
US10109319B2 (en) 2016-01-08 2018-10-23 Gopro, Inc. Digital media editing
US10083537B1 (en) 2016-02-04 2018-09-25 Gopro, Inc. Systems and methods for adding a moving visual element to a video
US9972066B1 (en) 2016-03-16 2018-05-15 Gopro, Inc. Systems and methods for providing variable image projection for spherical visual content
US10402938B1 (en) 2016-03-31 2019-09-03 Gopro, Inc. Systems and methods for modifying image distortion (curvature) for viewing distance in post capture
US9838730B1 (en) 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing
US9838731B1 (en) 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing with audio mixing option
US9794632B1 (en) 2016-04-07 2017-10-17 Gopro, Inc. Systems and methods for synchronization based on audio track changes in video editing
US10506237B1 (en) 2016-05-27 2019-12-10 Google Llc Methods and devices for dynamic adaptation of encoding bitrate for video streaming
US10250894B1 (en) 2016-06-15 2019-04-02 Gopro, Inc. Systems and methods for providing transcoded portions of a video
US9922682B1 (en) 2016-06-15 2018-03-20 Gopro, Inc. Systems and methods for organizing video files
US9998769B1 (en) 2016-06-15 2018-06-12 Gopro, Inc. Systems and methods for transcoding media files
FR3052949B1 (en) * 2016-06-17 2019-11-08 Alexandre Courtes METHOD AND SYSTEM FOR TAKING VIEWS USING A VIRTUAL SENSOR
US10045120B2 (en) 2016-06-20 2018-08-07 Gopro, Inc. Associating audio with three-dimensional objects in videos
US10185891B1 (en) 2016-07-08 2019-01-22 Gopro, Inc. Systems and methods for compact convolutional neural networks
US10380429B2 (en) 2016-07-11 2019-08-13 Google Llc Methods and systems for person detection in a video feed
US10957171B2 (en) 2016-07-11 2021-03-23 Google Llc Methods and systems for providing event alerts
US10192415B2 (en) 2016-07-11 2019-01-29 Google Llc Methods and systems for providing intelligent alerts for events
US10469909B1 (en) 2016-07-14 2019-11-05 Gopro, Inc. Systems and methods for providing access to still images derived from a video
US10395119B1 (en) 2016-08-10 2019-08-27 Gopro, Inc. Systems and methods for determining activities performed during video capture
JP6918455B2 (en) * 2016-09-01 2021-08-11 キヤノン株式会社 Image processing equipment, image processing methods and programs
US9836853B1 (en) 2016-09-06 2017-12-05 Gopro, Inc. Three-dimensional convolutional neural networks for video highlight detection
US10282632B1 (en) 2016-09-21 2019-05-07 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video
US10268898B1 (en) 2016-09-21 2019-04-23 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video via segments
JP6812181B2 (en) * 2016-09-27 2021-01-13 キヤノン株式会社 Image processing device, image processing method, and program
US10002641B1 (en) 2016-10-17 2018-06-19 Gopro, Inc. Systems and methods for determining highlight segment sets
US10284809B1 (en) 2016-11-07 2019-05-07 Gopro, Inc. Systems and methods for intelligently synchronizing events in visual content with musical features in audio content
US10262639B1 (en) 2016-11-08 2019-04-16 Gopro, Inc. Systems and methods for detecting musical features in audio content
JP6539253B2 (en) * 2016-12-06 2019-07-03 キヤノン株式会社 INFORMATION PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND PROGRAM
US10534966B1 (en) 2017-02-02 2020-01-14 Gopro, Inc. Systems and methods for identifying activities and/or events represented in a video
US10339443B1 (en) 2017-02-24 2019-07-02 Gopro, Inc. Systems and methods for processing convolutional neural network operations using textures
US10127943B1 (en) 2017-03-02 2018-11-13 Gopro, Inc. Systems and methods for modifying videos based on music
US10185895B1 (en) 2017-03-23 2019-01-22 Gopro, Inc. Systems and methods for classifying activities captured within images
US10083718B1 (en) 2017-03-24 2018-09-25 Gopro, Inc. Systems and methods for editing videos based on motion
JP6922369B2 (en) * 2017-04-14 2021-08-18 富士通株式会社 Viewpoint selection support program, viewpoint selection support method and viewpoint selection support device
US10187690B1 (en) 2017-04-24 2019-01-22 Gopro, Inc. Systems and methods to detect and correlate user responses to media content
US10395122B1 (en) 2017-05-12 2019-08-27 Gopro, Inc. Systems and methods for identifying moments in videos
US10599950B2 (en) 2017-05-30 2020-03-24 Google Llc Systems and methods for person recognition data management
US11783010B2 (en) 2017-05-30 2023-10-10 Google Llc Systems and methods of person recognition in video streams
US10402687B2 (en) 2017-07-05 2019-09-03 Perceptive Automata, Inc. System and method of predicting human interaction with vehicles
US10614114B1 (en) 2017-07-10 2020-04-07 Gopro, Inc. Systems and methods for creating compilations based on hierarchical clustering
US10402698B1 (en) 2017-07-10 2019-09-03 Gopro, Inc. Systems and methods for identifying interesting moments within videos
US10402656B1 (en) 2017-07-13 2019-09-03 Gopro, Inc. Systems and methods for accelerating video analysis
KR102535031B1 (en) * 2017-07-19 2023-05-22 삼성전자주식회사 Display apparatus, the control method thereof and the computer program product thereof
US10432987B2 (en) * 2017-09-15 2019-10-01 Cisco Technology, Inc. Virtualized and automated real time video production system
US11134227B2 (en) 2017-09-20 2021-09-28 Google Llc Systems and methods of presenting appropriate actions for responding to a visitor to a smart home environment
US10664688B2 (en) 2017-09-20 2020-05-26 Google Llc Systems and methods of detecting and responding to a visitor to a smart home environment
EP3752877A4 (en) 2018-02-17 2021-11-03 Dreamvu, Inc. System and method for capturing omni-stereo videos using multi-sensors
USD931355S1 (en) 2018-02-27 2021-09-21 Dreamvu, Inc. 360 degree stereo single sensor camera
USD943017S1 (en) 2018-02-27 2022-02-08 Dreamvu, Inc. 360 degree stereo optics mount for a camera
JP7132730B2 (en) * 2018-03-14 2022-09-07 キヤノン株式会社 Information processing device and information processing method
US10574975B1 (en) 2018-08-08 2020-02-25 At&T Intellectual Property I, L.P. Method and apparatus for navigating through panoramic content
US11138694B2 (en) 2018-12-05 2021-10-05 Tencent America LLC Method and apparatus for geometric smoothing
US10818077B2 (en) 2018-12-14 2020-10-27 Canon Kabushiki Kaisha Method, system and apparatus for controlling a virtual camera
JP2020134973A (en) * 2019-02-12 2020-08-31 キヤノン株式会社 Material generation apparatus, image generation apparatus and image processing apparatus
WO2020176873A1 (en) 2019-02-28 2020-09-03 Stats Llc System and method for generating trackable video frames from broadcast video
US10728443B1 (en) 2019-03-27 2020-07-28 On Time Staffing Inc. Automatic camera angle switching to create combined audiovisual file
US10963841B2 (en) 2019-03-27 2021-03-30 On Time Staffing Inc. Employment candidate empathy scoring system
JP7310252B2 (en) * 2019-04-19 2023-07-19 株式会社リコー MOVIE GENERATOR, MOVIE GENERATION METHOD, PROGRAM, STORAGE MEDIUM
US11577146B1 (en) 2019-06-07 2023-02-14 Shoot-A-Way, Inc. Basketball launching device with off of the dribble statistic tracking
US11400355B1 (en) 2019-06-07 2022-08-02 Shoot-A-Way, Inc. Basketball launching device with a camera for detecting made shots
US11763163B2 (en) 2019-07-22 2023-09-19 Perceptive Automata, Inc. Filtering user responses for generating training data for machine learning based models for navigation of autonomous vehicles
US11615266B2 (en) 2019-11-02 2023-03-28 Perceptive Automata, Inc. Adaptive sampling of stimuli for training of machine learning based models for predicting hidden context of traffic entities for navigating autonomous vehicles
US11127232B2 (en) 2019-11-26 2021-09-21 On Time Staffing Inc. Multi-camera, multi-sensor panel data extraction system and method
US11893795B2 (en) 2019-12-09 2024-02-06 Google Llc Interacting with visitors of a connected home environment
CA3173977A1 (en) * 2020-03-02 2021-09-10 Visual Supply Company Systems and methods for automating video editing
US11023735B1 (en) 2020-04-02 2021-06-01 On Time Staffing, Inc. Automatic versioning of video presentations
CN114007059A (en) * 2020-07-28 2022-02-01 阿里巴巴集团控股有限公司 Video compression method, decompression method, device, electronic equipment and storage medium
US11482004B2 (en) 2020-07-29 2022-10-25 Disney Enterprises, Inc. Fast video content matching
US11144882B1 (en) 2020-09-18 2021-10-12 On Time Staffing Inc. Systems and methods for evaluating actions over a computer network and establishing live network connections
US11516517B2 (en) 2021-03-19 2022-11-29 Sm Tamjid Localized dynamic video streaming system
EP4099704A1 (en) * 2021-06-04 2022-12-07 Spiideo AB System and method for providing a recommended video production
US11430486B1 (en) * 2021-06-11 2022-08-30 Gopro, Inc. Provision of supplemental content for use in a video edit
US11727040B2 (en) 2021-08-06 2023-08-15 On Time Staffing, Inc. Monitoring third-party forum contributions to improve searching through time-to-live data assignments
US11423071B1 (en) 2021-08-31 2022-08-23 On Time Staffing, Inc. Candidate data ranking method using previously selected candidate data
US11907652B2 (en) 2022-06-02 2024-02-20 On Time Staffing, Inc. User interface and systems for document creation
US11712610B1 (en) 2023-01-11 2023-08-01 Shoot-A-Way, Inc. Ultrasonic shots-made detector for basketball launching device

Citations (14)

Publication number Priority date Publication date Assignee Title
WO1996031047A2 (en) * 1995-03-31 1996-10-03 The Regents Of The University Of California Immersive video
US5745126A (en) * 1995-03-31 1998-04-28 The Regents Of The University Of California Machine synthesis of a virtual video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene
EP1206129A1 (en) * 2000-11-13 2002-05-15 Wells & Verne Investments Ltd Computer-aided image producing system
US20020105598A1 (en) 2000-12-12 2002-08-08 Li-Cheng Tai Automatic multi-camera video composition
US20020191071A1 (en) 2001-06-14 2002-12-19 Yong Rui Automated online broadcasting system and method using an omni-directional camera system for viewing meetings over a computer network
US20020196327A1 (en) 2001-06-14 2002-12-26 Yong Rui Automated video production system and method using expert video production rules for online publishing of lectures
EP1289282A1 (en) 2001-08-29 2003-03-05 Dartfish SA Video sequence automatic production method and system
US6741250B1 (en) 2001-02-09 2004-05-25 Be Here Corporation Method and system for generation of multiple viewpoints into a scene viewed by motionless cameras and for presentation of a view path
US20040105004A1 (en) 2002-11-30 2004-06-03 Yong Rui Automated camera management system and method for capturing presentations using videography rules
GB2402011A (en) 2003-05-20 2004-11-24 British Broadcasting Corp Automated camera control using event parameters
WO2005099423A2 (en) 2004-04-16 2005-10-27 Aman James A Automatic event videoing, tracking and content generation system
US20060251384A1 (en) 2005-05-09 2006-11-09 Microsoft Corporation Automatic video editing for real-time multi-point video conferencing
EP1798691A2 (en) * 2003-03-14 2007-06-20 British Broadcasting Corporation Method and apparatus for generating a desired view of a scene from a selected viewpoint
US20080129825A1 (en) 2006-12-04 2008-06-05 Lynx System Developers, Inc. Autonomous Systems And Methods For Still And Moving Picture Production

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US5969755A (en) * 1996-02-05 1999-10-19 Texas Instruments Incorporated Motion based event detection system and method
US6711278B1 (en) * 1998-09-10 2004-03-23 Microsoft Corporation Tracking semantic objects in vector image sequences
US20050128291A1 (en) * 2002-04-17 2005-06-16 Yoshishige Murakami Video surveillance system
US7505604B2 (en) * 2002-05-20 2009-03-17 Simmonds Precision Prodcuts, Inc. Method for detection and recognition of fog presence within an aircraft compartment using video images
US9766089B2 (en) * 2009-12-14 2017-09-19 Nokia Technologies Oy Method and apparatus for correlating and navigating between a live image and a prerecorded panoramic image

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5745126A (en) * 1995-03-31 1998-04-28 The Regents Of The University Of California Machine synthesis of a virtual video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene
WO1996031047A2 (en) * 1995-03-31 1996-10-03 The Regents Of The University Of California Immersive video
EP1206129A1 (en) * 2000-11-13 2002-05-15 Wells & Verne Investments Ltd Computer-aided image producing system
EP1352521A2 (en) 2000-12-12 2003-10-15 Intel Corporation Automatic multi-camera video composition
US20020105598A1 (en) 2000-12-12 2002-08-08 Li-Cheng Tai Automatic multi-camera video composition
US6741250B1 (en) 2001-02-09 2004-05-25 Be Here Corporation Method and system for generation of multiple viewpoints into a scene viewed by motionless cameras and for presentation of a view path
US20020191071A1 (en) 2001-06-14 2002-12-19 Yong Rui Automated online broadcasting system and method using an omni-directional camera system for viewing meetings over a computer network
US20020196327A1 (en) 2001-06-14 2002-12-26 Yong Rui Automated video production system and method using expert video production rules for online publishing of lectures
EP1289282A1 (en) 2001-08-29 2003-03-05 Dartfish SA Video sequence automatic production method and system
US20040105004A1 (en) 2002-11-30 2004-06-03 Yong Rui Automated camera management system and method for capturing presentations using videography rules
EP1798691A2 (en) * 2003-03-14 2007-06-20 British Broadcasting Corporation Method and apparatus for generating a desired view of a scene from a selected viewpoint
GB2402011A (en) 2003-05-20 2004-11-24 British Broadcasting Corp Automated camera control using event parameters
WO2005099423A2 (en) 2004-04-16 2005-10-27 Aman James A Automatic event videoing, tracking and content generation system
US20060251384A1 (en) 2005-05-09 2006-11-09 Microsoft Corporation Automatic video editing for real-time multi-point video conferencing
US20060251382A1 (en) 2005-05-09 2006-11-09 Microsoft Corporation System and method for automatic video editing using object recognition
US20080129825A1 (en) 2006-12-04 2008-06-05 Lynx System Developers, Inc. Autonomous Systems And Methods For Still And Moving Picture Production

Non-Patent Citations (23)

* Cited by examiner, † Cited by third party
Title
"Autonomous production of basket-ball videos from multi-sensored data with personalized viewpoints", PROCEEDINGS OF THE 10TH INTERNATIONAL WORKSHOP ON IMAGE ANALYSIS FOR MULTIMEDIA INTERACTIVE SERVICES, May 2009 (2009-05-01)
A. ALAHI; Y. BOURSIER; L. JACQUES; P. VANDERGHEYNST: "A sparsity constrained inverse problem to locate people in a network of cameras", PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP), July 2006 (2006-07-01)
A. LANZA; L. DI STEFANO; J. BERCLAZ; F. FLEURET; P. FUA: "Robust multiview change detection", BRITISH MACHINE VISION CONFERENCE (BMVC), September 2007 (2007-09-01)
A. TYAGI; G. POTAMIANOS; J.W. DAVIS; S.M. CHU: "Fusion of multiple camera views for kernel-based 3D tracking", WMVC'07, vol. 1, 2007, pages 1 - 1
B. SUH; H. LING; B.B. BEDERSON; D.W. JACOBS: "Automatic thumbnail cropping and its effectiveness", PROC. ACM UIST 2003, vol. 1, 2003, pages 95 - 104, XP058109377, DOI: doi:10.1145/964696.964707
C. DE VLEESCHOUWER; F. CHEN; D. DELANNAY; C. PARISOT; C. CHAUDY; E. MARTROU; A. CAVALLARO: "Distributed video acquisition and annotation for sport-event summarization", NEM SUMMIT, 2008
C.-C. CHANG; C.-J. LIN: "LIBSVM: a library for support vector machines", Retrieved from the Internet <URL:http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.pdf>
C.-W. HSU; C.-J. LIN: "A comparison of methods for multiclass support vector machines", IEEE TRANSACTIONS ON NEURAL NETWORKS, vol. 13, no. 2, March 2002 (2002-03-01), pages 415 - 425
D. COMANICIU; P. MEER: "Mean shift: a robust approach toward feature space analysis", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 24, no. 5, May 2002 (2002-05-01), pages 603 - 619, XP002323848, DOI: doi:10.1109/34.1000236
D. DELANNAY; N. DANHIER; C. DE VLEESCHOUWER: "Detection and Recognition of Sports(wo)men from Multiple Views", THIRD ACM/IEEE INTERNATIONAL CONFERENCE ON DISTRIBUTED SMART CAMERAS, September 2009 (2009-09-01)
F. CHEN; C. DE VLEESCHOUWER: "A resource allocation framework for summarizing team sport videos", IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, November 2009 (2009-11-01)
F. FLEURET; J. BERCLAZ; R. LENGAGNE; P. FUA: "Multi-camera people tracking with a probabilistic occupancy map", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 30, no. 2, February 2008 (2008-02-01), pages 267 - 282, XP002713892
I.H. CHEN; S.J. WANG: "An efficient approach for the calibration of multiple PTZ cameras", IEEE TRANS. AUTOMATION SCIENCE AND ENGINEERING, vol. 4, 2007, pages 286 - 293, XP011176333, DOI: doi:10.1109/TASE.2006.884040
J. BERCLAZ; F. FLEURET; P. FUA: "Principled detection-by-classification from multiple views", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATION (VISAPP), vol. 2, January 2008 (2008-01-01), pages 375 - 382
J. MUNKRES: "Algorithms for the assignment and transportation problems", SIAM J. CONTROL, vol. 5, 1957, pages 32 - 38
L. ITTI; C. KOCH; E. NIEBUR: "A model of saliency-based visual attention for rapid scene analysis", IEEE TRANS. PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 20, 1998, pages 1254 - 1259, XP001203933, DOI: doi:10.1109/34.730558
N. INAMOTO; H. SAITO: "Free viewpoint video synthesis and presentation from multiple sporting videos", ELECTRONICS AND COMMUNICATIONS IN JAPAN (PART III: FUNDAMENTAL ELECTRONIC SCIENCE), vol. 90, 2006, pages 40 - 49
P. EISERT; E. STEINBACH; B. GIROD: "Automatic reconstruction of stationary 3-D objects from multiple uncalibrated camera views", IEEE TRANS. CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, SPECIAL ISSUE ON 3D VIDEO TECHNOLOGY, vol. 10, 1999, pages 261 - 277, XP000906613, DOI: doi:10.1109/76.825726
Q. YE; Q. HUANG; S. JIANG; Y. LIU; W. GAO: "Jersey number detection in sports video for athlete identification", PROCEEDINGS OF THE SPIE, VISUAL COMMUNICATIONS AND IMAGE PROCESSING, vol. 5960, July 2005 (2005-07-01), pages 1599 - 1606
S. KHAN; M. SHAH: "A multiview approach to tracking people in crowded scenes using a planar homography constraint", PROCEEDINGS OF THE 9TH EUROPEAN CONFERENCE ON COMPUTER VISION (ECCV), vol. 4, May 2006 (2006-05-01), pages 133 - 146, XP019036531
S. YAGUCHI; H. SAITO: "Arbitrary viewpoint video synthesis from multiple uncalibrated cameras", IEEE TRANS. SYST. MAN. CYBERN. B, vol. 34, 2004, pages 430 - 439
X. XIE; H. LIU; W.Y. MA; H.J. ZHANG: "Browsing large pictures under limited display sizes", IEEE TRANS. MULTIMEDIA, vol. 8, 2006, pages 707 - 715, XP055176318, DOI: doi:10.1109/TMM.2006.876294
Y. ARIKI; S. KUBOTA; M. KUMANO: "Automatic production system of soccer sports video by digital camera work based on situation recognition", ISM'06, vol. 1, 2006, pages 851 - 860, XP031041884

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9584878B2 (en) 2011-01-07 2017-02-28 Yong Lu Method and system for collecting, transmitting, editing and integrating, broadcasting, and receiving signal
WO2012092790A1 (en) * 2011-01-07 2012-07-12 Lu Yong Method and system for collecting, transmitting, editing and integrating, broadcasting, and receiving signal
CN104216941A (en) * 2013-05-31 2014-12-17 三星Sds株式会社 Data analysis apparatus and method
EP2851900A1 (en) * 2013-09-18 2015-03-25 Nxp B.V. Media content real time analysis and automated semantic summarization
US9703461B2 (en) 2013-09-18 2017-07-11 Nxp B.V. Media content creation
EP3142116A1 (en) * 2015-09-14 2017-03-15 Thomson Licensing Method and device for capturing a video in a communal acquisition
RU2665045C2 (en) * 2015-10-29 2018-08-27 Акционерное общество "Российская корпорация ракетно-космического приборостроения и информационных систем" (АО "Российские космические системы") System for modeling situations relating to conflicts and/or competition
EP3249651B1 (en) * 2016-05-23 2018-08-29 Axis AB Generating a summary video sequence from a source video sequence
US10326979B2 (en) 2016-05-23 2019-06-18 Microsoft Technology Licensing, Llc Imaging system comprising real-time image registration
US10339662B2 (en) 2016-05-23 2019-07-02 Microsoft Technology Licensing, Llc Registering cameras with virtual fiducials
US10027954B2 (en) 2016-05-23 2018-07-17 Microsoft Technology Licensing, Llc Registering cameras in a multi-camera imager
KR102129792B1 (en) 2016-05-25 2020-08-05 Canon Kabushiki Kaisha Information processing device, image generation method, control method and program
KR20190010650A (en) * 2016-05-25 2019-01-30 Canon Kabushiki Kaisha Information processing apparatus, image generation method, control method and program
CN108401167A (en) * 2017-02-08 2018-08-14 三星电子株式会社 Electronic equipment and server for video playback
EP3361744A1 (en) * 2017-02-08 2018-08-15 Samsung Electronics Co., Ltd. Electronic device and server for video playback
US10880590B2 (en) 2017-02-08 2020-12-29 Samsung Electronics Co., Ltd Electronic device and server for video playback
CN108401167B (en) * 2017-02-08 2022-06-10 三星电子株式会社 Electronic device and server for video playback
WO2021207569A1 (en) * 2020-04-10 2021-10-14 Stats Llc End-to-end camera calibration for broadcast video
US11861806B2 (en) 2020-04-10 2024-01-02 Stats Llc End-to-end camera calibration for broadcast video
ES2957182A1 (en) * 2022-05-31 2024-01-12 Pronoide S L SYSTEM AND METHOD FOR CREATION OF INSTRUCTIONAL VIDEOS THROUGH THE USE OF COMMAND SEQUENCES (Machine-translation by Google Translate, not legally binding)
CN115119050A (en) * 2022-06-30 2022-09-27 北京奇艺世纪科技有限公司 Video clipping method and device, electronic equipment and storage medium
CN115119050B (en) * 2022-06-30 2023-12-15 北京奇艺世纪科技有限公司 Video editing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CA2761187A1 (en) 2010-11-11
GB0907870D0 (en) 2009-06-24
PL2428036T3 (en) 2016-04-29
CA2761187C (en) 2023-01-17
BRPI1011189B1 (en) 2021-08-24
BRPI1011189A2 (en) 2018-07-10
EP2428036B1 (en) 2015-09-16
US20120057852A1 (en) 2012-03-08
EP2428036A1 (en) 2012-03-14
MX2011011799A (en) 2012-07-23
ES2556601T3 (en) 2016-01-19
US8854457B2 (en) 2014-10-07

Similar Documents

Publication Publication Date Title
EP2428036B1 (en) Systems and methods for the autonomous production of videos from multi-sensored data
US11682208B2 (en) Methods and apparatus to measure brand exposure in media streams
Lai et al. Semantic-driven generation of hyperlapse from 360 degree video
Chen et al. An autonomous framework to produce and distribute personalized team-sport video summaries: A basketball case study
Chen et al. Personalized production of basketball videos from multi-sensored data under limited display resolution
D’Orazio et al. A review of vision-based systems for soccer video analysis
EP1955205B1 (en) Method and system for producing a video synopsis
US10861159B2 (en) Method, system and computer program product for automatically altering a video stream
Nie et al. Dynamic video stitching via shakiness removing
EP3513566A1 (en) Methods and systems of spatiotemporal pattern recognition for video content development
Oskouie et al. Multimodal feature extraction and fusion for semantic mining of soccer video: a survey
WO1999005865A1 (en) Content-based video access
Mi et al. Recognizing actions in wearable-camera videos by training classifiers on fixed-camera videos
Nieto et al. An automatic system for sports analytics in multi-camera tennis videos
Wang et al. Personal multi-view viewpoint recommendation based on trajectory distribution of the viewing target
Parisot et al. Consensus-based trajectory estimation for ball detection in calibrated cameras systems
Mei et al. Structure and event mining in sports video with efficient mosaic
Wang et al. Personal viewpoint navigation based on object trajectory distribution for multi-view videos
Chen et al. Multi-sensored vision for autonomous production of personalized video summaries
De Vleeschouwer et al. Digital access to libraries
Wang Viewing support system for multi-view videos
Cai Video anatomy: spatial-temporal video profile
Guntur et al. Automated virtual camera during mobile video playback

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10737234

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 13319202

Country of ref document: US

Ref document number: 2761187

Country of ref document: CA

Ref document number: MX/A/2011/011799

Country of ref document: MX

WWE Wipo information: entry into national phase

Ref document number: 2010737234

Country of ref document: EP

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: PI1011189

Country of ref document: BR

ENP Entry into the national phase

Ref document number: PI1011189

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20111107