WO2016156842A1 - 3d scene co-ordinate capture & storage - Google Patents

3d scene co-ordinate capture & storage

Info

Publication number
WO2016156842A1
Authority
WO
WIPO (PCT)
Prior art keywords
video content
data
data file
computing device
model
Application number
PCT/GB2016/050895
Other languages
French (fr)
Inventor
Trusit Dave
Original Assignee
Optimed Limited
Application filed by Optimed Limited
Publication of WO2016156842A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23412 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • G06F19/321
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/08 Bandwidth reduction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/004 Annotating, labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/024 Multi-user, collaborative environment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2021 Shape modification


Abstract

A method for editing and rendering video content across one or more user devices, the method comprising the steps of: at a first computing device: defining a first model of a three dimensional object to be rendered in the video content; assigning a plurality of reference points to the first model of the three dimensional object (S106); for a plurality of frames of the video content: determining transformation data for one or more of the reference points (S108), wherein said transformation data comprises information representative of one or more coordinates or representative of a change in one or more coordinates, wherein said coordinates describe a position of the one or more reference points; and recording the transformation data in a data file (S110); subsequently, at the first computing device or at a further computing device, providing a user interface to enable a user to edit the data file and subsequently forwarding the edited data file to a central server; and, at the central server, rendering the video content based on the received edited data file for playback across one or more user devices.

Description

3D Scene Co-ordinate Capture & Storage
Field of invention
The present invention relates to improvements in creating and editing video data. In particular the present invention enables users to edit 3-D video data and disseminate the edited video to multiple users in a low bandwidth manner. The present invention has particular uses in medicine.
Background to the invention
In multiple environments it is known to capture and render videos using 3-D models. In particular, in a medical context, models of various body parts, such as the eye, may be constructed and rendered to enable a user to explore the properties of the model.
It is also known to provide video data from a first user to multiple users. Often the video is streamed from a central server and rendered on the host device. In videos of 3-D models, the amount of data and processing required to render an animation of a 3-D model may be high, placing an onerous burden on the device which renders the movie. Furthermore, the physical size of the file containing the video of the 3-D model may be large, placing further requirements on the system during the uploading and downloading of the video.
It is known in animation environments to define a model of the object to be visualised using a matrix of coordinates which define the object. It is also known to define the animation/movement of the object between frames by defining the transformation of the object, by recording the coordinates of the object or the change in coordinates of the object. When rendering the animation, the coordinates are used by the rendering engine to determine the motion of the object and produce the animation effect. Such techniques are used to reduce the data and bandwidth required to animate an object. The use of such models may be of use in a medical context. For example, a model may be created representing a certain part of the human anatomy exhibiting a condition that has a physical manifestation. This model may be animated, to provide a visual representation of the condition to aid in diagnostic or training purposes amongst medical professionals, or for explaining the condition to a patient.
For example, in ophthalmology environments, a patient may have one of a number of conditions within their eye. Such conditions may manifest themselves physically, for example retinal detachment or cataracts. An animated model of such conditions could be useful to aid clinicians in explaining the condition to patients in the consulting room, and for sharing the explanation as a streamed movie with the patients themselves to allow for better understanding of their condition. Similarly such models may be shared with medical professionals for training or diagnostic purposes. Such methods may also be useful in the process of patient informed consent.
In such contexts it may be useful for an animated model to be edited by at least one user (such as a first medical professional). It is also often desirable to share the edited animation amongst a plurality of end users (for example one or more patients, or possibly another medical professional). However, video editing and dissemination is a resource intensive process.
Summary of invention
In order to address at least some of the issues above, there is provided a method and system as described in the appended claim set.
In an embodiment there is provided a method and system for editing and rendering video content across one or more user devices by, at a first computing device: defining a first model of a three dimensional object to be rendered in the video content; assigning a plurality of reference points to the first model of the three dimensional object; and for a plurality of frames of the video content: determining transformation data for one or more of the reference points, wherein said transformation data comprises information representative of one or more coordinates or representative of a change in one or more coordinates, wherein said coordinates describe a position of the one or more reference points; and recording the transformation data in a data file; subsequently, at the first computing device or at a further computing device, providing a user interface to enable a user to edit the transformation data in the data file so as to change the shape of the model and subsequently forwarding the edited data file to a central server; and, at the central server, rendering the video content based on the received edited data file for playback across one or more user devices.
Accordingly, the present invention allows editing to be performed on a data file in place of a video file, and rendering to be performed centrally at a central server. Advantageously, the editing of such a data file is less complicated and resource intensive than editing an equivalent rendered video file. Additionally, a rendered video file for a given piece of video content is typically larger than the equivalent data file provided by the present invention; thus the invention reduces the bandwidth involved in transmitting the edited video content prior to dissemination to end users. Moreover, by rendering the video content centrally at a central server, rendering processes are not required at a user device performing the editing, further reducing the processing demands on the user device.
The editing may include the modification of the three dimensional model to be rendered. For example, the shape of the model can be changed and elements may be added to/removed from the model by editing the coordinates of one or more of the reference points in the data file. The editing may also include removal of a section of video content (i.e. deletion of data pertaining to a group of one or more video content frames), and annotation of the video content. For example, a user may annotate the video content by associating a text caption, a still image, audio data or video data with the data file, for inclusion when the video content is rendered. In a preferred embodiment, a reference to one or more external files containing the annotation data is added to the data file and associated with one or more video content frames. In other embodiments, the annotation data may be added to the data file itself. Similarly, annotation data may be removed from video content during editing. Beneficially, this allows a user to efficiently add or remove annotation content, using reduced processing resources as compared to those required to introduce or delete annotation content in a rendered video file.
In an embodiment the method is implemented by one or more software components configured to run on the first computing device and on the central server. The one or more software components are preferably included in at least one computer readable medium.
Brief description of drawings
Embodiments of the invention are now described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a flow diagram illustrating a method for constructing a model of a 3-D object for use in video content and recording its transformation over time;
Figure 2 is a flow diagram illustrating a method for editing video content in accordance with an aspect of the invention; and
Figure 3 is a schematic of a system for editing and rendering video content across multiple user devices in accordance with an aspect of the invention.
Detailed description of an embodiment
The present invention provides a system in which video content can be efficiently edited through the use of data files in place of video files, the data files being rendered centrally for dissemination. Advantageously, the use of data files for editing purposes reduces the processing requirements for editing at a user device, as compared with editing a rendered video file, and reduces the bandwidth required in sharing video content between users for editing. Additionally, by rendering the content at a central server, the processing requirements for rendering are not imposed on the device of a user who has edited the content. Accordingly an aspect of the present invention is to provide one or more users with the ability to view a video and subsequently edit it. Furthermore, there is provided a low bandwidth methodology, which reduces the size of downloaded files and reduces the processing requirements at a device, for displaying the edited video on multiple devices.
An example of such a system, and the methods used to implement it, is described below. Figure 1 shows a flow chart of an exemplary method 100 for constructing and tracking a model of a 3-D object to be used as a video content subject. A reference coordinate system is defined at step S102, and the 3-D object for which a model is to be created is identified at step S104. After identifying the object, a number of reference points are assigned to the object in step S106, wherein each reference point defines a particular position on the object and has an initial position, described by coordinates, relative to the coordinate system. Steps S104 and S106 may be performed automatically by appropriate software, or by user input via a user interface provided at an electronic client device. The number of reference points required to accurately describe a given model will depend on the complexity of the 3-D object being modelled.
At step S107, the user may manipulate the model. Such manipulation may include transforming the model (for example translating, rotating, enlarging, magnifying and/or deforming it). Deformation of the model may include changing the shape of the model, adding elements to the model and removing elements from the model. The user may perform such manipulation via a user interface implemented by software in a known manner.
As the user manipulates the model, the location of at least one reference point with respect to the coordinate system may change over time. At step S108, transformation data describing the transformation of the model over time is deduced. In some embodiments, this involves determining a new set of coordinates for the one or more reference points at particular time intervals. In other embodiments, data describing the change in coordinates of the one or more reference points at given time intervals is determined. Thus the transformation data of the one or more reference points between frames (either the position or the change in position of each reference point) can describe the transformation of the model over time. The set time intervals can define frames for use in video content. Thus the method allows the tracking of manipulations of the model over a plurality of frames.
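By way of illustration only, the following Python sketch shows one way steps S106 and S108 might be realised in software. The names ReferencePoint and frame_delta, and the choice of per-frame coordinate deltas, are assumptions of this sketch; the specification does not prescribe any particular data structure.

```python
from dataclasses import dataclass

@dataclass
class ReferencePoint:
    """A particular position on the 3-D object (step S106), with
    coordinates relative to the reference coordinate system (step S102)."""
    point_id: int
    x: float
    y: float
    z: float

def frame_delta(prev, curr):
    """Transformation data for one frame interval (step S108): the change
    in coordinates of each reference point since the previous frame.
    Assumes both lists hold the same points in the same order."""
    return {p.point_id: (c.x - p.x, c.y - p.y, c.z - p.z)
            for p, c in zip(prev, curr)}
```

Recording either the new coordinates themselves or such deltas at each time interval yields the transformation data that is written to the data file at step S110.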
Steps S102-S108 describe one specific method of constructing and tracking transformation between video content frames of a model. Other known methods for creating a virtual model of a 3-D object and tracking it may also be used, wherein transformation data is yielded that describes the transformation of the model.
At step S110, the transformation data for the one or more reference points over a plurality of frames is written to a data file. Preferably the data file is a text-based file, for example using an XML format. Advantageously this allows the data to be easily modified in accordance with user editing, as described in more detail below with respect to figure 2.
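As a concrete but purely hypothetical example of such a text-based file, the sketch below serialises the per-frame coordinates with Python's standard xml.etree.ElementTree, producing elements of the form <frame index="0"><point id="3" x="1.0" y="0.0" z="0.0"/></frame>. The scene/frame/point schema is an assumption of this sketch, not part of the specification.

```python
import xml.etree.ElementTree as ET

def write_data_file(frames, path):
    """Write the transformation data to a text-based XML data file
    (step S110). `frames` maps a frame index to {point_id: (x, y, z)}."""
    scene = ET.Element("scene")
    for index in sorted(frames):
        frame = ET.SubElement(scene, "frame", index=str(index))
        for point_id, (x, y, z) in frames[index].items():
            ET.SubElement(frame, "point", id=str(point_id),
                          x=str(x), y=str(y), z=str(z))
    ET.ElementTree(scene).write(path, encoding="utf-8", xml_declaration=True)
```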
The method of Figure 1 may be used in a medical context, wherein a virtual model may correspond to a body part exhibiting physical indications of a medical condition. The model could be used for the purpose of visualising the condition in video content for use by physicians, such as for training or diagnostic purposes. For example, the model may correspond to a human eye, exhibiting a condition such as retinal detachment. In this case, the model may be transformed over a series of frames for the purpose of creating video content to be viewed by ophthalmologists or their patients. As a specific example, the video content may comprise an animation of the model of the eye, starting with a whole view of the eye, magnifying (i.e. zooming in) the model to show the interior of the eye and magnifying further to show the retina of the eye, and indicating a retinal detachment. Such a video could allow an ophthalmologist to provide useful context when explaining retinal detachment to a patient unfamiliar with the anatomy of the eye.
Figure 2 shows a method 200 for editing video content. The method 200 may be implemented via software at a client device.
At steps S202 and S204, a captured model is manipulated by a user, and the manipulations are recorded as transform data in a data file as described above with respect to Figure 1. These steps may be performed at the client device, or may have been performed at a different device. If performed at a different device, the data file is provided to the client device. Optionally, at step S206 the software converts the transform data recorded in the data file to playback instructions, describing the manipulations of the model over a plurality of frames of video content. The playback instructions are then provided to a playback application, which displays the frames of the video content to the user at step S207. In this instance the video content need not be rendered into a common video file format; rather, the format may be specific to the playback application. Accordingly this playback proceeds without first rendering the video content into a video file. Preferably in this embodiment the playback application includes functionality allowing a user to identify frames for removal in a video file of the specific format for the application. Advantageously this provides a user with a visual indication of the video content before editing, and before rendering the video content into a commonly used video format for dissemination to others.
This embodiment may also be useful in the context of a clinician showing the video content to a patient attending a clinic, wherein the clinician may manipulate a model describing a certain medical condition and play back the content to the patient in their presence. The patient would preferably be presented with the video content by the software, i.e. the animated model of the 3-D object (plus any annotations as described below), and would not see the contents of the data file such as coordinate data. Accordingly the clinician can use the software to quickly create appropriate content for the patient without the need to render a video file. This could be used in many medical contexts; for example a medical professional could efficiently animate a model representing an anatomical part exhibiting a medical condition that manifests itself physically, and play back the animation in the presence of the patient for the purpose of explaining the patient's condition. To give a specific example, an ophthalmologist might diagnose a patient as having suffered a retinal detachment. In order to explain this condition to the patient, the ophthalmologist may animate a model of the human eye exhibiting a simulated retinal detachment, and play back the animation using the playback application, without rendering the content into a video file, to give the patient context for their condition.
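A minimal sketch of the optional conversion at step S206, assuming the hypothetical XML schema used above: the data file is parsed back into per-frame point positions, which a playback application could display directly without any intermediate video file.

```python
import xml.etree.ElementTree as ET

def playback_instructions(path):
    """Convert the transform data in the data file into per-frame
    playback instructions (step S206), yielding (frame_index, points)
    pairs for the playback application to display (step S207)."""
    root = ET.parse(path).getroot()
    for frame in root.iter("frame"):
        points = {int(p.get("id")): (float(p.get("x")),
                                     float(p.get("y")),
                                     float(p.get("z")))
                  for p in frame.iter("point")}
        yield int(frame.get("index")), points
```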
At step S208, the user can choose to edit the video content, without the content needing to be rendered into a video file.
If the user chooses to edit the video content, for example by removing a section of the recording, the method proceeds to step S210; if not, the method proceeds to step S216. In step S210, the user is able to choose a given section of content for removal via the user interface, corresponding to a given time period (and hence a certain set of frames). In step S212, the software identifies the frame representing the start of the section to be removed, and the frame representing the end of the section to be removed. The software then removes the information corresponding to the start and end frames, and all frames in between, from the data file at step S214. The data file is then saved at step S216. Accordingly, only the data file, which is preferably text-based, has been modified - no rendered video file (e.g. an AVI or mp4 file) has been modified. Advantageously, this provides a more efficient method for editing video content, since the processing demands required for editing a data file (for example a text-based XML file) are less than those for editing a video file.
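Under the same assumed schema, the section-removal steps S212-S216 reduce to deleting a range of frame elements and saving the file; no video decoding or re-encoding is involved. A sketch:

```python
import xml.etree.ElementTree as ET

def remove_section(path, start, end):
    """Remove the start frame, the end frame and all frames in between
    (steps S212-S214) from the data file, then save it (step S216)."""
    tree = ET.parse(path)
    root = tree.getroot()
    for frame in list(root.findall("frame")):  # copy: we mutate while iterating
        if start <= int(frame.get("index")) <= end:
            root.remove(frame)
    tree.write(path, encoding="utf-8", xml_declaration=True)
```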
Preferably, several types of editing are provided for, in addition to the deletion of segments of video content. For example, captions comprising text to display as part of the video content can be inserted by the user. Such caption text may be stored as part of the data file itself, and associated with a particular frame or group of frames. Alternatively the caption text may be stored as a separate caption file, and an indication of the caption file may be associated with the desired frames in the data file, wherein the indication prompts the inclusion of the caption text when the video content is rendered by a central server, as described below with respect to Figure 3. Such text captions may be used by a medical professional to provide colleagues or patients with information further describing a medical condition demonstrated in the video content, such as possible treatments or consequences of the condition.
Similarly, a user can insert drawing data into the video content. The software allows a drawing input to be made. This input may be made using a mouse, a touch screen, a graphics tablet, or any other appropriate input device known in the art. Drawing data describing the drawing input may be associated with one or more frames of the video content. In some embodiments this drawing data can be stored in the data file itself. Alternatively, the drawing data may be stored in a separate drawing file, and a reference to the drawing file is associated with the appropriate video content frames and stored in the data file. Advantageously, this allows a user to annotate the video content using a drawing input. For example a medical professional might add handwritten notes or diagrams, or use a drawing input to highlight a certain feature on the model in the video content. This could provide further context regarding a particular condition exhibited by a patient, and would be of benefit in many medical contexts.
Similarly to the above, a user can insert audio data, image data and/or video data into the video content, by adding an indication referencing an audio/image/video file to one or more frames in the data file. Thus the invention allows a user to easily add audio, image, and/or video data to the video content by editing the data file to include a reference to a different pre-existing file - as with the caption and drawing addition described above, such operations are far simpler than editing a video file to include the extra data, and require reduced processing resources.
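The external-file variant of annotation might be recorded as a single reference element in the data file, as in the following sketch; the annotation element and its attributes are again assumptions of this illustration, not part of the specification.

```python
import xml.etree.ElementTree as ET

def add_annotation_reference(path, kind, href, start, end):
    """Associate an external annotation file (caption text, drawing,
    audio, image or video) with a group of frames by reference, prompting
    its inclusion when the central server renders the content."""
    tree = ET.parse(path)
    ET.SubElement(tree.getroot(), "annotation", type=kind, href=href,
                  start=str(start), end=str(end))
    tree.write(path, encoding="utf-8", xml_declaration=True)

# e.g. add_annotation_reference("scene.xml", "caption",
#                               "detachment_note.txt", 40, 80)
```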
Such annotation by text based caption data or drawing data, or by inserting audio, image or video data can be performed during the initial recording of the model, or afterwards, via editing the data file.
In some embodiments, a user is able to annotate video content by adding captions and/or drawings (or inserting audio/image/video data) using the software during playback via the playback application. Accordingly the user is able to annotate the video content in real time. This is useful, for example, when a clinician is showing the video content to a patient attending the clinic via the playback application. The clinician is able to draw the patient's attention to a particular feature of the video content by drawing an indication (such as an arrow, or a ring around the feature of interest) which is then overlaid on the video playback for the patient to see.
Preferably the method 200 also allows for the model itself to be edited. A user is able to deform the model (for example change the shape of the model, add elements to the model or remove elements from the model) by changing the reference points and/or the transformation data associated with the part of the model to be changed in the data file. For example a user may modify all the transformation data in the data file that describes the coordinates of a particular reference point so as to offset the particular reference point, that is to say, move the position of the reference point and deform the model. Alternatively, additional transformation data describing further reference points can be added to the data file, thereby adding a feature to the model. Similarly, transformation data describing one or more reference points may be removed from the data file, thereby removing an associated feature or features from the video content. Again, this editing can be performed using software without having to render the video content. Preferably the software enables the user to edit the transformation data in the data file associated with each of the reference points individually, thereby allowing the user to deform the model as described above.
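For instance, offsetting a single reference point across every frame, one of the deformations described above, could be as simple as the following sketch (same hypothetical schema as before):

```python
import xml.etree.ElementTree as ET

def offset_reference_point(path, point_id, dx, dy, dz):
    """Deform the model by moving one reference point in every frame,
    i.e. by editing its transformation data directly in the data file,
    without rendering any video content."""
    tree = ET.parse(path)
    for p in tree.getroot().iter("point"):
        if int(p.get("id")) == point_id:
            p.set("x", str(float(p.get("x")) + dx))
            p.set("y", str(float(p.get("y")) + dy))
            p.set("z", str(float(p.get("z")) + dz))
    tree.write(path, encoding="utf-8", xml_declaration=True)
```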
Deformation of the model is a redefining of the model, distorting the model or otherwise changing its shape. Deformation can include for example, warping, stretching, adding elements and removing elements.
In addition, the edited data file can be provided to other client devices for rendering and editing, as opposed to a rendered video file. Such data files may be of smaller size than equivalent video files, and thus bandwidth requirements for transmitting data are reduced.
For example, a medical professional may wish to explain a certain condition to a colleague or patient. They may capture a model of a given anatomical item that exhibits the condition and create a data file using the method 100 of figure 1, or receive the data file from another party. The medical professional may then review the video content using the playback application, and decide that part of the content is not required and that further contextual information is required for their colleague or patient. They could then remove the unwanted frames, and annotate the video content by adding caption text or a drawing to further explain or describe the condition at a suitable point in the video content. Alternatively they could modify the model being displayed in the video content: for example the medical professional may have a data file that describes video content showing a healthy part of the human anatomy, and they could then modify the model (for example by changing its shape or adding elements to it) in order to represent a particular medical condition. The medical professional could then play back the video content, without rendering, using the playback application in the presence of the patient (preferably only showing the animated model and accompanying annotations, and not the coordinate data describing the animation per se), or have the video content rendered into a common video file format for dissemination to the patient remotely.
In an example, an ophthalmologist may have video content showing a model of an eye exhibiting a condition such as retinal detachment. The ophthalmologist may have a patient suffering from retinal detachment, and wish to use the video content to explain the condition to the patient. The ophthalmologist could then add caption text to be displayed for a group of frames corresponding to a close-up of the retinal detachment, to indicate the condition to the patient, and perhaps also draw over the animation to provide further contextual information. Additionally the ophthalmologist could include an audio recording describing the condition, its implications for the patient and possible treatments as applicable. Furthermore the ophthalmologist could edit the model being displayed in the video content by modifying the data file so as to remove parts of the model considered to be extraneous detail, in order to better illustrate the retinal detachment. The edited video content could then be displayed to the patient by the playback application without rendering, if both the patient and a user device implementing the software and playback application were present at the ophthalmologist's clinic. Alternatively the video content is rendered as a video file using a common file format for sharing with the patient via a video streaming service, as described below with respect to figure 3.
Figure 3 shows a schematic of a system 300 for editing and rendering video content across multiple devices. One or more client devices 302 304 306 are provided, which are in communication with a central server 310 via one or more network connections 307 308 309. Each client device is a computing device such as a desktop computer, laptop computer, tablet computer or smartphone.
The central server 310 is connected to a media server 312, which is connected to the internet 314. In some embodiments the client devices 302 304 306 are connected to the server 310 via a single network.
At least one client device 302 304 306 in the system 300 is configured to perform the methods 100 200 of figures 1 and 2. Such a client device 302 304 306 is thus able to perform the capture of a model of a 3-D object, manipulation of the object and recording the manipulation as transform data in a data file. This client device 302 304 306 would also be able to convert a data file to playback instructions for playback by a playback application, and edit the video content via modification of the data file. Client devices 302 304 306 can comprise a mouse, keyboard, touch screen display or graphics tablet, or any other user input device.
In some embodiments, the at least one client device 302 sends the data file to other client devices 304 306, which may perform the playback conversion and editing described in the method 200 of figure 2. Advantageously this allows multiple users operating different client devices to efficiently edit video content by editing the data file as opposed to editing a rendered video file.
The client device 302 304 306 transmits the data file (as edited, if a user has chosen to edit the video content) to the central server 310 via a network 307. Preferably the central server 310 saves the transform data in the data file, thus updating the data as multiple users edit the video content. In some embodiments, the central server 310 also makes the saved data file available to download to the multiple client devices 302 304 306 for further editing.
If the user has added a reference or indication of one or more external files for the purposes of inclusion of caption text, still images, video images, or audio data in the video content (as described above with respect to Figure 2), then the client devices 302 304 306 also transmit the external files to the server for storage. Preferably the server also permits the download of these external files to other client devices 302 304 306, advantageously allowing them to be reviewed by other users wishing to edit the video content. In some embodiments, a user may also edit an external file that is to be included as part of the video content at a client device 302 304 306.
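A brief sketch of this upload step follows, assuming purely for illustration that the central server exposes an HTTP API and that the client uses the Python requests library; the patent specifies neither the transport nor any endpoint names.

```python
import requests

# Hypothetical endpoint: the patent does not define a server API.
SERVER = "https://central-server.example/api/scenes"

def upload_scene(scene_path: str, external_paths: list) -> None:
    # The edited data file itself is small compared with rendered video,
    # so it can be exchanged freely between client devices and the server.
    with open(scene_path, "rb") as fh:
        requests.post(f"{SERVER}/datafile", files={"datafile": fh}).raise_for_status()
    # External files referenced for captions, images, audio or video are
    # stored alongside it, so other users can review them when editing.
    for path in external_paths:
        with open(path, "rb") as fh:
            requests.post(f"{SERVER}/attachments", files={"file": fh}).raise_for_status()

upload_scene("retina.scene.json", ["narration.wav", "fundus_photo.png"])
```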
The central server 310 interprets the data file, and renders the video content, including the content of any external text, audio, image or video data from external files if applicable, into a video file format (for example an AVI or mp4 file). Beneficially this reduces the processing requirements at client devices 302 304 306, since the data file does not have to be converted to a video file format via rendering at the device end of the system 300. By converting the video content into a standard video file format, the video content can be viewed using common software that many users (including medical practitioners and their patients/colleagues) have access to.
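The rendering step might look something like the sketch below, in which the server rasterises each frame described by the data file and pipes the raw pixels to ffmpeg to encode an mp4. The drawing routine is stubbed out because the patent does not name a rendering engine, and the choice of ffmpeg here is an implementation assumption; the invocation itself uses ffmpeg's standard rawvideo input.

```python
import json
import subprocess

WIDTH, HEIGHT, FPS = 1280, 720, 25

def rasterise(frame: dict) -> bytes:
    """Stub: apply the frame's transforms to the model, draw any annotations,
    and return WIDTH * HEIGHT * 3 bytes of RGB pixels."""
    return bytes(WIDTH * HEIGHT * 3)  # placeholder: an all-black frame

with open("retina.scene.json") as fh:
    scene = json.load(fh)

# Feed raw RGB frames to ffmpeg on stdin and let it encode H.264 into an mp4.
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "rgb24",
     "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS), "-i", "-",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "out.mp4"],
    stdin=subprocess.PIPE,
)
for frame in scene["frames"]:
    ffmpeg.stdin.write(rasterise(frame))
ffmpeg.stdin.close()
ffmpeg.wait()
```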
The rendered video file is then provided to a media server 312. The media server provides the rendered video file for dissemination to other users over the internet 314. Such dissemination may be via streaming, download or progressive download. Accordingly multiple users are provided with up-to-date, edited video content in a format that can be accessed by many end users, without requiring a specialist playback application. In other embodiments the central server performs the dissemination tasks itself, without the need for a further media server.
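As a sketch of the dissemination step, the media server could serve the rendered file over HTTP with Range-request support, which is what enables progressive download (playback can begin before the transfer completes). The use of Flask below is an assumption for illustration; its send_file helper with conditional=True honours Range headers.

```python
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/videos/<name>")
def serve_video(name: str):
    # In a real deployment the name would be validated against the store of
    # rendered files (omitted here for brevity, as is access control).
    return send_file(f"/srv/rendered/{name}", mimetype="video/mp4",
                     conditional=True)  # Range support -> progressive download

if __name__ == "__main__":
    app.run(port=8080)
```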
The methods 100 200 of figures 1 and 2, in their entirety or in part, may be implemented by one or more software components. Preferably the software components are included in at least one computer readable medium which, when read by the components of the system 300, causes the system 300 to perform the methods 100 200 of figures 1 and 2 in their entirety or in part. Accordingly the present invention provides an efficient system for the editing, rendering and dissemination of video content, by providing a data file that can be edited and shared between users in place of a video file, rendering all video content centrally to a video file format once edited, and centrally disseminating the rendered video file to end users.

Claims

1. A method for editing and rendering video content across one or more user devices, the method comprising the steps of:
at a first computing device:
defining a first model of a three dimensional object to be rendered in the video content;
assigning a plurality of reference points to the first model of the three dimensional object;
for a plurality of frames of the video content:
determining transformation data for one or more of the reference points, wherein said transformation data comprises information representative of one or more coordinates or representative of a change in one or more coordinates, wherein said coordinates describe a position of the one or more reference points; and
recording the transformation data in a data file;
subsequently, at the first computing device or at a further computing device:
providing a user interface to enable a user to edit the transformation data in the data file so as to change the shape of the model;
subsequently forwarding the edited data file to a central server; and
at the central server:
rendering the video content based on the received edited data file for playback across one or more user devices.
2. The method of claim 1 wherein the editing comprises annotation of the video content.
3. The method of claim 2 wherein annotation of the video content comprises associating one or more of text data, drawing input data, audio data, image data or video data with the data file.
4. The method of any preceding claim wherein the editing comprises deleting a first set of transform data corresponding to a first video content frame, and changing one or more time values corresponding to one or more subsequent video content frames.
5. The method of any preceding claim, further comprising storing the received edited data file at the central server; and retrieving the edited data file from the central server by the first or a further computing device.
6. The method of any preceding claim further comprising transmitting the rendered video content to one or more user devices, wherein transmitting includes one or more of streaming, downloading, and progressive downloading.
7. One or more computer readable media comprising instructions to cause one or more electronic devices, and a central server, to perform the method of any of claims 1-6.
8. A system for editing and rendering video content across one or more user devices, comprising:
one or more computing devices; and
a central server;
wherein a first computing device is configured to:
define a first model of a three dimensional object to be rendered in the video content;
assign a plurality of reference points to the first model of the three dimensional object;
for a plurality of frames of the video content:
determine transformation data for one or more of the reference points, wherein said transformation data comprises information representative of one or more coordinates or representative of a change in one or more coordinates, wherein said coordinates describe a position of the one or more reference points; and
record the transformation data in a data file; wherein the first computing device or a further computing device is configured to subsequently provide a user interface to enable a user to edit the transformation data in the data file so as to change the shape of the model, and to subsequently forward the edited data file to the central server; and
wherein the central server is configured to render the video content based on the received edited data file for playback across one or more user devices.
9. The system of claim 8 wherein the first computing device or a further computing device is configured to edit the data file by annotating the video content.
10. The system of claim 9 wherein the first computing device or a further computing device is configured to annotate the video content by associating one or more of text data, drawing input data, audio data, image data or video data with the data file.
11. The system of any of claims 8-10 wherein the first computing device or a further computing device is configured to edit the data file by deleting a first set of transform data corresponding to a first video content frame, and changing one or more time values corresponding to one or more subsequent video content frames.
12. The system of any of claims 8-11, wherein the central server is configured to store the received edited data file; and wherein the one or more computing devices are configured to retrieve the edited data file from the central server.
13. The system of any of claims 8-12, wherein the central server is further configured to transmit the rendered video content to one or more user devices, via one or more of streaming, downloading, and progressive downloading.
14. The system of any of claims 8-12, further comprising a media server, wherein the central server is further configured to forward the rendered video content to the media server;
wherein the media server is configured to transmit the rendered video content to one or more user devices, via one or more of streaming, downloading, and progressive downloading.
PCT/GB2016/050895 2015-03-31 2016-03-30 3d scene co-ordinate capture & storage WO2016156842A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1505506.4 2015-03-31
GBGB1505506.4A GB201505506D0 (en) 2015-03-31 2015-03-31 3D scene co-ordinate capture & storage

Publications (1)

Publication Number Publication Date
WO2016156842A1 true WO2016156842A1 (en) 2016-10-06

Family ID=53178407

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2016/050895 WO2016156842A1 (en) 2015-03-31 2016-03-30 3d scene co-ordinate capture & storage

Country Status (2)

Country Link
GB (2) GB201505506D0 (en)
WO (1) WO2016156842A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11820018B1 (en) * 2020-07-31 2023-11-21 GrayMatter Robotics Inc. Method for autonomously scanning, processing, and creating a digital twin of a workpiece

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030103089A1 (en) * 2001-09-07 2003-06-05 Karthik Ramani Systems and methods for collaborative shape design
US20140229865A1 (en) * 2013-02-14 2014-08-14 TeamUp Technologies, Inc. Collaborative, multi-user system for viewing, rendering, and editing 3d assets

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7039723B2 (en) * 2001-08-31 2006-05-02 Hinnovation, Inc. On-line image processing and communication system
WO2008040123A1 (en) * 2006-10-02 2008-04-10 Aftercad Software Inc. Method and system for delivering and interactively displaying three-dimensional graphics


Also Published As

Publication number Publication date
GB201505506D0 (en) 2015-05-13
GB2538612A (en) 2016-11-23


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 16715055
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 16715055
Country of ref document: EP
Kind code of ref document: A1