US20100223128A1 - Software-based Method for Assisted Video Creation - Google Patents
- Publication number
- US20100223128A1 (application US 12/716,231)
- Authority
- US
- United States
- Prior art keywords
- template
- images
- image
- bin
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0253—During e-commerce, i.e. online transactions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/18—Legal services; Handling legal documents
- G06Q50/184—Intellectual property management
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
Definitions
- the invention allows for a first set of persons to create and modify templates that contain visual effects, synchronization information, and possibly director assistance. These templates are utilized by a second set of persons to generate personalized motion photo videos from photographs, video segments, personal narratives or animation.
- Motion photo videos are novel because they very quickly and inexpensively allow (1) persons to create and modify templates which are synchronized to music and may be easily populated with content by others, and (2) persons to select a song and an associated pre-made template and create a high-quality synchronized custom video, using hand-selected visual material to populate the template, with included direction and without any required editing. Populated material in a template can be further modified by additional persons to instantly generate new, modified motion photo videos.
- users with commonly available electronics such as PCs, cameras, and camera-phones will be able to instantly or in real-time create videos that capture events, such as birthdays, vacations, and sports seasons, or moods, such as happiness of being with a friend or the feeling of missing somebody.
- Users can select, and possibly purchase, a popular song and have a ready-made template into which they can overlay their images for an instant finished product, creating their own customized multi-media video. Videos can also be produced without music though we describe the rest of the invention hereafter utilizing music for clarity of the explanation.
- a bin can be considered an object in software that corresponds to a fixed slot of time and contains objects including an image, transition effects, text and/or annotated speech.
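As an illustration, such a bin could be modeled as a small data object. This is a minimal sketch, not the patent's implementation; the field names and types are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Bin:
    """A fixed slot of time containing an image and associated objects."""
    start: float                 # bin start, in seconds on the song timeline
    end: float                   # bin end, in seconds
    image: Optional[str] = None  # the occupying image, if any (path or URI)
    transitions: List[str] = field(default_factory=list)  # e.g. ["fade_in", "zoom"]
    text: Optional[str] = None   # annotation text or composer instructions

    @property
    def duration(self) -> float:
        return self.end - self.start

# A bin may exist without any specific image; the image can be set or replaced later.
title_bin = Bin(start=0.0, end=3.5, text="Title slide")
```

Because the image is just one replaceable field of the bin, swapping a user's photo into a template bin leaves the time slot and transition data intact.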
- There are multiple components in the creation of an MPV: (1) template creation, (2) use of a template to assist in the composition of photos in real-time for instant MPV creation, using existing camera technology or modified camera hardware or software for the capture of images to facilitate better MPVs, and (3) use of a template to compose an MPV from pre-existing images.
- This invention focuses on template creation (1) and on the use of a template to compose an MPV from pre-existing images (3).
- FIG. D 1 shows the template creation process by a template composer and subsequent personalization by an end user.
- FIG. D 2 shows an illustration of the time mesh scheduling of template events.
- FIG. D 3 shows a taxonomy of template objects.
- FIG. D 4 shows a timing mesh that synchronizes music, images, and transitions.
- FIG. D 5 shows an inter-bin transition GUI-based editor.
- FIG. D 6 shows an intra-bin transition GUI-based editor.
- FIG. D 7 shows an intra-transition viewing and editing application with multiple overlapping bins.
- FIG. D 8 shows a separated track view mode for bins.
- FIG. D 9 shows an image selection tool.
- FIG. D 10 shows a method to create a focal point, or a trajectory for the focal point, during manipulation of an image during a transition effect.
- FIG. D 11 shows software utilizing pre-existing templates and images to generate MPVs.
- FIG. D 13 shows a flow for template-enabled image selection and composition.
- FIG. D 16 shows an image screen selector.
- FIG. D 19 shows a camera-based title creation and share screen.
- This invention describes the creation of a template and the subsequent creation of an MPV using a template to assist in the composition and selection of images.
- This invention focuses on the use of templates with pre-existing images or where images are provided to software utilizing a template to create an MPV.
- Section 2.1 describes the template creation process and the template object that results.
- Section 2.2 describes the use of the pre-made template by a user to create an MPV.
- Template composers use tools that offer features to define image starts, stops, and transitions, as well as editing features relating to focus, movement, and other operations to be performed on images. Template composers further edit instruction sets for end users by providing instructions about which images should be placed by users in a particular bin, offering wording and/or sample symbolic images. Users can then modify these templates in limited ways by substituting their own images, either acquiring images in real-time based on instructions, or by replacing sample images with already existing images selected by the user.
- This section begins with a summary of the template creation process in FIG. D 1 and is then followed by detailed descriptions of several key components in the process.
- FIG. D 1 summarizes a typical process undertaken by a template composer in the creation of a template.
- the template composer first chooses a song and then, while listening to the song, cuts the timeline of the song into bins. Each bin is defined by where a new image will begin to appear and where an image will disappear. These bins are tied directly to the timeline of the song and can be edited graphically, with or without an image occupying the bin, for precision as defined below in an object called a time mesh.
- An image may be defined as a single digital image or photograph, a video object, or an animated video object. The most common use will likely be a single digital photograph per bin, and that representation is used in the following examples and description. However, the invention envisions other forms of image-related media being used, and versions of the software will enable additional forms of images to be used.
- the time mesh maintains a schedule of template object events.
- the template objects include bin objects in the form of images or transition effects such as fades, pans and zooms.
- the template objects also include audio tracks such as music in digital form where the events may include the song beginning and end.
- An embodiment of a time mesh could be a table where one column contains the event identification tag and another column that represents the time at which the event is executed.
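A table embodiment of this kind might be sketched as follows; the event tags (e.g. "B1S" for bin 1 start) and the list-of-rows layout are illustrative assumptions:

```python
import bisect

class TimeMesh:
    """A schedule of template object events, kept as (time, event_id) rows."""
    def __init__(self):
        self.rows = []  # sorted by execution time

    def add(self, event_id, time):
        # insort keeps the table ordered by time as events are added
        bisect.insort(self.rows, (time, event_id))

    def events_between(self, t0, t1):
        """Event ids scheduled in the half-open interval [t0, t1)."""
        return [eid for t, eid in self.rows if t0 <= t < t1]

mesh = TimeMesh()
mesh.add("B1S", 0.0)   # bin 1 start
mesh.add("B1E", 4.2)   # bin 1 end
mesh.add("B2S", 3.8)   # bin 2 start (overlaps bin 1 for a cross-fade)
```

Playback software could walk this table in time order, firing each event as the song timeline reaches its execution time.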
- Bins are not automatically determined by the computer, but the template composer may utilize computational tools to assist in the creation of bin boundaries, such as analyses of bass patterns, drum beats, voice tracks, or volume levels as described in FIG. D 2 ( 107 ). These characterizations of an audio signal would provide visual waveforms in organized rows to the template composer, allowing visual understanding of patterns in the music. Bins are more likely to follow the phrasing of a song rather than actual beats, in order to make the transitions less routine and contain more feeling. A key element here is that a template composer ultimately determines where the bin boundaries occur, and can easily modify them in the time mesh defined in FIG. D 2 . This first step after selection of a song is referred to as “Cutting” and is defined in FIG. D 1 .
- a template composer may begin the video at a different start time than the song, for example to avoid the crowd cheering at the beginning or to avoid a few seconds of blank space that exist on one digital copy but not on another. The song selected would have an accompanying fingerprint that identifies when the song starts; this fingerprint could be compared to other copies of the song to identify precisely where to begin, as well as to ensure the copies are played at the same speed and arrive at a common endpoint even if one plays longer than another.
- Inter-bin transitions, also referred to as blending, occur between bins and define how one bin fades in as another fades out. Blending can be any type of entry of one or more images and/or exit of one or more images.
- the template composer then creates intra-bin transitions that describe whether a photo zooms in or out, changes color, or undergoes another visual augmentation such as sharpen or fade to grey.
- the intra-bin transition data is fine tuned to the specific bin, such as placing a fast moving zoom or a fade from color to black and white to match the feeling of the song at that point.
- the intra-bin transition data is later fine-tuned to the particular image that is placed into a bin so that it captures the full artistic intent of the template composer, such as targeting a zoom on the eyes of a person in an image. Lyrics are then synchronized to the bins if not already included with the song, and the template composer inserts images manually or automatically based on lyrics or image search instructions inserted by the template composer. These images symbolize which types or examples of images an end user could insert to follow the storyline of the video, as appropriate for the specific bin.
- the template composer can insert additional information such as prompts, cues, or instructions about what images to insert into the bin.
- Some images inserted by the template composer can have cut-out sections where they serve as backgrounds for parts of images provided by an end user, such as a snowy background with a circle overlay where a person's head appears.
- the background template image can be inverted so that there are parts provided by the template composer, such as text or other visuals, that overlay over an end user's images.
- These steps can be completed in any order after the cut occurs, or they can occur simultaneously through use of tools that can capture movement of an individual and be decoded to cause the actions of cut, blend, and intra-bin transitions to occur.
- the template can then be packaged up and made available to a user who can swap images and perform very limited fine tuning to personalize the video to their liking.
- FIG. D 1 describes the actions of a user who has received the template on their PC or other electronic device. They are instructed along the series of images in the template about which images to capture or insert at a given bin. These images might originate in real-time from a camera, be supplied by the template composer along with the template, come from a user's existing image library, or originate from the Internet based on keyword information supplied by the template composer or lyrical data. Sliding these images into the template produces a ready-made video. After the user selects the images for each bin, the user can optionally add basic annotations to their liking or as described by the template composer. Finally, the user can upload the video to the Internet where it can be accessed later from other devices. It can be shared in this method by selecting friends with which to share the video or an Internet site on which it may be posted. The video in its entirety consists of the song, images (photos, video, or other imagery), and transition information, all synchronized to the time mesh.
- FIG. D 2 describes an embodiment of the time mesh and how it may be used to schedule the execution of template object events.
- the template objects may be organized or visualized along a time line in terms of the object type, such as image tracks 101 , transition tracks 103 , time mesh 105 , and audio tracks 107 .
- the template parent object has three main child objects: bins 150 , 153 , time mesh 105 , and music/audio objects 132 , 137 .
- the time mesh object 105 may have two child objects, the event identification 128 and event execution time 130 objects. These two objects contain the schedule of events for all template events and may be stored in the form of a table.
- Another embodiment may use a time mesh with a single event id that references execution time and instructions.
- the time mesh may list multiple events with the same execution time; in reality the template and MPV software will order the instructions for the microprocessor.
- two events executed within milliseconds of each other will likely appear synchronized to the viewer, and computer processing may combine overlapping instructions where appropriate before providing visual or audio output, so that the actual output is consistent with all the instruction sets.
- the events could also be staggered in time, for example in the milliseconds, for other embodiments.
- the bin parent object 150 includes multiple child objects, including an image 120 , inter-bin transitions 124 such as fades, and intra-bin transitions 126 such as pans and zooms.
- a given bin 150 is defined by start 151 and end 152 events.
- bin 2 overlaps with bin 1 with a fade in 184 as bin 1 fades out 180 .
- the other bins 3 - 7 are shown here with hard transitions and no fades so that other features can be more easily identified.
- the time mesh will contain an event id, for example B 1 S 148 and an event time 149 corresponding to the beginning of bin 1 .
- the time mesh will track the beginning and end of all bins, shown in the figure as bin 1 start B 1 S and end B 1 E , bin 2 start B 2 S and end B 2 E , and so on through bin 7 start B 7 S and end B 7 E .
- Bins may overlap as shown in bin 5 160 and bin 6 164 where images may be superimposed, e.g. to form a sequence. It is not shown in this figure, but one embodiment might have audio objects that belong to a given parent bin object.
- One example is a birthday-related template where the last bin has a placeholder for the user to record a personal birthday greeting. Since the audio object belongs to a given bin, it can only be played during the fixed time slot defined for that bin.
- the bins will normally be sequenced with the music track but there may be cases where a bin contains a title slide 150 or a credits slide 170 that correspond to silence 137 .
- the silence may be stored as entries in the time mesh or as shown here, the absence of a music track is treated as silence without an explicit time mesh entry.
- the bin can be considered as a container of objects including a specific image and data such as transition effects that generally applies to an image.
- the bin exists without any specific image and image data may be replaced or modified at any time.
- the inter-bin transition objects 124 control the effects associated with the transition of one image to another such as where one image fades out 184 or fades in 180 .
- the fade transition events are recorded in the time mesh with the event id 181 and event execution time 181 .
- the inter-bin transitions will normally overlap from one bin to another.
- the intra-bin transition objects control how a given bin manipulates an image in terms of composition, such as pans, zooms and color manipulations.
- One example is the use of pan to move a viewing window from the left of the image to the right, where the time mesh contains the pan event id 191 and time 192 associated with the beginning of the pan 190 , along with information on how the pan is executed in terms of speed and the perimeter of the bounding box being viewed.
- Another example has bins, with or without images, that automatically change color throughout the duration of the bin, beginning with one part of the color spectrum and gradually changing to another in order to display the intra-bin effect of zoom, pan, or rotate.
- This color change could be part of the finished video or serve as a tool for creation as it captures the energy of the transition in a visually changing form.
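Such a gradual color change could be sketched as a linear interpolation between two endpoint colors over the bin's duration; the RGB endpoints here are assumed purely for illustration:

```python
def color_at(t, start_rgb, end_rgb):
    """Linearly interpolate a display color for progress t in [0, 1] through the bin."""
    return tuple(round(s + (e - s) * t) for s, e in zip(start_rgb, end_rgb))

# Sweep from a warm orange toward a cool blue across the bin's duration.
warm, cool = (255, 80, 0), (0, 80, 255)
quarter = color_at(0.25, warm, cool)  # color one quarter of the way through
```

During playback, `t` would be derived from the bin's start and end events in the time mesh, so the sweep stays locked to the song timeline.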
- the bins may include objects that can and cannot be altered by additional contributors or template users.
- An example is a non-modifiable bin at the beginning that adds a corporate branding image or icon with either silence or a short audio segment.
- the music object 132 exists in digital form as a music file or a digital signature.
- a digital signature is a digital transformation, such as a wavelet transform, that identifies unique characteristics of a song related to synchronization with bin objects.
- the digital signature may be used to synchronize a downloaded version of the song to the time mesh by using pattern matching between the template's stored signature and a transformation of the downloaded song.
- the matching may be as simple as locating the beginning of the downloaded song and meshing it to the song begin event 133 .
- the matching may include convolution of the two signals to find the best match.
- the matching may include stretching and scaling the two signatures to address situations where the template creation may be done with a different encoding of the song than the song that may be downloaded at a later date.
- the matching may also include cutting portions of a song, such as the beginning or end, to make the song used by the user consistent with the song used by the template composer, or merging bins or cutting bins out of the MPV to be consistent with the song provided.
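The simplest case above, locating the beginning of a downloaded song against a stored signature, can be sketched as a brute-force cross-correlation; the signal values are toy data, not a real wavelet signature:

```python
def best_offset(signature, signal):
    """Return the sample offset in `signal` where `signature` correlates best."""
    best, best_score = 0, float("-inf")
    for off in range(len(signal) - len(signature) + 1):
        # dot product of the signature with the aligned slice of the signal
        score = sum(a * b for a, b in zip(signature, signal[off:off + len(signature)]))
        if score > best_score:
            best, best_score = off, score
    return best

# A downloaded copy with three samples of leading silence before the song begins.
sig = [1.0, -1.0, 1.0, -1.0]
downloaded = [0.0, 0.0, 0.0] + sig + [0.0]
```

The returned offset would then be meshed to the song-begin event so the downloaded copy lines up with the template's time mesh. A production system would use an efficient transform-domain correlation and would also handle stretching and scaling, as the description notes.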
- a preferred embodiment uses this object structure to allow for an extensible and flexible architecture.
- This architecture will allow modifications to be made and existing templates to be easily updated without breaking the existing template. For example, if a creator updates their images to use a new inter-bin transition effect, the changes can be rolled out across the current template and all modifications thereof.
- It also allows for reuse of existing template objects; for example, a company may create several templates based on an advertising campaign, with a template for specific products within a given line.
- elements of the template, for example opening and closing branding, campaign messages, and transition effects, can remain consistent while the images and songs change. It is also preferred that permissions on each object be maintained so that the template creator can control which objects can and cannot be modified by subsequent template creation contributors or by a template user.
- a taxonomy of template objects is provided as an example in FIG. D 3 .
- the parent template object has three child objects: bin, time mesh, and song.
- Each of these objects has child objects.
- each bin object may have an id, image, transition effects and text associated with it.
- the inter-bin and intra-bin objects have transition effects as described as part of the discussion associated with FIG. D 2 .
- embodiments may contain instructions which are references to libraries or instruction sets.
- a template object such as an inter-bin transition may contain a reference call to a particular type of fade. That fade operation may be pre-defined in a library on a particular computing device or camera, or may be able to reference a library or set of instructions online which provide functionality or further reference to where functionality may be attained.
- the template utilizes a reference call made by the fade to the instructions defined in the referenced software or hardware.
- This structure allows templates to take advantage of pre-existing routines on specific devices. Further, this structure allows updates relating to templates or specific hardware to be made online and referenced by existing templates.
- a template may contain a reference to another place which contains a reference to yet another place, and this process may contain any number of further references until the actual machine instructions to perform the action are retrieved.
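Resolving such a chain of references until concrete machine instructions are found might look like the following sketch; the library layout and key names are assumptions:

```python
# A library mapping reference names either to another reference ("ref")
# or to a concrete routine ("instructions"). Entries are illustrative.
LOCAL_LIBRARY = {
    "fade/crossfade_v2": {"ref": "fade/crossfade_v1"},       # indirect reference
    "fade/crossfade_v1": {"instructions": "blend(a, b, t)"}  # concrete routine
}

def resolve(ref, library, max_hops=10):
    """Follow reference hops until actual instructions are retrieved."""
    for _ in range(max_hops):
        entry = library[ref]
        if "instructions" in entry:
            return entry["instructions"]
        ref = entry["ref"]  # hop to the next referenced location
    raise RuntimeError("reference chain too deep")
```

In practice a hop could also point at an online library rather than a local one, which is what lets updates made online be picked up by existing templates; the `max_hops` bound guards against circular references.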
- Another embodiment maintains a list of contributors where a given template parent object may have additional template children associated with a contributor's modification.
- the template database may maintain an ‘attribution’ list where each contributor is recognized and automatically added to previous contributors in that inheritance chain.
- the list may include links to Internet-accessible sites that contain further updates, such as for insertion of new material, or additional referencing, such as modifications that occurred after the creation of the MPV being viewed.
- Another embodiment allows for the attribution list to share in any commercial proceeds or licensing event.
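Maintaining the attribution inheritance chain could be as simple as appending each contributor when a template is derived; the contributor names here are placeholders:

```python
def derive_template(parent_attribution, contributor):
    """Return the attribution list for a template derived by `contributor`.

    Each modification automatically appends the new contributor to the
    inherited chain, so earlier contributors are always recognized.
    """
    return parent_attribution + [contributor]

chain = derive_template(
    derive_template(["original_composer"], "remixer_a"),
    "remixer_b",
)
```

A commercial-proceeds split, as the embodiment above suggests, could then be computed directly over this ordered list.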
- FIG. D 4 shows elements of the timing mesh, the resulting product of a template composer synchronizing music with images and transitions. It locks items independently to specific points in the song determined by the template composer; subsequent modifications of parts generally occur independently of each other so that they stay synchronized to the audio.
- 260 is a timeline of the song
- 270 is a waveform characterizing the audio elements of the song.
- 250 is a slider to show which part of the song we can see in the box (the grey area indicates the part we see, the white is the entire video/song length).
- 200 indicates the lyrics that are synched to 220 , the filmstrip of photos in order of when they appear.
- 230 is the previous image, 240 the transition to the next, and 250 the current image being shown.
- the dashed box around 240 and 250 is the element that is editable as described below. It can also extend to cover the preceding and following transition, or can encompass an image, an intermediate transition, and the following image, such that the encompassed area is editable as described below.
- the checkboxes allow certain elements to be displayed, so that the composer can view or hide whichever of the aforementioned items is desired. They can be dragged up and down to reorder the elements described. 296 allows the filmstrip timeline to be increased or decreased in size, showing correspondingly less or more of the series and allowing easier editing. Template composers can drag sliders to navigate or use buttons as shown. Additionally, they may drag the sides of the pictures or transitions to change where they occur in the timeline.
- The most important aspect of the timing mesh is that all elements here are synched to the timeline and not to each other, such that dragging an image left will change that image, but not the audio or any previous or subsequent images. They are instead locked into the timing mesh independently.
- the structure of the timing mesh also allows segments of an MPV to be cut and for the contained elements to be reindexed to a time mesh that starts at that segment time, as might occur when combining portions of two MPVs.
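Cutting a segment out of an MPV and reindexing its contained events to a time mesh that starts at the segment time could be sketched as follows (the row layout is an assumption consistent with the table embodiment described earlier):

```python
def cut_segment(rows, t0, t1):
    """Keep events scheduled in [t0, t1) and rebase their times to t0.

    rows: list of (event_id, execution_time) pairs from a time mesh.
    Returns a new mesh whose timeline starts at zero, as needed when
    combining portions of two MPVs.
    """
    return [(eid, t - t0) for eid, t in rows if t0 <= t < t1]

mesh_rows = [("B1S", 0.0), ("B1E", 3.9), ("B2S", 4.0), ("B2E", 9.5)]
segment = cut_segment(mesh_rows, 4.0, 10.0)
```

Because every event carries its own absolute time, rebasing is a pure per-row operation; no event depends on any other event's position.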
- Inter-bin transitions describe visual effects associated with the switch from one image, video image sequence or animation to another one image, video image sequence or animation.
- An inter-bin transition may describe the type and rate of fade from one image to another.
- the inter-bin transition may be created or described through a graphical user interface (GUI) or could be described in an existing or new programming language.
- An example of how a GUI might facilitate intra-bin transitions is shown in FIG. D 6 .
- the intra-bin editor 600 allows for a suggestion box that offers hints, with an option for more explanation, and an image suggestions box. These can be hidden by clicking on the x in the box. The user can also pull up additional hint information either by clicking on the box or by placing the mouse over it.
- the main viewing window at 610 shows the current image when paused and the changes in the image (still or video) when play is pressed.
- the main viewing window may be in the form of a media player that allows the user to play, stop, pause, reverse or forward their way through a transition sequence.
- the initial framing of the current image (with effects already in place), indicated at 620 , might be zoomed in, altered, or have other visual effects contained within.
- the initial frame may allow the user to select the initial condition or state before pan and zoom effects are applied.
- the user can select a point on the image, indicated at 630, on which to focus, via an optional target tool. The user can click to edit the point, link it to the end image, or remove it from the tool. This also enables a point to stay static during a zoom in or out.
- the ending framing of the image is indicated at 640. As in 630, the user can click to edit in a number of ways to control the final state of the pan and zoom effects.
- the timeline from the start of the image to the end of the image, along with a status indicator, is shown at 650.
- the slider moves automatically with time. The user can drag the slider to a desired point within an image transition or the MPV.
- a loop button for image transition is shown at 660 .
- the zoom or pan movement in an image might occur linearly, exponentially, logarithmically, or according to another parameter. Oftentimes linear movements do not correspond well to a perception of a consistent movement. Movements of any of the parameters might automatically scale to keep perceptual consistency in mind.
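A minimal sketch of that distinction, assuming zoom is expressed as a scale factor (the helper below is illustrative, not from the specification): interpolating the scale linearly makes the zoom appear to decelerate, while geometric (exponential) interpolation keeps a constant ratio per unit time, which reads as a constant-speed zoom:

```python
def zoom_scale(t, s0, s1, mode="perceptual"):
    """Interpolate a zoom scale factor for normalized time t in [0, 1].

    "linear" interpolates the scale directly; "perceptual" interpolates
    geometrically, so the apparent zoom speed stays constant.
    """
    if mode == "linear":
        return s0 + (s1 - s0) * t
    return s0 * (s1 / s0) ** t  # constant ratio per unit time
```

For a zoom from 1x to 4x, the perceptual curve passes through 2x at the halfway point, whereas linear interpolation passes through 2.5x, front-loading the apparent motion.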
- when a template composer creates the template, he might utilize a tool such as the image selector tool in FIG. D 9 .
- Images can be manually or automatically inserted onto the palette in 910 , where each grey box represents a different picture. These images originate from the folders, websites, or other sources described in 900 .
- 920 represents the image from 910 currently selected for insertion into the bin that is contained by the dashed rectangle on the film strip (as described in FIG. D 2 at 220 ).
- a template composer simply drags an image from the palette into the film strip at the desired location and the image now occupies the bin.
- the synchronization of the inter-bin and intra-bin image transitions with music is represented as fixed points in time during the song and is managed as part of the time mesh.
- elements of the time mesh may be shown visually as a time line that ties the image selection, image transitions, music or lyrics together.
- the GUI may incorporate the use of multiple tracks to aid the user in viewing simultaneous bin, image or song related events, as illustrated in FIGS. D 7 and D 8 .
- the selection of transition effects may be aided by the use of a handheld device that a template creator can sway like a wand or conductor's baton. This may allow the creator to more easily translate how they process the music in their brain to the visual effects that define one or more bins.
- Other devices may include accelerometer based devices that capture motion, such as dancing or other bodily movement and translate those motions to visual effects during template creation.
- FIG. D 7 provides an embodiment of intra-transition viewing with multiple overlapping bins.
- 700 is a suggestion box which offers hints for both bins, with options for more explanation and image suggestions.
- 710 is the main viewing window, which shows the current view of the video mixing multiple bins and utilizing the current inter- and intra-bin transition effects. This view changes as the movie proceeds when play is pressed, and is synched to the slider and the timeline at the bottom.
- 720 is the initial framing of the image in bin 1 (with effects already in place); might be zoomed in, altered, or have other visual effects contained within.
- 722 is the end framing of the image.
- 730 is the initial framing of the image in bin 2 (with effects already in place); might be zoomed in, altered, or have other visual effects contained within.
- 732 is the end framing.
- 740 is the timeline from the start of the image to the end of the image, with a status indicator. The slider moves automatically with time; the user can drag the slider to a desired point.
- 750 is the main filmstrip showing non-overlapping bins.
- 760 is the second filmstrip, showing bins that overlap with the first. There can be multiple filmstrips here, wherever bins overlap (for multiple pictures shown at one time that are not simply transitioning from one to the next). Transitions are shown before and after to indicate when an image starts and when it ends, with transitions also covering inter-slide transitions of the images before and after. The dashed box refers to the section shown in the viewer at 710 and is consistent with the full time bar at 740.
- Another embodiment of displaying multiple tracks for a composer or user is described in FIG. D 8 .
- FIG. D 8 provides an example of how bins may be displayed vertically to give the User or Composer an easy method to view how bins proceed over time with respect to time placement of bins and transitions.
- 800 shows current images occupying given bins. Bins are listed in order from top to bottom.
- Clicking on a bin in this section brings up a screen to assist the User in selecting a picture.
- In Template Composer Mode, clicking on a bin allows the Composer to edit the fields that will be visible to the user, such as keywords, effects, image selection (for multiple images to be recommended), etc.
- 810 is a pair of up and down arrows that allow the user to scroll up or down the list of tracks for editing.
- 820 denotes a visually placed time segment, listed by beats or by a pre-defined time period such as seconds.
- 830 is a bin, placed on the horizontal axis at the time it is to begin being shown, together with its transition time in and out.
- the shaded boxes indicate transitions. The clear boxes are where the image is full (though likely involved in an intra-bin transition).
- bins can be dragged to a different time, stretched, copied, or modified in other ways. Clicking on a box allows the Composer to modify the bin and its characteristics.
- 840 is a horizontal timeline of the currently shown section, with the current time being played shown as a vertical shaded bar.
- 850 is a track-size slider that allows the tracks to be shrunk in size so that more tracks can appear, allowing the composer or user to view the full set of tracks for a video or just the current set being viewed.
- 860 is a timeline size slider that allows larger or shorter periods of time to be represented by the timeline. Composers require fine tuning of tracks and this method can act like a microscope, or allow the full video to be viewed on one screen.
- FIG. D 9 also provides an idea of how an end user might drag their images into the filmstrip to personalize a template to their tastes. Certain features would likely be disabled for the end user that are accessible to the template composer, or which may be unlocked by an end user who assumes the role of a template composer. In the above it is assumed that the template composer and the end user are separate individuals, though they certainly could be the same individual. Additionally, any template composer could begin working from a pre-existing template, whether partially or fully complete, in order to modify the template to their liking. Any user could modify a template, whether in original form or already modified by another user, to suit their taste.
- a friend may receive a video full of pictures from a birthday party and choose to swap one picture of the birthday boy with an old picture of the birthday boy from childhood.
- FIGS. D 4 , D 5 , D 6 , D 7 , D 8 , and D 9 may be combined to display a set of data to allow composers and users to view and modify aspects of an MPV most efficiently for their specific purposes.
- FIG. D 10 illustrates the mathematics of a tool available to template composers and possibly end users which allows an image to have a point on the photo which stays static in its position on the screen while the photo is zoomed in or out. For example, a point between the eyes is selected as in the above illustration, and while the image zooms out, the point between the eyes stays in exactly the same position on the screen.
- the target tool shown in FIG. D 6 would allow these two points on the Start and End images to be locked together and move together if the user changes which part of the image is shown on the screen. For example, when locked, if the first image is moved down so that more of the head shows, the second would move in lock step to keep the point between the eyes consistent in the before and after. The second image would therefore be constrained in where it could move; its movement would require the first to also move.
- This tool could occur automatically, semi-automatically, or manually; the software may suggest points on the image using algorithms to detect likely targets, such as eyes or planets.
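The arithmetic behind keeping a selected point static can be sketched as follows (a hypothetical helper, not the patent's implementation): if an image pixel p appears on screen at p * scale + offset, then pinning p to a fixed screen position q determines the offset at every zoom level:

```python
def anchored_zoom(point, screen_pos, scale):
    """Return the (x, y) image offset that keeps `point` (image
    coordinates) fixed at `screen_pos` (screen coordinates) for the
    given zoom scale.

    Solving screen = point * scale + offset for offset gives:
    offset = screen - point * scale.
    """
    px, py = point
    sx, sy = screen_pos
    return (sx - px * scale, sy - py * scale)
```

For example, with the point between the eyes at image coordinate (100, 50) shown at screen position (100, 50), zooming from scale 1.0 to 2.0 moves the offset from (0, 0) to (-100, -50); the point's screen position, 100 * 2 + (-100) = 100, is unchanged while the rest of the image moves around it.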
- the next section provides an example of how a user might utilize the templates created by a template composer.
- the template is used with a camera or camera equipped device to compose or capture images in real-time.
- the template software or software associated with a template may provide four types of functions as part of the MPV creation process.
- the first function is that it provides ‘opportunity’ by allowing for one or more templates to be downloaded or shipped with the computing device and providing immediate access to the template at any time.
- the availability of multiple templates further provides the opportunity for the user to match a given setting or event with an appropriate template to create a personalized MPV.
- the second function is that the template or software associated with a template will ‘assist’ the user in the composition of the image such as the placement of the primary subject relative to a background or landscape or the movement of the subject for a sequence or burst of images.
- the third function is that the template or software associated with a template may provide instructions to the user regarding image settings, such as depth of focus or flash settings, for one or more image captures.
- the fourth function is that the template or software associated with a template enables the user to view existing sample images on the digital display, view and select one or more captured images for a given bin (thus replacing a given template sample image), and finally view the finished MPV on the display.
- the template processing unit may be in the form of a software module that resides within memory and is executed on a digital device or it could be a separate integrated circuit component configured for template operations.
- the template software may be downloaded and the software associated with a template may be embedded into the digital device.
- the software associated with a template that may be embedded into the device will likely interpret a given template object to assist the user, capture and place images, and view the final result.
- the template processing unit (constructed in software or hardware) communicates with the digital display and various selection or modification functions presented in FIGS. D 4 through D 9 .
- each template is associated with a particular song and has fixed bins for which a user will choose images.
- the preferred embodiment is that the song is available so that the user can hear the song while composing and capturing the images, which may provide a richer experience.
- the song may not be available for download or included with the template and the invention includes the use of the template without the song as well.
- the synchronization of the sample images and the music is done during template creation so the replacement of the sample images with new images for a given bin can still occur.
- the resulting MPV can be sent from the computing device to another person's computing device.
- One method is for User A to email the completed MPV to another person, Recipient B, who receives the MPV or a link to a server where the MPV may be downloaded.
- when the template is associated with a commercially available song, User A may choose to purchase the song for Recipient B, and their MPV is downloaded with the associated song.
- User A may choose not to purchase the song and Recipient B receives a URL link to download the MPV and purchase the song.
- User A may choose not to purchase the song and Recipient B receives the MPV and is prompted to purchase the song when they try to play it.
- the invention includes the various combinations of transmitting MPV and the associated songs or the various combinations of sending links to locations where MPVs and songs may be downloaded.
- FIG. D 13 provides an embodiment of the process for an end user operating software that utilizes a template to create an MPV.
- the user will select photos, though other similar embodiments would include video, animations, text, or other visual materials.
- the user starts by choosing a song, mood, or theme 2000 for which to create a video.
- the user can view templates 2010 built on that song, mood, or theme to identify which template will be selected for modification. There may be multiple templates made by a variety of template composers that are all built upon an identical song.
- 2020 the user selects the template to use, possibly including the purchase of the song and template at this stage.
- the user views the instructions contained within the template about characteristics of the images to select for placement into a bin or a series of bins (most likely in the form of a list).
- the user begins to look at the first bin which requires an image, and in 2050 instructions and lyrics about what the image should contain are included. Additionally, while in this bin the corresponding section of the song may play so that the user can feel the mood of the music and hear the lyrics.
- a user takes one or more pictures which are stored into the currently selected bin. One of these images will occupy the bin and that image will be selected in 2070 by the user. It may default to the last picture taken in the bin. Certain bins may require multiple images to be included and these could be selected from the bin.
- the user proceeds to the next bin either automatically after selecting an image or manually indicating to proceed. After the last bin is filled, 2090 allows the user to view the full movie with their images.
- After viewing, in 2100 the user has the option to share, upload, or gift the video.
- the user has the opportunity to further edit the video on a PC or other mobile device such as a digital camera, digital frame, mobile handset, or other electronic device that offers a user the ability to download the video and its parts and provide further commands for replacing images in various bins.
- There may be additional steps in FIG. D 13 and the listed steps may be skipped or performed in a different order.
- colors are used to indicate what types of images to place into different bins.
- the user points to a group of images which are placed automatically into the bins in a random or semi-random order and the user is able to arrange them as they desire while watching the video with these images.
- a user may view multiple templates for a given song at the same time in order to see the differences as they occur.
- One template video could be on the left and a second on the right.
- 2021 allows for purchase of a song or template at a given point in the process, which could also occur in 2041 when gifting to another person.
- 2051 allows a user to move between bins as they desire, allowing images to be captured out of order as the user desires.
- 2061 gives the user the opportunity to view all or a segment of the video at a given time, or could allow a given bin to loop through the various images in the bin so the user can see how they look.
- 2062 allows many images to be referenced to a bin for easy reference at a later point in time even though they are not the image being used in the MPV in the bin slot.
- the real-time reference to a bin is an improvement over the current requirement that a user taking many photos at a given time would have to perform the binning assignment to a set of photos later on as they decide which image to select, without the benefit of music and requiring the user to remember what goes where, particularly for photos taken out of sequence of the bins.
- 2062 helps organize the assignment of the images taken in 2061 and avoid confusion later.
- 2071 allows the user to find photos previously taken and place them into the selected bin, or allows the user to locate and select other symbolic images that may have been downloaded with the template for particular bins.
- the user may also choose an image from several that appear from a search on keywords relating to the bin or from previous images taken that reside in other folders.
- 2081 allows annotation of images or blank bins, or allows basic editing of selected images such as red-eye reduction, targeting which area to focus on in the image as it moves, or resizing the viewable part of an image.
- The opportunity function described in FIG. D 11 relates to items 2000, 2010, 2011, 2020, and 2021 in FIG. D 13 .
- One embodiment is indicated in FIG. D 12 as a system where the functions are already integrated either in hardware, software, or a combination.
- a digital device is sold to consumers pre-loaded with the ability to view and select templates as well as create an MPV after selection.
- a user downloads software that allows for template viewing and modification onto a more generalized device such as a mobile phone, game system, or personal computer that is capable of downloading software applications and already contains the necessary hardware to store, view, and utilize templates.
- pre-existing functionality in the device, including pre-loaded software for the creation of MPVs, may utilize download features to further update the software that resides locally on the device for the creation of MPVs or the addition of templates.
- the downloaded software may contain templates or portions of templates for the user to choose from locally on the device as in 2010 . While not required, there would likely be navigational utilities to help in the selection of templates, guiding a user through the selection of moods, themes, or songs. As an example, a user may key in a mood such as happiness or melancholy.
- the device would then search for templates that are resident on the device or resident on a server to which it can communicate to select templates that have been identified with that particular mood. Templates, portions of the templates, or descriptive information about the templates, such as a song the template is based on, would then be delivered to the user.
- Different methods for ranking and ordering the templates would be employed, such as which is most popular by purchase in the last week, which has been rated highest by viewers, or which are associated with template composers that gained reputations for creating quality templates.
- the user would then receive an ordered list of templates, organized by one or several of these ranking methods, that provides available templates to choose from.
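One hedged sketch of such blended ordering (the criterion names and weights below are illustrative assumptions): normalize each ranking signal to a 0-1 range, then sort templates by a weighted sum of the signals:

```python
def rank_templates(templates, weights):
    """Order template records (dicts) by a weighted blend of
    normalized criteria, e.g. recent purchases, viewer rating,
    and composer reputation.  Missing criteria count as zero."""
    def score(t):
        return sum(w * t.get(k, 0.0) for k, w in weights.items())
    return sorted(templates, key=score, reverse=True)
```

Adjusting the weights lets the same list be re-ordered to favor popularity, rating, or composer reputation, matching the several ranking methods the text describes.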
- the user may then view these templates as in 2010 to see what they like best.
- the software may provide other guides such as a suggestion that users who viewed particular templates ended up purchasing other templates, possibly including the percent of users who purchased each after viewing the current template.
- a user might also choose to select a theme such as Christmas or Halloween.
- the user might undergo a similar process as indicated in mood, being provided a list of templates ranked by a variety of methods.
- the user might also utilize both a theme and a mood to select the templates to view.
- a user selects “Halloween” and then “funny”, providing a list of templates that are closest to these parameters.
- Other templates that might be related to “Halloween” and “classic” but not related to “funny” would not be shown.
- Other criteria for any search might be that a template is free, is within a certain price range, or is freely distributable to others.
- Other options may be available such as the ability to include advertising within an MPV in order to offset the cost of the template or music, or templates which are free if the user provides rights to freely distribute or showcase the finished MPV to a software provider.
- the intent of the navigational utilities is to provide an easy method for users to select a mood, theme, or song that fits their desired criteria as quickly as possible and that provides the most utility to the user, be it popularity, quality, or other criteria.
- the software may provide users a list of templates, organized by any of a number of criteria, such as the highest-rated templates based on a particular song or artist, the most current Billboard chart toppers, the highest ranked in any of a number of musical genres such as Country Music or R&B, the most recently purchased templates, templates that have had the finished MPV most distributed, songs that have the most templates, templates ranked as highest quality by one or more groups, or templates that are most relevant to purchases already made by the user or to a user's demographic as defined by the user, by the software provider, or by a third party such as a DJ in a genre or subscribed to by the user.
- Keywords are also obvious selection criteria for templates, and might include song names, musical artist names, musical album names, lyrics associated with the template, or synonyms of any of these.
- the software might provide results based on a match of keywords and other ranking criteria, such as a blend of the keyword and popularity.
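A minimal illustration of such a blend (the field names and the 0.7/0.3 split are assumptions, not from the specification): score each template by the fraction of query terms matching its keywords, mixed with a normalized popularity value:

```python
def search_score(template, query_terms, alpha=0.7):
    """Blend keyword relevance with popularity.  `alpha` weights the
    keyword match; (1 - alpha) weights popularity (assumed 0-1)."""
    keywords = {k.lower() for k in template["keywords"]}
    terms = {q.lower() for q in query_terms}
    match = len(keywords & terms) / max(len(terms), 1)
    return alpha * match + (1 - alpha) * template["popularity"]
```

A template matching every query term but with modest popularity can then outrank a very popular template matching none of them, which is the kind of blend the text suggests.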
- Such a search selection of templates could occur directly on the device. Certain templates which are most likely to be purchased may be preloaded into the software and available for a user immediately, possibly with a purchase required. Other search results would likely come from a connection to a user's computer or to the Internet.
- the device user could browse the available templates or browse template information provided from the Internet, which would be delivered to the device as search criteria were provided to a server containing template information via the Internet. The user might then download selected templates for purchase or for previewing directly onto the device.
- a computer may also be used for template selection.
- Software for template selection (and possibly MPV creation or template editing) might contain many preloaded templates ready for viewing or use. The user might view these on a PC and select which they would like to place onto the camera with the intent of creating an MPV.
- a computer might also allow for the downloading to a related device. For example a user might download an MPV or a template to a pre-registered device such as a camera or a digital frame that has another means of connecting to the Internet such as through wireless telephone networks, through wireless connections to a local area network, or where a device is connected, wired or wirelessly, to another computer accessible by the Internet such as that for a family member located elsewhere.
- a user will be required to log into an account prior to accessing or purchasing templates.
- the account might contain credits or value that a user has access to for the purchase of music, a template, or both. It might also contain MPVs that were purchased or created by the user, or gifted to the user by another, either as a rental or perpetually licensed gift, possibly with ability to view future variations or updates to a given MPV template.
- MPV creation allows the user to view pre-created video or image footage and select parts of the footage to insert into a selected template.
- a user might download both a template and one or more hours of footage which they would watch.
- the template might offer suggestions about what images or video to place into the bins.
- the images or video might be further editable, such as cropping, rotating, or changing color. The user would then be able to create the MPV based on these images.
- High resolution images may reside on a server and be accessible during the MPV creation process or after a required purchase.
- the MPV creation software would note the time of a particular image in the video being watched and be able to reference the higher resolution image from this time data.
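A sketch of that lookup (the keyframe list and helper name are hypothetical): store the footage as sorted (time, image_id) pairs, then map any noted playback time to the latest keyframe at or before it so the matching high-resolution image can be fetched:

```python
import bisect

def image_at(keyframes, timestamp):
    """keyframes: list of (time_seconds, image_id) pairs sorted by
    time.  Return the id of the high-resolution image shown at
    `timestamp`, i.e. the latest keyframe at or before that time."""
    times = [t for t, _ in keyframes]
    i = bisect.bisect_right(times, timestamp) - 1
    return keyframes[i][1] if i >= 0 else None
```

The binary search keeps the lookup fast even when the viewed footage references thousands of images.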
- the MPV could then be created on the device locally or on a server and delivered to the device or to the user's account for further distribution.
- the footage viewed by a user may or may not be related to a particular template.
- the assist function described in FIG. D 11 relates to items 2030 , 2040 , 2041 , 2050 , 2051 , and 2070 in FIG. D 13 .
- the goal of the assist function is to improve the imagery captured and selected by users by providing users with educational information before and during the image capture and selection process.
- Information fed into the assist function is generally provided or chosen by the template composer. Some examples of features provided in the assist function are illustrated in FIGS. D 11 and D 13 .
- FIG. D 16 provides an example of how a user might utilize the templates created by template composers, as indicated in the Personalization portion of FIG. D 1 .
- FIG. D 16 illustrates a screen being utilized by a user who has already created images and wants to place them into an MPV.
- the user may have gone through the process of capturing images with a live camera and placed them into bins, may have images that came with the template, may have searched for additional images using a search engine, or may have accessed images previously taken by the user.
- This tool allows the user to easily view many images and select the proper image for the bin, then move on to the next bin and repeat the process as needed.
- the selection process could occur on a portable device, such as a camera, to augment parts of an MPV that the user desires to modify with pre-existing pictures.
- 2500 shows folders that contain images (still or video). Users can add more folders, including websites that contain images.
- 2502 displays a visual list of images from the folders listed in 2500 . Images can be dragged into the Time-synched Energy Template to replace an existing image. Images can be grouped by “Your Images Taken” or by pre-loaded “Suggested Images” that are supplied with the template or taken from a web search based on keywords of the image.
- 2506 displays instruction for User of what type of image to select.
- 2508 shows lyrics for User of current bin.
- 2510 shows currently selected picture from 2502 .
- 2514 shows the timeline of images, including at least the prior, current, and next images.
- 2516 shows the current image occupying the current bin.
- 2518 shows the transition into current image occupying bin.
- 2520 shows controls for moving forward/backward in time or to next/prior bin.
- 2522 shows a view-size slider which allows images in 2514 to be made larger or smaller.
- 2524 shows a volume control slider.
- 2526 shows a menu button that brings up additional options including selecting images to show, moving bins, saving progress, changing screens, turning on or off optional features (e.g. lyrics) etc.
- the assist function may provide additional features to those in FIG. D 14 , D 15 , and D 16 .
- the view function described in FIG. D 11 relates to items 2090 in FIG. D 13 .
- Users can view MPVs with sample images, view their own with final images, view with different images being placed into a single bin to help decide which to select, or viewed semi-randomly where images are placed into the MPVs according to some selection criteria such as bins containing multiple images or such as images in any available library matching tags on the bin. Viewing is also possible on other devices, such as digital picture frames, or through output to another device by sending a digital or analog output signal such as connecting the camera to a TV for viewing the MPV.
- Viewing may occur at variable volume levels (including no volume), occur at variable rates of speed for faster or slower playback, and utilize standard navigational icons including play, pause, go to end, go to beginning, fast forward, rewind, and varying degrees of fast forward and rewind.
- Users may also watch an MPV in a mode where the bins can be modified as they are watched, such as changing a point of focus or amount of zoom, or use accelerometers, gyroscopes, or other physical movement sensors to modify the MPV as it is being viewed, such as pulsing to a physical movement or panning based on turning the device.
- images, higher-resolution images, or a rendered video would need to be downloaded into the viewing device so that the MPV could be assembled or viewed.
- An MPV on a server may also be rendered and shared as a non-modifiable movie or MPV file rather than modifiable MPV files. Users might also use other transfer techniques such as Bluetooth technology for wireless transmission or a USB cable to share from one device to another, or be allowed to burn files onto a CD so they can be transferred to another user.
- FIG. D 19 describes a likely scenario for a user who has just completed personalizing an MPV with his own images. At this point the user may desire to add a title and share the MPV with friends.
- 2800 shows an area where User can change title of the video.
- 2810 shows an area where User can select email addresses of others to share video with.
- 2820 shows an area where the User can upload the video to a backend service, from which the User can access it on another computer or device.
- One application of MPVs beyond video creation and viewing is video games.
- the user might be able to take images in real time and receive a score for the sequence or for individual images.
- the user might be required to size and place a rectangle inside existing pictures in order to cut the images, after which the user would receive a corresponding score based on how well the image was cut.
- Many variations could play on this such as determining the rotation or the point of focus. Scoring an image could take place automatically or could rely on a service of live humans that rates the modified image or the chosen effect.
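One way the automatic scoring of a cropped image might work (intersection-over-union against a reference crop is my assumption; the text does not specify a metric):

```python
def crop_score(user_rect, target_rect):
    """Score a player's crop rectangle against a reference crop using
    intersection-over-union, scaled to 0-100.  Rectangles are
    (x, y, width, height); the reference crop is an assumed baseline."""
    ax, ay, aw, ah = user_rect
    bx, by, bw, bh = target_rect
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return round(100 * inter / union) if union else 0
```

A perfectly placed rectangle scores 100, a rectangle that misses the target entirely scores 0, and partial overlap falls in between, giving the game a smooth scoring gradient.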
- Educational opportunities could arise from this as well, such that the game is marketed as an educational utility to help improve the image capture of aspiring photographers or children.
- Some of these may come with preloaded footage as described earlier such that the final images are demonstrated to the user as a collection of preferred images.
- Another videogame example is a treasure hunt, where children are directed to find and capture particular objects. Instructions might be given by audio or in writing, or use images that possess the desired quality the child is searching for, such as "red" or "three" or "building" or "Mickey Mouse". The child would then take photos of the items as each bin requests, and at the end of the exercise the video would be created automatically using the items captured, synched to some type of pre-defined music. Images of the child might be overlaid with the items found. Items might also need to fit within certain areas on a screen to ease the video editing, and the directions or display screen would assist the user.
- Videobooks are another application. Videobooks can either physically display several images from video sequences in a print form similar to a comic book, or may digitally offer video sequences similar to an advanced-feature digital frame capable of displaying sound and music. For example, a user might take screen shots every second, or take a series of shots close together in a video sequence, and then compile them with others so that a story from the book emerges in print form. There might be 20, 50, 100, or more photos in the sequence, likely arranged by bin. Lyrics or comments might be included for a given bin. Images would likely contain effects that had been performed in the MPV so that a series of images, for example, zooms in or pans similarly to the MPV.
- The portable device may also be used in the semi-automated creation of templates. Any type of sensors, such as accelerometers, could be used to capture human movement as a song occurs. That data could then be interpreted to provide cuts, pans, zooms, blends, and other effects. Input devices used to collect this data might be external to the device but plug into the device with a cable or use a wireless communication technology such as Bluetooth to send information to the device.
Abstract
The invention allows for a first set of persons to create and modify templates that contain visual effects, synchronization information, and possibly director assistance. These templates are utilized by a second set of persons to generate personalized motion photo videos from photographs, video segments, personal narratives or animation. Motion photo videos are novel because they very quickly and inexpensively allow (1) persons to create and modify templates which are synchronized to music and may be easily populated with content by others, and (2) persons to select a song and associated pre-made template and create a high-quality synchronized custom video using hand-selected visual material to populate the template, with included direction and without any required editing. Material populated into a template can be further modified by additional persons to instantly generate new, modified motion photo videos.
Description
- This nonprovisional utility patent application claims the benefit of the priority date of provisional application No. 61/156,871 filed Mar. 2, 2009.
- The invention allows for a first set of persons to create and modify templates that contain visual effects, synchronization information, and possibly director assistance. These templates are utilized by a second set of persons to generate personalized motion photo videos from photographs, video segments, personal narratives or animation. Motion photo videos are novel because they very quickly and inexpensively allow (1) persons to create and modify templates which are synchronized to music and may be easily populated with content by others, and (2) persons to select a song and associated pre-made template and create a high-quality synchronized custom video using hand-selected visual material to populate the template, with included direction and without any required editing. Material populated into a template can be further modified by additional persons to instantly generate new, modified motion photo videos.
- Professionally produced videos require a script of what images to shoot or create. Camera crews then must acquire the footage and the resulting images are processed and modified. Artists then edit content, including when to start and end segments and how to transition between segments in order to tightly synchronize the visual aspects with the specific audio track. Though results are good, the process is expensive, time-consuming, and requires significant technical expertise to use video-editing software. The invention described herein provides a method to significantly reduce the time, cost, and complexity of creating a sophisticated template containing a rich story and allowing further modification of the template for instant creation of videos with customized footage by unsophisticated users.
- By use of the invention, users with commonly available electronics such as PCs, cameras, and camera-phones will be able to instantly or in real-time create videos that capture events, such as birthdays, vacations, and sports seasons, or moods, such as happiness of being with a friend or the feeling of missing somebody. Users can select, and possibly purchase, a popular song and have a ready-made template into which they can overlay their images for an instant finished product, creating their own customized multi-media video. Videos can also be produced without music though we describe the rest of the invention hereafter utilizing music for clarity of the explanation.
- Users are guided through the video creation process by sets of instructions and image sequences that are pre-defined by template composers. Any person can be a template composer. These instructions are invaluable to ordinary users as they allow a coherent story to be personalized with the users' own visuals without the tedium of defining start, stop, and transition images or going through the effort of listing out the order or flow of images to create the story. There are no existing integrated tools that assist users in identifying which images to capture or insert and that allow instant placement of these images in a human-created pre-defined template with the goal of immediate production of a finished video, customized and further modifiable by the user. Existing computer systems require users to modify nearly all aspects of a video, or they provide a basic template lacking synchronization defined by humans. The resulting task for the end user is either too complicated or so limiting that the user cannot modify critical aspects. Several automated tools allow for selection of images by a user that are automatically placed into a pre-defined template or a template that is not pre-defined. This invention specifically requires users to select images for placement into bins that are pre-specified. A bin can be considered an object in software that corresponds to a fixed slot of time and contains objects including an image, transition effects, text and/or annotated speech. These bins, and associated data about when and what occurs within the bin, allow for very tight synchronization with the audio source, as well as valuable instruction to the user who is determining which imagery to supply into the bin.
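Purely as an illustrative sketch (the patent does not prescribe a language or field layout), such a bin object might be represented as follows; every field name here is an assumption:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Bin:
    """A bin: a fixed slot of time in the song that holds an image plus
    the transition effects, text, and composer hints applied to it."""
    bin_id: str
    start: float                        # seconds into the audio track
    end: float                          # seconds into the audio track
    image: Optional[str] = None         # path/URI of the supplied image
    inter_bin_transition: str = "cut"   # e.g. "fade", "cut"
    intra_bin_transition: str = "none"  # e.g. "zoom", "pan"
    hint: str = ""                      # composer's instruction to the user
    text: List[str] = field(default_factory=list)  # lyrics/annotations

    def duration(self) -> float:
        return self.end - self.start

# The bin exists before any image is chosen; the end user fills it later.
b = Bin("B1", start=0.0, end=3.5, hint="A photo of the birthday boy")
b.image = "party.jpg"
```

Note that the bin is created with its time slot and instructions only; the image is attached afterward, matching the description of bins existing without any specific image.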
- There are multiple components in the creation of an MPV: (1) template creation, (2) use of a template to assist in the composition of photos in real-time for instant MPV creation, using existing camera technology or modified camera hardware or software technology for the capture of images to facilitate better MPVs, and (3) use of a template to compose an MPV from pre-existing images. This invention focuses on template creation (1) and on the use of a template with pre-existing images to compose an MPV (3).
- FIG. D1 shows the template creation process by a template composer and subsequent personalization by an end user.
- FIG. D2 shows an illustration of the time mesh scheduling of template events.
- FIG. D3 shows a taxonomy of template objects.
- FIG. D4 shows a timing mesh that synchronizes music, images, and transitions.
- FIG. D5 shows an inter-bin transition GUI-based editor.
- FIG. D6 shows an intra-bin transition GUI-based editor.
- FIG. D7 shows an intra-transition viewing and editing application with multiple overlapping bins.
- FIG. D8 shows a separated track view mode for bins.
- FIG. D9 shows an image selection tool.
- FIG. D10 shows a method to create a focal point or trajectory for the focal point during manipulation of an image during a transition effect.
- FIG. D11 shows software utilizing pre-existing templates and images to generate MPVs.
- FIG. D13 shows a flow for template enabled image selection and composition.
- FIG. D16 shows an image screen selector.
- FIG. D19 shows a camera-based title creation and share screen.
- This invention describes the creation of a template and the subsequent creation of an MPV using a template to assist in the composition and selection of images. This invention focuses on the use of templates with pre-existing images or where images are provided to software utilizing a template to create an MPV. Section 2.1 describes the template creation process and the template object that results. Section 2.2 describes the use of the pre-made template by a user to create an MPV.
- Template composers use tools that offer features to define image starts, stops, and transitions, as well as editing features relating to focus, movement, and other operations to be performed on images. Template composers further edit instruction sets for end users by providing instructions about which images should be placed by users in a particular bin, offering wording and/or sample symbolic images. Users can then modify these templates in limited ways by substituting their own images, either acquiring images in real-time based on instructions, or by replacing sample images with already existing images selected by the user. This section begins with a summary of the template creation process in FIG. D1 and is then followed by detailed descriptions of several key components in the process.
- FIG. D1 summarizes a typical process undertaken by a template composer in the creation of a template. The template composer first chooses a song and then, while listening to the song, cuts the timeline of the song into bins. Each bin is defined by where a new image will begin to appear and where an image will disappear. These bins are tied directly to the timeline of the song and can be edited graphically, with or without an image occupying the bin, for precision as defined below in an object called a time mesh. An image may be defined as a single digital image or photograph, a video object, or an animated video object. The most common use will likely be a single digital photograph per bin and that representation will be used in the following examples and description. However the invention envisions other forms of image related media being used and versions of the software will enable additional forms of images to be used.
- The time mesh maintains a schedule of template object events. The template objects include bin objects in the form of images or transition effects such as fades, pans and zooms. The template objects also include audio tracks such as music in digital form where the events may include the song beginning and end. An embodiment of a time mesh could be a table where one column contains the event identification tag and another column that represents the time at which the event is executed.
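A minimal sketch of such a two-column time mesh, assuming a simple list-of-pairs representation (the event ids follow the naming style of FIG. D2 but are otherwise illustrative):

```python
# Illustrative time mesh: one column of event ids, one of execution times.
# Event ids (B1S, B1E, ...) mimic FIG. D2; the exact representation is an
# assumption made for illustration only.
time_mesh = [
    ("SONG_BEGIN", 0.0),
    ("B1S", 0.0),   # bin 1 starts
    ("B2S", 3.2),   # bin 2 fades in while bin 1 is still visible
    ("B1E", 3.8),   # bin 1 finishes fading out
    ("B2E", 7.1),
    ("SONG_END", 180.0),
]

def events_between(mesh, t0, t1):
    """Return the events scheduled in the half-open interval [t0, t1),
    ordered by execution time, as a playback engine might consume them."""
    return sorted((e for e in mesh if t0 <= e[1] < t1), key=lambda e: e[1])

due = events_between(time_mesh, 3.0, 4.0)  # → [("B2S", 3.2), ("B1E", 3.8)]
```

Because every event is locked to the song timeline rather than to other events, moving one entry leaves the rest of the schedule untouched, which is the property the time mesh relies on.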
- Bins are not automatically determined by computer, but may utilize computational tools to assist in the creation of bin boundaries, such as bass patterns, drum beats, voice tracks, or volume levels as described in FIG. D2 (107). These characterizations of an audio signal would provide visual waveforms in organized rows to the template composer, allowing visual understanding of patterns in the music. Bins are more likely to follow the phrasing of a song rather than actual beats in order to make the transitions less routine and contain more feeling. A key element here is that a template composer ultimately determines where the bin boundaries occur, and can easily modify them in the time mesh defined in FIG. D2. This first step after selection of a song is referred to as “Cutting” and is defined in FIG. D1.
- A template composer may begin the video at a different start time than the song, for example to avoid the crowd cheering at the beginning, or to avoid a few seconds of blank space that exist on one digital copy but not on another. To support this, the selected song carries an accompanying fingerprint that identifies when the song starts. This fingerprint can be compared to other copies of the song to identify precisely where to begin the song, as well as to ensure the songs are played at the same speed and arrive at a common endpoint even if one copy plays longer than another.
- After completing cutting, the template composer performs inter-bin transitions, also referred to as blending, between bins to define how one bin fades in as another fades out. Blending can be any type of entry of one or more images and/or exit of one or more images. The template composer then creates intra-bin transitions that describe whether a photo zooms in or out, changes color, or undergoes another visual augmentation such as sharpening or a fade to grey. The intra-bin transition data is fine-tuned to the specific bin, such as placing a fast-moving zoom or a fade from color to black and white to match the feeling of the song at that point. The intra-bin transition data is later fine-tuned to the particular image that is placed into a bin so that it captures the full artistic intent of the template composer, such as targeting a zoom on the eyes of a person in an image. Lyrics are then synchronized to the bins if not already included with the song, and the template composer inserts images manually or automatically based on lyrics or image search instructions inserted by the template composer. These images symbolize which types or examples of images an end user could insert to follow the storyline of the video, as appropriate for the specific bin. The template composer can insert additional information such as prompts, cues, or instructions about what images to insert into the bin. Some images inserted by the template composer can have cut-out sections where they serve as backgrounds for parts of images provided by an end user, such as a snowy background with a circle overlay where a person's head appears. The background template image can be inverted so that there are parts provided by the template composer, such as text or other visuals, that overlay an end user's images.
These steps can be completed in any order after the cut occurs, or they can occur simultaneously through use of tools that can capture movement of an individual and be decoded to cause the actions of cut, blend, and intra-bin transitions to occur. The template can then be packaged up and made available to a user who can swap images and perform very limited fine tuning to personalize the video to their liking.
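As an illustration of the blending step described above, a cross-fade between an outgoing and an incoming bin could be computed from a per-frame blend weight. The linear ramp and pixel-level blend below are assumptions for the sketch, not the patent's prescribed method:

```python
def fade_alpha(t, fade_start, fade_end):
    """Blend weight of the incoming image during an inter-bin fade.
    Returns 0.0 before the fade, 1.0 after, and ramps linearly between.
    A linear ramp is an assumption; a template could specify any curve."""
    if t <= fade_start:
        return 0.0
    if t >= fade_end:
        return 1.0
    return (t - fade_start) / (fade_end - fade_start)

def blend_pixel(outgoing, incoming, alpha):
    """Cross-fade one pixel value: the outgoing image fades out as the
    incoming image fades in during the bins' overlap."""
    return (1.0 - alpha) * outgoing + alpha * incoming

# Midway through a fade running from t=3.2s to t=3.8s on the song timeline.
alpha = fade_alpha(3.5, fade_start=3.2, fade_end=3.8)   # ≈ 0.5
```

Because the fade's start and end are entries in the time mesh, the blend weight is a pure function of song time, keeping the transition locked to the audio.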
- 002 in FIG. D1 describes the actions of a user who has received the template on their PC or other electronic device. They are instructed along the series of images in the template about which images to capture or insert at a given bin. These images might originate in real-time from a camera, be supplied by the template composer along with the template, come from a user's existing image library, or originate from the Internet based on keyword information supplied by the template composer or lyrical data. Sliding these images into the template produces a ready-made video. After the user selects the images for each bin, the user can optionally add basic annotations to their liking or as described by the template composer. Finally, the user can upload the video to the Internet where it can be accessed later from other devices. It can be shared in this method by selecting friends with which to share the video or an Internet site on which it may be posted. The video in its entirety consists of the song, images (photos, video, or other imagery), and transition information, all synchronized to the time mesh.
- FIG. D2 describes an embodiment of the time mesh and how it may be used to schedule the execution of template object events. The template objects may be organized or visualized along a timeline in terms of the object type, such as image tracks 101, transition tracks 103, time mesh 105, and audio tracks 107. The template parent object has three main child objects: bins 150 153, time mesh 105, and music/audio objects 132 137. The time mesh object 105 may have two child objects, the event identification 128 and event execution time 130 objects. These two objects contain the schedule of all template events and may be stored in the form of a table. Another embodiment may use a time mesh with a single event id that references execution time and instructions. The time mesh may have multiple events with the same execution time, in which case the template and MPV software will order the instructions for the microprocessor. Two events executed within milliseconds of each other will likely give the appearance of synchronization to the viewer, and computer processing may combine overlapping instructions when appropriate prior to providing visual or audio output so that the actual output is consistent with all the instruction sets. The events could also be staggered in time, for example by milliseconds, in other embodiments. - The
bin parent object 150 includes multiple child objects, including an image 120, inter-bin transitions 124 such as fades, and intra-bin transitions 126 such as pans and zooms. A given bin 150 is defined by start 151 and end 152 events. In the example shown in FIG. D2, bin 2 overlaps with bin 1, with a fade in 184 as bin 1 fades out 180. The other bins 3-7 are shown here with hard transitions and no fades so that other features can be more easily identified. For example, the time mesh will contain an event id, for example B1S 148, and an event time 149 corresponding to the beginning of bin 1. The time mesh will track the beginning and end of all bins, shown in the figure as bin 1 start B1S and end B1E, bin 2 start B2S and end B2E, and so on through bin 7 start B7S and end B7E. Bins may overlap, as shown in bin 5 160 and bin 6 164, where images may be superimposed, e.g. to form a sequence. It is not shown in this figure, but one embodiment might have audio objects that belong to a given parent bin object. One example is a birthday-related template where the last bin has a placeholder for the user to record a personal birthday greeting. Since the audio object belongs to a given bin, it can only be played during the fixed time slot defined for that bin. - The bins will normally be sequenced with the music track, but there may be cases where a bin contains a
title slide 150 or a credits slide 170 that correspond to silence 137. The silence may be stored as entries in the time mesh or, as shown here, the absence of a music track is treated as silence without an explicit time mesh entry. The bin can be considered as a container of objects including a specific image and data, such as transition effects, that generally apply to an image. The bin exists without any specific image, and image data may be replaced or modified at any time. The inter-bin transition objects 124 control the effects associated with the transition of one image to another, such as where one image fades out 184 or fades in 180. The fade transition events are recorded in the time mesh with the event id 181 and event execution time 181. The inter-bin transitions will normally overlap from one bin to another. The intra-bin transition objects control how a given bin manipulates an image in terms of composition, such as pans, zooms and color manipulations. One example is the use of pan to move a viewing window from the left of the image to the right, where the time mesh contains the pan event id 191 and time 192 associated with the beginning 190, which contains information regarding how the pan is executed in terms of speed and perimeter of the bounding box being viewed. Another example has bins, with or without images, that automatically change the color throughout the duration of the bin, beginning with one part of the color spectrum and gradually changing to another in order to display the intra-bin effect of zoom, pan, or rotate. This color change could be part of the finished video or serve as a tool for creation as it captures the energy of the transition in a visually changing form. The bins may include objects that can and cannot be altered by additional contributors or template users. An example is a non-modifiable bin at the beginning that adds a corporate branding image or icon with either silence or a short audio segment. - The
music object 132 exists in digital form as a music file or a digital signature. A digital signature is a digital transformation, such as a wavelet transform, that identifies unique characteristics of a song related to synchronization with bin objects. The digital signature may be used to synchronize a downloaded version of the song to the time mesh by using pattern matching between the template's stored signature and a transformation of the downloaded song. The matching may be as simple as locating the beginning of the downloaded song and meshing it to the song begin event 133. The matching may include convolution of the two signals to find the best match. The matching may include stretching and scaling the two signatures to address situations where the template creation may be done with a different encoding of the song than the song that may be downloaded at a later date. The matching may also include cutting portions of a song, such as the beginning or end, to make the song used by the user consistent with the song used by the template composer, or merging bins or cutting bins out of the MPV to be consistent with the song provided by the user. - While the template architecture doesn't require an object-oriented software architecture for implementation, a preferred embodiment uses one to allow for an extensible and flexible architecture. This architecture will allow modifications to be made and existing templates to be easily updated without breaking the existing template. For example, if a creator updates their images to use a new inter-bin transition effect, the changes can be rolled out across the current template and all modifications thereof. It also allows for reuse of existing template objects; for example, a company may create several templates based on an advertising campaign, with a template for specific products within a given line. 
In this embodiment, elements of the template, for example opening and closing branding, campaign messages, and transition effects, can remain consistent while the images and songs change. It is also preferred that permissions on each object be maintained so that the template creator can control which objects can and cannot be modified by subsequent template creation contributors or by a template user.
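The signature matching described above, locating where a downloaded copy of a song best aligns with the template's stored signature, could be sketched as a brute-force cross-correlation. The signature values below are toy data, and the feature extraction itself (e.g. a wavelet transform) is assumed to happen upstream:

```python
def find_song_offset(template_sig, downloaded_sig):
    """Slide the template's stored signature along the downloaded song's
    signature and return the sample offset with the highest correlation.
    Both inputs are flat lists of signature samples."""
    n = len(template_sig)
    best_off, best_score = 0, float("-inf")
    for off in range(len(downloaded_sig) - n + 1):
        score = sum(a * b for a, b in zip(template_sig, downloaded_sig[off:off + n]))
        if score > best_score:
            best_off, best_score = off, score
    return best_off

# Toy example: the downloaded copy has 5 extra samples of silence up front,
# as when one digital copy contains blank space that another does not.
template = [1.0, 3.0, 2.0, 4.0]
downloaded = [0.0] * 5 + template + [0.0] * 3
offset = find_song_offset(template, downloaded)  # → 5
```

The returned offset is exactly the adjustment needed to mesh the downloaded song to the song begin event; a production implementation would also handle the stretching and scaling mentioned above.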
- A taxonomy of template objects is provided as an example in FIG. D3. In this example the parent template object has three child objects: bin, time mesh and song. Each of these objects has child objects. For example, each bin object may have an id, image, transition effects and text associated with it. The inter-bin and intra-bin objects have transition effects as described in the discussion associated with FIG. D2.
- Importantly, embodiments may contain instructions which are references to libraries or instruction sets. For example, a template object such as an inter-bin transition may contain a reference call to a particular type of fade. That fade operation may be pre-defined in a library on a particular computing device or camera, or may reference a library or set of instructions online which provide functionality or further reference to where functionality may be attained. The template utilizes the reference call made by the fade to reach the instructions defined in the referenced software or hardware. This structure allows templates to take advantage of pre-existing routines on specific devices. Further, this structure allows updates relating to templates or specific hardware to be made online and referenced by existing templates. A template may contain a reference to another place which contains a reference to yet another place, and this process may contain any number of further references until the actual machine instructions to perform the action are retrieved.
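A hypothetical sketch of this chained-reference lookup, where a template's effect name is followed through any number of indirections until concrete instructions are found; the registry contents and names are invented for illustration:

```python
def resolve_effect(name, registries):
    """Follow a chain of references until a concrete routine is found.
    `registries` maps effect names either to a callable (the actual
    instructions) or to another name to look up next, e.g. a template's
    "fade" entry pointing at a device- or library-specific routine.
    Cycles and missing entries raise rather than loop forever."""
    seen = set()
    while not callable(registries.get(name)):
        if name in seen or name not in registries:
            raise LookupError(f"unresolvable effect reference: {name}")
        seen.add(name)
        name = registries[name]
    return registries[name]

# A template references "fade"; the device maps it onward to a local routine.
registry = {
    "fade": "fade_v2",                       # template-level reference
    "fade_v2": lambda img: f"faded({img})",  # device-local implementation
}
fade = resolve_effect("fade", registry)
```

Because the template stores only the name, updating `fade_v2` in place (or repointing `"fade"`) upgrades every existing template that references it, without rewriting the templates themselves.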
- Another embodiment maintains a list of contributors where a given template parent object may have additional template children associated with a contributor's modification. In this scenario, the template database may maintain an ‘attribution’ list where each contributor is recognized and automatically added to previous contributors in that inheritance chain. The list may include links to Internet-accessible sites that contain further updates, such as for insertion of new material, or additional referencing, such as modifications that occurred after the creation of the MPV being viewed. Another embodiment allows for the attribution list to share in any commercial proceeds or licensing event.
- FIG. D4 shows elements of the timing mesh, the resulting product of a template composer synchronizing music with images and transitions. It locks items independently to specific points in the song that are determined by the template composer, and subsequent modifications of parts generally occur independently of each other so that they stay synchronized to the audio. 260 is a timeline of the song; 270 is a waveform characterizing the audio elements of the song. 250 is a slider to show which part of the song is visible in the box (the grey area indicates the part shown; the white is the entire video/song length). 200 indicates the lyrics, which are synched to 220, the filmstrip of photos in order of when they appear. 230 is the previous image, 240 the transition to the next, and 250 the current image being shown. The dashed box around 240 and 250 is the element that is editable as described below. It can also extend to cover the preceding and following transition, or can encompass an image, an intermediate transition, and the following image, such that the encompassed area is editable as described below. 210, the checkboxes, allow certain elements to be displayed so that the composer can view or hide whichever of the aforementioned items are desired. They can be dragged up and down to reorder the elements described. 296 allows the filmstrip timeline to be increased or decreased in size, showing correspondingly less or more of the series and allowing easier editing. Template composers can drag sliders to navigate or use buttons as shown. Additionally, they may drag the sides of the pictures or transitions to change where they occur in the timeline. The most important aspect of the timing mesh is that all elements here are synched to the timeline and not to each other, such that dragging an image left will change that image, but not the audio or any previous or subsequent images. They are instead locked into the timing mesh independently. 
The structure of the timing mesh also allows segments of an MPV to be cut and for the contained elements to be reindexed to a time mesh that starts at that segment time, as might occur when combining portions of two MPVs.
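A minimal sketch of that reindexing, assuming the (event id, execution time) pair representation described earlier:

```python
def cut_segment(time_mesh, t0, t1):
    """Extract the events scheduled in [t0, t1) and reindex them so the
    segment's own time mesh starts at zero, as when splicing portions
    of two MPVs. Each event is an (event_id, execution_time) pair."""
    return [(eid, t - t0) for eid, t in time_mesh if t0 <= t < t1]

mesh = [("B3S", 10.0), ("PAN3", 11.5), ("B3E", 14.0), ("B4S", 14.0)]
segment = cut_segment(mesh, 10.0, 14.0)  # → [("B3S", 0.0), ("PAN3", 1.5)]
```

Because every event carries its own absolute time, cutting is just a filter plus a constant shift; no event depends on any other event being adjusted first.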
- Template creation is facilitated by a graphical user interface (GUI) that provides visual aids to synchronize image selection and transitions to specific songs. Image transition selections can be described as two classes.
- Inter-bin transitions describe visual effects associated with the switch from one image, video image sequence or animation to another image, video image sequence or animation. An inter-bin transition may describe the type and rate of fade from one image to another. The inter-bin transition may be created or described through a graphical user interface (GUI) or could be described in an existing or new programming language.
- An example of how a GUI might facilitate intra-bin transition is shown in FIG. D6. In this example the
intra-bin editor 600 provides a suggestion box that offers hints, with options for further explanation and image suggestions. This box can be hidden by clicking on the x in the box. The user can also pull up additional hint information either by clicking on the box or by placing the mouse over the box. The main viewing window at 610 shows the current image when paused and the changes in the image (still or video) when play is pressed. The main viewing window may be in the form of a media player that allows the user to play, stop, pause, reverse or forward their way through a transition sequence. - The initial framing of the current image (with effects already in place) is indicated at 620; it might be zoomed in, altered, or have other visual effects contained within. The initial frame may allow the user to select the initial condition or state before pan and zoom effects are applied. The user can select a point on the image, indicated at 630, on which to focus, via an optional target tool. The user can click to edit this point, link it to the end image, or remove it from the tool. This also enables a point to stay static during a zoom in or out. The ending framing of the image is indicated at 640. As in 630, the user can click to edit in a number of ways to control the final state of the pan and zoom effects.
- Shown at 650 is the timeline from start of image to end of image, with a status indicator. The slider moves automatically with time. The user can drag the slider to a desired point within an image transition or the MPV. A loop button for the image transition is shown at 660.
- The zoom or pan movement in an image might occur linearly, exponentially, logarithmically, or according to another parameter. Oftentimes linear movement does not correspond to a perception of consistent movement. Movements of any of the parameters might automatically scale to maintain perceptual consistency.
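One hedged illustration of such scaling: a zoom whose scale is interpolated in log space multiplies the scale by the same factor per unit time, which tends to read as a constant zoom rate, whereas linear interpolation appears to slow down as the image grows. The function below is an assumed sketch, not the patent's method:

```python
import math

def zoom_scale(t, duration, start_scale, end_scale, mode="exponential"):
    """Scale factor at time t of a zoom lasting `duration` seconds.
    Linear interpolation changes the apparent zoom speed over time;
    exponential (geometric) interpolation keeps the perceived rate
    constant, since each second multiplies the scale by the same factor."""
    u = max(0.0, min(1.0, t / duration))  # normalized progress in [0, 1]
    if mode == "linear":
        return start_scale + u * (end_scale - start_scale)
    # exponential: interpolate in log space
    return start_scale * math.exp(u * math.log(end_scale / start_scale))

# Zooming from 1x to 4x over 2 seconds: at the midpoint the exponential
# path sits at 2x (halfway in log space), while the linear path is at 2.5x.
mid_exp = zoom_scale(1.0, 2.0, 1.0, 4.0)            # → 2.0
mid_lin = zoom_scale(1.0, 2.0, 1.0, 4.0, "linear")  # → 2.5
```

The same idea applies to pans: interpolating the viewing-window size geometrically while moving its center keeps the motion perceptually even.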
- As a template composer creates the template, he might utilize a tool such as the image selector tool in FIG. D9. Images can be manually or automatically inserted onto the palette in 910, where each grey box represents a different picture. These images originate from the folders, websites, or other sources described in 900. 920 represents the image from 910 currently selected for insertion into the bin that is contained by the dashed rectangle on the film strip (as described in FIG. D2 at 220). A template composer simply drags an image from the palette into the film strip at the desired location and the image now occupies the bin. The information about transitions and timing already exists with the bin, regardless of the data, though some images may have target data pre-associated with them to assist with transitions, or can be analyzed such that target data is derived automatically where a face might be. There is a resizing button at the bottom of 910 so that images can be resized to fit more or fewer images on the palette, as well as sliders to move throughout the palette. Other data may be inserted onto this screen, such as hint information just above 920 that will be supplied to the user to assist in the user's selection of images.
- The synchronization of the inter-bin and intra-bin image transitions with music represents fixed points in time during the duration of the song and is managed as part of the time mesh. As indicated in FIGS. D4, D5 and D6, elements of the time mesh may be shown visually as a timeline that ties the image selection, image transitions, music or lyrics together. Also, as shown, the GUI may incorporate the use of multiple tracks to aid the user in viewing simultaneous bin, image or song related events, as illustrated in FIGS. D7 and D8. The selection of transition effects may also be aided by the use of a handheld device that a template creator can sway like a wand or conductor's baton. This may allow the creator to more easily translate how they process the music in their brain into the visual effects that define one or more bins. Other devices may include accelerometer-based devices that capture motion, such as dancing or other bodily movement, and translate those motions into visual effects during template creation.
- FIG. D7 provides an embodiment of intra-transition viewing with multiple overlapping bins. 700 is a suggestion box which offers hints for both bins, with options for more explanation and image suggestions. 710 is the main viewing window, which shows the current view in the video, mixing multiple bins and utilizing the current inter- and intra-bin transition effects. This view changes as the movie proceeds when play is pressed, synchronized to the slider and the timeline at the bottom. 720 is the initial framing of the image in bin 1 (with effects already in place); it might be zoomed in, altered, or have other visual effects contained within. 722 is the end framing of the image. 730 is the initial framing of the image in bin 2 (with effects already in place); it might be zoomed in, altered, or have other visual effects contained within. 732 is the end framing. 740 is the timeline from start of image to end of image, with a status indicator; the slider moves automatically with time, and the user can drag it to a desired point. 750 is the main filmstrip showing non-overlapping bins. 760 is the second filmstrip showing bins that overlap with the first. There can be multiple filmstrips here wherever bins overlap (for multiple pictures at one time that are not simply transitioning from one to the next). Transitions are shown before and after to indicate when an image starts and when it ends, with transitions also covering inter-slide transitions of the images before and after. The dashed box refers to the section shown in the viewer in 710, consistent with the full time bar in 740.
- Another embodiment of displaying multiple tracks for a composer or user is described in FIG. D8.
- FIG. D8 provides an example of how bins may be displayed vertically to give the User or Composer an easy way to view how bins proceed over time with respect to the time placement of bins and transitions. 800 shows the current images occupying given bins; bins are listed in order from top to bottom. In User Mode, clicking on a bin in this section brings up a screen to assist the User in selecting a picture. In Template Composer Mode, clicking on a bin allows the Composer to edit the fields that will be visible to the user, such as keywords, effects, image selection (for multiple images to be recommended), etc. 810 is a pair of up and down arrows that allow the user to scroll up or down the list of tracks for editing. 820 denotes a visually placed time segment, listed by beats or by a pre-defined time period such as seconds. 830 is a bin, placed on the horizontal axis at the time it is to begin being shown, together with its transition time in and out. The shaded boxes indicate transitions; the clear boxes are where the image is fully shown (though likely involved in an intra-bin transition). In Composer Mode, bins can be dragged to a different time, stretched, copied, or modified in other ways, and clicking on a box allows the Composer to modify the bin and its characteristics. 840 is a horizontal timeline of the currently shown section, with the current time being played indicated by a vertical shaded bar. 850 is a track-size slider that allows the tracks to be shrunk so that more tracks can appear, allowing the composer or user to view the full set of tracks for a video or just the current set being viewed. 860 is a timeline-size slider that allows larger or shorter periods of time to be represented by the timeline. Composers require fine tuning of tracks, and this control can act like a microscope or allow the full video to be viewed on one screen.
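The vertical track layout of FIG. D8, with bins placed at start times and shaded transition regions, suggests a simple timeline data structure. The sketch below is a hypothetical Python representation; the field names and the overlap rule are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Bin:
    start: float          # seconds into the song when the bin appears
    duration: float       # total on-screen time, including transitions
    transition_in: float  # length of the inbound (shaded) transition
    transition_out: float # length of the outbound (shaded) transition

    @property
    def full_start(self) -> float:
        """Time when the image is fully shown (inbound transition done)."""
        return self.start + self.transition_in

    @property
    def end(self) -> float:
        return self.start + self.duration

def overlaps(a: Bin, b: Bin) -> bool:
    """True when two bins share screen time and so need separate tracks."""
    return a.start < b.end and b.start < a.end
```

A layout routine could use `overlaps` to decide when a second filmstrip row (as in 760 of FIG. D7) is needed.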
- FIG. D9 also provides an idea of how an end user might drag their images into the filmstrip to personalize a template to their tastes. Certain features that are accessible to the template composer would likely be disabled for the end user, or may be unlocked by an end user who assumes the role of a template composer. In the above it is assumed that the template composer and the end user are separate individuals, though they certainly could be the same individual. Additionally, any template composer could begin working from a pre-existing template, whether partially or fully complete, in order to modify the template to their liking. Any user could modify a template, whether in original form or already modified by another user, to suit their taste. For example, a friend may receive a video full of pictures from a birthday party and choose to swap one picture of the birthday boy with an old picture of the birthday boy from childhood. There would likely be tracking data associated with changes over time, and possibly a repository in which the old versions and the associated data are stored. Access to different features of modification would depend on the licensing terms entered into by the various parties. For example, it may be necessary to purchase a copy of the song or template in order to modify a friend's video, or even to purchase one or more copies of the song before it may be shared with others. Some of these rules might enable access to an existing copy of a song already residing on a friend's computer that would need to be referenced at run-time. The fingerprinting of the song in order to synchronize start times is of particular importance in this instance.
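The run-time start-time synchronization mentioned above could, in the simplest case, be approximated by correlating a short reference clip against the locally resident song; real audio fingerprinting is far more robust, and this brute-force sketch is only a stand-in to make the idea concrete:

```python
# Hypothetical sketch: find where a short reference clip best aligns
# inside a longer audio signal by maximizing the raw correlation.
# Real fingerprinting (spectral landmarks, hashing) is assumed instead
# in practice; this is illustrative only.

def best_offset(ref: list[float], clip: list[float]) -> int:
    """Return the sample offset of `clip` within `ref` with the
    highest correlation score."""
    best, best_score = 0, float("-inf")
    for off in range(len(ref) - len(clip) + 1):
        score = sum(r * c for r, c in zip(ref[off:off + len(clip)], clip))
        if score > best_score:
            best, best_score = off, score
    return best

ref = [0, 0, 0, 1, 2, 1, 0, 0]
clip = [1, 2, 1]
print(best_offset(ref, clip))  # -> 3
```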
- Elements from FIGS. D4, D5, D6, D7, D8, and D9 may be combined to display a set of data to allow composers and users to view and modify aspects of an MPV most efficiently for their specific purposes.
- FIG. D10 illustrates the mathematics of a tool available to template composers, and possibly end users, which allows a point on a photo to stay static in its position on the screen while the photo is zoomed in or out. For example, a point between the eyes is selected as in the above illustration, and while the image zooms out, the point between the eyes stays in exactly the same position on the screen. The target tool shown in FIG. D6 would allow these two points on the Start and End images to be locked together and move together if the user changes which part of the image is shown on the screen. For example, when locked, if the first image is moved down so that more of the head shows, the second would move in lock step to keep the point between the eyes consistent in the before and after. The second image would therefore be constrained in where it could move; its movement would require the first to also move. This tool could operate automatically, semi-automatically, or manually; the software may suggest points on the image using algorithms to detect likely targets, such as eyes or planets.
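The geometry behind this fixed-point zoom reduces to one constraint: if a screen position equals the image offset plus the zoom factor times the image coordinate, then pinning an image point p to a screen point s determines the offset for any zoom. A minimal sketch, with hypothetical names:

```python
def offset_for_fixed_point(p, s, zoom):
    """Image offset (top-left position on screen) so that image point p
    stays at screen point s for the given zoom factor.
    Constraint: screen = offset + zoom * image_point."""
    px, py = p
    sx, sy = s
    return (sx - zoom * px, sy - zoom * py)

# A point between the eyes at image coords (200, 150), pinned to
# screen coords (320, 240), stays put while the zoom changes:
print(offset_for_fixed_point((200, 150), (320, 240), zoom=1.0))  # -> (120.0, 90.0)
print(offset_for_fixed_point((200, 150), (320, 240), zoom=2.0))  # -> (-80.0, -60.0)
```

In both cases the recovered offset satisfies offset + zoom * (200, 150) = (320, 240), so the chosen point never moves on screen as the zoom animates.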
- The next section provides an example of how a user might utilize the templates created by a template composer. The template is used with a camera or camera-equipped device to compose or capture images in real time.
- This section provides an example of how a user might utilize the templates created by a template composer to select images. As shown in FIG. D11, the template software or software associated with a template may provide four types of functions as part of the MPV creation process. The first function is that it provides ‘opportunity’ by allowing for one or more templates to be downloaded or shipped with the computing device and providing immediate access to the template at any time. The availability of multiple templates further provides the opportunity for the user to match a given setting or event with an appropriate template to create a personalized MPV.
- The second function is that the template, or software associated with a template, will ‘assist’ the user in the composition of the image, such as the placement of the primary subject relative to a background or landscape, or the movement of the subject for a sequence or burst of images. The third function is that the template, or software associated with a template, may provide instructions to the user or the device regarding settings, such as depth of focus or flash settings, for one or more image captures. The fourth function is that the template, or software associated with a template, enables the user to view existing sample images on the digital display, to view and select one or more captured images for a given bin, thus replacing a given template sample image, and finally to view the finished MPV on the display. The template processing unit may be in the form of a software module that resides within memory and is executed on a digital device, or it could be a separate integrated circuit component configured for template operations. The template software may be downloaded, and the software associated with a template may be embedded into the digital device. The software associated with a template that may be embedded into the device will likely interpret a given template object to assist, capture, place images, and view the final result. In either case, the template processing unit (constructed in software or hardware) communicates with the digital display and the various selection or modification functions presented in FIGS. D4 through D9.
- As described in Section 2.1, each template is associated with a particular song and has fixed bins for which a user will choose images. The preferred embodiment is that the song is available so that the user can hear the song while composing and capturing the images, providing a richer experience. However, in other cases the song may not be available for download or included with the template, and the invention includes the use of the template without the song as well. The synchronization of the sample images and the music is done during template creation, so the replacement of the sample images with new images for a given bin can still occur.
- The resulting MPV can be sent from the computing device to another person's computing device. There are multiple ways to do this. One method is for User A to email the completed MPV to another person, Recipient B, who receives the MPV or a link to a server where the MPV may be downloaded. In the case where the template is associated with a commercially available song, User A may choose to purchase the song for Recipient B, whose MPV is downloaded with the associated song. User A may choose not to purchase the song, and Recipient B receives a URL link to download the MPV and purchase the song. User A may choose not to purchase the song, and Recipient B receives the MPV and is prompted to purchase the song when they try to play it. User A may choose not to purchase the song, and Recipient B receives the MPV and already owns the song, which resides on a device that is used to view the MPV. The invention includes the various combinations of transmitting MPVs and the associated songs, and the various combinations of sending links to locations where MPVs and songs may be downloaded.
- FIG. D13 provides an embodiment of the process for an end user operating software that utilizes a template to create an MPV. In this case the user will select photos, though other similar embodiments would include video, animations, text, or other visual materials. The user starts by choosing a song, mood, or theme 2000 for which to create a video. The user can view templates 2010 built on that song, mood, or theme to identify which template will be selected for modification. There may be multiple templates, made by a variety of template composers, that are all built upon an identical song. In 2020 the user selects the template to use, possibly including the purchase of the song and template at this stage. In 2030 the user views the instructions contained within the template about the characteristics of the images to select for placement into a bin or a series of bins (most likely in the form of a list). In 2040 the user begins with the first bin that requires an image, and in 2050 instructions and lyrics about what the image should contain are provided. Additionally, while in this bin the corresponding section of the song may play so that the user can feel the mood of the music and hear the lyrics. In 2060, the user takes one or more pictures, which are stored in the currently selected bin. One of these images will occupy the bin, and that image is selected in 2070 by the user; it may default to the last picture taken in the bin. Certain bins may require multiple images to be included, and these could be selected from the bin. In 2080 the user proceeds to the next bin, either automatically after selecting an image or by manually indicating to proceed. After the last bin is filled, 2090 allows the user to view the full movie with their images. After viewing, in 2100 the user has the option to share, upload, or gift the video. In 2110 the user has the opportunity to further edit the video on a PC or other mobile device such as a digital camera, digital frame, mobile handset, or other electronic device that offers the user the ability to download the video and its parts and provide further commands for replacing images in various bins.
- There may be additional steps in FIG. D13, and the listed steps may be skipped or performed in a different order.
In one embodiment, there would be no lyrics or written suggestions, but there would be sample images in bins that the user would view but replace. In another embodiment, colors are used to indicate what types of images to place into different bins. In another embodiment, the user points to a group of images which are placed automatically into the bins in a random or semi-random order and the user is able to arrange them as they desire while watching the video with these images.
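The bin-filling loop of steps 2040 through 2080 might be sketched as follows; the function names, the default to the last picture taken, and the callback shapes are all illustrative assumptions:

```python
# Hypothetical sketch of the bin-filling loop of FIG. D13: iterate over
# the template's bins, let the user capture candidate images for each,
# and default the selection to the last picture taken (step 2070).

def fill_bins(bins, capture, choose=None):
    """bins: ordered list of bin ids.
    capture(bin_id) -> list of captured images (step 2060).
    choose(images) -> one image; defaults to the last capture."""
    selected = {}
    for bin_id in bins:                        # step 2080: advance per bin
        candidates = capture(bin_id)
        pick = choose(candidates) if choose else candidates[-1]
        selected[bin_id] = pick
    return selected

result = fill_bins(["intro", "chorus"],
                   capture=lambda b: [f"{b}_1", f"{b}_2"])
print(result)  # -> {'intro': 'intro_2', 'chorus': 'chorus_2'}
```

A real implementation would interleave song playback and instruction display (steps 2050 and 2051) inside the loop rather than running it straight through.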
- Other various optional steps are included in the dashed boxes in the right column, in 2011 through 2081. For example, in 2011 a user may view multiple templates for a given song at the same time in order to see the differences as they occur; one template video could be on the left and a second on the right. 2021 allows for the purchase of a song or template at a given point in the process, which could also occur in 2041 when gifting to another person. 2051 allows a user to move between bins as they desire, allowing images to be captured out of order. 2061 gives the user the opportunity to view all or a segment of the video at a given time, or could allow a given bin to loop through the various images in the bin so the user can see how they look. 2062 allows many images to be associated with a bin for easy reference at a later point in time even though they are not the image being used in the bin slot of the MPV. This real-time association with a bin is an improvement over the current requirement that a user taking many photos at once perform the binning assignment later, as they decide which image to select, without the benefit of music and while having to remember what goes where, particularly for photos taken out of sequence of the bins. 2062 helps organize the assignment of the images taken in 2061 and avoids confusion later. 2071 allows the user to find photos previously taken and place them into the selected bin, or allows the user to locate and select other symbolic images that may have been downloaded with the template for particular bins. The user may also choose an image from several that appear from a search on keywords relating to the bin or from previous images taken that reside in other folders.
2081 allows annotation of images or blank bins, or allows basic editing of selected images such as red-eye reduction, targeting which area to focus on in the image as it moves, or resizing the viewable part of an image.
- The opportunity function described in FIG. D11 relates to
items - In one embodiment the downloaded software may contain templates or portions of templates for the user to choose from locally on the device, as in 2010. While not required, there would likely be navigational utilities to help in the selection of templates, guiding a user through the selection of moods, themes, or songs. As an example, a user may key in a mood such as happiness or melancholy. The device would then search templates that are resident on the device, or resident on a server with which it can communicate, to select templates that have been identified with that particular mood. Templates, portions of templates, or descriptive information about the templates, such as the song a template is based on, would then be delivered to the user. Different methods for ranking and ordering the templates could be employed, such as which is most popular by purchases in the last week, which has been rated highest by viewers, or which are associated with template composers that have gained reputations for creating quality templates. The user would then receive a list of available templates, ordered by one or several of these ranking methods, to choose from. The user may then view these templates as in 2010 to see which they like best. While viewing a specific template, the software may provide other guides, such as a suggestion that users who viewed particular templates ended up purchasing other templates, possibly including the percentage of users who purchased each after viewing the current template.
- A user might also choose to select a theme such as Christmas or Halloween. The user might undergo a similar process as indicated in mood, being provided a list of templates ranked by a variety of methods. The user might also utilize both a theme and a mood to select the templates to view. In one example a user selects “Halloween” and then “funny”, providing a list of templates that are closest to these parameters. Other templates that might be related to “Halloween” and “classic” but not related to “funny” would not be shown. Other criteria for any search might be that a template's cost is free, is within a certain price range, or is freely distributable to others. The same might apply to a template's music which might be free, within a certain price range, or be freely distributable. Other options may be available such as the ability to include advertising within an MPV in order to offset the cost of the template or music, or templates which are free if the user provides rights to freely distribute or showcase the finished MPV to a software provider. The intent of the navigational utilities is to provide an easy method for users to select a mood, theme, or song that fits their desired criteria as quickly as possible and that provides the most utility to the user, be it popularity, quality, or other criteria.
- Another method for search might be by song. The software may provide users a list of templates organized by any of a number of criteria, such as: the highest-rated templates based on a particular song or artist; the most current Billboard chart toppers; the highest ranked in any of a number of musical genres such as Country Music or R&B; the most recently purchased templates; templates whose finished MPVs have been most distributed; songs that have the most templates; templates ranked as highest quality by one or more groups; or templates that are most relevant to purchases already made by the user or to a user's demographic, as defined by the user, by the software provider, or by a third party such as a DJ in a genre subscribed to by the user. Keywords are also obvious selection criteria for templates, and might include song names, musical artist names, musical album names, lyrics associated with the template, or synonyms of any of these. The software might provide results based on a match of keywords and other ranking criteria, such as a blend of the keyword match and popularity.
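A blend of keyword match and popularity, as mentioned above, might look like the following hypothetical scoring sketch; the weights and record fields are assumptions, not part of the specification:

```python
# Hypothetical sketch: rank templates by a weighted blend of keyword
# match fraction and a normalized popularity score.

def rank_templates(templates, query_words, w_keyword=0.7, w_pop=0.3):
    """Each template: {'name': str, 'keywords': set, 'popularity': 0..1}.
    Score = w_keyword * (matched query words / query words) +
            w_pop * popularity."""
    def score(t):
        matched = len(query_words & t["keywords"]) / max(len(query_words), 1)
        return w_keyword * matched + w_pop * t["popularity"]
    return sorted(templates, key=score, reverse=True)

templates = [
    {"name": "A", "keywords": {"halloween", "funny"}, "popularity": 0.2},
    {"name": "B", "keywords": {"halloween", "classic"}, "popularity": 0.9},
]
top = rank_templates(templates, {"halloween", "funny"})
print(top[0]["name"])  # -> A (full keyword match outweighs B's popularity)
```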
- Such a search selection of templates could occur directly on the device. Certain templates which are most likely to be purchased may be preloaded into the software and available for a user immediately, possibly with a purchase required. Other search results would likely come from a connection to a user's computer or to the Internet. The device user could browse the available templates or browse template information provided from the Internet, which would be delivered to the device as search criteria were provided to a server containing template information via the Internet. The user might then download selected templates for purchase or for previewing directly onto the device.
- A computer may also be used for template selection. Software for template selection (and possibly MPV creation or template editing) might contain many preloaded templates ready for viewing or use. The user might view these on a PC and select which they would like to place onto the camera with the intent of creating an MPV. A computer might also allow for the downloading to a related device. For example a user might download an MPV or a template to a pre-registered device such as a camera or a digital frame that has another means of connecting to the Internet such as through wireless telephone networks, through wireless connections to a local area network, or where a device is connected, wired or wirelessly, to another computer accessible by the Internet such as that for a family member located elsewhere.
- In many cases, a user will be required to log into an account prior to accessing or purchasing templates. The account might contain credits or value that the user can apply to the purchase of music, a template, or both. It might also contain MPVs that were purchased or created by the user, or gifted to the user by another, either as a rental or as a perpetually licensed gift, possibly with the ability to view future variations or updates to a given MPV template.
- Another embodiment for MPV creation allows the user to view pre-created video or image footage and select parts of the footage to insert into a selected template. In this instance, a user might download both a template and one or more hours of footage which they would watch. Such a device need not contain image capture since the raw image footage would already be provided. The template might offer suggestions about what images or video to place into the bins. The images or video might be further editable, such as by cropping, rotating, or changing color. The user would then be able to create the MPV based on these images. High resolution images may reside on a server and be accessible during the MPV creation process or after a required purchase. In such a case the MPV creation software would note the time of a particular image in the video being watched and be able to reference the higher resolution image from this time data. The MPV could then be created on the device locally or on a server and delivered to the device or to the user's account for further distribution. The footage viewed by a user may or may not be related to a particular template.
- The assist function described in FIG. D11 relates to
items - FIG. D16 provides an example of how a user might utilize the templates created by template composers, as indicated in the Personalization portion of FIG. D1. FIG. D16 illustrates a screen being utilized by a user who has already created images and wants to place them into an MPV. The user may have gone through the process of capturing images with a live camera and placed them into bins, may have images that came with the template, may have searched for additional images using a search engine, or may have accessed images previously taken by the user. This tool allows the user to easily view many images and select the proper image for the bin, then move on to the next bin and repeat the process as needed. The selection process could occur on a portable device, such as a camera, to augment parts of an MPV that the user desires to modify with pre-existing pictures.
- 2500 shows folders that contain images (still or video). Users can add more folders, including websites that contain images. 2502 displays a visual list of images from the folders listed in 2500; images can be dragged into the Time-synched Energy Template to replace an existing image, and can be grouped by “Your Images Taken” or by pre-loaded “Suggested Images” that are supplied with the template or taken from a web search based on keywords of the image. 2506 displays instructions for the User on what type of image to select. 2508 shows the lyrics for the current bin. 2510 shows the currently selected picture from 2502. 2514 shows the timeline of images, including at least the prior, current, and next images. 2516 shows the current image occupying the current bin; it is consistent with 2510. 2518 shows the transition into the current image occupying the bin. 2520 shows controls for moving forward/backward in time or to the next/prior bin. 2522 shows a view-size slider which allows the images in 2514 to be made larger or smaller. 2524 shows a volume control slider. 2526 shows a menu button that brings up additional options, including selecting images to show, moving bins, saving progress, changing screens, turning optional features (e.g., lyrics) on or off, etc.
- The assist function may provide additional features to those in FIG. D14, D15, and D16. Features, including some of those mentioned in the earlier Figures, include the following.
- A list which can be viewed on a display, where each item refers to a bin and includes keywords. The list can be viewed on the device display, emailed, viewed on a PC, or viewed in another way.
- The ability to view a library of images pre-selected for bins by template composers, images placed into bins of MPVs previously made by users, images from local or online libraries, images from online communities, images from linked friends in online communities, or images that have been processed and possess visual characteristics, any of which might or might not be tagged with keywords consistent with the template composer's instructed keywords.
- The ability to view a list of images pre-selected by a template composer or third party which has instructions about the positive characteristics about the image, including what a user should try to do in composing an image for a particular bin.
- The ability to view multiple images in a bin in an MPV at the same time, or to view images designated for a single bin in sequence in a loop, possibly including a prior and/or next image.
- The auto-detection of characteristics within an image, such as a face that might fit into a pre-defined area of a background image.
- Instructions for a user to take multiple images that could be automatically cropped or super-imposed, possibly in a sequence over time. Images might be taken in a certain portion of the viewfinder, or be full-sized images which are resized and placed into the prescribed position.
- Instructions that can be toggled on or off to guide a user in the placement and sizing of a subject.
- Lyrics that can be toggled on or off for a particular bin.
- Automatic selection of the point of focus on an image.
- Manual selection of the point of focus of an image, using instructions on what to place the focus on, such as a building, a person's eyes, a river, or another object.
- Automatic selection of one image for each bin from a pool of images marked as belonging to a particular bin to be placed in the bin for the creation of an MPV, or the ability to watch an MPV that selects from a designated pool of images for each bin.
- A service whereby an image can be uploaded and commented on, edited, or modified by a third party, providing the user with guidance about the quality of the image and what could improve the image.
- A service whereby the images taken by a user, possibly many images that may or may not be designated for specific bins, would be transmitted to professionals or third-parties, possibly be edited or cut, and placed into bins to form an MPV for the user.
- Any combination of the above.
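The automatic selection of one image per bin from a designated pool, one of the assist features listed above, might be sketched as follows; the pool structure and the seeded semi-random choice are illustrative assumptions:

```python
import random

# Hypothetical sketch: assemble a watchable MPV by picking one image
# per bin from the pool of images marked as belonging to that bin.
# Re-watching with a different seed gives a different semi-random cut.

def assemble_mpv(bin_pools, seed=None):
    """bin_pools: {bin_id: [candidate images]}.
    Returns {bin_id: selected image}."""
    rng = random.Random(seed)
    return {bin_id: rng.choice(pool) for bin_id, pool in bin_pools.items()}

pools = {"verse": ["v1.jpg", "v2.jpg"], "chorus": ["c1.jpg"]}
print(assemble_mpv(pools, seed=0))
```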
- The view function described in FIG. D11 relates to
items 2090 in FIG. D13. Users can view MPVs with sample images, view their own MPVs with final images, view with different images being placed into a single bin to help decide which to select, or view semi-randomly, where images are placed into the MPV according to some selection criteria, such as bins containing multiple images or images in any available library matching tags on the bin. Viewing is also possible on other devices, such as digital picture frames, or through output to another device by sending a digital or analog output signal, such as connecting the camera to a TV for viewing the MPV. Viewing may occur at variable volume levels (including no volume), occur at variable rates of speed for faster or slower playback, and utilize standard navigational icons including play, pause, go to end, go to beginning, fast forward, rewind, and varying degrees of fast forward and rewind. Users may also watch an MPV in a mode where the bins can be modified as they are watched, such as changing a point of focus or amount of zoom, or use accelerometers, gyroscopes, or other physical movement sensors to modify the MPV as it is being viewed, such as pulsing to a physical movement or panning based on turning the device. In some of these instances, images, higher resolution images, or a rendered video would need to be downloaded into the viewing device so that the MPV could be assembled or viewed.
- Users may share an MPV in complete or partially complete form. One method is to email the MPV file and possibly associated files, as a movie file or as a collection of MPV-related files that would be accessed during playback for construction of the video. Another method is to upload MPV files to a server where they can be accessed by others, possibly through an account they have set up on the server. If license fees are required, users may have already paid for certain users to be able to view the video, or they may allow others to purchase the necessary rights to view the video.
Purchase fees might allow for the ability to modify part or all of the MPV. An MPV on a server may also be rendered and shared as a non-modifiable movie or MPV file rather than as modifiable MPV files. Users might also use other transfer techniques, such as Bluetooth technology for wireless transmission or a USB cable to share from one device to another, or be allowed to burn files onto a CD so they can be transferred to another user.
- FIG. D19 describes a likely scenario for a user who has just completed personalizing an MPV with his own images. At this point the user may desire to add a title and share the MPV with friends. 2800 shows an area where the User can change the title of the video. 2810 shows an area where the User can select email addresses of others with whom to share the video. 2820 shows an area where the User can upload the video to a backend service, from which the User can access it from another computer or device.
- There may be various other embodiments not described here that achieve the spirit of the invention to accomplish these desired tasks with the combined objectives of simplicity of use and maximum intensity of effect. The novel features and advantages of the invention are described in the next section and capture key elements of a larger set of embodiments that achieve the spirit of the invention.
- One application of MPVs beyond video creation and viewing is for video games. In one variation, the user takes images in real time and receives a score for the sequence or for individual images. Alternatively, the user might be required to size and place a rectangle inside existing pictures in order to cut the images, after which the user would receive a corresponding score based on how well the image was cut. Many variations could build on this, such as determining the rotation or the point of focus. Scoring an image could take place automatically or could rely on a service of live humans that rates the modified image or the chosen effect. Educational opportunities could arise from this as well, such that the game is marketed as an educational utility to help improve the image capture of aspiring photographers or children. Some of these may come with preloaded footage as described earlier, such that the final images are demonstrated to the user as a collection of preferred images. In other games, there could be specific points where the user needs to cut a song according to some criteria they are judged against, such as certain beats or accents. Users might be required to move the device according to the effect that is happening, such as down or up or rotated, based on movement of an image.
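Automatically scoring how well an image was cut could, in one hypothetical implementation, compare the user's rectangle against a reference rectangle by intersection-over-union; the 0-to-100 scoring scale is an assumption for illustration:

```python
# Hypothetical sketch: score a user's crop rectangle against a
# reference crop using intersection-over-union (IoU).

def iou(a, b):
    """IoU of two rectangles given as (x, y, w, h);
    1.0 is a perfect match, 0.0 is no overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def cut_score(user_rect, reference_rect) -> int:
    """Score a user's cut from 0 to 100 by overlap with the reference."""
    return round(100 * iou(user_rect, reference_rect))

print(cut_score((0, 0, 10, 10), (0, 0, 10, 10)))  # -> 100
print(cut_score((0, 0, 10, 10), (5, 0, 10, 10)))  # -> 33
```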
- Another videogame example is a treasure hunt, where children are directed to find and capture particular objects. Instructions might be given as audio or written instructions, or use images that possess the desired quality the child is searching for, such as “red” or “three” or “building” or “Mickey Mouse”. The child would then take photos of the items as each bin requests, and at the end of the exercise the video would be created automatically using the items captured, synched to some type of pre-defined music. Images of the child might be overlaid with the items found. Items might also need to fit within certain areas on a screen to ease the video editing, and direction on the display screen would assist the user.
- Videobooks are another application. Videobooks can either physically display several images from video sequences in a print form similar to a comic book, or may digitally offer video sequences similar to an advanced-feature digital frame capable of displaying sound and music. For example, a user might take screen shots every second, or take a series of shots close together in a video sequence, and then compile them with others so that a story from the book emerges in print form. There might be 20, 50, 100, or more photos in the sequence, likely arranged by bin. Lyrics or comments might be included for a given bin. Images would likely contain effects that had been performed in the MPV so that a series of images, for example, zooms in or pans similarly to the MPV.
- The portable device may also be used in the semi-automated creation of templates. Any type of sensor, such as an accelerometer, could be used to capture human movement as a song plays. That data could then be interpreted to produce cuts, pans, zooms, blends, and other effects. Input devices used to collect this data might be external to the device but plug into it with a cable, or use a wireless communication technology such as Bluetooth to send information to the device.
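The sensor-driven template creation described above can be sketched in code. This is a hypothetical illustration only: the function name, the jolt/tilt thresholds, and the mapping from motion to effects are all assumptions, not the method disclosed in this application.

```python
# Hypothetical sketch: turning accelerometer samples captured while a song
# plays into template cut points and pan directions. All names, thresholds,
# and the heuristic itself are illustrative assumptions.

def movements_to_template(samples, cut_threshold=2.0, pan_threshold=0.5):
    """samples: list of (time_sec, ax, ay, az) accelerometer readings.
    Returns template events: a cut on a sharp jolt, a pan on sustained
    horizontal tilt."""
    events = []
    for t, ax, ay, az in samples:
        magnitude = (ax**2 + ay**2 + az**2) ** 0.5
        if magnitude >= cut_threshold:
            events.append({"time": t, "effect": "cut"})
        elif abs(ax) >= pan_threshold:
            direction = "pan_right" if ax > 0 else "pan_left"
            events.append({"time": t, "effect": direction})
    return events

samples = [
    (0.0, 0.1, 0.0, 1.0),   # device at rest: no event
    (1.5, 2.5, 0.3, 1.0),   # sharp jolt: interpreted as a cut
    (2.0, 0.7, 0.1, 1.0),   # sustained tilt right: interpreted as a pan
]
print(movements_to_template(samples))
```

A real implementation would read the samples from the device's motion sensors (or an external Bluetooth input device) rather than a hard-coded list, and would likely smooth the signal before thresholding.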
Claims (31)
1. The use of a template providing visual direction on a display to guide a user in the selection of an existing image or sequence of images to be placed in the template.
2. the method of 1 where the template is displayed using software residing on a processor enabled digital device such as a personal computer, web server, phone, or other computing device that provides a visual representation and where the template may include sample images and music synchronized with a given template.
3. the methods of 1 or 2 where multiple templates reside on the device and provide the user with a choice of one or more templates and where the templates may be related to a particular theme, song, mood or event.
4. the methods of 1, 2 or 3 where the template includes instructions specifying inter-image transitions such as image-to-image fades or intra-image transitions such as pan and zoom effects and where the transitions may be synchronized to music.
5. the methods of 1, 2, 3 or 4 where the templates are downloaded through a wired or wireless connection from a database of templates that may reside on a server or are downloaded via the internet to a client type computer.
6. the methods of 1, 2, 3, 4 or 5 where the user is assisted in the selection of an image or sequence of images relative to subject positioning with regard to backgrounds, landscapes or other subjects.
7. the methods of 1, 2, 3, 4, 5 or 6 where the user is assisted by visual indicators shown on the display of the device, including the use of shapes to dictate the placement of a subject, such as position of the subject's eyes relative to a particular location and where the location may include a sample image with the location indicated.
8. the methods of 1, 2, 3, 4, 5, 6, or 7 where the template is synchronized with music and where the music is played for the user as part of importing images or image sequences into the template and where the combination of music and the template aids in the selection of an image, selection of a transition effect or some additional modification to the image, such as a transition from gray scale to color or transitions between multiple images taken in a sequence.
9. the methods of 1, 2, 3, 4, 5, 6, 7, or 8 to arrange or organize a selection of previously captured images in a palette and where the images are selected according to criteria associated with the mood, song or theme of a given template and where keywords or labels associated with a set of images are used to select and arrange them for importing into a given template.
10. the methods of 1, 2, 3, 4, 5, 6, 7, or 8 where the template suggests the use of multiple images taken at different settings where the settings may include depth of focus, shutter speed, ISO level, flash intensity or synchronization or focal point or where additional software recognizes or organizes images based on depth of focus, shutter speed, ISO level, flash intensity or synchronization or focal point.
11. the use of methods 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 where software is used to graphically display a template with a sample image and a media player is provided that synchronizes the image transitions to music and where additional images may be provided to show the beginning and end of the intra-image transitions such as a pan or zoom effect and where the media player may be used to view the inter-image transitions from sample image to sample image, such as fade effect, style or rate.
12. the use of methods 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 or 11 to generate a motion photo video consisting of a sequence of images synchronized to music, stored in the form of a software object, where the object may be a video or the set of imported images with a computer instruction set to specify the template style or transition effects.
13. the transfer of a motion photo video in method 12 or template object used to generate a motion photo video in method 12 from one device to another digital device such as a computer, phone, television, frame, digital book, printer, disk drive or digital media such as a DVD, CD, flash or USB based memory device through wired or wireless communication.
14. the embedding of the motion photo video or populated template object resulting from the method in 12 in a personal or commercial webpage, either through the direction of the user or automated through software residing on a personal computer, web server, or other computing device.
15. the sending of the motion photo video or populated template object resulting from the method in 12 to another person via email where the object is sent or a link is sent where the object can be downloaded.
16. the use of methods 1, 2, 3, 4, 5, 6, 7, or 8 that includes the purchase of a song or some form of a digital media license for music, where the music is associated with or linked to a given template or where the sending of an object or link of an object to another person results in the purchase of a song or some form of a digital media license for music.
17. the transfer of the motion photo video object resulting from method 12 through wired or wireless communications from one device to another.
18. the viewing of the motion photo video object resulting from method 12 on a display such as a television, digital frame, digital book, phone or personal computer display.
19. a strategy for selling music that includes the provision of a visual template synchronized to a particular song that allows images to be imported by a user, where the synchronization is in the form of software that specifies inter-image and intra-image transition effects.
20. a strategy for selling digital display devices for displaying the motion photo video object resulting from method 12 and where the display may include electronics that communicate with other electronic devices to receive the motion photo video object through a wired or wireless connection.
21. a strategy for advertising products or services by displaying or broadcasting the motion photo video resulting from method 12.
22. the use of method 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, or 12 to create an advertisement that may be co-sold or co-branded with a particular song or that may be related to a specific template.
23. the method of 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 11 where one person creates the template specifying a sequence of visual effects and a second person utilizes the template to select images to which the visual effects are applied in order to generate a motion photo video resulting from method 12.
24. the method of 23 where a template may be created by one person, where the template may be modified by one or more additional persons, and where the images are selected by a third person, where the third person uses the modified template to select content to be placed into the template to create a motion photo video resulting from method 12.
25. the method of 23 or 24 where one person selects images to be placed into a template to generate a motion photo video resulting from method 12, and where a second person may replace one or more images in the resulting template object, and where a new motion photo video is generated from the new content in the template object by the process described in method 12.
26. the use of methods 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 11 where an update is applied to a template which causes the initial motion photo video created by earlier versions of the populated template object to be updated with the changes to the template object, which may include changes to timing, inter-bin transitions or their effects, or other visual effects, where the same content used by the person to generate the initial motion photo video in method 12, along with the modified visual effects from the update, is used to generate the new motion photo video.
27. the use of method 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 11 where the template is generated with the aid of a visual representation of the music, such as a frequency transformation, energy or volume profiles at a particular frequency, lyrics in text form, amplitude or volume of a vocal track, which assist the user in determining bin properties, inter-bin transitions or intra-bin transitions with respect to the music.
28. The use of a template software object that contains one or more bin software objects, each of which has one or more child objects associated with it such as an image, inter-bin transitions, intra-bin transitions, speech annotations, background or text, where the bins are part of a template object which schedules the display of images with any visual effects associated with the bin at the time prescribed by the template software object or bin object, where the template object is used to create a motion photo video, and where images in bins may be determined by or replaced by users, manually or automatically, to alter how a template object is populated and generate new motion photo videos.
29. The use of method 28 where the properties of the bin software object, such as intra-bin transition effects or timing of the bin, may be modified by a person so that a third person may populate, manually or automatically, the modified template object with images or additional audio to generate a new motion photo video.
30. An apparatus, such as an accelerometer based device that aids the user in selecting visual effects such as intra-bin transitions or inter-bin transitions associated with the music as part of generating the template objects used in methods 23, 24, 25, 26, 27, 28, or 29.
31. The licensing or selling of the template object created in methods 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 26, 27, or 28 by one person to another person.
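The nesting of template, bin, and child objects recited in claim 28 can be illustrated with a minimal sketch. The class names, fields, and scheduling logic below are assumptions for illustration, not the software design disclosed in this application.

```python
# Illustrative sketch of the template/bin object nesting of claim 28:
# a template object holds bin objects, each bin carries child properties
# (image, transitions), and the template schedules the display of each
# bin's image with its effects. All names and fields are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Bin:
    start_time: float                 # when the bin's image is shown (seconds)
    duration: float
    image: Optional[str] = None       # user-supplied image, filled in later
    intra_transition: str = "none"    # e.g. "zoom_in", "pan_left"
    inter_transition: str = "fade"    # transition into the next bin

@dataclass
class Template:
    song: str
    bins: list = field(default_factory=list)

    def populate(self, images):
        """Place user images into bins in order; extra bins stay empty."""
        for b, img in zip(self.bins, images):
            b.image = img

    def schedule(self):
        """Yield (time, image, effect) display instructions for a player."""
        for b in self.bins:
            if b.image is not None:
                yield (b.start_time, b.image, b.intra_transition)

template = Template(song="song.mp3", bins=[
    Bin(0.0, 4.0, intra_transition="zoom_in"),
    Bin(4.0, 4.0, intra_transition="pan_left"),
])
template.populate(["beach.jpg", "sunset.jpg"])
print(list(template.schedule()))
```

Replacing an image via `populate` and regenerating the schedule mirrors the claim's provision that users may repopulate a template object to produce new motion photo videos.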
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/716,231 US20100223128A1 (en) | 2009-03-02 | 2010-03-02 | Software-based Method for Assisted Video Creation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15687109P | 2009-03-02 | 2009-03-02 | |
US12/716,231 US20100223128A1 (en) | 2009-03-02 | 2010-03-02 | Software-based Method for Assisted Video Creation |
Publications (1)
Publication Number | Publication Date |
---|---|
US20100223128A1 true US20100223128A1 (en) | 2010-09-02 |
Family
ID=42666893
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/715,367 Expired - Fee Related US8860865B2 (en) | 2009-03-02 | 2010-03-01 | Assisted video creation utilizing a camera |
US12/716,231 Abandoned US20100223128A1 (en) | 2009-03-02 | 2010-03-02 | Software-based Method for Assisted Video Creation |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/715,367 Expired - Fee Related US8860865B2 (en) | 2009-03-02 | 2010-03-01 | Assisted video creation utilizing a camera |
Country Status (1)
Country | Link |
---|---|
US (2) | US8860865B2 (en) |
Cited By (201)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110029635A1 (en) * | 2009-07-30 | 2011-02-03 | Shkurko Eugene I | Image capture device with artistic template design |
US20110025883A1 (en) * | 2009-07-30 | 2011-02-03 | Shkurko Eugene I | Image capture method with artistic template design |
US20110029562A1 (en) * | 2009-07-30 | 2011-02-03 | Whitby Laura R | Coordinating user images in an artistic design |
US20110025714A1 (en) * | 2009-07-30 | 2011-02-03 | Ptucha Raymond W | Method for producing artistic image template designs |
US20110029540A1 (en) * | 2009-07-30 | 2011-02-03 | Ptucha Raymond W | Method for matching artistic attributes of a template and secondary images to a primary image |
US20110029914A1 (en) * | 2009-07-30 | 2011-02-03 | Whitby Laura R | Apparatus for generating artistic image template designs |
US20110037778A1 (en) * | 2009-08-12 | 2011-02-17 | Perception Digital Limited | Apparatus And Method For Adjusting An Image In A Screen Of A Handheld Device |
US20130055087A1 (en) * | 2011-08-26 | 2013-02-28 | Gary W. Flint | Device, Method, and Graphical User Interface for Editing Videos |
US20130150990A1 (en) * | 2011-12-12 | 2013-06-13 | Inkling Systems, Inc. | Media outline |
US20130151971A1 (en) * | 2011-12-13 | 2013-06-13 | Olympus Imaging Corp. | Server apparatus and processing method for the same |
US20130167086A1 (en) * | 2011-12-23 | 2013-06-27 | Samsung Electronics Co., Ltd. | Digital image processing apparatus and method of controlling the same |
US20130272679A1 (en) * | 2012-04-12 | 2013-10-17 | Mario Luis Gomes Cavalcanti | Video Generator System |
US20130311886A1 (en) * | 2012-05-21 | 2013-11-21 | DWA Investments, Inc. | Interactive mobile video viewing experience |
US20140172856A1 (en) * | 2012-12-19 | 2014-06-19 | Yahoo! Inc. | Method and system for storytelling on a computing device |
US8775972B2 (en) * | 2012-11-08 | 2014-07-08 | Snapchat, Inc. | Apparatus and method for single action control of social network profile access |
US8849043B2 (en) | 2009-07-30 | 2014-09-30 | Intellectual Ventures Fund 83 Llc | System for matching artistic attributes of secondary image and template to a primary image |
US20140310132A1 (en) * | 2010-04-30 | 2014-10-16 | Iliv Technologies Inc. | Collaboration tool |
US8935487B2 (en) | 2010-05-05 | 2015-01-13 | Microsoft Corporation | Fast and low-RAM-footprint indexing for data deduplication |
US20150135077A1 (en) * | 2013-03-14 | 2015-05-14 | Aperture Investments, Llc | Systems and methods for creating, searching, organizing, selecting and distributing video content based on mood |
US9053032B2 (en) | 2010-05-05 | 2015-06-09 | Microsoft Technology Licensing, Llc | Fast and low-RAM-footprint indexing for data deduplication |
USD732554S1 (en) * | 2012-01-13 | 2015-06-23 | Konica Minolta, Inc. | Electronic copying machine display screen with graphical user interface |
US9083770B1 (en) | 2013-11-26 | 2015-07-14 | Snapchat, Inc. | Method and system for integrating real time communication features in applications |
US9094137B1 (en) | 2014-06-13 | 2015-07-28 | Snapchat, Inc. | Priority based placement of messages in a geo-location based event gallery |
US20150220249A1 (en) * | 2014-01-31 | 2015-08-06 | EyeGroove, Inc. | Methods and devices for touch-based media creation |
US9116912B1 (en) | 2014-01-31 | 2015-08-25 | EyeGroove, Inc. | Methods and devices for modifying pre-existing media items |
US9207857B2 (en) | 2014-02-14 | 2015-12-08 | EyeGroove, Inc. | Methods and devices for presenting interactive media items |
US9208472B2 (en) | 2010-12-11 | 2015-12-08 | Microsoft Technology Licensing, Llc | Addition of plan-generation models and expertise by crowd contributors |
US9225897B1 (en) | 2014-07-07 | 2015-12-29 | Snapchat, Inc. | Apparatus and method for supplying content aware photo filters |
US9237202B1 (en) | 2014-03-07 | 2016-01-12 | Snapchat, Inc. | Content delivery network for ephemeral objects |
CN105303517A (en) * | 2015-10-26 | 2016-02-03 | 北京金山安全软件有限公司 | Image processing method and device |
US9276886B1 (en) | 2014-05-09 | 2016-03-01 | Snapchat, Inc. | Apparatus and method for dynamically configuring application component tiles |
US9298604B2 (en) | 2010-05-05 | 2016-03-29 | Microsoft Technology Licensing, Llc | Flash memory cache including for use with persistent key-value store |
USD754188S1 (en) * | 2014-04-17 | 2016-04-19 | Naver Corporation | Display panel with graphical user interface |
US9385983B1 (en) | 2014-12-19 | 2016-07-05 | Snapchat, Inc. | Gallery of messages from individuals with a shared interest |
US9396354B1 (en) | 2014-05-28 | 2016-07-19 | Snapchat, Inc. | Apparatus and method for automated privacy protection in distributed images |
US20160300596A1 (en) * | 2015-04-09 | 2016-10-13 | Avid Technology, Inc. | Methods and systems for processing synchronous data tracks in a media editing system |
US9477997B2 (en) | 2012-06-21 | 2016-10-25 | Goopi Sàrl | Processing resource management system and methods |
US9519644B2 (en) | 2014-04-04 | 2016-12-13 | Facebook, Inc. | Methods and devices for generating media items |
USD775156S1 (en) * | 2015-12-17 | 2016-12-27 | Outbrain Inc. | Mobile device display screen or portion thereof with a graphical user interface |
USD775153S1 (en) * | 2015-12-17 | 2016-12-27 | Outbrain Inc. | Mobile device display screen or portion thereof with a graphical user interface |
USD775157S1 (en) * | 2015-12-17 | 2016-12-27 | Outbrain Inc. | Mobile device display screen or portion thereof with a graphical user interface |
USD775154S1 (en) * | 2015-12-17 | 2016-12-27 | Outbrain Inc. | Mobile device display screen or portion thereof with a graphical user interface |
US9537811B2 (en) | 2014-10-02 | 2017-01-03 | Snap Inc. | Ephemeral gallery of ephemeral messages |
US9633696B1 (en) * | 2014-05-30 | 2017-04-25 | 3Play Media, Inc. | Systems and methods for automatically synchronizing media to derived content |
US9639871B2 (en) | 2013-03-14 | 2017-05-02 | Apperture Investments, Llc | Methods and apparatuses for assigning moods to content and searching for moods to select content |
USD787537S1 (en) * | 2014-08-05 | 2017-05-23 | Naver Corporation | Display screen with animated graphical user interface |
US9704111B1 (en) | 2011-09-27 | 2017-07-11 | 3Play Media, Inc. | Electronic transcription job market |
US9705831B2 (en) | 2013-05-30 | 2017-07-11 | Snap Inc. | Apparatus and method for maintaining a message thread with opt-in permanence for entries |
US9721394B2 (en) | 2012-08-22 | 2017-08-01 | Snaps Media, Inc. | Augmented reality virtual content platform apparatuses, methods and systems |
US9742713B2 (en) | 2013-05-30 | 2017-08-22 | Snap Inc. | Apparatus and method for maintaining a message thread with opt-in permanence for entries |
USD797130S1 (en) * | 2014-11-10 | 2017-09-12 | Hitachi, Ltd. | Display screen with graphical user interface |
US9785666B2 (en) | 2010-12-28 | 2017-10-10 | Microsoft Technology Licensing, Llc | Using index partitioning and reconciliation for data deduplication |
US9843720B1 (en) | 2014-11-12 | 2017-12-12 | Snap Inc. | User interface for accessing media at a geographic location |
US9854219B2 (en) | 2014-12-19 | 2017-12-26 | Snap Inc. | Gallery of videos set to an audio time line |
US9866999B1 (en) | 2014-01-12 | 2018-01-09 | Investment Asset Holdings Llc | Location-based messaging |
US9875304B2 (en) | 2013-03-14 | 2018-01-23 | Aperture Investments, Llc | Music selection and organization using audio fingerprints |
US9886956B1 (en) * | 2013-10-15 | 2018-02-06 | 3Play Media, Inc. | Automated delivery of transcription products |
US20180048831A1 (en) * | 2015-02-23 | 2018-02-15 | Zuma Beach Ip Pty Ltd | Generation of combined videos |
US9936030B2 (en) | 2014-01-03 | 2018-04-03 | Investel Capital Corporation | User content sharing system and method with location-based external content integration |
US9998796B1 (en) * | 2016-12-12 | 2018-06-12 | Facebook, Inc. | Enhancing live video streams using themed experiences |
WO2018126279A1 (en) * | 2016-12-30 | 2018-07-05 | Lyons Jessica Barbara | Digital video file generation |
US10055717B1 (en) | 2014-08-22 | 2018-08-21 | Snap Inc. | Message processor with application prompts |
US10061476B2 (en) | 2013-03-14 | 2018-08-28 | Aperture Investments, Llc | Systems and methods for identifying, searching, organizing, selecting and distributing content based on mood |
US10084735B1 (en) | 2014-02-21 | 2018-09-25 | Snap Inc. | Apparatus and method for alternate channel communication initiated through a common message thread |
US10123166B2 (en) | 2015-01-26 | 2018-11-06 | Snap Inc. | Content request by location |
US10133705B1 (en) | 2015-01-19 | 2018-11-20 | Snap Inc. | Multichannel system |
US10135949B1 (en) | 2015-05-05 | 2018-11-20 | Snap Inc. | Systems and methods for story and sub-story navigation |
US20180358049A1 (en) * | 2011-09-26 | 2018-12-13 | University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
US10157449B1 (en) | 2015-01-09 | 2018-12-18 | Snap Inc. | Geo-location-based image filters |
US10165402B1 (en) | 2016-06-28 | 2018-12-25 | Snap Inc. | System to track engagement of media items |
US10203855B2 (en) | 2016-12-09 | 2019-02-12 | Snap Inc. | Customized user-controlled media overlays |
US10217489B2 (en) | 2015-12-07 | 2019-02-26 | Cyberlink Corp. | Systems and methods for media track management in a media editing tool |
US10219111B1 (en) | 2018-04-18 | 2019-02-26 | Snap Inc. | Visitation tracking system |
US10223397B1 (en) | 2015-03-13 | 2019-03-05 | Snap Inc. | Social graph based co-location of network users |
US10225328B2 (en) | 2013-03-14 | 2019-03-05 | Aperture Investments, Llc | Music selection and organization using audio fingerprints |
US10242097B2 (en) | 2013-03-14 | 2019-03-26 | Aperture Investments, Llc | Music selection and organization using rhythm, texture and pitch |
US10284508B1 (en) | 2014-10-02 | 2019-05-07 | Snap Inc. | Ephemeral gallery of ephemeral messages with opt-in permanence |
US10311916B2 (en) | 2014-12-19 | 2019-06-04 | Snap Inc. | Gallery of videos set to an audio time line |
US10319149B1 (en) | 2017-02-17 | 2019-06-11 | Snap Inc. | Augmented reality anamorphosis system |
US10327096B1 (en) | 2018-03-06 | 2019-06-18 | Snap Inc. | Geo-fence selection system |
US10334307B2 (en) | 2011-07-12 | 2019-06-25 | Snap Inc. | Methods and systems of providing visual content editing functions |
US10348662B2 (en) | 2016-07-19 | 2019-07-09 | Snap Inc. | Generating customized electronic messaging graphics |
US10354425B2 (en) | 2015-12-18 | 2019-07-16 | Snap Inc. | Method and system for providing context relevant media augmentation |
US10366543B1 (en) | 2015-10-30 | 2019-07-30 | Snap Inc. | Image based tracking in augmented reality systems |
US10387730B1 (en) | 2017-04-20 | 2019-08-20 | Snap Inc. | Augmented reality typography personalization system |
US10387514B1 (en) | 2016-06-30 | 2019-08-20 | Snap Inc. | Automated content curation and communication |
US10423983B2 (en) | 2014-09-16 | 2019-09-24 | Snap Inc. | Determining targeting information based on a predictive targeting model |
US10430838B1 (en) | 2016-06-28 | 2019-10-01 | Snap Inc. | Methods and systems for generation, curation, and presentation of media collections with automated advertising |
US10439972B1 (en) | 2013-05-30 | 2019-10-08 | Snap Inc. | Apparatus and method for maintaining a message thread with opt-in permanence for entries |
US10474321B2 (en) | 2015-11-30 | 2019-11-12 | Snap Inc. | Network resource location linking and visual content sharing |
US10499191B1 (en) | 2017-10-09 | 2019-12-03 | Snap Inc. | Context sensitive presentation of content |
US10523625B1 (en) | 2017-03-09 | 2019-12-31 | Snap Inc. | Restricted group content collection |
US10581782B2 (en) | 2017-03-27 | 2020-03-03 | Snap Inc. | Generating a stitched data stream |
US10582277B2 (en) | 2017-03-27 | 2020-03-03 | Snap Inc. | Generating a stitched data stream |
US10592574B2 (en) | 2015-05-05 | 2020-03-17 | Snap Inc. | Systems and methods for automated local story generation and curation |
US10616239B2 (en) | 2015-03-18 | 2020-04-07 | Snap Inc. | Geo-fence authorization provisioning |
US10623666B2 (en) | 2016-11-07 | 2020-04-14 | Snap Inc. | Selective identification and order of image modifiers |
US10623480B2 (en) | 2013-03-14 | 2020-04-14 | Aperture Investments, Llc | Music categorization using rhythm, texture and pitch |
US10638256B1 (en) | 2016-06-20 | 2020-04-28 | Pipbin, Inc. | System for distribution and display of mobile targeted augmented reality content |
US10678818B2 (en) | 2018-01-03 | 2020-06-09 | Snap Inc. | Tag distribution visualization system |
US10679389B2 (en) | 2016-02-26 | 2020-06-09 | Snap Inc. | Methods and systems for generation, curation, and presentation of media collections |
US10679393B2 (en) | 2018-07-24 | 2020-06-09 | Snap Inc. | Conditional modification of augmented reality object |
US10740974B1 (en) | 2017-09-15 | 2020-08-11 | Snap Inc. | Augmented reality system |
US10805696B1 (en) | 2016-06-20 | 2020-10-13 | Pipbin, Inc. | System for recording and targeting tagged content of user interest |
US10817898B2 (en) | 2015-08-13 | 2020-10-27 | Placed, Llc | Determining exposures to content presented by physical objects |
US10824654B2 (en) | 2014-09-18 | 2020-11-03 | Snap Inc. | Geolocation-based pictographs |
US10834525B2 (en) | 2016-02-26 | 2020-11-10 | Snap Inc. | Generation, curation, and presentation of media collections |
US10839219B1 (en) | 2016-06-20 | 2020-11-17 | Pipbin, Inc. | System for curation, distribution and display of location-dependent augmented reality content |
US10862951B1 (en) | 2007-01-05 | 2020-12-08 | Snap Inc. | Real-time display of multiple images |
US10885136B1 (en) | 2018-02-28 | 2021-01-05 | Snap Inc. | Audience filtering system |
US10915911B2 (en) | 2017-02-03 | 2021-02-09 | Snap Inc. | System to determine a price-schedule to distribute media content |
US10933311B2 (en) | 2018-03-14 | 2021-03-02 | Snap Inc. | Generating collectible items based on location information |
US10952013B1 (en) | 2017-04-27 | 2021-03-16 | Snap Inc. | Selective location-based identity communication |
US10948717B1 (en) | 2015-03-23 | 2021-03-16 | Snap Inc. | Reducing boot time and power consumption in wearable display systems |
US10963529B1 (en) | 2017-04-27 | 2021-03-30 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US10979993B2 (en) | 2016-05-25 | 2021-04-13 | Ge Aviation Systems Limited | Aircraft time synchronization system |
US10979752B1 (en) | 2018-02-28 | 2021-04-13 | Snap Inc. | Generating media content items based on location information |
US10984568B2 (en) * | 2016-10-18 | 2021-04-20 | Snow Corporation | Methods, devices, and computer-readable media for sharing image effects |
US10993069B2 (en) | 2015-07-16 | 2021-04-27 | Snap Inc. | Dynamically adaptive media content delivery |
US10997783B2 (en) | 2015-11-30 | 2021-05-04 | Snap Inc. | Image and point cloud based tracking and in augmented reality systems |
US10997760B2 (en) | 2018-08-31 | 2021-05-04 | Snap Inc. | Augmented reality anthropomorphization system |
US11017173B1 (en) | 2017-12-22 | 2021-05-25 | Snap Inc. | Named entity recognition visual context and caption data |
US11023514B2 (en) | 2016-02-26 | 2021-06-01 | Snap Inc. | Methods and systems for generation, curation, and presentation of media collections |
US11030787B2 (en) | 2017-10-30 | 2021-06-08 | Snap Inc. | Mobile-based cartographic control of display content |
US11037372B2 (en) | 2017-03-06 | 2021-06-15 | Snap Inc. | Virtual vision system |
US11044393B1 (en) | 2016-06-20 | 2021-06-22 | Pipbin, Inc. | System for curation and display of location-dependent augmented reality content in an augmented estate system |
US11128715B1 (en) | 2019-12-30 | 2021-09-21 | Snap Inc. | Physical friend proximity in chat |
US11163941B1 (en) | 2018-03-30 | 2021-11-02 | Snap Inc. | Annotating a collection of media content items |
US11170393B1 (en) | 2017-04-11 | 2021-11-09 | Snap Inc. | System to calculate an engagement score of location based media content |
US11182383B1 (en) | 2012-02-24 | 2021-11-23 | Placed, Llc | System and method for data collection to validate location data |
US11189299B1 (en) | 2017-02-20 | 2021-11-30 | Snap Inc. | Augmented reality speech balloon system |
US11199957B1 (en) | 2018-11-30 | 2021-12-14 | Snap Inc. | Generating customized avatars based on location information |
US11201981B1 (en) | 2016-06-20 | 2021-12-14 | Pipbin, Inc. | System for notification of user accessibility of curated location-dependent content in an augmented estate |
US11206615B2 (en) | 2019-05-30 | 2021-12-21 | Snap Inc. | Wearable device location systems |
US11218838B2 (en) | 2019-10-31 | 2022-01-04 | Snap Inc. | Focused map-based context information surfacing |
US11216869B2 (en) | 2014-09-23 | 2022-01-04 | Snap Inc. | User interface to augment an image using geolocation |
US11228551B1 (en) | 2020-02-12 | 2022-01-18 | Snap Inc. | Multiple gateway message exchange |
US11232040B1 (en) | 2017-04-28 | 2022-01-25 | Snap Inc. | Precaching unlockable data elements |
US11249614B2 (en) | 2019-03-28 | 2022-02-15 | Snap Inc. | Generating personalized map interface with enhanced icons |
US11250075B1 (en) | 2017-02-17 | 2022-02-15 | Snap Inc. | Searching social media content |
US11265273B1 (en) | 2017-12-01 | 2022-03-01 | Snap, Inc. | Dynamic media overlay with smart widget |
US11263828B2 (en) * | 2017-07-14 | 2022-03-01 | Glu Mobile Inc. | Systems and methods for competitive scene completion in an application |
USD945470S1 (en) * | 2018-12-27 | 2022-03-08 | Sony Corporation | Display panel or screen with animated graphical user interface |
US11271993B2 (en) | 2013-03-14 | 2022-03-08 | Aperture Investments, Llc | Streaming music categorization using rhythm, texture and pitch |
US11290851B2 (en) | 2020-06-15 | 2022-03-29 | Snap Inc. | Location sharing using offline and online objects |
US11294936B1 (en) | 2019-01-30 | 2022-04-05 | Snap Inc. | Adaptive spatial density based clustering |
US11301117B2 (en) | 2019-03-08 | 2022-04-12 | Snap Inc. | Contextual information in chat |
US11314776B2 (en) | 2020-06-15 | 2022-04-26 | Snap Inc. | Location sharing using friend list versions |
US11343323B2 (en) | 2019-12-31 | 2022-05-24 | Snap Inc. | Augmented reality objects registry |
US11361493B2 (en) | 2019-04-01 | 2022-06-14 | Snap Inc. | Semantic texture mapping system |
US11388226B1 (en) | 2015-01-13 | 2022-07-12 | Snap Inc. | Guided personal identity based actions |
US11429618B2 (en) | 2019-12-30 | 2022-08-30 | Snap Inc. | Surfacing augmented reality objects |
US11430091B2 (en) | 2020-03-27 | 2022-08-30 | Snap Inc. | Location mapping for large scale augmented-reality |
US11455082B2 (en) | 2018-09-28 | 2022-09-27 | Snap Inc. | Collaborative achievement interface |
US11475254B1 (en) | 2017-09-08 | 2022-10-18 | Snap Inc. | Multimodal entity identification |
US11483267B2 (en) | 2020-06-15 | 2022-10-25 | Snap Inc. | Location sharing using different rate-limited links |
US11500525B2 (en) | 2019-02-25 | 2022-11-15 | Snap Inc. | Custom media overlay system |
US11503432B2 (en) | 2020-06-15 | 2022-11-15 | Snap Inc. | Scalable real-time location sharing framework |
US11507614B1 (en) | 2018-02-13 | 2022-11-22 | Snap Inc. | Icon based tagging |
US11516167B2 (en) | 2020-03-05 | 2022-11-29 | Snap Inc. | Storing data based on device location |
US11558709B2 (en) | 2018-11-30 | 2023-01-17 | Snap Inc. | Position service to determine relative position to map features |
US11574431B2 (en) | 2019-02-26 | 2023-02-07 | Snap Inc. | Avatar based on weather |
US20230037470A1 (en) * | 2021-08-03 | 2023-02-09 | Idomoo Ltd | System And Method For Programing Video |
US11601783B2 (en) | 2019-06-07 | 2023-03-07 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US11601888B2 (en) | 2021-03-29 | 2023-03-07 | Snap Inc. | Determining location using multi-source geolocation data |
US11606755B2 (en) | 2019-05-30 | 2023-03-14 | Snap Inc. | Wearable device location systems architecture |
US11609948B2 (en) | 2014-03-27 | 2023-03-21 | Aperture Investments, Llc | Music streaming, playlist creation and streaming architecture |
US11616745B2 (en) | 2017-01-09 | 2023-03-28 | Snap Inc. | Contextual generation and selection of customized media content |
US11619501B2 (en) | 2020-03-11 | 2023-04-04 | Snap Inc. | Avatar based on trip |
US11625443B2 (en) | 2014-06-05 | 2023-04-11 | Snap Inc. | Web document enhancement |
US11631276B2 (en) | 2016-03-31 | 2023-04-18 | Snap Inc. | Automated avatar generation |
US11645324B2 (en) | 2021-03-31 | 2023-05-09 | Snap Inc. | Location-based timeline media content system |
US11653072B2 (en) | 2018-09-12 | 2023-05-16 | Zuma Beach Ip Pty Ltd | Method and system for generating interactive media content |
US11676378B2 (en) | 2020-06-29 | 2023-06-13 | Snap Inc. | Providing travel-based augmented reality content with a captured image |
US11675831B2 (en) | 2017-05-31 | 2023-06-13 | Snap Inc. | Geolocation based playlists |
US11714535B2 (en) | 2019-07-11 | 2023-08-01 | Snap Inc. | Edge gesture interface with smart interactions |
US11729343B2 (en) | 2019-12-30 | 2023-08-15 | Snap Inc. | Including video feed in message thread |
US11735186B2 (en) | 2021-09-07 | 2023-08-22 | 3Play Media, Inc. | Hybrid live captioning systems and methods |
US11734712B2 (en) | 2012-02-24 | 2023-08-22 | Foursquare Labs, Inc. | Attributing in-store visits to media consumption based on data collected from user devices |
US11751015B2 (en) | 2019-01-16 | 2023-09-05 | Snap Inc. | Location-based context information sharing in a messaging system |
US11776256B2 (en) | 2020-03-27 | 2023-10-03 | Snap Inc. | Shared augmented reality system |
US11785161B1 (en) | 2016-06-20 | 2023-10-10 | Pipbin, Inc. | System for user accessibility of tagged curated augmented reality content |
US11799811B2 (en) | 2018-10-31 | 2023-10-24 | Snap Inc. | Messaging and gaming applications communication platform |
US11809624B2 (en) | 2019-02-13 | 2023-11-07 | Snap Inc. | Sleep detection in a location sharing system |
US11816853B2 (en) | 2016-08-30 | 2023-11-14 | Snap Inc. | Systems and methods for simultaneous localization and mapping |
US11821742B2 (en) | 2019-09-26 | 2023-11-21 | Snap Inc. | Travel based notifications |
US11829834B2 (en) | 2021-10-29 | 2023-11-28 | Snap Inc. | Extended QR code |
US11843456B2 (en) | 2016-10-24 | 2023-12-12 | Snap Inc. | Generating and displaying customized avatars in media overlays |
US11842411B2 (en) | 2017-04-27 | 2023-12-12 | Snap Inc. | Location-based virtual avatars |
US11852554B1 (en) | 2019-03-21 | 2023-12-26 | Snap Inc. | Barometer calibration in a location sharing system |
US11860888B2 (en) | 2018-05-22 | 2024-01-02 | Snap Inc. | Event detection system |
US11868414B1 (en) | 2019-03-14 | 2024-01-09 | Snap Inc. | Graph-based prediction for contact suggestion in a location sharing system |
US11870743B1 (en) | 2017-01-23 | 2024-01-09 | Snap Inc. | Customized digital avatar accessories |
US11877211B2 (en) | 2019-01-14 | 2024-01-16 | Snap Inc. | Destination sharing in location sharing system |
US11876941B1 (en) | 2016-06-20 | 2024-01-16 | Pipbin, Inc. | Clickable augmented reality content manager, system, and network |
US20240038277A1 (en) * | 2022-07-29 | 2024-02-01 | Rovi Guides, Inc. | Systems and methods of generating personalized video clips for songs using a pool of short videos |
US11893208B2 (en) | 2019-12-31 | 2024-02-06 | Snap Inc. | Combined map icon with action indicator |
US11900418B2 (en) | 2016-04-04 | 2024-02-13 | Snap Inc. | Mutable geo-fencing system |
US11925869B2 (en) | 2012-05-08 | 2024-03-12 | Snap Inc. | System and method for generating and displaying avatars |
US11943192B2 (en) | 2020-08-31 | 2024-03-26 | Snap Inc. | Co-location connection service |
US11972529B2 (en) | 2019-02-01 | 2024-04-30 | Snap Inc. | Augmented reality system |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8805164B2 (en) | 2006-05-24 | 2014-08-12 | Capshore, Llc | Method and apparatus for creating a custom track |
US8831408B2 (en) | 2006-05-24 | 2014-09-09 | Capshore, Llc | Method and apparatus for creating a custom track |
US8294017B2 (en) * | 2008-10-24 | 2012-10-23 | Yatsav Productions Inc. | Personalized mandala-mantra |
US8200023B2 (en) * | 2008-12-12 | 2012-06-12 | Xerox Corporation | Method and system for processing photo product templates |
WO2012040725A2 (en) * | 2010-09-24 | 2012-03-29 | Pelco, Inc. | Method and system for configuring a sequence of positions of a camera |
US8751527B1 (en) * | 2010-10-11 | 2014-06-10 | Paul Riggio | Information retrieval system |
US9483786B2 (en) | 2011-10-13 | 2016-11-01 | Gift Card Impressions, LLC | Gift card ordering system and method |
US20130067332A1 (en) * | 2011-09-12 | 2013-03-14 | Microsoft Corporation | Media seek bar |
WO2013052477A1 (en) * | 2011-10-03 | 2013-04-11 | Netomat, Inc. | Image and/or video processing systems and methods |
JP2013090267A (en) * | 2011-10-21 | 2013-05-13 | Sanyo Electric Co Ltd | Imaging device |
US10430865B2 (en) | 2012-01-30 | 2019-10-01 | Gift Card Impressions, LLC | Personalized webpage gifting system |
US10713709B2 (en) * | 2012-01-30 | 2020-07-14 | E2Interactive, Inc. | Personalized webpage gifting system |
US9998722B2 (en) * | 2012-03-13 | 2018-06-12 | Tapshot, Inc. | System and method for guided video creation |
US10191624B2 (en) | 2012-05-21 | 2019-01-29 | Oath Inc. | System and method for authoring interactive media assets |
GB2508242B (en) * | 2012-11-27 | 2016-08-03 | Mirriad Advertising Ltd | Producing video data |
CN105027206A (en) * | 2012-11-29 | 2015-11-04 | Stephen Chase | Video headphones, system, platform, methods, apparatuses and media |
EP2992530A2 (en) * | 2013-05-02 | 2016-03-09 | Waterston Entertainment (Pty) Ltd. | System and method for incorporating digital footage into a digital cinematographic template |
KR102101850B1 (en) * | 2013-06-04 | 2020-04-17 | Samsung Electronics Co., Ltd. | Method and apparatus for processing image in an electronic device |
CN104243846A (en) * | 2013-06-19 | 2014-12-24 | Beijing Qianxiang Wangjing Technology Development Co., Ltd. | Image stitching method and device |
US9804760B2 (en) * | 2013-08-22 | 2017-10-31 | Apple Inc. | Scrollable in-line camera for capturing and sharing content |
US20160227285A1 (en) * | 2013-09-16 | 2016-08-04 | Thomson Licensing | Browsing videos by searching multiple user comments and overlaying those into the content |
US9779775B2 (en) * | 2014-02-24 | 2017-10-03 | Lyve Minds, Inc. | Automatic generation of compilation videos from an original video based on metadata associated with the original video |
US9471144B2 (en) | 2014-03-31 | 2016-10-18 | Gift Card Impressions, LLC | System and method for digital delivery of reveal videos for online gifting |
US20150331960A1 (en) * | 2014-05-15 | 2015-11-19 | Nickel Media Inc. | System and method of creating an immersive experience |
CN105205063A (en) * | 2014-06-14 | 2015-12-30 | Beijing Kingsoft Internet Security Software Co., Ltd. | Method and system for generating video by combining pictures |
WO2016032054A1 (en) * | 2014-08-27 | 2016-03-03 | Lg Electronics Inc. | Display device and method of controlling therefor |
KR102206244B1 (en) | 2014-08-27 | 2021-01-22 | LG Electronics Inc. | The Apparatus and Method for Display Device |
EP3029676A1 (en) * | 2014-12-02 | 2016-06-08 | Bellevue Investments GmbH & Co. KGaA | System and method for theme based video creation with real-time effects |
US20180164990A1 (en) * | 2016-12-14 | 2018-06-14 | Facebook, Inc. | Methods and Systems for Editing Content of a Personalized Video |
US10178365B1 (en) | 2017-08-25 | 2019-01-08 | Vid Inc. | System and method for combining audio tracks with video files |
CN109561240B (en) | 2017-09-24 | 2023-02-17 | 福希特公司 | System and method for generating media assets |
CN110033502B (en) | 2018-01-10 | 2020-11-13 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Video production method, video production device, storage medium and electronic equipment |
CN108540849A (en) * | 2018-03-20 | 2018-09-14 | 厦门星罗网络科技有限公司 | The generation method and device of video photograph album |
CN114554301A (en) * | 2018-09-29 | 2022-05-27 | SZ DJI Technology Co., Ltd. | Video processing method, video processing device, shooting system and computer readable storage medium |
CN110336960B (en) * | 2019-07-17 | 2021-12-10 | Guangzhou Kugou Computer Technology Co., Ltd. | Video synthesis method, device, terminal and storage medium |
CN112291484B (en) * | 2019-07-23 | 2022-11-29 | Tencent Technology (Shenzhen) Co., Ltd. | Video synthesis method and device, electronic equipment and storage medium |
US11322184B1 (en) | 2021-12-16 | 2022-05-03 | William Craig Kenney | System and method for synchronizing media files with audio track |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7657835B2 (en) * | 1997-01-31 | 2010-02-02 | Making Everlasting Memories, L.L.C. | Method and system for creating a commemorative presentation |
US7054508B2 (en) * | 2000-08-03 | 2006-05-30 | Canon Kabushiki Kaisha | Data editing apparatus and method |
US20050206751A1 (en) * | 2004-03-19 | 2005-09-22 | Eastman Kodak Company | Digital video system for assembling video sequences |
- 2010
- 2010-03-01 US US12/715,367 patent/US8860865B2/en not_active Expired - Fee Related
- 2010-03-02 US US12/716,231 patent/US20100223128A1/en not_active Abandoned
Patent Citations (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6237010B1 (en) * | 1997-10-06 | 2001-05-22 | Canon Kabushiki Kaisha | Multimedia application using flashpix file format |
US6557017B1 (en) * | 1998-02-06 | 2003-04-29 | Xerox Corporation | Image production system theme integration |
US6429364B1 (en) * | 1998-11-02 | 2002-08-06 | Matsushita Electric Industrial Co., Ltd. | Data transmitting/receiving device and method |
US20010030660A1 (en) * | 1999-12-10 | 2001-10-18 | Roustem Zainoulline | Interactive graphical user interface and method for previewing media products |
US20010027454A1 (en) * | 2000-03-29 | 2001-10-04 | Takashi Tsue | Method, apparatus, and recording medium for displaying templates |
US20010048802A1 (en) * | 2000-04-19 | 2001-12-06 | Nobuyoshi Nakajima | Method, apparatus, and recording medium for generating album |
US20030160944A1 (en) * | 2002-02-28 | 2003-08-28 | Jonathan Foote | Method for automatically producing music videos |
US20050168453A1 (en) * | 2002-04-11 | 2005-08-04 | Konica Minolta Holdings, Inc. | Information recording medium and manufacturing method thereof |
US20070116433A1 (en) * | 2002-06-25 | 2007-05-24 | Manico Joseph A | Software and system for customizing a presentation of digital images |
US20060053468A1 (en) * | 2002-12-12 | 2006-03-09 | Tatsuo Sudoh | Multi-medium data processing device capable of easily creating multi-medium content |
US20040193723A1 (en) * | 2003-03-26 | 2004-09-30 | Fujitsu Limited | Method and system for streaming delivery and program and program recording medium thereof |
US20050084232A1 (en) * | 2003-10-16 | 2005-04-21 | Magix Ag | System and method for improved video editing |
US7675635B2 (en) * | 2003-11-27 | 2010-03-09 | Fujifilm Corporation | Apparatus, method, and program for editing images for a photo album |
US20050181762A1 (en) * | 2004-02-13 | 2005-08-18 | Kauppila Edwin A. | System and method for performing wireless remote monitoring |
US20050198571A1 (en) * | 2004-03-03 | 2005-09-08 | Gary Kramer | System for delivering and enabling interactivity with images |
US7801413B2 (en) * | 2004-09-14 | 2010-09-21 | Sony Corporation | Information processing device, method, and program |
US20060069999A1 (en) * | 2004-09-29 | 2006-03-30 | Nikon Corporation | Image reproduction apparatus and image reproduction program product |
US20060126088A1 (en) * | 2004-12-09 | 2006-06-15 | Masayuki Inoue | Information processing apparatus and method, and program |
US7957835B2 (en) * | 2005-02-03 | 2011-06-07 | Toyota Jidosha Kabushiki Kaisha | Legged robot and control method thereof |
US20060279555A1 (en) * | 2005-06-13 | 2006-12-14 | Fuji Photo Film Co., Ltd. | Album creating apparatus, album creating method and program therefor |
US20070130015A1 (en) * | 2005-06-15 | 2007-06-07 | Steven Starr | Advertisement revenue sharing for distributed video |
US8196032B2 (en) * | 2005-11-01 | 2012-06-05 | Microsoft Corporation | Template-based multimedia authoring and sharing |
US20070211961A1 (en) * | 2006-03-07 | 2007-09-13 | Fujifilm Corporation | Image processing apparatus, method, and program |
US20070220002A1 (en) * | 2006-03-15 | 2007-09-20 | Musicnet | Dynamic server configuration for managing high volume traffic |
US20100241939A1 (en) * | 2006-04-03 | 2010-09-23 | Dalit Rozen-Atzmon | Photo album |
US20070238082A1 (en) * | 2006-04-11 | 2007-10-11 | Elizabeth Ingrassia | E-card method and system |
US7716572B2 (en) * | 2006-07-14 | 2010-05-11 | Muvee Technologies Pte Ltd. | Creating a new music video by intercutting user-supplied visual data with a pre-existing music video |
US20080016114A1 (en) * | 2006-07-14 | 2008-01-17 | Gerald Thomas Beauregard | Creating a new music video by intercutting user-supplied visual data with a pre-existing music video |
US8166391B2 (en) * | 2006-07-31 | 2012-04-24 | Fujifilm Corporation | Template generating apparatus, image layout apparatus, modified template generating apparatus, and programs therefor |
US20080028298A1 (en) * | 2006-07-31 | 2008-01-31 | Fujifilm Corporation | Template generating apparatus, image layout apparatus, modified template generating apparatus, and programs therefor |
US20090237704A1 (en) * | 2006-07-31 | 2009-09-24 | Seiko Epson Corporation | Printing apparatus, content-recorded disk making apparatus, kiosk terminal, method of controlling printing apparatus and program therefor |
US20080126939A1 (en) * | 2006-11-27 | 2008-05-29 | Samsung Electronics Co., Ltd. | System, method and medium playing moving images |
US20080168365A1 (en) * | 2007-01-07 | 2008-07-10 | Imran Chaudhri | Creating Digital Artwork Based on Content File Metadata |
US20090049371A1 (en) * | 2007-08-13 | 2009-02-19 | Shih-Ling Keng | Method of Generating a Presentation with Background Music and Related System |
US20090052734A1 (en) * | 2007-08-24 | 2009-02-26 | Sony Corporation | Moving image creating apparatus, moving image creating method, and program |
US20090066730A1 (en) * | 2007-09-06 | 2009-03-12 | Canon Kabushiki Kaisha | Image display control apparatus and image display control method |
US20090115855A1 (en) * | 2007-11-05 | 2009-05-07 | Tomohiko Gotoh | Photography apparatus, control method, program, and information processing device |
US20090204243A1 (en) * | 2008-01-09 | 2009-08-13 | 8 Figure, Llc | Method and apparatus for creating customized text-to-speech podcasts and videos incorporating associated media |
US20090228922A1 (en) * | 2008-03-10 | 2009-09-10 | United Video Properties, Inc. | Methods and devices for presenting an interactive media guidance application |
US20090249177A1 (en) * | 2008-03-26 | 2009-10-01 | Fujifilm Corporation | Method and apparatus for creating album, and recording medium |
US20090265334A1 (en) * | 2008-04-22 | 2009-10-22 | Microsoft Corporation | Image querying with relevance-relative scaling |
US8449360B2 (en) * | 2009-05-29 | 2013-05-28 | Harmonix Music Systems, Inc. | Displaying song lyrics and vocal cues |
Cited By (419)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11588770B2 (en) | 2007-01-05 | 2023-02-21 | Snap Inc. | Real-time display of multiple images |
US10862951B1 (en) | 2007-01-05 | 2020-12-08 | Snap Inc. | Real-time display of multiple images |
US8854395B2 (en) | 2009-07-30 | 2014-10-07 | Intellectual Ventures Fund 83 Llc | Method for producing artistic image template designs |
US8849043B2 (en) | 2009-07-30 | 2014-09-30 | Intellectual Ventures Fund 83 Llc | System for matching artistic attributes of secondary image and template to a primary image |
US20110029540A1 (en) * | 2009-07-30 | 2011-02-03 | Ptucha Raymond W | Method for matching artistic attributes of a template and secondary images to a primary image |
US20110029914A1 (en) * | 2009-07-30 | 2011-02-03 | Whitby Laura R | Apparatus for generating artistic image template designs |
US8849853B2 (en) | 2009-07-30 | 2014-09-30 | Intellectual Ventures Fund 83 Llc | Method for matching artistic attributes of a template and secondary images to a primary image |
US8237819B2 (en) * | 2009-07-30 | 2012-08-07 | Eastman Kodak Company | Image capture method with artistic template design |
US20110025883A1 (en) * | 2009-07-30 | 2011-02-03 | Shkurko Eugene I | Image capture method with artistic template design |
US20110029635A1 (en) * | 2009-07-30 | 2011-02-03 | Shkurko Eugene I | Image capture device with artistic template design |
US20110029562A1 (en) * | 2009-07-30 | 2011-02-03 | Whitby Laura R | Coordinating user images in an artistic design |
US20110025714A1 (en) * | 2009-07-30 | 2011-02-03 | Ptucha Raymond W | Method for producing artistic image template designs |
US20110037778A1 (en) * | 2009-08-12 | 2011-02-17 | Perception Digital Limited | Apparatus And Method For Adjusting An Image In A Screen Of A Handheld Device |
US20180075413A1 (en) * | 2010-04-30 | 2018-03-15 | Iliv Technologies Inc. | Collaboration tool |
US20140310132A1 (en) * | 2010-04-30 | 2014-10-16 | Iliv Technologies Inc. | Collaboration tool |
US11443281B2 (en) * | 2010-04-30 | 2022-09-13 | Iliv Technologies Inc. | Collaboration tool |
US9436596B2 (en) | 2010-05-05 | 2016-09-06 | Microsoft Technology Licensing, Llc | Flash memory cache including for use with persistent key-value store |
US9298604B2 (en) | 2010-05-05 | 2016-03-29 | Microsoft Technology Licensing, Llc | Flash memory cache including for use with persistent key-value store |
US8935487B2 (en) | 2010-05-05 | 2015-01-13 | Microsoft Corporation | Fast and low-RAM-footprint indexing for data deduplication |
US9053032B2 (en) | 2010-05-05 | 2015-06-09 | Microsoft Technology Licensing, Llc | Fast and low-RAM-footprint indexing for data deduplication |
US9208472B2 (en) | 2010-12-11 | 2015-12-08 | Microsoft Technology Licensing, Llc | Addition of plan-generation models and expertise by crowd contributors |
US10572803B2 (en) | 2010-12-11 | 2020-02-25 | Microsoft Technology Licensing, Llc | Addition of plan-generation models and expertise by crowd contributors |
US9785666B2 (en) | 2010-12-28 | 2017-10-10 | Microsoft Technology Licensing, Llc | Using index partitioning and reconciliation for data deduplication |
US10999623B2 (en) | 2011-07-12 | 2021-05-04 | Snap Inc. | Providing visual content editing functions |
US11451856B2 (en) | 2011-07-12 | 2022-09-20 | Snap Inc. | Providing visual content editing functions |
US10334307B2 (en) | 2011-07-12 | 2019-06-25 | Snap Inc. | Methods and systems of providing visual content editing functions |
US11750875B2 (en) | 2011-07-12 | 2023-09-05 | Snap Inc. | Providing visual content editing functions |
US20130055087A1 (en) * | 2011-08-26 | 2013-02-28 | Gary W. Flint | Device, Method, and Graphical User Interface for Editing Videos |
US9933935B2 (en) * | 2011-08-26 | 2018-04-03 | Apple Inc. | Device, method, and graphical user interface for editing videos |
US20180358049A1 (en) * | 2011-09-26 | 2018-12-13 | University Of North Carolina At Charlotte | Multi-modal collaborative web-based video annotation system |
US11657341B2 (en) | 2011-09-27 | 2023-05-23 | 3Play Media, Inc. | Electronic transcription job market |
US9704111B1 (en) | 2011-09-27 | 2017-07-11 | 3Play Media, Inc. | Electronic transcription job market |
US10748532B1 (en) | 2011-09-27 | 2020-08-18 | 3Play Media, Inc. | Electronic transcription job market |
US20130150990A1 (en) * | 2011-12-12 | 2013-06-13 | Inkling Systems, Inc. | Media outline |
US9280905B2 (en) * | 2011-12-12 | 2016-03-08 | Inkling Systems, Inc. | Media outline |
US20130151971A1 (en) * | 2011-12-13 | 2013-06-13 | Olympus Imaging Corp. | Server apparatus and processing method for the same |
US20130167086A1 (en) * | 2011-12-23 | 2013-06-27 | Samsung Electronics Co., Ltd. | Digital image processing apparatus and method of controlling the same |
USD732554S1 (en) * | 2012-01-13 | 2015-06-23 | Konica Minolta, Inc. | Electronic copying machine display screen with graphical user interface |
US11182383B1 (en) | 2012-02-24 | 2021-11-23 | Placed, Llc | System and method for data collection to validate location data |
US11734712B2 (en) | 2012-02-24 | 2023-08-22 | Foursquare Labs, Inc. | Attributing in-store visits to media consumption based on data collected from user devices |
US20130272679A1 (en) * | 2012-04-12 | 2013-10-17 | Mario Luis Gomes Cavalcanti | Video Generator System |
US11925869B2 (en) | 2012-05-08 | 2024-03-12 | Snap Inc. | System and method for generating and displaying avatars |
US20130311886A1 (en) * | 2012-05-21 | 2013-11-21 | DWA Investments, Inc. | Interactive mobile video viewing experience |
US10083151B2 (en) * | 2012-05-21 | 2018-09-25 | Oath Inc. | Interactive mobile video viewing experience |
US9477997B2 (en) | 2012-06-21 | 2016-10-25 | Goopi Sàrl | Processing resource management system and methods |
US10169924B2 (en) | 2012-08-22 | 2019-01-01 | Snaps Media Inc. | Augmented reality virtual content platform apparatuses, methods and systems |
US9792733B2 (en) | 2012-08-22 | 2017-10-17 | Snaps Media, Inc. | Augmented reality virtual content platform apparatuses, methods and systems |
US9721394B2 (en) | 2012-08-22 | 2017-08-01 | Snaps Media, Inc. | Augmented reality virtual content platform apparatuses, methods and systems |
US9882907B1 (en) | 2012-11-08 | 2018-01-30 | Snap Inc. | Apparatus and method for single action control of social network profile access |
US10887308B1 (en) | 2012-11-08 | 2021-01-05 | Snap Inc. | Interactive user-interface to adjust access privileges |
US8775972B2 (en) * | 2012-11-08 | 2014-07-08 | Snapchat, Inc. | Apparatus and method for single action control of social network profile access |
US11252158B2 (en) | 2012-11-08 | 2022-02-15 | Snap Inc. | Interactive user-interface to adjust access privileges |
US10546010B2 (en) * | 2012-12-19 | 2020-01-28 | Oath Inc. | Method and system for storytelling on a computing device |
US20140172856A1 (en) * | 2012-12-19 | 2014-06-19 | Yahoo! Inc. | Method and system for storytelling on a computing device |
US9875304B2 (en) | 2013-03-14 | 2018-01-23 | Aperture Investments, Llc | Music selection and organization using audio fingerprints |
US10242097B2 (en) | 2013-03-14 | 2019-03-26 | Aperture Investments, Llc | Music selection and organization using rhythm, texture and pitch |
US9639871B2 (en) | 2013-03-14 | 2017-05-02 | Aperture Investments, Llc | Methods and apparatuses for assigning moods to content and searching for moods to select content |
US10225328B2 (en) | 2013-03-14 | 2019-03-05 | Aperture Investments, Llc | Music selection and organization using audio fingerprints |
US20150135077A1 (en) * | 2013-03-14 | 2015-05-14 | Aperture Investments, Llc | Systems and methods for creating, searching, organizing, selecting and distributing video content based on mood |
US11271993B2 (en) | 2013-03-14 | 2022-03-08 | Aperture Investments, Llc | Streaming music categorization using rhythm, texture and pitch |
US10061476B2 (en) | 2013-03-14 | 2018-08-28 | Aperture Investments, Llc | Systems and methods for identifying, searching, organizing, selecting and distributing content based on mood |
US10623480B2 (en) | 2013-03-14 | 2020-04-14 | Aperture Investments, Llc | Music categorization using rhythm, texture and pitch |
US10439972B1 (en) | 2013-05-30 | 2019-10-08 | Snap Inc. | Apparatus and method for maintaining a message thread with opt-in permanence for entries |
US11509618B2 (en) | 2013-05-30 | 2022-11-22 | Snap Inc. | Maintaining a message thread with opt-in permanence for entries |
US11115361B2 (en) | 2013-05-30 | 2021-09-07 | Snap Inc. | Apparatus and method for maintaining a message thread with opt-in permanence for entries |
US9742713B2 (en) | 2013-05-30 | 2017-08-22 | Snap Inc. | Apparatus and method for maintaining a message thread with opt-in permanence for entries |
US10587552B1 (en) | 2013-05-30 | 2020-03-10 | Snap Inc. | Apparatus and method for maintaining a message thread with opt-in permanence for entries |
US11134046B2 (en) | 2013-05-30 | 2021-09-28 | Snap Inc. | Apparatus and method for maintaining a message thread with opt-in permanence for entries |
US9705831B2 (en) | 2013-05-30 | 2017-07-11 | Snap Inc. | Apparatus and method for maintaining a message thread with opt-in permanence for entries |
US9886956B1 (en) * | 2013-10-15 | 2018-02-06 | 3Play Media, Inc. | Automated delivery of transcription products |
US10681092B1 (en) | 2013-11-26 | 2020-06-09 | Snap Inc. | Method and system for integrating real time communication features in applications |
US11102253B2 (en) | 2013-11-26 | 2021-08-24 | Snap Inc. | Method and system for integrating real time communication features in applications |
US9083770B1 (en) | 2013-11-26 | 2015-07-14 | Snapchat, Inc. | Method and system for integrating real time communication features in applications |
US10069876B1 (en) | 2013-11-26 | 2018-09-04 | Snap Inc. | Method and system for integrating real time communication features in applications |
US9794303B1 (en) | 2013-11-26 | 2017-10-17 | Snap Inc. | Method and system for integrating real time communication features in applications |
US11546388B2 (en) | 2013-11-26 | 2023-01-03 | Snap Inc. | Method and system for integrating real time communication features in applications |
US9936030B2 (en) | 2014-01-03 | 2018-04-03 | Investel Capital Corporation | User content sharing system and method with location-based external content integration |
US9866999B1 (en) | 2014-01-12 | 2018-01-09 | Investment Asset Holdings Llc | Location-based messaging |
US10349209B1 (en) | 2014-01-12 | 2019-07-09 | Investment Asset Holdings Llc | Location-based messaging |
US10080102B1 (en) | 2014-01-12 | 2018-09-18 | Investment Asset Holdings Llc | Location-based messaging |
US9207844B2 (en) * | 2014-01-31 | 2015-12-08 | EyeGroove, Inc. | Methods and devices for touch-based media creation |
US9268787B2 (en) | 2014-01-31 | 2016-02-23 | EyeGroove, Inc. | Methods and devices for synchronizing and sharing media items |
US10120530B2 (en) | 2014-01-31 | 2018-11-06 | Facebook, Inc. | Methods and devices for touch-based media creation |
US10031921B2 (en) | 2014-01-31 | 2018-07-24 | Facebook, Inc. | Methods and systems for storage of media item metadata |
US9116912B1 (en) | 2014-01-31 | 2015-08-25 | EyeGroove, Inc. | Methods and devices for modifying pre-existing media items |
US20150220249A1 (en) * | 2014-01-31 | 2015-08-06 | EyeGroove, Inc. | Methods and devices for touch-based media creation |
US9207857B2 (en) | 2014-02-14 | 2015-12-08 | EyeGroove, Inc. | Methods and devices for presenting interactive media items |
US10120565B2 (en) | 2014-02-14 | 2018-11-06 | Facebook, Inc. | Methods and devices for presenting interactive media items |
US11463394B2 (en) | 2014-02-21 | 2022-10-04 | Snap Inc. | Apparatus and method for alternate channel communication initiated through a common message thread |
US11902235B2 (en) | 2014-02-21 | 2024-02-13 | Snap Inc. | Apparatus and method for alternate channel communication initiated through a common message thread |
US10949049B1 (en) | 2014-02-21 | 2021-03-16 | Snap Inc. | Apparatus and method for alternate channel communication initiated through a common message thread |
US10084735B1 (en) | 2014-02-21 | 2018-09-25 | Snap Inc. | Apparatus and method for alternate channel communication initiated through a common message thread |
US11463393B2 (en) | 2014-02-21 | 2022-10-04 | Snap Inc. | Apparatus and method for alternate channel communication initiated through a common message thread |
US10082926B1 (en) | 2014-02-21 | 2018-09-25 | Snap Inc. | Apparatus and method for alternate channel communication initiated through a common message thread |
US10958605B1 (en) | 2014-02-21 | 2021-03-23 | Snap Inc. | Apparatus and method for alternate channel communication initiated through a common message thread |
US9407712B1 (en) | 2014-03-07 | 2016-08-02 | Snapchat, Inc. | Content delivery network for ephemeral objects |
US9237202B1 (en) | 2014-03-07 | 2016-01-12 | Snapchat, Inc. | Content delivery network for ephemeral objects |
US11899713B2 (en) | 2014-03-27 | 2024-02-13 | Aperture Investments, Llc | Music streaming, playlist creation and streaming architecture |
US11609948B2 (en) | 2014-03-27 | 2023-03-21 | Aperture Investments, Llc | Music streaming, playlist creation and streaming architecture |
US9519644B2 (en) | 2014-04-04 | 2016-12-13 | Facebook, Inc. | Methods and devices for generating media items |
US10002642B2 (en) | 2014-04-04 | 2018-06-19 | Facebook, Inc. | Methods and devices for generating media items |
USD754188S1 (en) * | 2014-04-17 | 2016-04-19 | Naver Corporation | Display panel with graphical user interface |
US11310183B2 (en) | 2014-05-09 | 2022-04-19 | Snap Inc. | Dynamic configuration of application component tiles |
US11743219B2 (en) | 2014-05-09 | 2023-08-29 | Snap Inc. | Dynamic configuration of application component tiles |
US9276886B1 (en) | 2014-05-09 | 2016-03-01 | Snapchat, Inc. | Apparatus and method for dynamically configuring application component tiles |
US10817156B1 (en) | 2014-05-09 | 2020-10-27 | Snap Inc. | Dynamic configuration of application component tiles |
US10990697B2 (en) | 2014-05-28 | 2021-04-27 | Snap Inc. | Apparatus and method for automated privacy protection in distributed images |
US9785796B1 (en) | 2014-05-28 | 2017-10-10 | Snap Inc. | Apparatus and method for automated privacy protection in distributed images |
US10572681B1 (en) | 2014-05-28 | 2020-02-25 | Snap Inc. | Apparatus and method for automated privacy protection in distributed images |
US9396354B1 (en) | 2014-05-28 | 2016-07-19 | Snapchat, Inc. | Apparatus and method for automated privacy protection in distributed images |
US9633696B1 (en) * | 2014-05-30 | 2017-04-25 | 3Play Media, Inc. | Systems and methods for automatically synchronizing media to derived content |
US11625443B2 (en) | 2014-06-05 | 2023-04-11 | Snap Inc. | Web document enhancement |
US11921805B2 (en) | 2014-06-05 | 2024-03-05 | Snap Inc. | Web document enhancement |
US10182311B2 (en) | 2014-06-13 | 2019-01-15 | Snap Inc. | Prioritization of messages within a message collection |
US10524087B1 (en) | 2014-06-13 | 2019-12-31 | Snap Inc. | Message destination list mechanism |
US9693191B2 (en) | 2014-06-13 | 2017-06-27 | Snap Inc. | Prioritization of messages within gallery |
US10779113B2 (en) | 2014-06-13 | 2020-09-15 | Snap Inc. | Prioritization of messages within a message collection |
US11317240B2 (en) | 2014-06-13 | 2022-04-26 | Snap Inc. | Geo-location based event gallery |
US10623891B2 (en) | 2014-06-13 | 2020-04-14 | Snap Inc. | Prioritization of messages within a message collection |
US9094137B1 (en) | 2014-06-13 | 2015-07-28 | Snapchat, Inc. | Priority based placement of messages in a geo-location based event gallery |
US10659914B1 (en) | 2014-06-13 | 2020-05-19 | Snap Inc. | Geo-location based event gallery |
US11166121B2 (en) | 2014-06-13 | 2021-11-02 | Snap Inc. | Prioritization of messages within a message collection |
US9113301B1 (en) | 2014-06-13 | 2015-08-18 | Snapchat, Inc. | Geo-location based event gallery |
US10448201B1 (en) | 2014-06-13 | 2019-10-15 | Snap Inc. | Prioritization of messages within a message collection |
US9430783B1 (en) | 2014-06-13 | 2016-08-30 | Snapchat, Inc. | Prioritization of messages within gallery |
US10200813B1 (en) | 2014-06-13 | 2019-02-05 | Snap Inc. | Geo-location based event gallery |
US9825898B2 (en) | 2014-06-13 | 2017-11-21 | Snap Inc. | Prioritization of messages within a message collection |
US9532171B2 (en) | 2014-06-13 | 2016-12-27 | Snap Inc. | Geo-location based event gallery |
US11849214B2 (en) | 2014-07-07 | 2023-12-19 | Snap Inc. | Apparatus and method for supplying content aware photo filters |
US9407816B1 (en) | 2014-07-07 | 2016-08-02 | Snapchat, Inc. | Apparatus and method for supplying content aware photo filters |
US10348960B1 (en) | 2014-07-07 | 2019-07-09 | Snap Inc. | Apparatus and method for supplying content aware photo filters |
US9225897B1 (en) | 2014-07-07 | 2015-12-29 | Snapchat, Inc. | Apparatus and method for supplying content aware photo filters |
US10701262B1 (en) | 2014-07-07 | 2020-06-30 | Snap Inc. | Apparatus and method for supplying content aware photo filters |
US11496673B1 (en) | 2014-07-07 | 2022-11-08 | Snap Inc. | Apparatus and method for supplying content aware photo filters |
US11595569B2 (en) | 2014-07-07 | 2023-02-28 | Snap Inc. | Supplying content aware photo filters |
US10432850B1 (en) | 2014-07-07 | 2019-10-01 | Snap Inc. | Apparatus and method for supplying content aware photo filters |
US10602057B1 (en) | 2014-07-07 | 2020-03-24 | Snap Inc. | Supplying content aware photo filters |
US10154192B1 (en) | 2014-07-07 | 2018-12-11 | Snap Inc. | Apparatus and method for supplying content aware photo filters |
US11122200B2 (en) | 2014-07-07 | 2021-09-14 | Snap Inc. | Supplying content aware photo filters |
USD787537S1 (en) * | 2014-08-05 | 2017-05-23 | Naver Corporation | Display screen with animated graphical user interface |
US11017363B1 (en) | 2014-08-22 | 2021-05-25 | Snap Inc. | Message processor with application prompts |
US10055717B1 (en) | 2014-08-22 | 2018-08-21 | Snap Inc. | Message processor with application prompts |
US10423983B2 (en) | 2014-09-16 | 2019-09-24 | Snap Inc. | Determining targeting information based on a predictive targeting model |
US11625755B1 (en) | 2014-09-16 | 2023-04-11 | Foursquare Labs, Inc. | Determining targeting information based on a predictive targeting model |
US10824654B2 (en) | 2014-09-18 | 2020-11-03 | Snap Inc. | Geolocation-based pictographs |
US11281701B2 (en) | 2014-09-18 | 2022-03-22 | Snap Inc. | Geolocation-based pictographs |
US11741136B2 (en) | 2014-09-18 | 2023-08-29 | Snap Inc. | Geolocation-based pictographs |
US11216869B2 (en) | 2014-09-23 | 2022-01-04 | Snap Inc. | User interface to augment an image using geolocation |
US9537811B2 (en) | 2014-10-02 | 2017-01-03 | Snap Inc. | Ephemeral gallery of ephemeral messages |
US20170374003A1 (en) | 2014-10-02 | 2017-12-28 | Snapchat, Inc. | Ephemeral gallery of ephemeral messages |
US11012398B1 (en) | 2014-10-02 | 2021-05-18 | Snap Inc. | Ephemeral message gallery user interface with screenshot messages |
US10284508B1 (en) | 2014-10-02 | 2019-05-07 | Snap Inc. | Ephemeral gallery of ephemeral messages with opt-in permanence |
US11411908B1 (en) | 2014-10-02 | 2022-08-09 | Snap Inc. | Ephemeral message gallery user interface with online viewing history indicia |
US11038829B1 (en) | 2014-10-02 | 2021-06-15 | Snap Inc. | Ephemeral gallery of ephemeral messages with opt-in permanence |
US11855947B1 (en) | 2014-10-02 | 2023-12-26 | Snap Inc. | Gallery of ephemeral messages |
US10944710B1 (en) | 2014-10-02 | 2021-03-09 | Snap Inc. | Ephemeral gallery user interface with remaining gallery time indication |
US10708210B1 (en) | 2014-10-02 | 2020-07-07 | Snap Inc. | Multi-user ephemeral message gallery |
US11522822B1 (en) | 2014-10-02 | 2022-12-06 | Snap Inc. | Ephemeral gallery elimination based on gallery and message timers |
US10476830B2 (en) | 2014-10-02 | 2019-11-12 | Snap Inc. | Ephemeral gallery of ephemeral messages |
US10958608B1 (en) | 2014-10-02 | 2021-03-23 | Snap Inc. | Ephemeral gallery of visual media messages |
USD797130S1 (en) * | 2014-11-10 | 2017-09-12 | Hitachi, Ltd. | Display screen with graphical user interface |
US11956533B2 (en) | 2014-11-12 | 2024-04-09 | Snap Inc. | Accessing media at a geographic location |
US9843720B1 (en) | 2014-11-12 | 2017-12-12 | Snap Inc. | User interface for accessing media at a geographic location |
US10616476B1 (en) | 2014-11-12 | 2020-04-07 | Snap Inc. | User interface for accessing media at a geographic location |
US11190679B2 (en) | 2014-11-12 | 2021-11-30 | Snap Inc. | Accessing media at a geographic location |
US9385983B1 (en) | 2014-12-19 | 2016-07-05 | Snapchat, Inc. | Gallery of messages from individuals with a shared interest |
US11250887B2 (en) | 2014-12-19 | 2022-02-15 | Snap Inc. | Routing messages by message parameter |
US10811053B2 (en) | 2014-12-19 | 2020-10-20 | Snap Inc. | Routing messages by message parameter |
US10514876B2 (en) | 2014-12-19 | 2019-12-24 | Snap Inc. | Gallery of messages from individuals with a shared interest |
US10311916B2 (en) | 2014-12-19 | 2019-06-04 | Snap Inc. | Gallery of videos set to an audio time line |
US9854219B2 (en) | 2014-12-19 | 2017-12-26 | Snap Inc. | Gallery of videos set to an audio time line |
US11803345B2 (en) | 2014-12-19 | 2023-10-31 | Snap Inc. | Gallery of messages from individuals with a shared interest |
US10580458B2 (en) | 2014-12-19 | 2020-03-03 | Snap Inc. | Gallery of videos set to an audio time line |
US11783862B2 (en) | 2014-12-19 | 2023-10-10 | Snap Inc. | Routing messages by message parameter |
US11372608B2 (en) | 2014-12-19 | 2022-06-28 | Snap Inc. | Gallery of messages from individuals with a shared interest |
US10380720B1 (en) | 2015-01-09 | 2019-08-13 | Snap Inc. | Location-based image filters |
US11301960B2 (en) | 2015-01-09 | 2022-04-12 | Snap Inc. | Object recognition based image filters |
US11734342B2 (en) | 2015-01-09 | 2023-08-22 | Snap Inc. | Object recognition based image overlays |
US10157449B1 (en) | 2015-01-09 | 2018-12-18 | Snap Inc. | Geo-location-based image filters |
US11388226B1 (en) | 2015-01-13 | 2022-07-12 | Snap Inc. | Guided personal identity based actions |
US11962645B2 (en) | 2015-01-13 | 2024-04-16 | Snap Inc. | Guided personal identity based actions |
US10416845B1 (en) | 2015-01-19 | 2019-09-17 | Snap Inc. | Multichannel system |
US11249617B1 (en) | 2015-01-19 | 2022-02-15 | Snap Inc. | Multichannel system |
US10133705B1 (en) | 2015-01-19 | 2018-11-20 | Snap Inc. | Multichannel system |
US10932085B1 (en) | 2015-01-26 | 2021-02-23 | Snap Inc. | Content request by location |
US11528579B2 (en) | 2015-01-26 | 2022-12-13 | Snap Inc. | Content request by location |
US11910267B2 (en) | 2015-01-26 | 2024-02-20 | Snap Inc. | Content request by location |
US10123166B2 (en) | 2015-01-26 | 2018-11-06 | Snap Inc. | Content request by location |
US10536800B1 (en) | 2015-01-26 | 2020-01-14 | Snap Inc. | Content request by location |
US20180048831A1 (en) * | 2015-02-23 | 2018-02-15 | Zuma Beach Ip Pty Ltd | Generation of combined videos |
US10223397B1 (en) | 2015-03-13 | 2019-03-05 | Snap Inc. | Social graph based co-location of network users |
US10893055B2 (en) | 2015-03-18 | 2021-01-12 | Snap Inc. | Geo-fence authorization provisioning |
US11902287B2 (en) | 2015-03-18 | 2024-02-13 | Snap Inc. | Geo-fence authorization provisioning |
US10616239B2 (en) | 2015-03-18 | 2020-04-07 | Snap Inc. | Geo-fence authorization provisioning |
US11320651B2 (en) | 2015-03-23 | 2022-05-03 | Snap Inc. | Reducing boot time and power consumption in displaying data content |
US10948717B1 (en) | 2015-03-23 | 2021-03-16 | Snap Inc. | Reducing boot time and power consumption in wearable display systems |
US11662576B2 (en) | 2015-03-23 | 2023-05-30 | Snap Inc. | Reducing boot time and power consumption in displaying data content |
US10529383B2 (en) * | 2015-04-09 | 2020-01-07 | Avid Technology, Inc. | Methods and systems for processing synchronous data tracks in a media editing system |
US20160300596A1 (en) * | 2015-04-09 | 2016-10-13 | Avid Technology, Inc. | Methods and systems for processing synchronous data tracks in a media editing system |
US10135949B1 (en) | 2015-05-05 | 2018-11-20 | Snap Inc. | Systems and methods for story and sub-story navigation |
US11449539B2 (en) | 2015-05-05 | 2022-09-20 | Snap Inc. | Automated local story generation and curation |
US10592574B2 (en) | 2015-05-05 | 2020-03-17 | Snap Inc. | Systems and methods for automated local story generation and curation |
US11496544B2 (en) | 2015-05-05 | 2022-11-08 | Snap Inc. | Story and sub-story navigation |
US11392633B2 (en) | 2015-05-05 | 2022-07-19 | Snap Inc. | Systems and methods for automated local story generation and curation |
US10911575B1 (en) | 2015-05-05 | 2021-02-02 | Snap Inc. | Systems and methods for story and sub-story navigation |
US10993069B2 (en) | 2015-07-16 | 2021-04-27 | Snap Inc. | Dynamically adaptive media content delivery |
US11961116B2 (en) | 2015-08-13 | 2024-04-16 | Foursquare Labs, Inc. | Determining exposures to content presented by physical objects |
US10817898B2 (en) | 2015-08-13 | 2020-10-27 | Placed, Llc | Determining exposures to content presented by physical objects |
CN105303517A (en) * | 2015-10-26 | 2016-02-03 | 北京金山安全软件有限公司 | Image processing method and device |
US11769307B2 (en) | 2015-10-30 | 2023-09-26 | Snap Inc. | Image based tracking in augmented reality systems |
US11315331B2 (en) | 2015-10-30 | 2022-04-26 | Snap Inc. | Image based tracking in augmented reality systems |
US10366543B1 (en) | 2015-10-30 | 2019-07-30 | Snap Inc. | Image based tracking in augmented reality systems |
US10733802B2 (en) | 2015-10-30 | 2020-08-04 | Snap Inc. | Image based tracking in augmented reality systems |
US10997783B2 (en) | 2015-11-30 | 2021-05-04 | Snap Inc. | Image and point cloud based tracking and in augmented reality systems |
US11380051B2 (en) | 2015-11-30 | 2022-07-05 | Snap Inc. | Image and point cloud based tracking and in augmented reality systems |
US10474321B2 (en) | 2015-11-30 | 2019-11-12 | Snap Inc. | Network resource location linking and visual content sharing |
US11599241B2 (en) | 2015-11-30 | 2023-03-07 | Snap Inc. | Network resource location linking and visual content sharing |
US10217489B2 (en) | 2015-12-07 | 2019-02-26 | Cyberlink Corp. | Systems and methods for media track management in a media editing tool |
USD775157S1 (en) * | 2015-12-17 | 2016-12-27 | Outbrain Inc. | Mobile device display screen or portion thereof with a graphical user interface |
USD775153S1 (en) * | 2015-12-17 | 2016-12-27 | Outbrain Inc. | Mobile device display screen or portion thereof with a graphical user interface |
USD775156S1 (en) * | 2015-12-17 | 2016-12-27 | Outbrain Inc. | Mobile device display screen or portion thereof with a graphical user interface |
USD775154S1 (en) * | 2015-12-17 | 2016-12-27 | Outbrain Inc. | Mobile device display screen or portion thereof with a graphical user interface |
US11468615B2 (en) | 2015-12-18 | 2022-10-11 | Snap Inc. | Media overlay publication system |
US10354425B2 (en) | 2015-12-18 | 2019-07-16 | Snap Inc. | Method and system for providing context relevant media augmentation |
US10997758B1 (en) | 2015-12-18 | 2021-05-04 | Snap Inc. | Media overlay publication system |
US11830117B2 (en) | 2015-12-18 | 2023-11-28 | Snap Inc. | Media overlay publication system |
US11611846B2 (en) | 2016-02-26 | 2023-03-21 | Snap Inc. | Generation, curation, and presentation of media collections |
US11023514B2 (en) | 2016-02-26 | 2021-06-01 | Snap Inc. | Methods and systems for generation, curation, and presentation of media collections |
US10679389B2 (en) | 2016-02-26 | 2020-06-09 | Snap Inc. | Methods and systems for generation, curation, and presentation of media collections |
US10834525B2 (en) | 2016-02-26 | 2020-11-10 | Snap Inc. | Generation, curation, and presentation of media collections |
US11889381B2 (en) | 2016-02-26 | 2024-01-30 | Snap Inc. | Generation, curation, and presentation of media collections |
US11197123B2 (en) | 2016-02-26 | 2021-12-07 | Snap Inc. | Generation, curation, and presentation of media collections |
US11631276B2 (en) | 2016-03-31 | 2023-04-18 | Snap Inc. | Automated avatar generation |
US11900418B2 (en) | 2016-04-04 | 2024-02-13 | Snap Inc. | Mutable geo-fencing system |
US10979993B2 (en) | 2016-05-25 | 2021-04-13 | Ge Aviation Systems Limited | Aircraft time synchronization system |
US10805696B1 (en) | 2016-06-20 | 2020-10-13 | Pipbin, Inc. | System for recording and targeting tagged content of user interest |
US11201981B1 (en) | 2016-06-20 | 2021-12-14 | Pipbin, Inc. | System for notification of user accessibility of curated location-dependent content in an augmented estate |
US10839219B1 (en) | 2016-06-20 | 2020-11-17 | Pipbin, Inc. | System for curation, distribution and display of location-dependent augmented reality content |
US10992836B2 (en) | 2016-06-20 | 2021-04-27 | Pipbin, Inc. | Augmented property system of curated augmented reality media elements |
US10638256B1 (en) | 2016-06-20 | 2020-04-28 | Pipbin, Inc. | System for distribution and display of mobile targeted augmented reality content |
US11785161B1 (en) | 2016-06-20 | 2023-10-10 | Pipbin, Inc. | System for user accessibility of tagged curated augmented reality content |
US11876941B1 (en) | 2016-06-20 | 2024-01-16 | Pipbin, Inc. | Clickable augmented reality content manager, system, and network |
US11044393B1 (en) | 2016-06-20 | 2021-06-22 | Pipbin, Inc. | System for curation and display of location-dependent augmented reality content in an augmented estate system |
US10785597B2 (en) | 2016-06-28 | 2020-09-22 | Snap Inc. | System to track engagement of media items |
US10327100B1 (en) | 2016-06-28 | 2019-06-18 | Snap Inc. | System to track engagement of media items |
US10219110B2 (en) | 2016-06-28 | 2019-02-26 | Snap Inc. | System to track engagement of media items |
US10506371B2 (en) | 2016-06-28 | 2019-12-10 | Snap Inc. | System to track engagement of media items |
US10165402B1 (en) | 2016-06-28 | 2018-12-25 | Snap Inc. | System to track engagement of media items |
US10885559B1 (en) | 2016-06-28 | 2021-01-05 | Snap Inc. | Generation, curation, and presentation of media collections with automated advertising |
US11640625B2 (en) | 2016-06-28 | 2023-05-02 | Snap Inc. | Generation, curation, and presentation of media collections with automated advertising |
US11445326B2 (en) | 2016-06-28 | 2022-09-13 | Snap Inc. | Track engagement of media items |
US10735892B2 (en) | 2016-06-28 | 2020-08-04 | Snap Inc. | System to track engagement of media items |
US10430838B1 (en) | 2016-06-28 | 2019-10-01 | Snap Inc. | Methods and systems for generation, curation, and presentation of media collections with automated advertising |
US11895068B2 (en) | 2016-06-30 | 2024-02-06 | Snap Inc. | Automated content curation and communication |
US10387514B1 (en) | 2016-06-30 | 2019-08-20 | Snap Inc. | Automated content curation and communication |
US11080351B1 (en) | 2016-06-30 | 2021-08-03 | Snap Inc. | Automated content curation and communication |
US10348662B2 (en) | 2016-07-19 | 2019-07-09 | Snap Inc. | Generating customized electronic messaging graphics |
US11509615B2 (en) | 2016-07-19 | 2022-11-22 | Snap Inc. | Generating customized electronic messaging graphics |
US11816853B2 (en) | 2016-08-30 | 2023-11-14 | Snap Inc. | Systems and methods for simultaneous localization and mapping |
US10984568B2 (en) * | 2016-10-18 | 2021-04-20 | Snow Corporation | Methods, devices, and computer-readable media for sharing image effects |
US11876762B1 (en) | 2016-10-24 | 2024-01-16 | Snap Inc. | Generating and displaying customized avatars in media overlays |
US11843456B2 (en) | 2016-10-24 | 2023-12-12 | Snap Inc. | Generating and displaying customized avatars in media overlays |
US11750767B2 (en) | 2016-11-07 | 2023-09-05 | Snap Inc. | Selective identification and order of image modifiers |
US10623666B2 (en) | 2016-11-07 | 2020-04-14 | Snap Inc. | Selective identification and order of image modifiers |
US11233952B2 (en) | 2016-11-07 | 2022-01-25 | Snap Inc. | Selective identification and order of image modifiers |
US10754525B1 (en) | 2016-12-09 | 2020-08-25 | Snap Inc. | Customized media overlays |
US11397517B2 (en) | 2016-12-09 | 2022-07-26 | Snap Inc. | Customized media overlays |
US10203855B2 (en) | 2016-12-09 | 2019-02-12 | Snap Inc. | Customized user-controlled media overlays |
US9998796B1 (en) * | 2016-12-12 | 2018-06-12 | Facebook, Inc. | Enhancing live video streams using themed experiences |
US10123065B2 (en) | 2016-12-30 | 2018-11-06 | Mora Global, Inc. | Digital video file generation |
US10110942B2 (en) | 2016-12-30 | 2018-10-23 | Mora Global, Inc. | User relationship enhancement for social media platform |
US11284145B2 (en) | 2016-12-30 | 2022-03-22 | Mora Global, Inc. | User relationship enhancement for social media platform |
WO2018126279A1 (en) * | 2016-12-30 | 2018-07-05 | Lyons Jessica Barbara | Digital video file generation |
US11616745B2 (en) | 2017-01-09 | 2023-03-28 | Snap Inc. | Contextual generation and selection of customized media content |
US11870743B1 (en) | 2017-01-23 | 2024-01-09 | Snap Inc. | Customized digital avatar accessories |
US10915911B2 (en) | 2017-02-03 | 2021-02-09 | Snap Inc. | System to determine a price-schedule to distribute media content |
US11250075B1 (en) | 2017-02-17 | 2022-02-15 | Snap Inc. | Searching social media content |
US10319149B1 (en) | 2017-02-17 | 2019-06-11 | Snap Inc. | Augmented reality anamorphosis system |
US11720640B2 (en) | 2017-02-17 | 2023-08-08 | Snap Inc. | Searching social media content |
US11861795B1 (en) | 2017-02-17 | 2024-01-02 | Snap Inc. | Augmented reality anamorphosis system |
US11748579B2 (en) | 2017-02-20 | 2023-09-05 | Snap Inc. | Augmented reality speech balloon system |
US11189299B1 (en) | 2017-02-20 | 2021-11-30 | Snap Inc. | Augmented reality speech balloon system |
US11037372B2 (en) | 2017-03-06 | 2021-06-15 | Snap Inc. | Virtual vision system |
US11961196B2 (en) | 2017-03-06 | 2024-04-16 | Snap Inc. | Virtual vision system |
US11670057B2 (en) | 2017-03-06 | 2023-06-06 | Snap Inc. | Virtual vision system |
US11258749B2 (en) | 2017-03-09 | 2022-02-22 | Snap Inc. | Restricted group content collection |
US10523625B1 (en) | 2017-03-09 | 2019-12-31 | Snap Inc. | Restricted group content collection |
US10887269B1 (en) | 2017-03-09 | 2021-01-05 | Snap Inc. | Restricted group content collection |
US11297399B1 (en) | 2017-03-27 | 2022-04-05 | Snap Inc. | Generating a stitched data stream |
US10582277B2 (en) | 2017-03-27 | 2020-03-03 | Snap Inc. | Generating a stitched data stream |
US10581782B2 (en) | 2017-03-27 | 2020-03-03 | Snap Inc. | Generating a stitched data stream |
US11558678B2 (en) | 2017-03-27 | 2023-01-17 | Snap Inc. | Generating a stitched data stream |
US11349796B2 (en) | 2017-03-27 | 2022-05-31 | Snap Inc. | Generating a stitched data stream |
US11170393B1 (en) | 2017-04-11 | 2021-11-09 | Snap Inc. | System to calculate an engagement score of location based media content |
US11195018B1 (en) | 2017-04-20 | 2021-12-07 | Snap Inc. | Augmented reality typography personalization system |
US10387730B1 (en) | 2017-04-20 | 2019-08-20 | Snap Inc. | Augmented reality typography personalization system |
US11474663B2 (en) | 2017-04-27 | 2022-10-18 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US11842411B2 (en) | 2017-04-27 | 2023-12-12 | Snap Inc. | Location-based virtual avatars |
US11385763B2 (en) | 2017-04-27 | 2022-07-12 | Snap Inc. | Map-based graphical user interface indicating geospatial activity metrics |
US10952013B1 (en) | 2017-04-27 | 2021-03-16 | Snap Inc. | Selective location-based identity communication |
US11392264B1 (en) | 2017-04-27 | 2022-07-19 | Snap Inc. | Map-based graphical user interface for multi-type social media galleries |
US11409407B2 (en) | 2017-04-27 | 2022-08-09 | Snap Inc. | Map-based graphical user interface indicating geospatial activity metrics |
US11418906B2 (en) | 2017-04-27 | 2022-08-16 | Snap Inc. | Selective location-based identity communication |
US11893647B2 (en) | 2017-04-27 | 2024-02-06 | Snap Inc. | Location-based virtual avatars |
US11782574B2 (en) | 2017-04-27 | 2023-10-10 | Snap Inc. | Map-based graphical user interface indicating geospatial activity metrics |
US11556221B2 (en) | 2017-04-27 | 2023-01-17 | Snap Inc. | Friend location sharing mechanism for social media platforms |
US10963529B1 (en) | 2017-04-27 | 2021-03-30 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US11451956B1 (en) | 2017-04-27 | 2022-09-20 | Snap Inc. | Location privacy management on map-based social media platforms |
US11232040B1 (en) | 2017-04-28 | 2022-01-25 | Snap Inc. | Precaching unlockable data elements |
US11675831B2 (en) | 2017-05-31 | 2023-06-13 | Snap Inc. | Geolocation based playlists |
US11263828B2 (en) * | 2017-07-14 | 2022-03-01 | Glu Mobile Inc. | Systems and methods for competitive scene completion in an application |
US11544915B2 (en) | 2017-07-14 | 2023-01-03 | Electronic Arts Inc. | Systems and methods for interactions with remote entities |
US11475254B1 (en) | 2017-09-08 | 2022-10-18 | Snap Inc. | Multimodal entity identification |
US11721080B2 (en) | 2017-09-15 | 2023-08-08 | Snap Inc. | Augmented reality system |
US10740974B1 (en) | 2017-09-15 | 2020-08-11 | Snap Inc. | Augmented reality system |
US11335067B2 (en) | 2017-09-15 | 2022-05-17 | Snap Inc. | Augmented reality system |
US10499191B1 (en) | 2017-10-09 | 2019-12-03 | Snap Inc. | Context sensitive presentation of content |
US11006242B1 (en) | 2017-10-09 | 2021-05-11 | Snap Inc. | Context sensitive presentation of content |
US11617056B2 (en) | 2017-10-09 | 2023-03-28 | Snap Inc. | Context sensitive presentation of content |
US11670025B2 (en) | 2017-10-30 | 2023-06-06 | Snap Inc. | Mobile-based cartographic control of display content |
US11030787B2 (en) | 2017-10-30 | 2021-06-08 | Snap Inc. | Mobile-based cartographic control of display content |
US11943185B2 (en) | 2017-12-01 | 2024-03-26 | Snap Inc. | Dynamic media overlay with smart widget |
US11265273B1 (en) | 2017-12-01 | 2022-03-01 | Snap, Inc. | Dynamic media overlay with smart widget |
US11558327B2 (en) | 2017-12-01 | 2023-01-17 | Snap Inc. | Dynamic media overlay with smart widget |
US11687720B2 (en) | 2017-12-22 | 2023-06-27 | Snap Inc. | Named entity recognition visual context and caption data |
US11017173B1 (en) | 2017-12-22 | 2021-05-25 | Snap Inc. | Named entity recognition visual context and caption data |
US11487794B2 (en) | 2018-01-03 | 2022-11-01 | Snap Inc. | Tag distribution visualization system |
US10678818B2 (en) | 2018-01-03 | 2020-06-09 | Snap Inc. | Tag distribution visualization system |
US11841896B2 (en) | 2018-02-13 | 2023-12-12 | Snap Inc. | Icon based tagging |
US11507614B1 (en) | 2018-02-13 | 2022-11-22 | Snap Inc. | Icon based tagging |
US11523159B2 (en) | 2018-02-28 | 2022-12-06 | Snap Inc. | Generating media content items based on location information |
US10885136B1 (en) | 2018-02-28 | 2021-01-05 | Snap Inc. | Audience filtering system |
US10979752B1 (en) | 2018-02-28 | 2021-04-13 | Snap Inc. | Generating media content items based on location information |
US10524088B2 (en) | 2018-03-06 | 2019-12-31 | Snap Inc. | Geo-fence selection system |
US11722837B2 (en) | 2018-03-06 | 2023-08-08 | Snap Inc. | Geo-fence selection system |
US11570572B2 (en) | 2018-03-06 | 2023-01-31 | Snap Inc. | Geo-fence selection system |
US11044574B2 (en) | 2018-03-06 | 2021-06-22 | Snap Inc. | Geo-fence selection system |
US10327096B1 (en) | 2018-03-06 | 2019-06-18 | Snap Inc. | Geo-fence selection system |
US10933311B2 (en) | 2018-03-14 | 2021-03-02 | Snap Inc. | Generating collectible items based on location information |
US11491393B2 (en) | 2018-03-14 | 2022-11-08 | Snap Inc. | Generating collectible items based on location information |
US11163941B1 (en) | 2018-03-30 | 2021-11-02 | Snap Inc. | Annotating a collection of media content items |
US11297463B2 (en) | 2018-04-18 | 2022-04-05 | Snap Inc. | Visitation tracking system |
US10924886B2 (en) | 2018-04-18 | 2021-02-16 | Snap Inc. | Visitation tracking system |
US10219111B1 (en) | 2018-04-18 | 2019-02-26 | Snap Inc. | Visitation tracking system |
US11683657B2 (en) | 2018-04-18 | 2023-06-20 | Snap Inc. | Visitation tracking system |
US10779114B2 (en) | 2018-04-18 | 2020-09-15 | Snap Inc. | Visitation tracking system |
US10681491B1 (en) | 2018-04-18 | 2020-06-09 | Snap Inc. | Visitation tracking system |
US10448199B1 (en) | 2018-04-18 | 2019-10-15 | Snap Inc. | Visitation tracking system |
US11860888B2 (en) | 2018-05-22 | 2024-01-02 | Snap Inc. | Event detection system |
US10943381B2 (en) | 2018-07-24 | 2021-03-09 | Snap Inc. | Conditional modification of augmented reality object |
US11670026B2 (en) | 2018-07-24 | 2023-06-06 | Snap Inc. | Conditional modification of augmented reality object |
US11367234B2 (en) | 2018-07-24 | 2022-06-21 | Snap Inc. | Conditional modification of augmented reality object |
US10789749B2 (en) | 2018-07-24 | 2020-09-29 | Snap Inc. | Conditional modification of augmented reality object |
US10679393B2 (en) | 2018-07-24 | 2020-06-09 | Snap Inc. | Conditional modification of augmented reality object |
US11450050B2 (en) | 2018-08-31 | 2022-09-20 | Snap Inc. | Augmented reality anthropomorphization system |
US11676319B2 (en) | 2018-08-31 | 2023-06-13 | Snap Inc. | Augmented reality anthropomorphization system |
US10997760B2 (en) | 2018-08-31 | 2021-05-04 | Snap Inc. | Augmented reality anthropomorphization system |
US11653072B2 (en) | 2018-09-12 | 2023-05-16 | Zuma Beach Ip Pty Ltd | Method and system for generating interactive media content |
US11455082B2 (en) | 2018-09-28 | 2022-09-27 | Snap Inc. | Collaborative achievement interface |
US11704005B2 (en) | 2018-09-28 | 2023-07-18 | Snap Inc. | Collaborative achievement interface |
US11799811B2 (en) | 2018-10-31 | 2023-10-24 | Snap Inc. | Messaging and gaming applications communication platform |
US11698722B2 (en) | 2018-11-30 | 2023-07-11 | Snap Inc. | Generating customized avatars based on location information |
US11199957B1 (en) | 2018-11-30 | 2021-12-14 | Snap Inc. | Generating customized avatars based on location information |
US11558709B2 (en) | 2018-11-30 | 2023-01-17 | Snap Inc. | Position service to determine relative position to map features |
US11812335B2 (en) | 2018-11-30 | 2023-11-07 | Snap Inc. | Position service to determine relative position to map features |
USD945470S1 (en) * | 2018-12-27 | 2022-03-08 | Sony Corporation | Display panel or screen with animated graphical user interface |
US11877211B2 (en) | 2019-01-14 | 2024-01-16 | Snap Inc. | Destination sharing in location sharing system |
US11751015B2 (en) | 2019-01-16 | 2023-09-05 | Snap Inc. | Location-based context information sharing in a messaging system |
US11294936B1 (en) | 2019-01-30 | 2022-04-05 | Snap Inc. | Adaptive spatial density based clustering |
US11693887B2 (en) | 2019-01-30 | 2023-07-04 | Snap Inc. | Adaptive spatial density based clustering |
US11972529B2 (en) | 2019-02-01 | 2024-04-30 | Snap Inc. | Augmented reality system |
US11809624B2 (en) | 2019-02-13 | 2023-11-07 | Snap Inc. | Sleep detection in a location sharing system |
US11500525B2 (en) | 2019-02-25 | 2022-11-15 | Snap Inc. | Custom media overlay system |
US11954314B2 (en) | 2019-02-25 | 2024-04-09 | Snap Inc. | Custom media overlay system |
US11574431B2 (en) | 2019-02-26 | 2023-02-07 | Snap Inc. | Avatar based on weather |
US11301117B2 (en) | 2019-03-08 | 2022-04-12 | Snap Inc. | Contextual information in chat |
US11868414B1 (en) | 2019-03-14 | 2024-01-09 | Snap Inc. | Graph-based prediction for contact suggestion in a location sharing system |
US11852554B1 (en) | 2019-03-21 | 2023-12-26 | Snap Inc. | Barometer calibration in a location sharing system |
US11249614B2 (en) | 2019-03-28 | 2022-02-15 | Snap Inc. | Generating personalized map interface with enhanced icons |
US11740760B2 (en) | 2019-03-28 | 2023-08-29 | Snap Inc. | Generating personalized map interface with enhanced icons |
US11361493B2 (en) | 2019-04-01 | 2022-06-14 | Snap Inc. | Semantic texture mapping system |
US11963105B2 (en) | 2019-05-30 | 2024-04-16 | Snap Inc. | Wearable device location systems architecture |
US11785549B2 (en) | 2019-05-30 | 2023-10-10 | Snap Inc. | Wearable device location systems |
US11606755B2 (en) | 2019-05-30 | 2023-03-14 | Snap Inc. | Wearable device location systems architecture |
US11206615B2 (en) | 2019-05-30 | 2021-12-21 | Snap Inc. | Wearable device location systems |
US11917495B2 (en) | 2019-06-07 | 2024-02-27 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US11601783B2 (en) | 2019-06-07 | 2023-03-07 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US11714535B2 (en) | 2019-07-11 | 2023-08-01 | Snap Inc. | Edge gesture interface with smart interactions |
US11821742B2 (en) | 2019-09-26 | 2023-11-21 | Snap Inc. | Travel based notifications |
US11218838B2 (en) | 2019-10-31 | 2022-01-04 | Snap Inc. | Focused map-based context information surfacing |
US11729343B2 (en) | 2019-12-30 | 2023-08-15 | Snap Inc. | Including video feed in message thread |
US11128715B1 (en) | 2019-12-30 | 2021-09-21 | Snap Inc. | Physical friend proximity in chat |
US11429618B2 (en) | 2019-12-30 | 2022-08-30 | Snap Inc. | Surfacing augmented reality objects |
US11893208B2 (en) | 2019-12-31 | 2024-02-06 | Snap Inc. | Combined map icon with action indicator |
US11343323B2 (en) | 2019-12-31 | 2022-05-24 | Snap Inc. | Augmented reality objects registry |
US11943303B2 (en) | 2019-12-31 | 2024-03-26 | Snap Inc. | Augmented reality objects registry |
US11228551B1 (en) | 2020-02-12 | 2022-01-18 | Snap Inc. | Multiple gateway message exchange |
US11888803B2 (en) | 2020-02-12 | 2024-01-30 | Snap Inc. | Multiple gateway message exchange |
US11516167B2 (en) | 2020-03-05 | 2022-11-29 | Snap Inc. | Storing data based on device location |
US11765117B2 (en) | 2020-03-05 | 2023-09-19 | Snap Inc. | Storing data based on device location |
US11619501B2 (en) | 2020-03-11 | 2023-04-04 | Snap Inc. | Avatar based on trip |
US11430091B2 (en) | 2020-03-27 | 2022-08-30 | Snap Inc. | Location mapping for large scale augmented-reality |
US11915400B2 (en) | 2020-03-27 | 2024-02-27 | Snap Inc. | Location mapping for large scale augmented-reality |
US11776256B2 (en) | 2020-03-27 | 2023-10-03 | Snap Inc. | Shared augmented reality system |
US11503432B2 (en) | 2020-06-15 | 2022-11-15 | Snap Inc. | Scalable real-time location sharing framework |
US11483267B2 (en) | 2020-06-15 | 2022-10-25 | Snap Inc. | Location sharing using different rate-limited links |
US11290851B2 (en) | 2020-06-15 | 2022-03-29 | Snap Inc. | Location sharing using offline and online objects |
US11314776B2 (en) | 2020-06-15 | 2022-04-26 | Snap Inc. | Location sharing using friend list versions |
US11676378B2 (en) | 2020-06-29 | 2023-06-13 | Snap Inc. | Providing travel-based augmented reality content with a captured image |
US11943192B2 (en) | 2020-08-31 | 2024-03-26 | Snap Inc. | Co-location connection service |
US11902902B2 (en) | 2021-03-29 | 2024-02-13 | Snap Inc. | Scheduling requests for location data |
US11601888B2 (en) | 2021-03-29 | 2023-03-07 | Snap Inc. | Determining location using multi-source geolocation data |
US11606756B2 (en) | 2021-03-29 | 2023-03-14 | Snap Inc. | Scheduling requests for location data |
US11645324B2 (en) | 2021-03-31 | 2023-05-09 | Snap Inc. | Location-based timeline media content system |
US11972014B2 (en) | 2021-04-19 | 2024-04-30 | Snap Inc. | Apparatus and method for automated privacy protection in distributed images |
US20230037470A1 (en) * | 2021-08-03 | 2023-02-09 | Idomoo Ltd | System And Method For Programing Video |
US11735186B2 (en) | 2021-09-07 | 2023-08-22 | 3Play Media, Inc. | Hybrid live captioning systems and methods |
US11829834B2 (en) | 2021-10-29 | 2023-11-28 | Snap Inc. | Extended QR code |
US20240038277A1 (en) * | 2022-07-29 | 2024-02-01 | Rovi Guides, Inc. | Systems and methods of generating personalized video clips for songs using a pool of short videos |
Also Published As
Publication number | Publication date |
---|---|
US8860865B2 (en) | 2014-10-14 |
US20100220197A1 (en) | 2010-09-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8860865B2 (en) | Assisted video creation utilizing a camera | |
US9032297B2 (en) | Web based video editing | |
CN105765990B (en) | Method, system and computer medium for distributing video content over a distributed network | |
US20180330756A1 (en) | Method and apparatus for creating and automating new video works | |
JP4261644B2 (en) | Multimedia editing method and apparatus | |
US7352952B2 (en) | System and method for improved video editing | |
US8139126B2 (en) | Digital video system for assembling video sequences | |
US20190252000A1 (en) | Methods and apparatus for remote motion graphics authoring | |
US7222300B2 (en) | System and method for automatically authoring video compositions using video cliplets | |
US20050231513A1 (en) | Stop motion capture tool using image cutouts | |
US20060204214A1 (en) | Picture line audio augmentation | |
US20110170008A1 (en) | Chroma-key image animation tool | |
US20060268121A1 (en) | In-camera cinema director | |
US20070162857A1 (en) | Automated multimedia authoring | |
TW201005583A (en) | Interactive systems and methods for video compositing | |
US20030237091A1 (en) | Computer user interface for viewing video compositions generated from a video composition authoring system using video cliplets | |
CN106796808A (en) | The audio/video editing equipment of electronic image establishment, picture editting and simplification, film making method and related computer program since rest image and audio track | |
Team | Adobe Premiere Pro CS3 Classroom in a Book | |
Hua et al. | Interactive video authoring and sharing based on two-layer templates | |
JP3942471B2 (en) | Data editing method, data editing device, data recording device, and recording medium | |
KR20200022995A (en) | Content production system | |
AU2002301447B2 (en) | Interactive Animation of Sprites in a Video Production | |
Jago | Adobe Premiere Pro Classroom in a Book (2022 Release) | |
Eagle | Vegas Pro 9 Editing Workshop | |
Wood | Sony Vegas Pro 11 Beginner's Guide | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BURNING MOON, LLC, WYOMING
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HASHIMOTO, KAZ;DUKELLIS, JOHN NICHOLAS;SIGNING DATES FROM 20111101 TO 20111112;REEL/FRAME:027663/0508
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |