WO2015176116A1 - System and method for dynamic entertainment playlist generation - Google Patents

System and method for dynamic entertainment playlist generation

Info

Publication number
WO2015176116A1
Authority
WO
WIPO (PCT)
Prior art keywords
genre
playlist
sub-genre
media
Application number
PCT/AU2015/000303
Other languages
French (fr)
Inventor
Nicolas CARTER - JOHNSON
Vijay Natesh SANTHANAM
Original Assignee
Muru Music Pty. Ltd.
Priority claimed from AU2014901950A external-priority patent/AU2014901950A0/en
Application filed by Muru Music Pty. Ltd. filed Critical Muru Music Pty. Ltd.
Publication of WO2015176116A1 publication Critical patent/WO2015176116A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 - Querying
    • G06F16/438 - Presentation of query results
    • G06F16/4387 - Presentation of query results by the use of playlists

Definitions

  • the present invention relates to a system, method, software application and data signal for creating an entertainment package and in particular, to a system, method, software application and data signal that is capable of autonomously choosing one or more pieces of media to create a seamless entertainment package, with a view to providing the entertainment package to an audience for playback.
  • the invention has been developed primarily for use by audiences who wish to create a 'playlist' of music from one or more Genres, but do not wish to select individual songs within the Genre.
  • the embodiment described herein is directed to a "virtual Disc Jockey (DJ)" which is capable of mimicking the changes a DJ makes when playing in front of an audience. This includes the adjustments the DJ makes during a live set to keep the crowd's attention and interaction.
  • Yet other existing systems such as that described in US Patent 6,721,489 provide a system to create and update specific playlists. The creation of specific playlists is based on preprogrammed rules and does not consider adjustable transitions between Genres and Sub-Genres based on a possible range of acoustic or social attributes.
  • Yet other existing systems such as that described in US Patent 7,345,232 provide a system to provide a playlist generated around a user's historical listening and interaction habits. It focuses on the frequency with which an audio file is played. The introduction of Genre or Sub-Genre transitions is not considered. The introduction of rarely played audio files has no relationship to Genre, Sub-Genre, or acoustic or social attributes.
  • the present invention provides a system for creating a playlist of a plurality of audio files to transition between a pre-set start point and end point along a selected trajectory, comprising:
  • transitioning from the first Genre to the second Genre is through at least one additional Genre
  • the at least one predefined additional Genre includes linking features between the first and second Genres and the first and second song attributes.
  • the transition between different audio files of the first Genre, at least one additional Genre and second Genre is based on a comparison between scores calculated for each different audio file, wherein the score is based on an internal calculation of the Genre of each audio file.
  • At least one of the first Genre, Second Genre and at least one additional Genre is a Sub-Genre.
  • the score for each audio file is calculated based on Genre or Sub-Genre specific weighting associated with the audio file.
  • the score for each audio file is calculated based on attributes extracted from external databases.
  • a length of the playlist between the pre-set start point and end point is of a selected time period.
  • the playlist is broken into distinct time periods around one or a group of audio files.
  • the playlist is fit to the selected time period using a time tracking error model focused on the audio files.
  • the attributes extracted from external databases are located in metadata.
  • the transition between different audio files of the first Genre, at least one additional Genre and second Genre is based on a comparison between the song attributes of each audio file.
  • the song attributes include at least one of tempo, energy, key, mode, harmony, chord usage, meter, melody, timbre, instrumental degree, degree of use of vocal elements, elements of live recording, elements of studio recording, popularity and currency.
  • the trajectory is plotted as a graph of the song attribute against time.
  • the shape of the trajectory is selected from one of the following curves: Sine Wave, Downwards, Upwards Plateau then Downwards, Linear Up, Linear Down, Bezier Ease In, Bezier Ease Out.
  • the start point is based on a first music Sub-Genre and first song attribute and the end point is based on a second music Sub-Genre, wherein transitioning from the first music Sub-Genre to the second music Sub-Genre is through at least one predefined additional Sub-Genre, wherein the at least one predefined additional Sub-Genre includes linking features between the first and second music Sub-Genres and the first and second song attributes.
  • the present invention provides an entertainment package, comprising: a selection module arranged to provide selection information regarding a user's selection;
  • a processor receives the selection information and uses an algorithm to access one or more databases to identify one or more elements of media based on the selection information;
  • the selection information includes the entertainment Genre of the media.
  • the one or more databases are located remotely of the device and the access to the databases occurs via a communications link.
  • the device may further include a settings module arranged to allow the user to vary one or more settings, wherein the settings are provided to the algorithm to identify the one or more elements of media.
  • the Genre may include one or more Sub-Genres.
  • the user may select one or more of the one or more Sub-Genres.
  • the settings may include the beats per minute of the element of media, the energy of the media, the vocality of the element of media and the popularity of the element of media.
  • the advanced settings include varying at least one of the tempo, energy, key, mode, harmony, chord usage, meter, melody, timbre, instrumental degree, degree of use of vocal elements, elements of live recording, elements of studio recording or popularity of the elements of media.
  • the device may further include a human machine interface arranged to allow a user of the device to interact with the device.
  • the human machine interface may be a touchscreen.
  • the device may be a mobile communications device, such as a smartphone.
  • Figure 1 is an example computing system and network that may be utilised to operate a system, method and/or software application in accordance with the present invention
  • Figure 2 is an example graph for start and end Genres used with the playlist creation system of the present invention
  • Figure 3 is an example graph for the transition plot between start and end Genres used with the playlist creation system of the present invention
  • Figure 4 is an example graph for the transition plot between start and end Genres with pre-defined linking Sub-Genres used with the playlist creation system of the present invention
  • Figure 5 is the example graph of Figure 4 with the transition plot broken down into segments
  • Figure 6 is the example graph of Figure 5 with the segments broken down into defined steps
  • Figure 7 is the example graph of Figure 6 with an alternate route for the transition plot
  • Figure 8 is the example graph of Figure 6 with an alternate route for the transition plot
  • Figure 9 is a graph mapping the trajectory path of a playlist over a plurality of attributes
  • Figure 10 is a flow chart for the categorisation of a song for the use of the playlist system of the present invention.
  • Figure 11 is a flow chart of the classification of a song into a Genre according to the playlist system of the present invention
  • Figure 12 is a screenshot displaying a main screen in accordance with an embodiment of the invention
  • Figure 13 is a screenshot displaying a playback screen in accordance with an embodiment of the invention
  • Figure 14 is a screenshot displaying a real-time settings screen in accordance with an embodiment of the invention.
  • Figure 15 is a screenshot displaying a Sub-Genres screen in accordance with an embodiment of the invention.
  • Figure 16 is a screenshot displaying an advanced settings screen in accordance with an embodiment of the invention.
  • the interface and processor are implemented using a portable computing device (such as a smartphone or a tablet computer) having an appropriate user interface.
  • the computing device is appropriately programmed to implement the invention either alone or with the assistance of a networked server.
  • Referring to FIG. 1, there is shown a schematic diagram of a central transfer system which in this embodiment comprises a server 100.
  • the server 100 comprises suitable components necessary to receive, store and execute appropriate computer instructions.
  • the components may include a processing unit 102, read only memory (ROM) 104, random access memory (RAM) 106, and input/output devices such as disk drives (including solid state drives or any other storage technology used depending on the specific hardware/software combination) 108, input devices 110 such as an Ethernet port, a USB port, etc.
  • a display 112, such as a liquid crystal display, a light emitting display or any other suitable display, and communications links 114.
  • the server 100 includes instructions that may be included in ROM 104, RAM 106 or disk drives 108 and may be executed by the processing unit 102.
  • the server may include storage devices such as a disk drive 108 which may encompass solid state drives, hard disk drives, optical drives or magnetic tape drives.
  • the server 100 may use a single disk drive or multiple disk drives.
  • the server 100 may also have a suitable operating system 116 which resides on the disk drive or in the ROM of the server 100.
  • the embodiment described herein is a software application which, in the embodiment described herein, is branded and sold under the name "Muru™" and is an "app" (i.e. a software application that is specifically designed for use on a portable, handheld telecommunications device such as a smart phone or a tablet computing device, such as an Apple iPhone™ or iPad™, or a Google Android™ device such as a Samsung Note 3™).
  • the portable device may communicate utilising any suitable technology, and any reference herein to a Subscriber Identification Module (SIM), 3rd Generation (3G) and 4th Generation (4G) telecommunications networks, WiFi, Bluetooth, NFC, or any other specific hardware or software, is provided for the purposes of illustration only and is not intended to limit the scope of the claimed invention.
  • Example interface screen captures of an embodiment of the app are shown in Figures 12 through 16 and are described in more detail hereinbelow.
  • the application may also be provided as a "desktop" software application for use on a personal computing device such as a laptop, a notebook computer or a personal computer, or may be provided in any appropriate form, as computing technology evolves. Such variations are within the purview of the person skilled in the art.
  • FIG. 2 shows a graph 120 of a template for a playlist trajectory over which audio files are laid, with a start musical Genre 126 selected for a start point and a finish musical Genre 128 selected for an end point.
  • the start musical Genre 126 is illustrated as pop and the finish musical Genre 128 is illustrated as urban as defined by the Genre assignment system described below.
  • the skilled addressee will understand that pop for the start musical Genre 126 and urban for the finish musical Genre 128 are illustrative and that any Genre or Sub-Genre as defined by the Genre assignment system of the present invention could be chosen for the start musical Genre 126 and finish musical Genre 128.
  • the vertical axis of the graph 120 is a song attribute 122 such as an acoustic or social attribute that can be selected by a user of the system.
  • An acoustic attribute can be selected from at least one of tempo, energy (this represents a perceptual measure of intensity and powerful activity released throughout the track. Typical energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale.
  • Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy), key, mode, harmony, chord usage, meter, melody, timbre, instrumental degree, degree of use of vocal elements, elements of live recording, elements of studio recording.
  • acoustic attributes could be used other than those listed above.
  • a user of the system is able to select the level of the acoustic attribute to start a playlist trajectory with and to finish the playlist trajectory with.
  • social attributes can include, but are not limited to, a measure of currency, being how new the audio file is combined with its popularity (a high score for both new and popular), how unexpectedly popular an audio file is (a popular audio track that originates from a source with little to no historical popularity for their audio files would give a high score), how popular the audio file is, the date of the recording of the audio track, a measure of the demographic the audio file is directed to, a measure of the demographic to which an audio file is popular, or other attributes readily understood by the skilled addressee.
  • a user of the system is able to select the level of the social attribute to start a playlist trajectory with and to finish the playlist trajectory with.
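  • As a purely illustrative sketch (the patent does not prescribe any formula), a quantified "currency" social attribute that scores high only for tracks that are both new and popular might be computed as follows; the half-life constant and the normalised 0-1 popularity scale are assumptions.

```python
from datetime import date
from typing import Optional

def currency_score(release: date, popularity: float,
                   today: Optional[date] = None,
                   half_life_days: float = 180.0) -> float:
    """Illustrative 'currency' measure: high only when a track is both new and popular.

    popularity is assumed to be a normalised 0-1 value from whatever social data
    source is used; recency decays with an assumed half-life of roughly six months.
    """
    today = today or date.today()
    age_days = max((today - release).days, 0)
    recency = 0.5 ** (age_days / half_life_days)   # 1.0 when brand new
    return recency * popularity                     # result in [0, 1]

# e.g. currency_score(date(2015, 3, 1), popularity=0.9, today=date(2015, 5, 1))
```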
  • a plurality of predefined trajectories are defined within the system that plot different shapes on the graph 120 to represent a different user experience. Examples of potential predefined trajectories that are not to be interpreted as limiting are Linear, Constant, Bezier Curves or other functions such as discontinuous functions, square functions, saw tooth functions or otherwise.
  • the horizontal axis 124 is time. This allows the user of the system to select a set time period for the playlist trajectory to last for. With this set time period selected, the system assigns a playlist to the playlist trajectory that transitions from the start Genre 126 and initially set song attribute 122 level to the finish Genre 128 and end song attribute 122 level in a smooth way in accordance with the aspects of the invention described below.
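  • By way of illustration only, the predefined trajectory shapes listed above could be represented as simple functions of normalised playlist time; the following minimal sketch assumes a 0-1 attribute scale and invented easing control points and is not the patent's implementation.

```python
import math

# Each predefined trajectory shape maps normalised time t in [0, 1]
# to a normalised song-attribute level in [0, 1].
TRAJECTORY_SHAPES = {
    "linear_up": lambda t: t,
    "linear_down": lambda t: 1.0 - t,
    "sine_wave": lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * t),
    "up_plateau_down": lambda t: min(t / 0.3, 1.0) if t < 0.7 else (1.0 - t) / 0.3,
    # Cubic ease curves standing in for "Bezier Ease In/Out"; purely illustrative.
    "bezier_ease_in": lambda t: t ** 3,
    "bezier_ease_out": lambda t: 1.0 - (1.0 - t) ** 3,
}

def attribute_level(shape: str, elapsed_s: float, total_s: float,
                    low: float, high: float) -> float:
    """Scale the chosen shape onto the user's low/high song-attribute levels."""
    t = max(0.0, min(1.0, elapsed_s / total_s))
    return low + (high - low) * TRAJECTORY_SHAPES[shape](t)

# e.g. halfway through a 60-minute playlist rising linearly from level 0.2 to 0.9:
# attribute_level("linear_up", 1800, 3600, 0.2, 0.9)  -> 0.55
```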
  • Figure 3 illustrates a graph 130 similar to the graph 120 of Figure 2 with a playlist trajectory 132 mapped.
  • the playlist trajectory 132 illustrates the start point 131 of the playlist trajectory starting in the pop Genre with a low tempo and ending in the urban Genre with a high tempo.
  • Audio files are selected to populate the playlist trajectory 132 according to their calculated matching to the song attribute 122 and Genre or Sub-Genre along the playlist trajectory 132. Genres or Sub-Genres that link between selected Genres or Sub-Genres can also be used. Where matches to the playlist trajectory cannot be found audio files can be selected based on the closest audio files.
  • this closeness is calculated with an error measurement calculated on the quantified elements (described below) of the audio files from the playlist trajectory 132.
  • vector rays are extended from the playlist trajectory 132 to identify the closest audio files.
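  • The error measurement is not spelled out in detail; one possible reading, sketched below under assumed field names, scores each candidate audio file by its distance from the trajectory's attribute level at a given point, plus a penalty when its Genre or Sub-Genre is not one of the Genres wanted at that point.

```python
def match_error(candidate: dict, target_level: float, target_genres: set,
                genre_penalty: float = 0.5) -> float:
    """Lower is better: attribute distance from the trajectory at this point,
    plus an assumed fixed penalty for files outside the wanted Genre/Sub-Genre."""
    level_error = abs(candidate["attribute_level"] - target_level)
    on_genre = (candidate.get("genre") in target_genres
                or candidate.get("sub_genre") in target_genres)
    return level_error + (0.0 if on_genre else genre_penalty)

def closest_file(candidates: list, target_level: float, target_genres: set) -> dict:
    """Pick the candidate audio file closest to the playlist trajectory point."""
    return min(candidates, key=lambda c: match_error(c, target_level, target_genres))
```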
  • Figure 4 illustrates a graph 134 similar to the graphs 120 and 130 of Figures 2 and 3 with the playlist trajectory 132 mapped, also illustrating the use of Sub-Genres 127 and 125 to transition the music between Genres 126 and 128 in a smooth manner.
  • the Sub-Genres 127 and 125 are selected using the Genre path finder described below. Although two transitions 127 and 125 between start and end Genres 126, 128 are illustrated, the skilled addressee will recognise that additional transitions with multiple Sub-Genres can be used.
  • Figure 5 illustrates a graph 136 similar to the graphs 120, 130 and 134 of Figures 2, 3 and 4 with the playlist trajectory 132 mapped, also illustrating the breaking up of the playlist trajectory 132 into defined time segments 138.
  • the time segments are used to accommodate a time tracking error model such as a variance compensation numerical method being applied to audio files or time segments 138 along the playlist trajectory 132. This allows the system to sum a collection of audio files to create the playlist for the playlist trajectory 132 and closely match the time period set for the playlist initially so that the time the final audio file of the playlist finishes coincides very closely with the set time period.
  • the segments 138 correspond to a single audio file.
  • a single segment 138 corresponds to a collection of audio files.
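  • One plausible reading of the time tracking error model, sketched below as an assumption rather than the claimed method, is a running-error compensation: each segment 138 has an ideal duration, and the song chosen for a segment is biased towards correcting the accumulated drift, so that the final audio file finishes very close to the requested total time period.

```python
def fill_segments(segment_targets_s, pick_song):
    """Greedy sketch of a time-tracking error model.

    segment_targets_s: ideal duration (seconds) of each time segment 138.
    pick_song(wanted_s): assumed callable returning the best-matching song dict
                         (with a 'duration_s' field) closest in length to wanted_s.
    """
    playlist = []
    time_error_s = 0.0                      # actual elapsed minus ideal elapsed
    for target_s in segment_targets_s:
        wanted_s = target_s - time_error_s  # compensate the drift accumulated so far
        song = pick_song(wanted_s)
        playlist.append(song)
        time_error_s += song["duration_s"] - target_s
    return playlist
```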
  • Figure 6 illustrates a graph 140 similar to the graph 136 of Figure 5 with the time segments 138 crossing the playlist trajectory with steps 142 to allow calculations for proposed audio files that follow each other to be conducted so that the songs are appropriately matched.
  • where the segments 138 comprise a collection of songs, the step 142 is taken at a representative point.
  • the song attribute level and Genre or Sub-Genre at the step 142 are used to calculate appropriate audio files to be placed on the playlist along the playlist trajectory in accordance with the scoring system described below.
  • the steps 142 can serve as reference points to perform calculations for closest audio files as described above.
  • adjusted playlist trajectory profiles can be entered to change the playlist trajectory 132.
  • a user can manipulate the acoustic attributes currently selected during the playing of the playlist. This alters the playlist trajectory in real time to provide the altered playlist trajectory 144.
  • a user can change selected yet-to-be-played, or currently playing, Genres or Sub-Genres to alter the selected songs or the playlist trajectory in real time.
  • Figure 7 illustrates a graph 146 similar to the graph 140 of Figure 6 with such an adjusted playlist trajectory 144.
  • the settings of the song attribute 122 can be altered as desired. This results in a recalculation of the playlist to provide adjusted playlist trajectory 144.
  • This adjusted playlist trajectory 144 uses new audio files to suit the new playlist trajectory 144.
  • the song attribute 122 is lowered but the general progression of the attribute linearly from low to high is continued.
  • Figure 8 illustrates a graph 148 similar to the graph 146 of Figure 7 where the altered trajectory 150 changes the general progression from the playlist trajectory 132 to a downward linear trajectory through revised playlist trajectory 150 from a high song attribute 122 to a lower song attribute.
  • Figure 9 illustrates a plurality of playlist trajectories associated with different song attributes and following different paths.
  • 701 illustrates a song attribute of tempo through beats per minute
  • 704 illustrates a song attribute of energy
  • 703 illustrates a song attribute of the use of vocals
  • 702 illustrates a song attribute of popularity.
  • the path along the song attribute changes throughout the time period of the playlist trajectory for each of the trajectories.
  • a user can have greater control over the dynamics and progression of the playlist trajectory through manipulation of the playlist trajectories 701, 704, 703, 702. The user simply adjusts the nodes for each attribute in intervals for the duration of the playlist.
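  • A minimal sketch of per-attribute node editing follows; the attribute names, node representation and linear interpolation between nodes are assumptions used for illustration, not the patent's data model.

```python
def levels_at(nodes, t):
    """nodes maps an attribute name (e.g. 'bpm', 'energy', 'vocals', 'popularity')
    to a list of (time_s, level) control points set by the user; levels between
    nodes are linearly interpolated. Returns each attribute's target level at t."""
    levels = {}
    for attr, points in nodes.items():
        pts = sorted(points)
        if t <= pts[0][0]:
            levels[attr] = pts[0][1]
        elif t >= pts[-1][0]:
            levels[attr] = pts[-1][1]
        else:
            for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
                if t0 <= t <= t1:
                    levels[attr] = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
                    break
    return levels

# e.g. levels_at({"bpm": [(0, 100), (1800, 128), (3600, 110)],
#                 "energy": [(0, 0.3), (3600, 0.9)]}, t=900)
```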
  • the app also includes a "presets" tab (not shown), which allows users to select predefined settings and Genre paths for a quick start.
  • the presets are fully adjustable but provide users with a simple and quick starting point when time is a factor or where the user requires prompting.
  • Predefined settings may be denoted by a descriptive label to describe a mood or a setting, such as:
  • Referring to FIG. 10, there is shown a flow chart setting out how an audio file is identified, categorised and catalogued for use in a playlist trajectory 132, 701, 702, 703, 704 of the present invention.
  • the system accesses an audio file catalogue resource at step 152 to extract identifying metadata associated with the audio file.
  • the metadata extracted can include, but is not limited to, title, artist, duration, unique song identification (as used in a particular resource), source platform identification, album art URL, preview audio URL, and audio MD5 hash.
  • Examples of the catalogue resource from which the metadata can be extracted include but are not limited to:
  • the metadata may be provided internally within the system.
  • According to the present invention, after the system collects the metadata at step 152, the system progressively aggregates all the data locally in storage within the system and reviews and corrects erroneous information. Erroneous data can include missing critical fields (such as artist, title, duration). This is accomplished by referencing metadata associated with particular audio data across alternative catalogue resources.
  • the accumulation and aggregation step 154 is executed in parallel across one or many physical and virtual computing resources to lower aggregation times.
  • the accumulation and aggregation step 154 caches all the raw data from the source so that the data can be reprocessed easily if required due to changes made at the source of the audio file identification, categorisation and cataloguing.
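  • As an illustrative sketch only, the aggregation and correction of metadata across alternative catalogue resources (and the caching of raw responses for later reprocessing) might look like the following; the resource callables, field names and precedence rule are assumptions.

```python
CRITICAL_FIELDS = ("artist", "title", "duration")

def aggregate_metadata(song_key: str, resources: list, cache: dict) -> dict:
    """Merge metadata for one audio file across catalogue resources.

    resources: callables taking song_key and returning a (possibly incomplete)
               metadata dict; earlier resources take precedence.
    cache:     raw responses are kept so the data can be reprocessed later
               if the source catalogues change.
    """
    merged = {}
    for fetch in resources:
        raw = fetch(song_key)
        cache.setdefault(song_key, []).append(raw)       # keep the raw source data
        for field, value in raw.items():
            merged.setdefault(field, value)              # first resource wins
        if all(merged.get(f) for f in CRITICAL_FIELDS):
            break                                        # nothing critical missing
    missing = [f for f in CRITICAL_FIELDS if not merged.get(f)]
    if missing:
        merged["needs_review"] = missing                 # flag erroneous records
    return merged
```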
  • the system performs acoustic attribution Step 155 to associate one or more attributes and a quantified level of the attribute to an audio file.
  • the system accesses an acoustic attribute database where quantified values are associated to acoustic attributes for audio files.
  • the acoustic attributes can include, but are not limited to the following: tempo, key, mode, harmony, chord usage, meter, melody, timbre, instrumental degree, degree of use of vocal elements, elements of live recording, elements of studio recording or combinations of attributes into a metric such as energy (a high energy song would feel fast, loud, and noisy), how suitable an audio file is for dancing, how much spoken word is in an audio file, use of electronic instruments, use of acoustic instruments etc.
  • Several providers of quantified acoustic attribute data for a song can be used as a source. These include but are not limited to:
  • the quantified acoustic attribute can also be provided internally to the system. Where the quantified acoustic attribute is taken from an external provider, the quantified figure for an acoustic attribute is converted to a format suitable for use with the system. The quantified acoustic attribute/s associated with an audio file is/are stored in the system for use in an appropriate form in first filter step 156.
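  • The conversion of externally sourced quantified figures into the system's internal format is not detailed; a simple sketch, assuming each provider publishes a known value range, is a linear rescaling of each attribute onto a common 0-1 internal scale.

```python
# Assumed provider ranges; real ranges depend on the external attribute source used.
PROVIDER_RANGES = {
    ("provider_a", "tempo"): (0.0, 300.0),     # BPM
    ("provider_a", "energy"): (0.0, 1.0),
    ("provider_b", "energy"): (0.0, 100.0),
}

def to_internal_scale(provider: str, attribute: str, value: float) -> float:
    """Rescale an external quantified acoustic attribute to the system's 0-1 scale."""
    lo, hi = PROVIDER_RANGES[(provider, attribute)]
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)
```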
  • a set of exclusion rules is run at first exclusion step 162 against audio files in the system to remove audio files that do not meet predefined acoustic attribute guidelines. For example, songs with a tempo of 0 or over 400 will be excluded as they most likely have improperly measured tempos. Similarly, songs with a high metric of spoken word might be excluded as they are likely spoken word recordings inappropriate for a proposed use.
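  • Expressed as code, the first exclusion step might look like the sketch below; the 0 and 400 BPM bounds come from the example above, while the spoken-word threshold and field names are assumptions.

```python
def passes_acoustic_exclusions(track: dict, spoken_word_threshold: float = 0.8) -> bool:
    """First exclusion step 162: drop files whose attributes look mismeasured or
    unsuitable (e.g. spoken-word recordings) for a music playlist."""
    tempo = track.get("tempo", 0)
    if tempo <= 0 or tempo > 400:              # almost certainly a mismeasured tempo
        return False
    if track.get("spoken_word", 0.0) > spoken_word_threshold:
        return False                           # likely a spoken-word recording
    return True

# usage: music_only = [t for t in candidate_tracks if passes_acoustic_exclusions(t)]
```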
  • the social attributes can include but are not limited to a measure of how new the audio file is associated with its popularity (a high score for both new and popular), how unexpectedly popular an audio file is (a popular audio track that originates from a source with little to no historical popularity for their audio files would give a high score), how popular the audio file is, the date of the recording of the audio track, a measure of the demographic the audio file is directed to, a measure of the demographic to which an audio file is popular.
  • measures of social attributes can be sourced from external providers or internally. Non-exhaustive examples of external providers of quantified measures of social attributes are:
  • a second filtering step 164 can be used to exclude certain audio files with specific social attributes, or specific quantified levels of one or more social attributes. For example, audio tracks could be excluded where the demographic was identified as being children or people over the age of 70.
  • the system 150 associates a Genre tag with the audio file (Genre tags being keywords and phrases associated with the audio file) at social attribution step 157 through obtaining a Genre tag at Genre tag source step 159 from an external or internal source, processing this obtained tag source and putting it through the Genre assignment system 169.
  • the Genre assignment system is described in more detail below with reference to Figure 11.
  • Genre tags imported through the Genre tag source step 159 can include Genres, Sub-Genres, Hybrid Genres or otherwise and can include but are not limited to pop, urban, electronic, classical, latin, jazz, acid jazz, acid lounge, indian classical, latin metal, lounge pop, electropunk, triphop, synthpop, happy hardcore etc.
  • external sources for Genre tags for step 159 include:
  • LastFM
  • With Tags extracted for an audio file at step 159, the results are filtered at step 160 to place the results in an appropriate format for system 150. Genres are excluded at step 166 where they are deemed not appropriate for a particular use. For example, audio lectures, learning audio recordings, comedy recordings or otherwise might be excluded from a music playlist database.
  • the system of the present invention takes the Tags extracted from external sources and processes them with the Genre assignment system of the present invention at Genre assignment step 169.
  • the Genre assignment system uses a Genre Taxonomy 170 with the Genre Tags from Genre Tag source step 159 for the Genre Assignment System of step 169.
  • FIG. 11 illustrates a flow chart of the Genre Assignment System of Genre Assignment System step 169, as initiated at start step 999.
  • the Genre Assignment System receives the Tags from external Tag step 159 and maps and scores the audio file to finalise the Genre and Sub-Genre Tag to be associated with the audio file for the use of the play list generation system of the present invention.
  • the Genre Assignment System receives Genre Tags from external Tag Source step 159 at steps 180 and 181 and generates an empty score card 182 to be filled in.
  • musical Genre classifications taken from an internally produced index, the Genre Taxonomy 170, are aligned with the Tags imported at step 159 to associate quantified figures, at table map step 186, with each Genre associated with the audio file, according to how the different Genres are seen to influence the audio file in question and the importance each Genre has to the audio file, giving a mapping weight to the audio file.
  • the quantified figures from step 186 are imported into step 185. With the figures in the table of step 185, different scorings at step 187 are associated with different Genres to give some Genres greater weighting than others.
  • the scorings are based on algorithms associated with different Genres etc.
  • One such algorithm will give each Genre a score based on multiplying the influence figure a song has by the importance figure a song has and by the weight a song has.
  • An alternative algorithm will give a scoring based on the mapping weight given to the Genre and adding it to quantified figures imported at step 159.
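  • For illustration, the two scoring rules just described (multiplying an influence figure, an importance figure and a mapping weight, or adding the mapping weight to an imported figure) could be sketched as follows; the field names and the example figures are hypothetical.

```python
def multiplicative_score(influence: float, importance: float, weight: float) -> float:
    """One scoring rule: influence x importance x mapping weight for a Genre."""
    return influence * importance * weight

def additive_score(mapping_weight: float, imported_figure: float) -> float:
    """Alternative rule: the mapping weight added to the quantified figure imported
    with the Genre tag at step 159."""
    return mapping_weight + imported_figure

def fill_score_card(tag_figures: dict) -> dict:
    """Fill an (initially empty) score card: one score per Genre for this audio file.

    tag_figures maps a Genre to its quantified figures, e.g.
    {"pop": {"influence": 0.7, "importance": 0.9, "weight": 1.2}, ...}
    (values here are purely hypothetical).
    """
    return {genre: multiplicative_score(f["influence"], f["importance"], f["weight"])
            for genre, f in tag_figures.items()}
```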
  • the Genre Assignment System is updated at step 188 and the system moves to choose a Genre to be associated with the song at step 191 for the purposes of processing the audio file for a play list trajectory.
  • At rule selection step 191, the system selects a selection criteria rule from those stored in the system in document 190. Using the selected rule, the system then associates a Genre with the audio file at selection step 192 using the selected scoring rule and the quantified figures given to the audio file from step 186.
  • the Genre Assignment System step 169 selects a Genre from selection step 192 for further categorisation into a Sub-Genre.
  • At step 183, for the Genre assignment in step 193, the selected Genre is placed in an empty score card at score card step 182 to be filled in for scoring the Sub-Genre.
  • each Sub-Genre is given a score based on multiplying the influence figure a song has by the importance figure a song has and by the weight a song has.
  • An alternative algorithm will give a scoring based on the mapping weight given to the Sub-Genre and adding it to the quantified figures imported at step 159.
  • the Genre Assignment System is updated at step 198 and the system moves to choose a Genre to be associated with the song at step 199 for the purposes of processing the audio file for a play list trajectory.
  • At rule selection step 199, the system selects a selection criteria rule from those stored in the system in document 990. Using the selected rule, the system then associates a Genre with the audio file at selection step 991 using the selected scoring rule and the quantified figures given to the audio file from step 196.
  • Genre Assignment System Step 169 is then completed at step 992.
  • a non-exclusive example of the quantified figures for different Genres and Sub-Genres associated with a single audio file is:
  • selection criteria are applied at step 190. The selection criteria will depend on the audio files being analysed.
  • the system attaches the decided Genre to the audio file at step 172 where it is also added to the taxonomy 170.
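  • To make the selection step concrete, the sketch below applies a simple "highest score wins" selection criterion to purely hypothetical score-card figures; the actual figures and rules live in the stored rule documents 190 and 990 and are not reproduced here.

```python
# Purely hypothetical score card for one audio file (not figures from the patent).
genre_scores = {"pop": 0.82, "urban": 0.64, "electronic": 0.31}
sub_genre_scores = {"synthpop": 0.77, "electropop": 0.58, "dance pop": 0.44}

def select_by_rule(scores: dict, rule: str = "highest_score") -> str:
    """Apply a stored selection criteria rule to a filled score card."""
    if rule == "highest_score":
        return max(scores, key=scores.get)
    raise ValueError(f"unknown selection rule: {rule}")

decided_genre = select_by_rule(genre_scores)          # -> "pop"
decided_sub_genre = select_by_rule(sub_genre_scores)  # -> "synthpop"
```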
  • the quantified figures and scoring for Genre and Sub-Genre, along with the acoustic attributes and social attributes, are used by the system to place audio files on the graph 120, 130, 134, 136, 140, 148, 146 so that when a trajectory is selected appropriate songs can be linked together based on their scored results to create a smooth playlist.
  • An example of a selected trajectory is as follows, where the upwards profile refers to the acoustic attribute of the vertical axis of graph 120, 130, 134, 136, 140, 148, 146 starting low and progressing upwards.
  • the system will choose audio files to fit with the level of the audio attribute that fits with the Genre or Sub-Genre at the particular point of the playlist trajectory.
  • the Genre or Sub-Genre will be decided by the Genre applied by the selection criteria during the Genre assignment system step.
  • Figures 12 to 16 illustrate possible interfaces that a user can manipulate to interact with the system of the present invention.
  • Referring to FIG. 12, there is shown an exemplary main screen 200 of the app.
  • the main screen 200 provides access to the main functional components of the app.
  • the app uses the above discussed system to create dynamic audio file/music/media playlists based on selected Genres, Sub-Genres and playlist trajectory.
  • the user selects one or more Genres or Sub-Genres from the Genres or Sub-Genres selection area 204, with the results being displayed in the Genres or Sub-Genres indication section 202.
  • the user may also select a desired duration for the playlist by using sliding selector 206.
  • the app connects to one or more databases as discussed above and utilises metadata associated with audio files located in the database to select one or more appropriate elements of media (i.e. music files) for generation into a playlist as discussed above.
  • the Genre or Sub-Genre selection area 204 as illustrated may take alternative forms, such as a linked database, a list or other searchable library.
  • Genre or Sub-Genre indication selection may be displayed in alternative formats and still be within the scope of the present invention.
  • the app is arranged to progress through each of the selected musical Genres or Sub-Genres in the digital libraries to select a plurality of music files across each of the selected Genres or Sub-Genres or through linking Genres or Sub-Genres calculated using the above described methods.
  • the app populates the playlist according to the chosen playlist trajectory.
  • the user creates a 'map' of their preferences within the app through the creation of the above discussed playlist trajectory, which can then access one or more databases (or streaming platforms) or a music software player of their choice, with music that is available as discussed above.
  • This allows the user freedom to create durable playlists and define their music tastes without needing to remember and download individual songs or albums.
  • the user may be able to view the graph 120 and can also see the Genre path that is generated by the app. A user can therefore exclude Genres and also include other Genres of their choice in an intuitive manner.
  • the algorithm is prompted to create a new Genre path.
  • the app provides an in-built music player as shown in Figure 13.
  • a pull down menu 302 which displays the title and artist of the song being played (including an image of the album cover if available), the total time of the song 306, a save button 308 which allows the user to 'tag' the playing song for later access, a jog wheel 310 which indicates the progress through the playlist, a share button 312 (which allows the user to send a link with information about the song to another user), a summary of the Genres selected 314, a settings button 316 and a mute button 318.
  • a music player supplied by an external provider can be used to play the playlist.
  • the screen 300 of Figure 13 provides the user with all the functionality required to operate the playlist and the app as a whole. There is also provided, in one embodiment, a "like” and “dislike” button (or some other equivalents indicating positive or negative reviews of the songs), so that the user can provide feedback on whether they like or dislike a song.
  • the song is identified as a user favourite and is considered in any future playlist generation. Similarly, where a user selects the "dislike” button, the song is excluded from any future playlist.
  • the real time settings screen 400 of Figure 14 is displayed.
  • the real time-settings screen includes a back button 402, a summary of the song information 404, various settings bars which allow the user to vary certain characteristics of the playback (which are described in more detail later).
  • the Genre path created by the app links the selected Genres or Sub-Genres in a logical manner by creating a 'path' of Genres and/or Sub-Genres that are interlinked. This is done in a logical and intuitive manner by linking Genres or Sub-Genres via Sub-Genres, to create a dynamic playlist that progresses from one Genre to another in a way that ameliorates the possibility of adjacent media files 'clashing'.
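  • The Genre path finder itself is not specified beyond linking Genres via Sub-Genres; one straightforward way to sketch it is a breadth-first search over an adjacency map of Genres and Sub-Genres that are considered to link well, as below (the adjacency entries are invented for illustration).

```python
from collections import deque

# Invented example adjacency: which Genres/Sub-Genres are considered good links.
GENRE_LINKS = {
    "pop": {"synthpop", "dance pop"},
    "dance pop": {"pop", "electropop"},
    "synthpop": {"pop", "electropop"},
    "electropop": {"dance pop", "hip hop"},
    "hip hop": {"electropop", "urban"},
    "urban": {"hip hop"},
}

def genre_path(start: str, end: str, links: dict = GENRE_LINKS):
    """Breadth-first search for the shortest chain of linking Genres/Sub-Genres."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # no linking path found

# genre_path("pop", "urban") -> e.g. ["pop", "dance pop", "electropop", "hip hop", "urban"]
```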
  • This feature can be seen with reference to Figure 15 and screenshot 500, where there is shown a main Genre 502, and associated Sub-Genres 504.
  • buttons 506 and 508 may be used to provide feedback to the algorithm and instruct an adjustment of the current playlist trajectory 132. This can be such that the algorithm is instructed to 'play more' of a particular Sub-Genre (button 506) or 'play less' of a particular Sub-Genre (button 508). It can alternatively be an adjustment of the acoustic or social attribute. This allows the user to further customise the playlist to suit their own individual tastes and/or requirements, without needing to select specific media files. In more detail, by selecting a main Genre the user can access Sub-Genres relating to the main Genre.
  • the algorithm uses the Genre which is associated with the user selected artist or song as the starting Genre.
  • the user is prompted to select a total duration for the playlist. The user can select up to 12 hours of playtime (in intervals of 30 minutes). Once the selection has been made and the user selects "Play", the playlist is generated.
  • the term “Genre” is defined broadly, to cover any type of entertainment media that can be classified together due to a common theme.
  • the term “Genre” may encompass a type of music (e.g. techno, rock, pop, country and western, etc.) and/or it may encompass other ways of classifying music, such as by era (e.g. 1960's, 1970's, 1980's, etc.).
  • While the embodiment described herein uses a single parameter to describe a Genre, the broader inventive concept contemplates a more sophisticated and nuanced manner in which to select Genres, such as by selecting more than one setting. For example, there may be provided two controls for the setting of Genre.
  • One of the settings may allow the user to set the Genre according to type (techno, etc. as described previously) and the other setting may allow the user to choose music from certain eras (1960's, 1970's etc.) within the specific Genre selected. For example, the user may select all rock music from the 1980's.
  • Referring to FIG. 16, there is shown an interface with advanced settings screen 600 (which is accessed via screen 400 and advanced settings button 408).
  • the user can vary acoustic and social attributes of the songs selected for the playlist, altering the playlist trajectory as discussed above.
  • These settings provide a measure of easy and intuitive customisation, removing the need for the user to have any intimate knowledge of the Genre of music or of individual songs.
  • the advanced settings allow the user to modify the outcome of the successive songs in the playlist in real time.
  • the BPM range selectable by a user ranges from 30 to 250.
  • the other three settings work on a percentage range of 0-100%. If the user sets the BPM range between 100-105, all the music generated for that playlist (regardless of the Genre) remains within the set boundaries of tempo (beats per minute).
  • access to the advanced settings feature allows the user to adjust the aforementioned parameters to assist in the filtering process, thereby resulting in a more precise playlist trajectory that is truer to what a person really wants to listen to and discover.
  • In addition to being usable by individual users for personal media consumption, the app described herein is useful in crowd and group settings, where there is an entire audience of people. For example, in the hospitality industry (which includes hotels, bars, restaurants, etc.) there may be provided, in an additional embodiment, additional features that allow audience members to interact with the venue's music selection and contribute to the selection of Genres, Sub-Genres, acoustic attributes and social attributes.
  • audience members are able to vote on songs at a venue by using the approving and disapproving indications through voting buttons. Audience members may also be able to use the settings (e.g. BPM) to change the attributes of the playlist and thereby influence or manipulate (as an audience) the songs that are successively selected on the playlist.
  • Audience members may also be able to save the settings that are selected by the venue, to use for their own playlists, or for portions of a specific playlist relating to a specific Genre segment of that playlist.
  • Venues may also be able to promote specials on the app when audience members are logged into the service, such as dinner specials, special events and other deals, and may also be provided with the functionality to save specific functions etc.
Alterations and Modifications to the Embodiments
  • the embodiments described with reference to the figures can be implemented as an Application Programming Interface (API) or as a series of libraries for use by a developer or can be included within another software application, such as a terminal or personal computer operating system or a portable computing device operating system.
  • While program modules include routines, programs, objects, components and data files assisting in the performance of particular functions, the skilled person will understand that the functionality of the software application may be distributed across a number of routines, objects or components to achieve the same functionality desired herein.

Abstract

A system for creating a playlist of a plurality of audio files to transition between a pre-set start point and end point along a selected trajectory, comprising: selecting the start point based on a first music Genre and first song attributes and the end point based on a second music Genre and second song attributes; wherein transitioning from the first music Genre to the second music Genre is through at least one additional Genre; and wherein the at least one predefined additional Genre includes linking features between the first and second music Genres and the first and second song attributes.

Description

SYSTEM AND METHOD FOR DYNAMIC ENTERTAINMENT PLAYLIST
GENERATION
Technical Field/Field of the Invention
[0001] The present invention relates to a system, method, software application and data signal for creating an entertainment package and in particular, to a system, method, software application and data signal that is capable of autonomously choosing one or more pieces of media to create a seamless entertainment package, with a view to providing the entertainment package to an audience for playback.
[0002] The invention has been developed primarily for use by audiences who wish to create a 'playlist' of music from one or more Genres, but do not wish to select individual songs within the Genre. In other words, the embodiment described herein is directed to a "virtual Disc Jockey (DJ)" which is capable of mimicking the changes a DJ makes when playing in front of an audience. This includes the adjustments the DJ makes during a live set to keep the crowd's attention and interaction. However, it will be appreciated that the invention is not limited to this particular field of use.
Background Art
[0003] The following discussion of the background art is intended to facilitate an understanding of the present invention only. The discussion is not an acknowledgement or admission that any of the material referred to is or was part of the common general knowledge as at the priority date of the application.
[0004] A very large amount of entertainment media exists, with millions of new pieces of music, podcasts, TV shows, spoken word recordings, movies, short videos and video clips, and many other types of recorded electronic entertainment content, generated, published, disseminated and consumed every year.
[0005] With the advent of greatly improved mobile telecommunications (including mobile data), Wireless Internet connections, small portable devices such as 'smartphones' and 'tablet computers', it is now possible to purchase, hold and listen to vast amounts of music (songs and compositions) using a smart phone or tablet. Such smartphones and tablet computers (and also conventional computing systems) have also made the composition, generation, recording, dissemination and consumption of music easier. Users are overwhelmed with choice, and ironically, the overwhelming choice has made it difficult for users to construct a playlist of music they enjoy.
[0006] Existing products, such as that described in US Patent 8,258,390, analyse the metadata associated with an audio track, the Genre associated with an audio track, the frequency with which an audio track has been played by a user and the context, and create a feature set based on these analysed attributes. Each track is collected around other tracks that share similar attributes and links to these tracks through the feature set. With little regard to the Genre or Sub-Genre of the collected tracks, there is the risk that ill-suited songs will be linked together through the feature set. As the context used to contribute to the playlist is auto detected, the risk of misconstruing an appropriate feature set is apparent. There is also no scope to adjust the generated playlist to incorporate a new song attribute trajectory. Additionally, there is no scope for setting a playlist that fits very closely to a set time period.
[0007] Other existing systems, such as that described in US Patent Application No. 2005/0235811, create static playlists based on blanket attribution rules. These systems do not consider the importance of Genre and Sub-Genre to how different audio files can have their acoustic attributes characterised. There is also no scope to consider multiple song attributes in generation of the playlist.
[0008] Yet other existing systems, such as that described in US Patent 6,344,607, provide an audio beat matching and mixing invention. These systems are focused on mapping beat profiles of audio files. Audio files following from each other are matched based on beat profiles, with beat profiles being adjusted to seamlessly progress from one audio file to the next. These systems are not concerned with moving between Genres and Sub-Genres and searching for appropriate audio files to suit a progression between Genres or Sub-Genres, instead matching predefined audio files based on beat profiles.
[0009] Yet other existing systems, such as that described in US Patent 6,721,489, provide a system to create and update specific playlists. The creation of specific playlists is based on preprogrammed rules and does not consider adjustable transitions between Genres and Sub-Genres based on a possible range of acoustic or social attributes.
[0010] Yet other existing systems, such as that described in US Patent 7,345,232, provide a system to provide a playlist generated around a user's historical listening and interaction habits. It focuses on the frequency with which an audio file is played. The introduction of Genre or Sub-Genre transitions is not considered. The introduction of rarely played audio files has no relationship to Genre, Sub-Genre, or acoustic or social attributes.
[0011] Yet other existing systems, such as that described in US Patent Application No. 2011/0295843, provide systems for the creation of a playlist that is iteratively generated based on interpreted contexts. There are limitations on the audio files selectable based on set libraries. These systems do not consider transitions between predefined Genres or Sub-Genres linking based on characteristics of audio files.
Summary of Invention
[0012] In a first aspect, the present invention provides a system for creating a playlist of a plurality of audio files to transition between a pre-set start point and end point along a selected trajectory, comprising:
selecting the start point based on a first Genre and first song attributes and the end point based on a second Genre and second song attributes;
wherein transitioning from the first Genre to the second Genre is through at least one additional Genre; and
wherein the at least one predefined additional Genre includes linking features between the first and second Genres and the first and second song attributes.
[0013] Preferably, the transition between different audio files of the first Genre, at least one additional Genre and second Genre is based on a comparison between scores calculated for each different audio file, wherein the score is based on an internal calculation of the Genre of each audio file.
[0014] Preferably, at least one of the first Genre, Second Genre and at least one additional Genre is a Sub-Genre.
[0015] Preferably, the score for each audio file is calculated based on Genre or Sub-Genre specific weighting associated with the audio file.
[0016] Preferably, the score for each audio file is calculated based on attributes extracted from external databases.
[0017] Preferably, a length of the playlist between the pre-set start point and end point is of a selected time period.
[0018] Preferably, the playlist is broken into distinct time periods around one or a group of audio files.
[0019] Preferably, the playlist is fit to the selected time period using a time tracking error model focused on the audio files.
[0020] Preferably, the attributes extracted from external databases are located in metadata.
[0021] Preferably, the transition between different audio files of the first Genre, at least one additional Genre and second Genre is based on a comparison between the song attributes of each audio file.
[0022] Preferably, the song attributes include at least one of tempo, energy, key, mode, harmony, chord usage, meter, melody, timbre, instrumental degree, degree of use of vocal elements, elements of live recording, elements of studio recording, popularity and currency.
[0023] Preferably, the trajectory is plotted as a graph of the song attribute against time.
[0024] Preferably, the shape of the trajectory is selected from one of the following curves: Sine Wave, Downwards, Upwards Plateau then Downwards, Linear Up, Linear Down, Bezier Ease In, Bezier Ease Out.
[0025] Preferably, the start point is based on a first music Sub-Genre and first song attribute and the end point is based on a second music Sub-Genre, wherein transitioning from the first music Sub-Genre to the second music Sub-Genre is through at least one predefined additional Sub-Genre, wherein the at least one predefined additional Sub-Genre includes linking features between the first and second music Sub-Genres and the first and second song attributes.
[0026] In a second embodiment, the present invention provides an entertainment package, comprising: a selection module arranged to provide selection information regarding a user's selection;
wherein a processor receives the selection information and uses an algorithm to access one or more databases to identify one or more elements of media based on the selection information; and
wherein the one or more elements of media are collated for access by a user.
[0027] In one embodiment, the selection information includes the entertainment Genre of the media. In one embodiment, the one or more databases are located remotely of the device and the access to the databases occurs via a communications link.
[0028] In one embodiment, the device may further include a settings module arranged to allow the user to vary one or more settings, wherein the settings are provided to the algorithm to identify the one or more elements of media.
[0029] The Genre may include one or more Sub-Genres. The user may select one or more of the one or more Sub-Genres. The settings may include the beats per minute of the element of media, the energy of the media, the vocality of the element of media and the popularity of the element of media.
[0030] In one embodiment, the advanced settings include varying at least one of the tempo, energy, key, mode, harmony, chord usage, meter, melody, timbre, instrumental degree, degree of use of vocal elements, elements of live recording, elements of studio recording or popularity of the elements of media.
[0031] The device may further include a human machine interface arranged to allow a user of the device to interact with the device.
[0032] The human machine interface may be a touchscreen. The device may be a mobile communications device, such as a smartphone.
Brief Description of the Drawings
[0033] Notwithstanding any other embodiments that may fall within the scope of the present invention, an embodiment of the present invention will now be described, by way of example only, with reference to the accompanying figures, in which: Figure 1 is an example computing system and network that may be utilised to operate a system, method and/or software application in accordance with the present invention;
Figure 2 is an example graph for start and end Genres used with the playlist creation system of the present invention;
Figure 3 is an example graph for the transition plot between start and end Genres used with the playlist creation system of the present invention;
Figure 4 is an example graph for the transition plot between start and end Genres with pre-defined linking Sub-Genres used with the playlist creation system of the present invention;
Figure 5 is the example graph of Figure 4 with the transition plot broken down into segments;
Figure 6 is the example graph of Figure 5 with the segments broken down into defined steps;
Figure 7 is the example graph of Figure 6 with an alternate route for the transition plot;
Figure 8 is the example graph of Figure 6 with an alternate route for the transition plot;
Figure 9 is a graph mapping the trajectory path of a playlist over a plurality of attributes;
Figure 10 is a flow chart for the categorisation of a song for the use of the playlist system of the present invention;
Figure 11 is a flow chart of the classification of a song into a Genre according to the playlist system of the present invention;
Figure 12 is a screenshot displaying a main screen in accordance with an embodiment of the invention;
Figure 13 is a screenshot displaying a playback screen in accordance with an embodiment of the invention;
Figure 14 is a screenshot displaying a real-time settings screen in accordance with an embodiment of the invention;
Figure 15 is a screenshot displaying a Sub-Genres screen in accordance with an embodiment of the invention; and
Figure 16 is a screenshot displaying an advanced settings screen in accordance with an embodiment of the invention.
Description of Preferred/Specific Embodiments
[0034] Referring to Figure 1, an embodiment of the present invention is illustrated. In this example embodiment, the interface and processor are implemented using a portable computing device (such as a smartphone or a tablet computer) having an appropriate user interface. The computing device is appropriately programmed to implement the invention either alone or with the assistance of a networked server.
[0035] Referring to Figure 1 in more detail, there is shown a schematic diagram of a central transfer system which in this embodiment comprises a server 100. The server 100 comprises suitable components necessary to receive, store and execute appropriate computer instructions. The components may include a processing unit 102, read only memory (ROM) 104, random access memory (RAM) 106, and input/output devices such as disk drives (including solid state drives or any other storage technology used depending on the specific hardware/software combination) 108, input devices 110 such as an Ethernet port, a USB port, etc., a display 112 such as a liquid crystal display, a light emitting display or any other suitable display, and communications links 114. The server 100 includes instructions that may be included in ROM 104, RAM 106 or disk drives 108 and may be executed by the processing unit 102. There may be provided a plurality of communication links 114 which may variously connect to one or more computing devices such as a server, personal computers, terminals, wireless or handheld computing devices. At least one of a plurality of communications links may be connected to an external computing network through a wireless link (e.g. satellite), optical fibre, telephone line or other type of communications link.
[0036] The server may include storage devices such as a disk drive 108 which may encompass solid state drives, hard disk drives, optical drives or magnetic tape drives. The server 100 may use a single disk drive or multiple disk drives. The server 100 may also have a suitable operating system 116 which resides on the disk drive or in the ROM of the server 100.
[0037] In the ensuing description, for the sake of clarity, and in the context of the embodiment described, reference will be made to a "user" (the person utilising the device and the software application) and an "audience" (the person or people listening to the output of the software application). However, it will be understood that these identifiers/labels are utilised only for the sake of providing a clear and easily understood example, and no gloss should be taken from these labels to limit the scope of the embodiment, any features of the embodiment, or the broader invention described herein.
[0038] Moreover, where a third party product is referred to in the description, the name of the product is marked with a "™" to denote a brand/mark. Where a brand name is used to describe a product, the intention of the writer is to provide a 'real world' example and again, no gloss should be taken from the use of branded examples to limit the scope of the embodiment, any features of the embodiment, or the broader invention described herein.
[0039] The embodiment described herein is a software application which, in the embodiment described herein, is branded and sold under the name "Muru™" and is an "app" (i.e. a software application that is specifically designed for use on a portable, handheld telecommunications device such as a smart phone or a tablet computing device, such as an Apple iPhone™ or iPad™, or a Google Android™ device such as a Samsung Note 3™). It will be understood that the portable device may communicate utilising any suitable technology and that any reference herein to a Subscriber Identification Module (SIM), 3rd Generation (3G) and 4th Generation (4G) telecommunications networks, WiFi, Bluetooth, NFC, or any other specific hardware or software, is provided for the purposes of illustration only and is not intended to limit the scope of the claimed invention.
[0040] Example interface screen captures of an embodiment of the app are shown in Figures 12 through 16 and are described in more detail hereinbelow.
[0041] It will be understood, however, that the application may also be provided as a "desktop" software application for use on a personal computing device such as a laptop, a notebook computer or a personal computer, or may be provided in any appropriate form, as computing technology evolves. Such variations are within the purview of the person skilled in the art.
[0042] Referring to Figure 2, there is shown a graph 120 of a template for a playlist trajectory over which audio files are laid, with a start musical Genre 126 selected for a start point and a finish musical Genre 128 selected for an end point. The start musical Genre 126 is illustrated as pop and the finish musical Genre 128 is illustrated as urban, as defined by the Genre assignment system described below. The skilled addressee will understand that pop for the start musical Genre 126 and urban for the finish musical Genre 128 are illustrative and that any Genre or Sub-Genre as defined by the Genre assignment system of the present invention could be chosen for the start musical Genre 126 and finish musical Genre 128.
[0043] The vertical axis of the graph 120 is a song attribute 122, such as an acoustic or social attribute, that can be selected by a user of the system. An acoustic attribute can be selected from at least one of tempo, energy (a perceptual measure of the intensity and activity released throughout the track: typically energetic tracks feel fast, loud and noisy; for example, death metal has high energy, while a Bach prelude scores low on the scale; perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate and general entropy), key, mode, harmony, chord usage, meter, melody, timbre, instrumental degree, degree of use of vocal elements, elements of live recording and elements of studio recording. The skilled addressee will understand that acoustic attributes other than those listed above could be used. A user of the system is able to select the level of the acoustic attribute at which the playlist trajectory starts and the level at which it finishes.
[0044] Social attributes can include, but are not limited to, a measure of currency, being how new the audio file is combined with its popularity (a high score for both new and popular), how unexpectedly popular an audio file is (a popular audio track that originates from a source with little to no historical popularity for its audio files would give a high score), how popular the audio file is, the date of the recording of the audio track, a measure of the demographic the audio file is directed to, a measure of the demographic to which an audio file is popular, or other attributes readily understood by the skilled addressee. A user of the system is able to select the level of the social attribute at which the playlist trajectory starts and the level at which it finishes.
[0045] A plurality of predefined trajectories are defined within the system that plot different shapes on the graph 120 to represent different user experiences. Examples of potential predefined trajectories, which are not to be interpreted as limiting, are Linear, Constant, Bezier Curves or other functions such as discontinuous functions, square functions, sawtooth functions or otherwise.
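As a minimal sketch, and not part of the patent disclosure, the predefined trajectory shapes named above can be expressed as functions that map normalised time to a normalised attribute level; the function names and control-point values below are assumptions for illustration only.

    # Sketch of predefined trajectory shapes: each maps normalised time
    # t in [0, 1] to a normalised attribute level in [0, 1].
    def linear(t):
        return t

    def constant(t, level=0.5):
        return level

    def bezier_ease(t, p1=0.2, p2=0.8):
        # One-dimensional cubic Bezier with endpoints fixed at 0 and 1;
        # p1 and p2 shape the ease-in/ease-out of the curve.
        return 3 * (1 - t) ** 2 * t * p1 + 3 * (1 - t) * t ** 2 * p2 + t ** 3

    def sawtooth(t, cycles=3):
        return (t * cycles) % 1.0

    def attribute_at(shape, t, start_level, end_level):
        # Scale a normalised shape to the user's chosen start/end attribute levels.
        return start_level + (end_level - start_level) * shape(t)

For example, attribute_at(bezier_ease, 0.5, 0.2, 0.9) would give the target attribute level halfway along an ease-in/ease-out trajectory running from 0.2 to 0.9.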
[0046] The horizontal axis 124 is time. This allows the user of the system to select a set time period for the playlist trajectory to last for. With this set time period selected, the system assigns a playlist to the playlist trajectory that transitions from the start Genre 126 and initially set song attribute 122 level to the finish Genre 128 and end song attribute 122 level in a smooth way, in accordance with the aspects of the invention described below.
[0047] Figure 3 illustrates a graph 130 similar to the graph 120 of Figure 2 with a playlist trajectory 132 mapped. In the illustrated example, the playlist trajectory 132 has a start point 131 in the pop Genre with a low tempo and ends in the urban Genre with a high tempo.
[0048] Audio files are selected to populate the playlist trajectory 132 according to their calculated match to the song attribute 122 and Genre or Sub-Genre along the playlist trajectory 132. Genres or Sub-Genres that link between selected Genres or Sub-Genres can also be used. Where exact matches to the playlist trajectory cannot be found, the closest audio files can be selected.
[0049] In one embodiment this closeness is calculated as an error measurement between the quantified elements (described below) of the audio files and the playlist trajectory 132.
[0050] In one embodiment vector rays are extended from the playlist trajectory 132 to identify the closest audio files.
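A minimal sketch of the closest-match idea, assuming a weighted squared-error distance over the quantified attributes; the attribute names and weighting scheme are illustrative assumptions, not the measure specified by the patent.

    # Sketch: pick the audio file whose quantified attributes are closest to
    # the trajectory's target values at a given point on the playlist trajectory.
    def attribute_error(song_attrs, target_attrs, weights=None):
        weights = weights or {}
        error = 0.0
        for name, target in target_attrs.items():
            value = song_attrs.get(name, 0.0)
            error += weights.get(name, 1.0) * (value - target) ** 2
        return error

    def closest_song(candidates, target_attrs):
        # candidates: list of dicts, each carrying an "attributes" mapping
        return min(candidates, key=lambda s: attribute_error(s["attributes"], target_attrs))

For example, closest_song(pop_candidates, {"tempo": 0.3, "energy": 0.4}) would return the candidate whose tempo and energy figures sit nearest to that point on the trajectory.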
[0051] Figure 4 illustrates a graph 134 similar to the graphs 120 and 130 of Figures 2 and 3 with the playlist trajectory 132 mapped, also illustrating the use of Sub-Genres 127 and 125 to transition the music between Genres 126 and 128 in a smooth manner. The Sub-Genres 127 and 125 are selected using the Genre path finder described below. Although two transitions 127 and 125 between start and end Genres 126, 128 are illustrated, the skilled addressee will recognise that additional transitions with multiple Sub-Genres can be used.
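The Genre path finder's algorithm is not set out at this point in the specification; as one hedged sketch, linking Sub-Genres between a start and end Genre could be found with a breadth-first search over a Genre adjacency graph. The graph contents and function name below are invented for illustration.

    from collections import deque

    # Illustrative Genre/Sub-Genre adjacency graph (contents are assumptions only).
    GENRE_GRAPH = {
        "Pop": ["Dance Pop", "Teen Pop"],
        "Dance Pop": ["Pop", "Contemporary R&B"],
        "Teen Pop": ["Pop", "Dance Pop"],
        "Contemporary R&B": ["Dance Pop", "Urban"],
        "Urban": ["Contemporary R&B", "Soul", "Funk"],
        "Soul": ["Urban", "Jazz"],
        "Funk": ["Urban", "Jazz"],
        "Jazz": ["Soul", "Funk"],
    }

    def genre_path(start, end, graph=GENRE_GRAPH):
        # Breadth-first search: returns the shortest linking path, or None.
        queue = deque([[start]])
        visited = {start}
        while queue:
            path = queue.popleft()
            if path[-1] == end:
                return path
            for neighbour in graph.get(path[-1], []):
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(path + [neighbour])
        return None

With the assumed graph above, genre_path("Pop", "Jazz") returns a chain such as Pop, Dance Pop, Contemporary R&B, Urban, Soul, Jazz, which mirrors the kind of Top Level Path given later in the Pop-to-Jazz example.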
[0052] Figure 5 illustrates a graph 136 similar to the graph 120, 130 and 134 of Figures 2, 3 and 4 with the playlist trajectory 132 mapped, also illustrating the breaking up of the playlist trajectory 132 into defined time segments 138. The time segments are used to accommodate a time tracking error model such as a variance compensation numerical method being applied to audio files or time segments 138 along the playlist trajectory 132. This allows the system to sum a collection of audio files to create the playlist for the playlist trajectory 132 and closely match the time period set for the playlist initially so that the time the final audio file of the playlist finishes coincides very closely with the set time period.
[0053] In one arrangement the segments 138 correspond to a single audio file. In an alternative arrangement a single segment 138 corresponds to a collection of audio files.
[0054] Figure 6 illustrates a graph 140 similar to the graph 136 of Figure 5 with the time segments 138 crossing the playlist trajectory with steps 142 to allow calculations for proposed audio files that follow each other to be conducted so that the songs are appropriately matched. Where the segments 138 comprise a collection of songs the step 142 is taken at a representative point. The song attribute level and Genre or Sub-Genre at the step 142 are used to calculate appropriate audio files to be placed on the playlist along the playlist trajectory in accordance with the scoring system described below. The steps 142 can serve as reference points to perform calculations for closest audio files as described above.
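The time tracking error model is not detailed in this passage; the sketch below shows one way a variance-compensation idea could work, carrying the running duration error forward from segment to segment so the playlist ends close to the set time period. The function names and the "duration" field are assumptions.

    # Sketch: fill fixed time segments with songs while compensating for the
    # accumulated difference between planned and actual playlist duration.
    def fill_segments(segment_lengths, pick_song):
        # segment_lengths: target durations in seconds for each segment 138
        # pick_song: callable taking a target duration and returning a song dict
        playlist, drift = [], 0.0
        for target in segment_lengths:
            song = pick_song(target - drift)  # ask for a shorter song if running long
            playlist.append(song)
            drift += song["duration"] - target
        return playlist, drift  # drift is how far the end time misses the set period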
[0055] It will be understood that adjusted playlist trajectory profiles can be entered to change the playlist trajectory 132. A user can manipulate the acoustic attributes currently selected during the playing of the playlist. This alters the playlist trajectory in real time to provide the altered playlist trajectory 144. Similarly, a user can change selected yet to be played, or currently playing, Genres or Sub-Genres to alter the selected songs or the playlist trajectory in real time.
[0056] Figure 7 illustrates a graph 146 similar to the graph 140 of Figure 6 with such an adjusted playlist trajectory 144. In situations where the originally plotted playlist trajectory 132 is to be altered, the settings of the song attribute 122 can be altered as desired. This results in a recalculation of the playlist to provide adjusted playlist trajectory 144. This adjusted playlist trajectory 144 uses new audio files to suit the new playlist trajectory 144. In the example shown in Figure 7 the song attribute 122 is lowered but the generally linear progression of the attribute from low to high is continued.
[0057] Figure 8 illustrates a graph 148 similar to the graph 146 of Figure 7 where the altered trajectory 150 changes the general progression from the playlist trajectory 132 to a downward linear trajectory through revised playlist trajectory 150 from a high song attribute 122 to a lower song attribute.
[0058] Figure 9 illustrates a plurality of playlist trajectories associated with different song attributes and following different paths. 701 illustrates a song attribute of tempo through beats per minute, 704 illustrates a song attribute of energy, 703 illustrates a song attribute of the use of vocals and 702 illustrates a song attribute of popularity. The path along the song attribute changes throughout the time period of the playlist trajectory for each of the trajectories.
[0059] A user can have greater control over the dynamics and progression of the playlist trajectory through manipulation of the playlist trajectories 701, 704, 703, 702. The user simply adjusts the nodes for each attribute in intervals for the duration of the playlist.
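One hedged way to represent the per-attribute node adjustment described above is an envelope of (time, level) nodes for each attribute, linearly interpolated between nodes; the attribute names and node values below are assumptions mapped loosely onto items 701 to 704.

    # Sketch: each attribute trajectory is a list of (time_fraction, level) nodes
    # the user can adjust; levels between nodes are linearly interpolated.
    def envelope_value(nodes, t):
        nodes = sorted(nodes)
        if t <= nodes[0][0]:
            return nodes[0][1]
        for (t0, v0), (t1, v1) in zip(nodes, nodes[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        return nodes[-1][1]

    trajectories = {
        "tempo": [(0.0, 0.2), (0.5, 0.6), (1.0, 0.9)],     # cf. item 701
        "energy": [(0.0, 0.3), (1.0, 0.8)],                # cf. item 704
        "vocals": [(0.0, 0.8), (1.0, 0.4)],                # cf. item 703
        "popularity": [(0.0, 0.5), (1.0, 0.5)],            # cf. item 702
    }

    # Target attribute levels halfway through the playlist duration.
    targets = {name: envelope_value(nodes, 0.5) for name, nodes in trajectories.items()}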
[0060] In one embodiment the app also includes a "presets" tab (not shown), which allows users to select predefined settings and Genre paths for a quick start. The presets are fully adjustable but provide users with a simple and quick starting point when time is a factor or where the user requires prompting.
[0061] Predefined settings may be denoted by a descriptive label to describe a mood or a setting, such as:
"Dinner with the folks";
"Sunrise morning jog";
"Kids birthday party". [0062] At Figure 10, there is shown a flow chart setting out how an audio file is identified, categorised and catalogued for use a playlist trajectory 132, 701 , 702, 703, 704 of the present invention. The system accesses an audio file catalogue resource at step 152 to extract identifying metadata associated with the audiofile. The metadata extracted can include, but is not limited to title, artist, duration, unique song identification (as used in a particular resource), source platform identification, album art URL, preview audio URL, Audio MD5 Hash tag. Examples of the catalogue resource from which the metadata can be extracted include but are not limited to:
Spotify,
Deezer,
Musicbrainz,
Senzari,
Gracenote,
Moodagent,
Beats, and
Omnifone.
[0063] The metadata may be provided internally within the system.
[0064] At accumulation and aggregation step 154, after the system has collected the metadata at step 152, it progressively aggregates all the data locally in storage within the system and reviews and corrects erroneous information. Erroneous data can include missing critical fields (such as artist, title or duration). Correction is accomplished by referencing metadata associated with particular audio data across alternative catalogue resources.
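A hedged sketch of this aggregation-and-correction step, merging records for the same track from several catalogue resources and flagging critical fields that remain missing; the field names and example provider records are illustrative assumptions.

    # Sketch: aggregate metadata for one track across catalogue resources and
    # repair missing critical fields by falling back to alternative sources.
    CRITICAL_FIELDS = ("artist", "title", "duration")

    def aggregate_metadata(records):
        # records: dicts from different providers, listed in order of trust
        merged = {}
        for record in records:
            for key, value in record.items():
                if value not in (None, "") and key not in merged:
                    merged[key] = value
        missing = [f for f in CRITICAL_FIELDS if f not in merged]
        return merged, missing  # tracks with unresolved critical fields can be flagged

    primary = {"title": "Example Song", "artist": "", "duration": 215}
    fallback = {"title": "Example Song", "artist": "Example Artist"}
    merged, missing = aggregate_metadata([primary, fallback])  # missing == []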
[0065] In one embodiment, the accumulation and aggregation step 154 is executed in parallel across one or many physical and virtual computing resources to lower aggregation times.
[0066] The accumulation and aggregation step 154 caches all the raw data from the source so that the data can be reprocessed easily if required due to changes made at the source of the audio file identification, categorisation and cataloguing.
[0067] Following the accumulation and aggregation step 154 the system performs acoustic attribution Step 155 to associate one or more attributes and a quantified level of the attribute to an audio file. The system accesses an acoustic attribute database where quantified values are associated to acoustic attributes for audio files. The acoustic attributes can include, but are not limited to the following: tempo, key, mode, harmony, chord usage, meter, melody, timbre, instrumental degree, degree of use of vocal elements, elements of live recording, elements of studio recording or combinations of attributes into a metric such as energy (a high energy song would feel fast, loud, and noisy), how suitable an audio file is for dancing, how much spoken word is in an audio file, use of electronic instruments, use of acoustic instruments etc. Several providers of this quantified acoustic attribute for a song can be used for this data. These include but are not limited to:
The EchoNest,
Acousticbrainz,
Moodagent,
Gracenote,
Senzari.
[0068] The quantified acoustic attribute can also be provided internally to the system. Where the quantified acoustic attribute is taken from an external provider, the quantified figure for an acoustic attribute is converted to a format suitable for use with the system. The quantified acoustic attribute/s associated with an audio file is/are stored in the system for use in an appropriate form in first filter step 156.
[0069] With the metadata and acoustic attribute/s filtered and stored in the system a set of exclusion rules are run at first exclusion step 162 against audio files in the system to remove audio files that do not meet predefined acoustic attribute guidelines. For example songs with a tempo of 0 or over 400 will be excluded as they most likely have improperly measured tempos. Similarly, songs with a high metric of spoken word might be excluded as they are likely spoken word recordings inappropriate for a proposed use.
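The tempo rule in the preceding paragraph maps directly onto a simple rule-based filter; a minimal sketch follows, in which the speechiness threshold and field names are assumptions rather than values given in the specification.

    # Sketch of first-pass exclusion rules over quantified acoustic attributes.
    def passes_acoustic_rules(attrs, max_speechiness=0.9):
        tempo = attrs.get("tempo", 0)
        if tempo <= 0 or tempo > 400:
            return False  # almost certainly a mis-measured tempo
        if attrs.get("speechiness", 0.0) > max_speechiness:
            return False  # likely a spoken-word recording
        return True

    catalogue = [{"tempo": 128, "speechiness": 0.05}, {"tempo": 0, "speechiness": 0.10}]
    kept = [song for song in catalogue if passes_acoustic_rules(song)]  # first song only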
[0070] When the acoustic attributes have been filtered at step 156 the system 150 associates a social attribute with the audio file at social attribution step 157. The social attributes can include but are not limited to a measure of how new the audio file is combined with its popularity (a high score for both new and popular), how unexpectedly popular an audio file is (a popular audio track that originates from a source with little to no historical popularity for its audio files would give a high score), how popular the audio file is, the date of the recording of the audio track, a measure of the demographic the audio file is directed to, and a measure of the demographic to which an audio file is popular. These measures of social attributes can be sourced from external providers or internally. Non-exhaustive examples of external providers of quantified measures of social attributes are:
The EchoNest,
Senzari,
Moodagent,
Gracenote.
[0071] A second filtering step 164 can be used to exclude certain audio files with specific social attributes, or specific quantified levels of one or more social attributes. For example audio tracks could be excluded where the demographic was identified as being children or people over the age of 70.
[0072] When the acoustic attributes have been filtered at step 158 the system 150 associates a Genre tag (Genre tags being keywords and phrases associated with the audio file) with the audio file by obtaining a Genre tag at Genre tag source step 159 from an external or internal source, processing this obtained tag source and putting it through the Genre assignment system 169. The Genre assignment system is described in more detail below under Figure 11. The Genre tags imported through the Genre tag source step 159 can include Genres, Sub-Genres, Hybrid Genres or otherwise and can include but are not limited to pop, urban, electronic, classical, latin, jazz, acid jazz, acid lounge, indian classical, latin metal, lounge pop, electropunk, triphop, synthpop, happy hardcore etc. A non-exhaustive example of the types of Genres and Sub-Genres available through an external source is given following this paragraph. Non-exhaustive examples of external sources for Genre tags for step 159 include:
The EchoNest,
Wiki,
Senzari,
Moodagent,
LastFM.
[0073] With Tags extracted for an audio file at step 159, the results are filtered at step 160 to place the results in an appropriate format for system 150. Genres are excluded at step 166 where they are deemed not appropriate for a particular use. For example, audio lectures, learning audio recordings, comedy recordings or otherwise might be excluded from a music playlist database.
[0074] An example of Tags available through an external source and their labelling within the system for Urban is:
Tag                         Genre    Sub-Genre
1960s SKA                   Urban    Reggae
2 Tone                      Urban    Reggae
Alternative Hip Hop         Urban    Hip Hop
Atlanta Hip Hop             Urban    Hip Hop
Audio Bass                  Urban    Hip Hop
Avanthop                    Urban    Hip Hop
Blue-Eyed Soul              Urban    Soul
Bounce                      Urban    Hip Hop
British Hip Hop             Urban    Hip Hop
British Rhythm and Blues    Urban    Rnb
Brown-Eyed Soul             Urban    Soul
Contemporary R&B            Urban    Contemporary R&B
Country Rap                 Urban    Hip Hop
Country Soul                Urban    Soul
Crunk                       Urban    Hip Hop
Crunkcore                   Urban    Hip Hop
Dancehall                   Urban    Dancehall
Deep Funk                   Urban    Funk
[0075] An example of Tags available through an external source and their labelling within the system for Club is:
[The Club Genre table is not reproduced in the text of the published application; it appears only as image imgf000018_0001.]
[0076] The skilled addressee will recognise that there are many versions of the above Genre table available.
[0077] The system of the present invention takes the Tags extracted from external sources and processes them with the Genre assignment system of the present invention at Genre assignment step 169. The Genre Assignment System uses a Genre Taxonomy 170 together with the Genre Tags from Genre Tag source step 159.
[0078] Referring now to Figure 11, there is illustrated a flow chart of the Genre Assignment System of step 169, initiated at start step 999.
[0079] The Genre Assignment System receives the Tags from external Tag step 159 and maps and scores the audio file to finalise the Genre and Sub-Genre Tag to be associated with the audio file for use by the playlist generation system of the present invention.
[0080] The Genre Assignment System receives Genre Tags from external Tag Source Step 159 at steps 180 and 181 and generates an empty score card 182 to be filled in. At taxonomy step 184, musical Genre classifications taken from an internally produced index (the Genre Taxonomy 170) are aligned with the Tags imported at step 159 to associate quantified figures, in table map step 186, with each Genre associated with the audio file, according to how the different Genres are seen to influence the audio file in question and the importance each Genre has to the audio file, giving a mapping weight to the audio file. The quantified figures from 186 are imported into step 185. With the figures in the table of 185, different scorings at step 187 are associated with different Genres to give some Genres greater weighting than others. The scorings are based on algorithms associated with the different Genres. One such algorithm gives each Genre a score by multiplying an influence figure a song has by the importance figure a song has by the weight a song has. An alternative algorithm gives a scoring based on the mapping weight given to the Genre added to the quantified figures imported at step 159. The Genre Assignment System is updated at step 188 and the system moves to choose a Genre to be associated with the song at step 191 for the purposes of processing the audio file for a playlist trajectory.
[0081] At rule selection step 191 the system selects a selection criteria rule from those stored in the system in document 190. Using the selected rule, the system then associates a Genre with the audio file at selection step 192, using the selected scoring rule and the quantified figures given to the audio file from step 186.
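A hedged sketch of the scorecard idea follows: each imported tag contributes its influence figure multiplied by its importance figure and its mapping weight to the Genre it maps to, and a simple selection rule (highest aggregate score) chooses the Genre. The tag data, weights and the highest-score rule are assumptions for illustration.

    # Sketch: score Genres for one audio file from its imported tags, then apply
    # a selection rule. The taxonomy maps a tag name to (Genre, mapping weight).
    def score_genres(tags, taxonomy):
        scorecard = {}
        for tag in tags:
            genre, weight = taxonomy[tag["name"]]
            score = tag["influence"] * tag["importance"] * weight
            scorecard[genre] = scorecard.get(genre, 0.0) + score
        return scorecard

    def select_genre(scorecard):
        # One possible selection criteria rule: highest aggregate score wins.
        return max(scorecard, key=scorecard.get)

    taxonomy = {"Dancehall": ("Urban", 1.0), "Dance Pop": ("Pop", 0.8)}
    tags = [
        {"name": "Dancehall", "influence": 0.9, "importance": 0.7},
        {"name": "Dance Pop", "influence": 0.4, "importance": 0.5},
    ]
    chosen = select_genre(score_genres(tags, taxonomy))  # "Urban"

The same scorecard pattern can be re-run over Sub-Genre classifications, as described in the following paragraphs.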
[0082] To further break down the audio tracks into Sub-Genres, which assists with mapping appropriate audio files to a plotted trajectory, the Genre Assignment System of step 169 selects a Genre from selection step 192 for further categorisation into a Sub-Genre. As in step 183 for the Genre assignment, in step 193 a selected Genre is placed in an empty score card at score card step 182 to be filled in for scoring the Sub-Genre. At taxonomy step 194, musical Sub-Genre classifications taken from an internally produced index are aligned with the Tags imported at step 159 to associate quantified figures, in table map step 196, with each Genre or Sub-Genre associated with the audio file, according to how the different Sub-Genres are seen to influence the audio file in question and the importance each Sub-Genre has to the audio file, giving a mapping weight to the audio file. The quantified figures from 196 are imported into step 195. With the figures in the table of 195, different scorings at step 197 are associated with different Sub-Genres to give some Sub-Genres greater weighting than others. The scorings are based on algorithms associated with the different Sub-Genres. One such algorithm gives each Sub-Genre a score by multiplying an influence figure a song has by the importance figure a song has by the weight a song has. An alternative algorithm gives a scoring based on the mapping weight given to the Sub-Genre added to the quantified figures imported at step 159. The Genre Assignment System is updated at step 198 and the system moves to choose a Sub-Genre to be associated with the song at step 199 for the purposes of processing the audio file for a playlist trajectory.
[0083] At rule selection step 199 the system selects a selection criteria rule from those stored in the system in document 990. Using the selected rule, the system then associates a Sub-Genre with the audio file at selection step 991, using the selected scoring rule and the quantified figures given to the audio file from step 196.
[0084] The Genre Assignment System step 169 is then completed at step 992.
[0085] A non-exclusive example of the quantified figures for different Genres and Sub-Genres associated with a single audio file is:
[The table of example quantified figures is not reproduced in the text of the published application; it appears only as image imgf000021_0001.]
[0086] To provide the Genre/Genres chosen at step 191 for use in the playlist generation, a selection criterion is applied at step 190. This selection criterion will depend on the audio files being analysed.
[0087] Returning to Figure 10, with the Genre/Genres attached to the audio file in step 169, the system attaches the decided Genre to the audio file at step 172 where it is also added to the taxonomy 170.
[0088] The data associated with the audio file relating to the acoustic attributes, social attributes and Genre are then entered into a system database 172 which has a further index 174 associated with it for searching according to acoustic attributes, social attributes and Genres.
[0089] The quantified figures and scoring for Genre and Sub-Genre, along with the acoustic attributes and social attributes, are used by the system to place audio files on the graph 120, 130, 134, 136, 140, 148, 146 so that when a trajectory is selected appropriate songs can be linked together based on their scored results to create a smooth playlist.
[0090] An example of a selected trajectory is as follows, where the upwards profile refers to the acoustic attribute of the vertical axis of graph 120, 130, 134, 136, 140, 148, 146 starting low and progressing upwards.
Example: User Chooses Pop to Jazz
Top Level Path: Pop - Urban - Jazz
Start At
Pop
Transition from Pop Using
Power Pop, and/or
Teen Pop, and/or
Dance Pop
Transition into Urban Using
Contemporary R&B, and/or
Dancehall
Transition from Urban Using
R&B, and/or
Motown, and/or
Soul, and/or
Funk
Transition into Jazz Using
Free Jazz
Smooth Jazz
End At
Jazz
[0091] The system will choose audio files that fit the level of the audio attribute and the Genre or Sub-Genre at the particular point of the playlist trajectory. The Genre or Sub-Genre will be decided by the Genre applied by the selection criteria during the Genre assignment system step.
[0092] The skilled addressee will recognise that many alternatives exist, such as Sine Wave, Downwards, Upwards Plateau then Downwards, Linear Up, Linear Down, Bezier Ease In, Bezier Ease Out, Custom Bezier Curve, Discontinuous Functions, user defined Bezier Curves etc.
[0093] Figures 12 to 16 illustrate possible interfaces that a user can manipulate to interact with the system of the present invention. At Figure 12, there is shown an exemplary main screen 200 of the app. The main screen 200 provides access to the main functional components of the app. The app uses the above discussed system to create dynamic audio file/music/media playlists based on selected Genres, Sub-Genres and playlist trajectory. As can be seen in Figure 12, the user selects one or more Genres or Sub-Genres from the Genres or Sub-Genres selection area 204, with the results being displayed in the Genres or Sub-Genres indication section 202. The user may also select a desired duration for the playlist by using sliding selector 206. Once the user has selected Genres and a playlist duration, the app connects to one or more databases as discussed above and utilises metadata associated with audio files located in the database to select one or more appropriate elements of media (i.e. music files) for generation into a playlist as discussed above.
[0094] The skilled addressee will recognise that the Genre or Sub-Genre selection area 204 as illustrated may take alternative forms, such as a linked database, a list or other searchable library. Similarly, the Genre or Sub-Genre indication section may be displayed in alternative formats and still be within the scope of the present invention.
[0095] Where a user selects more than one Genre or Sub-Genre of music, the app is arranged to progress through each of the selected musical Genres or Sub-Genres in the digital libraries to select a plurality of music files across each of the selected Genres or Sub-Genres or through linking Genres or Sub-Genres calculated using the above described methods. Using the rules described above the app populates the playlist according to the chosen playlist trajectory.
[0096] The user creates a 'map' of their preferences within the app through the creation of the above discussed playlist trajectory, which can then access one or more databases (or streaming platforms) or music software player, of their choice, with music that is available as discussed above. This in turn allows the user freedom to create durable playlists and define their music tastes without needing to remember and download individual songs or albums. In one embodiment, the user may be able to view the graph 120 and can also see the Genre path that is generated by the app. A user can therefore exclude Genres and also include other Genres of their choice in an intuitive manner. When the user selects a different Genre or Sub-Genre as discussed above, the algorithm is prompted to create a new Genre path.
[0097] Once the app has generated a playlist, the app provides an in-built music player as shown in Figure 13. At screen 300, there is provided a pull down menu 302, a display section 304 which displays the title and artist of the song being played (including an image of the album cover if available), the total time of the song 306, a save button 308 which allows the user to 'tag' the playing song for later access, a jog wheel 310 which indicates the progress through the playlist, a share button 312 (which allows the user to send a link with information about the song to another user), a summary of the Genres selected 314, a settings button 316 and a mute button 318. [0098] In an alternative embodiment, once the playlist has been generated a music player supplied by an external provider can be used to play the playlist.
[0099] The screen 300 of Figure 13 provides the user with all the functionality required to operate the playlist and the app as a whole. There is also provided, in one embodiment, a "like" and "dislike" button (or some other equivalents indicating positive or negative reviews of the songs), so that the user can provide feedback on whether they like or dislike a song.
[00100] In one embodiment, where a user selects the "like" button, the song is identified as a user favourite and is considered in any future playlist generation. Similarly, where a user selects the "dislike" button, the song is excluded from any future playlist.
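One hedged way to wire this feedback into future playlist generation is a small preference store consulted when candidates are gathered; the class and field names below are assumptions.

    # Sketch: record "like"/"dislike" feedback and apply it to later generation.
    class FeedbackStore:
        def __init__(self):
            self.liked, self.disliked = set(), set()

        def like(self, song_id):
            self.liked.add(song_id)
            self.disliked.discard(song_id)

        def dislike(self, song_id):
            self.disliked.add(song_id)
            self.liked.discard(song_id)

        def filter_candidates(self, candidates):
            # Disliked songs are dropped; liked songs could be boosted upstream.
            return [s for s in candidates if s["id"] not in self.disliked]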
[00101] If the user selects the settings button 316, the real-time settings screen 400 of Figure 14 is displayed. The real-time settings screen includes a back button 402, a summary of the song information 404, various settings bars which allow the user to vary certain characteristics of the playback (described in more detail later), an advanced settings button 408, a save button 410, a jog wheel 412 which indicates the progress of the playlist, a share button 414, a summary of the Genres selected 416, a settings button 418, and a mute button 420.
[00102] In one embodiment, the Genre path created by the app links the selected Genres or Sub-Genres in a logical manner by creating a 'path' of Genres and/or Sub-Genres that are interlinked. This is done in a logical and intuitive manner by linking Genres or Sub-Genres via Sub-Genres, to create a dynamic playlist that progresses from one Genre to another in a way that ameliorates the possibility of adjacent media files 'clashing'. This feature can be seen with reference to Figure 15 and screenshot 500, where there is shown a main Genre 502, and associated Sub-Genres 504. A user, depending on their personal preference, may use buttons 506 and 508 to provide feedback to the algorithm and instruct an adjustment of the current playlist trajectory 132. This can be such that the algorithm is instructed to 'play more' of a particular Sub-Genre (button 506) or 'play less' of a particular Sub-Genre (button 508). It can alternatively be an adjustment of the acoustic or social attribute. This allows the user to further customise the playlist to suit their own individual tastes and/or requirements, without needing to select specific media files.
[00103] In more detail, by selecting a main Genre the user can access Sub-Genres relating to the main Genre. If a user does not know which Genre to select, they can use the search function to select an artist or a specific song (as all artists and songs are classified under a particularly preferred Genre, the algorithm uses the Genre which is associated with the user selected artist or song as the starting Genre). Once the user has made a selection, the user is prompted to select a total duration for the playlist. The user can select up to 12 hours of playtime (in intervals of 30 minutes). Once the selection has been made and the user selects "Play", the playlist is generated.
[00104] It will be understood that in the context of the present specification, the term "Genre" is defined broadly, to cover any type of entertainment media that can be classified together due to a common theme. For example, the term "Genre" may encompass a type of music (e.g. techno, rock, pop, country and western, etc.) and/or it may encompass other ways of classifying music, such as by era (e.g. 1960's, 1970's, 1980's, etc.). It will also be understood that while the embodiment described herein uses a single parameter to describe a Genre, the broader inventive concept contemplates a more sophisticated and nuanced manner in which to select Genres, such as by selecting more than one setting. For example, there may be provided two controls for the setting of Genre. One of the settings may allow the user to set the Genre according to type (techno, etc. as described previously) and the other setting may allow the user to choose music from certain era's (1960's, 1970's etc.) within the specific Genre selected. For example, the user may select all rock music from the 1980's.
[00105] Referring now to Figure 16, there is shown an interface with advanced settings screen 600 (which is accessed via screen 400 and advanced settings button 408). In the advanced settings screen 600, the user can vary acoustic and social attributes of the songs selected for the playlist, altering the playlist trajectory as discussed above. These settings provide a measure of easy and intuitive customisation, removing the need for the user to have any intimate knowledge of the Genre of music or of individual songs.
[00106] In more detail, the advanced settings allow the user to modify the outcome of the successive songs in the playlist in real time. The BPM range selectable by a user ranges from 30 to 250. The other three settings work on a percentage range of 0-100%. If the user sets the BPM range between 100-105, all the music generated for that playlist (regardless of the Genre) remains within the set boundaries of tempo (beats per minute).
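The BPM boundary described above amounts to a hard window applied to every candidate in real time; a minimal sketch follows, with the field name "bpm" assumed.

    # Sketch: keep every generated song inside the user's BPM window, regardless
    # of Genre, while the percentage settings act as softer preferences.
    def within_bpm_window(song, low=100, high=105):
        return low <= song.get("bpm", 0) <= high

    candidates = [{"title": "A", "bpm": 102}, {"title": "B", "bpm": 140}]
    playable = [s for s in candidates if within_bpm_window(s)]  # only "A" survives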
[00107] The skilled addressee will recognise that alternative ranges are possible and that shape templates can be used to overlay onto the acoustic or social attributes.
[00108] Access to the advanced settings feature allows the user to adjust the aforementioned parameters to assist in the filtering process, resulting in a playlist trajectory that more precisely and truly reflects what a person really wants to listen to and discover.
[00109] The skilled addressee will readily recognise that alternative acoustic and social attributes can be manipulated in these advanced setting features and still be within the scope of the present invention.
Alternative Embodiment
[00110] In addition to being usable by individual users for personal media consumption, the app described herein is useful in crowd and group settings, where there is an entire audience of people. For example, in the hospitality industry (which includes hotels, bars, restaurants, etc.) there may be provided, in an additional embodiment, additional features that allow audience members to interact with the venue's music selection and contribute to the selection of Genres, Sub-Genres, acoustic attributes and social attributes.
[00111] Through the use of social media integration (e.g. an app that allows audience members to connect with the app of the invention), audience members are able to vote on songs at a venue by using the approving and disapproving indications through voting buttons. Audience members may also be able to use the settings (e.g. BPM) to change the attributes of the playlist and thereby influence or manipulate (as an audience) the songs that are successively selected on the playlist.
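As a hedged sketch of this audience-influence idea, individual votes on settings such as BPM or energy could be aggregated and blended into the venue's current attribute targets; the blending rule and names below are assumptions, not a method stated in the specification.

    # Sketch: fold audience votes into the venue's live attribute targets.
    def apply_audience_votes(current_targets, votes, blend=0.3):
        # votes: dicts such as {"bpm": 120, "energy": 0.8} from audience members
        updated = dict(current_targets)
        for name in current_targets:
            voted = [v[name] for v in votes if name in v]
            if voted:
                audience_avg = sum(voted) / len(voted)
                updated[name] = (1 - blend) * current_targets[name] + blend * audience_avg
        return updated

    new_targets = apply_audience_votes({"bpm": 110, "energy": 0.5},
                                       [{"bpm": 125}, {"bpm": 120, "energy": 0.7}])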
[00112] Audience members may also be able to save the settings that are selected by the venue, to use for their own playlists, or portions of a specific playlist relating to a specific Genre segment of that playlist.
[00113] Venues may also be able to promote specials on the app, when audience members are logged into the service, such as dinner specials, special events, deals on at home and may also be provided with the functionality to save specific functions etc.
Alterations and Modifications to the Embodiments
[00114] It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention as shown in the specific embodiments without departing from the spirit or scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.
[00115] Although not required, the embodiments described with reference to the figures can be implemented as an Application Programming Interface (API) or as a series of libraries for use by a developer or can be included within another software application, such as a terminal or personal computer operating system or a portable computing device operating system. Generally, as program modules include routines, programs, objects, components and data files assisting in the performance of particular functions, the skilled person will understand that the functionality of the software application may be distributed across a number of routines, objects or components to achieve the same functionality desired herein.
[00116] It will also be appreciated that where the methods and systems of the present invention are either wholly implemented by a computing system or partly implemented by computing systems then any appropriate computing system architecture may be utilised. This will include stand-alone computers, network computers and dedicated hardware devices, such as programmable arrays. Where the terms "computing system" and "computing device" are used, these terms are intended to cover any appropriate arrangement of computer hardware capable of implementing the function described.
[00117] It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention as shown in the specific embodiments without departing from the spirit or scope of the invention as broadly described. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.

Claims

CLAIMS:
1. A system for creating a playlist of a plurality of audio files to transition between a pre-set start point and end point along a selected trajectory, comprising:
selecting the start point based on a first music Genre and first song attributes and the end point based on a second music Genre and second song attributes;
wherein transitioning from the first music Genre to the second music Genre is through at least one additional Genre; and
wherein the at least one predefined additional Genre includes linking features between the first and second music Genres and the first and second song attributes.
2. The system as claimed in Claim 1, wherein the transition between different audio files of the first Genre, at least one Sub-Genre and second Genre is based on a comparison between scores calculated for each different audio file, wherein the score is based on an internal calculation of the Genre of each audio file.
3. The system as claimed in Claim 2, wherein at least one of the first Genre, Second Genre and at least one additional Genre is a Sub-Genre.
4. The system as claimed in Claim 2 or 3, wherein the score for each audio file is calculated based on Genre or Sub-Genre specific weighting associated with the audio file.
5. The system as claimed in Claim 2, 3 or 4, wherein the score for each audio file is calculated based on attributes extracted from external databases.
6. The system as claimed in Claim 5, wherein the attributes extracted from external databases are located in metadata.
7. The system as claimed in any one of the preceding claims, wherein the transition between different audio files of the first Genre, at least one Sub-Genre and second Genre is based on a comparison between the song attributes of each audio file.
8. The system as claimed in Claim 7, wherein the song attributes include at least one of tempo, energy, key, mode, harmony, chord usage, meter, melody, timbre, instrumental degree, degree of use of vocal elements, elements of live recording, elements of studio recording, popularity and currency.
9. The system as claimed in any one of the preceding claims, wherein the trajectory is plotted as a graph of the song attribute against time.
10. The system as claimed in Claim 9, wherein the shape of the trajectory is selected from one of the following curves, Sine Wave, Downwards, Upwards Plateau then Downwards, Linear Up, Linear Down, Bezier Ease In, Bezier Ease Out.
11. The system as claimed in any one of the preceding claims, wherein a length of the playlist between the pre-set start point and end point is of a selected time period.
12. The system as claimed in Claim 11, wherein the playlist is broken into distinct time periods around one or a group of audio files.
13. The system as claimed in Claim 11 or 12, wherein the playlist is fit to the selected time period using a time tracking error model focused on the audio files.
14. A device for creating an entertainment package, comprising:
a selection module arranged to provide selection information regarding a user's selection;
wherein a processor receives the selection information and uses an algorithm to access one or more databases to identify one or more elements of media based on the selection information; and
wherein the one or more elements of media are collated for access by a user.
15. A device in accordance with Claim 14, wherein the selection information includes the entertainment Genre of the media.
16. A device in accordance with Claim 15, wherein the one or more databases are located remotely of the device and the access to the databases occurs via a communications link.
17. A device in accordance with Claim 15 or 16, further including a settings module arranged to allow the user to vary one or more settings, wherein the settings are provided to the algorithm to identify the one or more elements of media.
18. A device in accordance with any one of Claims 15 to 17, wherein the Genre includes one or more Sub-Genres.
19. A device in accordance with Claim 18, wherein the user may select one or more of the one or more Sub-Genres.
20. A device in accordance with Claim 19, wherein the settings include the beats per minute of the element of media, the vocality of the element of media, the energy of the media and the popularity of the element of media.
21. A device in accordance with Claim 20, wherein the advanced settings include varying at least one of the beats per minute of the elements of media, the vocality of the elements of media, the energy of the elements of media and the popularity of the elements of media.
22. A device in accordance with any one of the preceding claims, further including a human machine interface arranged to allow a user of the device to interact with the device.
23. A device in accordance with Claim 22, wherein the human machine interface is a touchscreen.
24. A device in accordance with any one of the preceding claims, where the device is a mobile communications device, such as a smartphone.
PCT/AU2015/000303 2014-05-23 2015-05-22 System and method for dynamic entertainment playlist generation WO2015176116A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2014901950A AU2014901950A0 (en) 2014-05-23 A system, method, software application and data signal for creating an entertainment package
AU2014901950 2014-05-23

Publications (1)

Publication Number Publication Date
WO2015176116A1 true WO2015176116A1 (en) 2015-11-26

Family

ID=54553101

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2015/000303 WO2015176116A1 (en) 2014-05-23 2015-05-22 System and method for dynamic entertainment playlist generation

Country Status (1)

Country Link
WO (1) WO2015176116A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10116981B2 (en) 2016-08-01 2018-10-30 Microsoft Technology Licensing, Llc Video management system for generating video segment playlist using enhanced segmented videos

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6993532B1 (en) * 2001-05-30 2006-01-31 Microsoft Corporation Auto playlist generator
US8258390B1 (en) * 2011-03-30 2012-09-04 Google Inc. System and method for dynamic, feature-based playlist generation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6993532B1 (en) * 2001-05-30 2006-01-31 Microsoft Corporation Auto playlist generator
US8258390B1 (en) * 2011-03-30 2012-09-04 Google Inc. System and method for dynamic, feature-based playlist generation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FLEXER A. ET AL.: "Playlist Generation using Start and End Songs.", ISMIR, 2008, pages 173 - 178, XP055237825 *
FRANK J.: "Collaborative Music Consumption Through Mobile Devices", TU WEIN FACULTY OF INFORMATICS, 22 March 2010 (2010-03-22), pages 28 - 35, Retrieved from the Internet <URL:http://publik.tuwien.ac.at/files/PubDat_191154.pdf> *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10116981B2 (en) 2016-08-01 2018-10-30 Microsoft Technology Licensing, Llc Video management system for generating video segment playlist using enhanced segmented videos

Similar Documents

Publication Publication Date Title
US20230103954A1 (en) Selection of media based on edge values specifying node relationships
US20230418872A1 (en) Media content item recommendation system
US20170300567A1 (en) Media content items sequencing
US20160147876A1 (en) Systems and methods for customized music selection and distribution
US20160147501A1 (en) Systems and methods for customized music selection and distribution
US20120023403A1 (en) System and method for dynamic generation of individualized playlists according to user selection of musical features
US11669296B2 (en) Computerized systems and methods for hosting and dynamically generating and providing customized media and media experiences
US20090138457A1 (en) Grouping and weighting media categories with time periods
US20210303612A1 (en) Identifying media content
TW201424360A (en) Personalized media stations
TW200805129A (en) Information processing apparatus, method and program
US11775580B2 (en) Playlist preview
US20180197158A1 (en) Methods and Systems for Purposeful Playlist Music Selection or Purposeful Purchase List Music Selection
US11874888B2 (en) Systems and methods for recommending collaborative content
US20210034661A1 (en) Systems and methods for recommending collaborative content
US20140122606A1 (en) Information processing device, information processing method, and program
JP2013003685A (en) Information processing device, information processing method and program
WO2015176116A1 (en) System and method for dynamic entertainment playlist generation
EP3798865A1 (en) Methods and systems for organizing music tracks
US11960536B2 (en) Methods and systems for organizing music tracks
JP5834514B2 (en) Information processing apparatus, information processing system, information processing method, and program
Jan APPLYING CONTENT-BASED RECOMMENDATION TO PERSONAL ITUNES MUSIC LIBRARIES

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15795346

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15795346

Country of ref document: EP

Kind code of ref document: A1
