US20070127886A1 - Recording medium and method and apparatus for decoding text subtitle streams - Google Patents


Info

Publication number
US20070127886A1
US20070127886A1 (application US 11/633,027)
Authority
US
United States
Prior art keywords
region
style
text
text subtitle
user control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/633,027
Inventor
Kang Seo
Jea Yoo
Sung Park
Young Shim
Byung Kim
Seung Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020040017935A external-priority patent/KR20050092836A/en
Application filed by Individual filed Critical Individual
Priority to US11/633,027 priority Critical patent/US20070127886A1/en
Publication of US20070127886A1 publication Critical patent/US20070127886A1/en
Legal status: Abandoned (current)

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105 Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G11B20/12 Formatting, e.g. arrangement of data block or words on the record carriers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/426 Internal components of the client; Characteristics thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/426 Internal components of the client; Characteristics thereof
    • H04N21/42646 Internal components of the client; Characteristics thereof for reading from or writing on a non-volatile solid state storage medium, e.g. DVD, CD-ROM
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432 Content retrieval operation from a local storage medium, e.g. hard-disk
    • H04N21/4325 Content retrieval operation from a local storage medium, e.g. hard-disk by playing back content from the storage medium
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434 Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/488 Data services, e.g. news ticker
    • H04N21/4884 Data services, e.g. news ticker for displaying subtitles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/78 Television signal recording using magnetic recording
    • H04N5/781 Television signal recording using magnetic recording on disks or drums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/8042 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • H04N9/8233 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal the additional signal being a character code signal
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00 Record carriers by type
    • G11B2220/20 Disc-shaped record carriers
    • G11B2220/25 Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
    • G11B2220/2537 Optical discs
    • G11B2220/2541 Blu-ray discs; Blue laser DVR discs

Definitions

  • the present invention relates to a recording medium and a method and apparatus for decoding a text subtitle stream recorded on a recording medium.
  • Optical discs are widely used as an optical recording medium for recording mass data.
  • Among them, a new high-density digital video disc (hereinafter referred to as “HD-DVD”), such as the Blu-ray Disc (hereinafter referred to as “BD”), is under development for writing and storing high-definition video and audio data.
  • Currently, global standard technical specifications of the Blu-ray Disc (BD), which is known to be the next-generation HD-DVD technology, are being established as a next-generation optical recording solution able to store data significantly surpassing the conventional DVD, along with many other digital apparatuses.
  • optical reproducing apparatuses having the Blu-ray Disc (BD) standards applied thereto are also being developed.
  • However, since the Blu-ray Disc (BD) standards are yet to be completed, there have been many difficulties in developing a complete optical reproducing apparatus.
  • In particular, not only should the main AV data, as well as various data required for a user's convenience (such as subtitle information as the supplementary data related to the main AV data), be provided, but also the managing information for reproducing the main data and the subtitle data recorded on the optical disc should be systemized and provided.
  • the present invention is directed to a text subtitle decoder and a method for decoding text subtitle streams recorded on a recording medium that substantially obviates one or more problems due to limitations and disadvantages of the related art.
  • An object of the present invention is to provide a recording medium including a dialog style segment defining a set of user control styles, each of which is able to change at least one of region presentation properties specified by a region style.
  • Another object of the present invention is to provide a method and an apparatus for decoding a text subtitle stream by using a user control style which changes at least one of the region presentation properties specified by a region style.
  • a recording medium includes a data area storing at least one text subtitle stream, each of which includes a dialog style segment defining a set of region styles to be applied to at least one region of dialog text.
  • Each text subtitle stream may further include at least one dialog presentation segment, each of which contains at least one region of dialog text and is linked to at least one of the set of region styles.
  • the dialog style segment further defines a set of user control styles for each region style, where each user control style is selectable and is configured to change at least one of region presentation properties specified by a corresponding region style.
  • each user control style may specify a direction and a magnitude of a change in at least one of a region horizontal position, a region vertical position, a text horizontal position, a text vertical position, a line space, and a font size, all of which are specified in the corresponding region style.
  • a subtitle loading buffer loads the text subtitle stream, which includes a dialog style segment defining a set of region styles and at least one dialog presentation segment.
  • Each dialog presentation segment contains at least one region of dialog text and is linked to at least one of the set of region styles.
  • the dialog style segment further defines a set of user control styles for each region, where each user control style is selectable and is configured to change at least one of region presentation properties specified by a corresponding region style.
  • a text subtitle decoder is able to decode each dialog presentation segment using the linked region style and one of the set of user control styles defined in the dialog style segment.
  • Each user control style may specify a direction and a magnitude of a change in the region presentation properties specified by the corresponding region style.
  • the region presentation properties include at least one of a region horizontal position, a region vertical position, a text horizontal position, a text vertical position, a line space, and a font size, which are specified in the corresponding region style.
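The selectable user control styles described above each carry a direction and a magnitude of change for the region presentation properties. A minimal sketch of how a player might apply such a style to a region style; all field names are invented for illustration and are not taken from the BD specification:

```python
from dataclasses import dataclass, replace

@dataclass
class RegionStyle:
    # The six changeable region presentation properties named above.
    region_horizontal_position: int
    region_vertical_position: int
    text_horizontal_position: int
    text_vertical_position: int
    line_space: int
    font_size: int

@dataclass
class UserControlStyle:
    # Each delta encodes both the direction (sign) and magnitude of the change.
    d_region_h: int = 0
    d_region_v: int = 0
    d_text_h: int = 0
    d_text_v: int = 0
    d_line_space: int = 0
    d_font_size: int = 0

def apply_user_style(base: RegionStyle, user: UserControlStyle) -> RegionStyle:
    """Return a new region style with the user-selected changes applied."""
    return replace(
        base,
        region_horizontal_position=base.region_horizontal_position + user.d_region_h,
        region_vertical_position=base.region_vertical_position + user.d_region_v,
        text_horizontal_position=base.text_horizontal_position + user.d_text_h,
        text_vertical_position=base.text_vertical_position + user.d_text_v,
        line_space=base.line_space + user.d_line_space,
        font_size=base.font_size + user.d_font_size,
    )

base = RegionStyle(100, 800, 10, 10, 4, 32)
larger = UserControlStyle(d_font_size=+8, d_line_space=+2)  # a "larger text" option
styled = apply_user_style(base, larger)
print(styled.font_size, styled.line_space)  # 40 6
```

The author-defined region style stays untouched; the user selection only produces a derived style for presentation.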
  • FIG. 1 illustrates a file structure of data files recorded on an optical disc according to an example of the present invention
  • FIG. 2 illustrates data storage areas of an optical disc according to an example of the present invention
  • FIG. 3 illustrates a text subtitle and a main image presented on a display screen when a text subtitle stream and a main AV stream are reproduced
  • FIG. 4 is a schematic diagram illustrating reproduction control of a main AV clip and text subtitle clips by a PlayList
  • FIG. 5A illustrates a dialog presented on a display screen according to an example of the present invention
  • FIG. 5B illustrates regions of a dialog according to an example of the present invention
  • FIG. 6A illustrates presentations of text subtitle dialogs on a display screen in presentation time stamp (PTS) intervals
  • FIG. 7A illustrates a text subtitle stream file according to an example of the present invention
  • FIG. 7B illustrates specific information contained within a DPU and a DSU included in a text subtitle stream according to an example of the present invention
  • FIG. 8 illustrates a syntax for a text subtitle stream according to an example of the present invention
  • FIG. 9B illustrates a syntax for a dialog style set included in a dialog style unit according to an example of the present invention
  • FIG. 9C illustrates a syntax for a user changeable style set included in a dialog style set according to an example of the present invention
  • FIG. 10 illustrates an example of the apparatus for decoding main AV streams and text subtitle streams according to the present invention.
  • FIG. 11 illustrates an example of the method for decoding a text subtitle stream recorded on an optical disc according to the present invention.
  • main data represent audio/video (AV) data that belong to a title (e.g., a movie title) recorded in an optical disc by an author.
  • the AV data are recorded in MPEG2 format and are often referred to as AV streams or main AV streams.
  • supplementary data represent all other data required for reproducing the main data, examples of which are text subtitle streams, interactive graphic streams, presentation graphic streams, and supplementary audio streams (e.g., for a browsable slideshow).
  • These supplementary data streams may be recorded in MPEG2 format or in any other data format. They could be multiplexed with the AV streams or could exist as independent data files within the optical disc.
  • a subtitle represents caption information corresponding to video (image) data being reproduced, and it may be represented in a predetermined language. For example, when a user selects an option for viewing one of a plurality of subtitles represented in various languages while viewing images on a display screen, the caption information corresponding to the selected subtitle is displayed on a predetermined portion of the display screen. If the displayed caption information is text data (e.g., characters), the selected subtitle is often called a text subtitle.
  • a plurality of text subtitle streams in MPEG2 format may be recorded in an optical disc, and they may exist as a plurality of independent stream files. Each text subtitle stream file includes text data for a text subtitle and reproduction control data required for reproduction of the text data.
  • only a single text subtitle stream in MPEG2 format may be recorded in an optical disc.
  • FIG. 1 illustrates a file structure of data files recorded on an optical disc, an example of which is a Blu-ray disc (hereinafter “BD”), according to the present invention.
  • As illustrated in FIG. 1, at least one BD directory (BDMV) is included in a root directory (root).
  • Each BD directory includes an index file (index.bdmv) and an object file (MovieObject.bdmv), which are used for interacting with one or more users.
  • the index file may contain data representing an index table having a plurality of selectable menus and movie titles.
  • Each BD directory further includes four file directories that include audio/video (AV) data to be reproduced and various data required for reproduction of the AV data.
  • the file directories included in each BD directory are a stream directory (STREAM), a clip information directory (CLIPINF), a playlist directory (PLAYLIST), and an auxiliary data directory (AUX DATA).
  • the stream directory (STREAM) includes audio/video (AV) stream files having a particular data format.
  • the AV stream files may be in the form of MPEG2 transport packets and be named as “*.m2ts”, as shown in FIG. 1 .
  • the stream directory may further include one or more text subtitle stream files, where each text subtitle stream file includes text (e.g., characters) data for a text subtitle represented in a particular language and reproduction control information of the text data.
  • the playlist directory includes one or more PlayList files (*.mpls), where each PlayList file includes at least one PlayItem which designates at least one main AV clip and the reproduction time of the main AV clip. More specifically, a PlayItem contains information designating In-Time and Out-Time, which represent reproduction begin and end times for a main AV clip designated by Clip_Information_File_Name within the PlayItem. Therefore, a PlayList file represents the basic reproduction control information for one or more main AV clips. In addition, the PlayList file may further include a SubPlayItem, which represents the basic reproduction control information for a text subtitle stream file.
  • the main function of a SubPlayItem is to control reproduction of one or more text subtitle stream files.
  • the auxiliary data directory (AUX DATA) may include supplementary data stream files, examples of which are font files (e.g., *.font or *.otf), pop-up menu files (not illustrated), and sound files (e.g., Sound.bdmv) for generating click sounds.
  • FIG. 2 illustrates data storage areas of an optical disc according to the present invention.
  • the optical disc includes a file system information area occupying the inner-most portion of the disc volume, a stream area occupying the outer-most portion of the disc volume, and a database area between the file system information area and the stream area.
  • In the file system information area, system information for managing the entire set of data files shown in FIG. 1 is stored.
  • AV streams and one or more text subtitle streams are stored in the stream area.
  • the general files, PlayList files, and clip information files shown in FIG. 1 are stored in the database area of the disc volume.
  • the general files include an index file and an object file
  • the PlayList files and clip information files include information required to reproduce the AV streams and the text subtitle streams stored in the stream area.
  • FIG. 3 illustrates a text subtitle and a main image presented on a display screen when a text subtitle stream and a main AV stream are reproduced.
  • the main image and the text subtitle are simultaneously displayed on the display screen when a main AV stream and a corresponding text subtitle stream are reproduced in synchronization.
  • FIG. 4 is a schematic diagram illustrating reproduction control of a main AV clip and text subtitle clips by a PlayList.
  • a PlayList file includes at least one PlayItem controlling reproduction of at least one main AV clip and a SubPlayItem controlling reproduction of a plurality of text subtitle clips.
  • One of text subtitle clip 1 and text subtitle clip 2 shown in FIG. 4 for English and Korean text subtitles may be synchronized with the main AV clip such that a main image and a corresponding text subtitle are displayed on a display screen simultaneously at a particular presentation time.
  • In order to display a text subtitle on a display screen, display control information (e.g., position and size information) and presentation time information, examples of which are illustrated in FIG. 5A to FIG. 5C, are required.
  • FIG. 5B illustrates regions of a dialog according to the present invention.
  • a region represents a divided portion of text subtitle data (dialog) displayed on a display screen during a given presentation time.
  • a dialog includes at least one region, and each region may include at least one line of subtitle text.
  • the entire text subtitle data representing a region may be displayed on the display screen according to a region style (global style) assigned to the region.
  • the maximum number of regions included in a dialog should be determined based on a desired decoding rate of the subtitle data, because a greater number of regions generally results in a lower decoding rate. For example, the maximum number of regions for a dialog may be limited to two in order to achieve a reasonably high decoding rate. However, the maximum number could be greater than two for other purposes.
  • Region style information defines a region style (global style) which is applied to an entire region of a dialog.
  • the region style information may contain at least one of a region position, region size, font color, background color, text flow, text alignment, line space, font name, font style, and font size of the region.
  • two different region styles are applied to Region 1 and Region 2, as shown in FIG. 5C.
  • a region style with position 1, size 1, and a blue background color is applied to Region 1, and
  • a different region style with position 2, size 2, and a red background color is applied to Region 2.
  • inline style information defines an inline style (local style) which is applied to a particular portion of text strings included in a region.
  • the inline style information may contain at least one of a font type, font size, font style, and font color.
  • the particular portion of text strings may be an entire text line within a region or a particular portion of the text line.
  • a particular inline style is applied to the text portion “mountain” included in Region 1 .
  • at least one of the font type, font size, font style, and font color of the particular portion of text strings is different from the remaining portion of the text strings within Region 1 .
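The relationship between a global (region) style and a local (inline) style described above can be sketched as a simple override: the inline style replaces only the font properties it defines, while everything else falls back to the region style. Property names are illustrative:

```python
# Global style applied to the entire region.
region_style = {"font_name": "Serif", "font_size": 32,
                "font_style": "normal", "font_color": "white"}
# Local style applied only to a particular text portion (e.g., "mountain").
inline_style = {"font_style": "italic"}

def effective_style(region, inline):
    # The inline (local) style overrides only the properties it defines;
    # all other properties fall back to the region (global) style.
    merged = dict(region)
    merged.update(inline)
    return merged

print(effective_style(region_style, inline_style)["font_style"])  # italic
print(effective_style(region_style, inline_style)["font_size"])   # 32
```

Text outside the styled portion keeps the unmodified region style.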
  • FIG. 6A illustrates presentations of text subtitle dialogs on a display screen in presentation time stamp (PTS) intervals.
  • Information defining a dialog includes dialog presentation time information and dialog text data including style information and text strings to be displayed within each region of the dialog.
  • An example of the presentation time information is a set of a presentation start time (PTS start) and a presentation end time (PTS end), and the style information includes the region (global) style information and inline (local) style information described above. FIG. 6A shows that different style information sets may be applied to the dialogs.
  • FIG. 6B illustrates continuities between text subtitle dialogs being presented on a display screen in PTS intervals.
  • the presentation end time of Dialog #1 is identical to the presentation start time of Dialog #2. Therefore, a continuity exists between Dialog #1 and Dialog #2.
  • Display of Text #1 in a region of Dialog #1 is continuous with display of Text #1 in Region 1 of Dialog #2.
  • The PTS intervals of both dialogs are continuous, and the same style information (region and inline) is used when presenting Text #1 in both regions.
  • Similarly, another continuity exists between Dialog #2 and Dialog #3 because display of Text #2 in Region 2 of Dialog #2 is continuous with display of Text #2 in a region of Dialog #3.
  • In order to present two dialogs continuously, the presentation times (PTS intervals) of the dialogs must be continuous.
  • In addition, the same region and inline style information must be used when presenting the same text in the respective regions.
  • When either condition is not satisfied, for example when the PTS intervals are not continuous, the dialogs are not continuous.
  • An indicator (e.g., continuous_presentation_flag) may be included in the presentation information of a current dialog to indicate whether the dialog is continuous with a previous dialog.
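The two continuity conditions above can be sketched as a check a player might perform when deciding whether a dialog continues its predecessor; the dictionary layout is invented for illustration:

```python
def dialogs_continuous(prev, curr):
    """True if curr continues prev: contiguous PTS intervals and the
    same text presented with the same (region and inline) style."""
    if prev["pts_end"] != curr["pts_start"]:
        return False  # PTS intervals are not continuous
    return prev["text"] == curr["text"] and prev["style"] == curr["style"]

d1 = {"pts_start": 0,   "pts_end": 90,  "text": "Text #1", "style": "style_a"}
d2 = {"pts_start": 90,  "pts_end": 180, "text": "Text #1", "style": "style_a"}
d3 = {"pts_start": 200, "pts_end": 300, "text": "Text #1", "style": "style_a"}

print(dialogs_continuous(d1, d2))  # True
print(dialogs_continuous(d2, d3))  # False: gap in the PTS intervals
```

The result of such a check is what a continuous_presentation_flag in the current dialog's presentation information would record.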
  • a DSU is also often referred to as a dialog style segment (DSS). All the remaining PES packets correspond to dialog presentation units (DPUs), each of which includes presentation information for a dialog having at least one region, and dialog text data including a region style indicator, inline style information, and text strings for each region. A DPU is also often referred to as a dialog presentation segment (DPS).
  • FIG. 7B illustrates specific information contained within a DPU and a DSU included in a text subtitle stream according to the present invention.
  • a DSU contains information sets defining a group of region styles, each of which is applied to a corresponding region of a dialog.
  • a DPU contains dialog text data and dialog presentation information for a dialog.
  • the dialog text data includes text strings to be included in each region of the dialog, inline style information to be applied to a particular portion of the text strings, and a region style identifier indicating a region style to be applied to each dialog region.
  • the region style identifier identifies one of the group of region styles defined in the DSU.
  • the dialog presentation information includes presentation time information and palette (color) update information for a dialog.
  • All the data included in a text subtitle stream may be classified into three types of data based on their basic functions.
  • the data could be classified into dialog text data, composition information, and rendering information, as shown in FIG. 7B .
  • the dialog text data include text string(s), inline style information, and a region style identifier for each region of a dialog.
  • the composition information includes presentation time information, examples of which are presentation start and end times, position information for a dialog region, and palette update information for a dialog.
  • the rendering information includes information required for rendering the text strings to graphic data for presentation. Referring to FIG.
  • the horizontal and vertical positions of each region included in the DSU are part of the composition information, and the region width, region height, font color, background color, text flow, text alignment, line space, font name, font style, and font size included in the DSU represent the rendering information.
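The three-way classification above (dialog text data, composition information, rendering information) can be sketched as a lookup; the field names follow the description rather than any on-disc syntax:

```python
COMPOSITION_INFO = {
    "presentation_start_time", "presentation_end_time",
    "region_horizontal_position", "region_vertical_position",
    "palette_update_info",
}
RENDERING_INFO = {
    "region_width", "region_height", "font_color", "background_color",
    "text_flow", "text_alignment", "line_space", "font_name",
    "font_style", "font_size",
}
DIALOG_TEXT_DATA = {"text_strings", "inline_style_info", "region_style_id"}

def classify(field):
    """Return which functional group a text subtitle data field belongs to."""
    for name, group in [("composition", COMPOSITION_INFO),
                        ("rendering", RENDERING_INFO),
                        ("dialog text", DIALOG_TEXT_DATA)]:
        if field in group:
            return name
    return "unknown"

print(classify("region_width"))         # rendering
print(classify("palette_update_info"))  # composition
```

This partition mirrors FIG. 7B: composition information says where and when a dialog appears, rendering information says how its text is drawn.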
  • a DSU includes a set of region style information (a dialog style set) defining a limited number of author-defined region styles.
  • For example, the maximum number of region styles defined in a DSU may be limited to 60, and the region styles may be identified by their region style identifications (region_style_id). Therefore, an author stores a DSU defining only a limited number of region styles on an optical disc.
  • the region styles are used by a disc player when reproducing text subtitle streams recorded on the optical disc.
  • the disc player may use other region styles defined by an additional set of style information, which may be provided from another source.
  • An example of the source is a local data storage included in the disc player.
  • the subtitle regions reproduced from the text subtitle streams recorded on the optical disc can have a variety of region styles.
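How a player might combine the at-most-60 author-defined region styles from the DSU with player-defined styles from local storage can be sketched as follows; the names, the 60-style cap as a hard error, and the disc-takes-precedence rule are assumptions for illustration:

```python
MAX_DISC_STYLES = 60  # example cap on author-defined region styles in a DSU

def build_style_table(disc_styles, player_styles, player_style_flag):
    """Merge author-defined region styles (keyed by region_style_id) with
    player styles from local storage, if the author permitted them."""
    if len(disc_styles) > MAX_DISC_STYLES:
        raise ValueError("a DSU may define at most 60 region styles")
    table = dict(disc_styles)
    if player_style_flag:  # author permitted player-generated styles
        for style_id, style in player_styles.items():
            table.setdefault(style_id, style)  # disc styles take precedence
    return table

disc = {0: {"font_size": 32}, 1: {"font_size": 40}}
local = {100: {"font_size": 48}}  # hypothetical style from local data storage
table = build_style_table(disc, local, player_style_flag=True)
print(sorted(table))  # [0, 1, 100]
```

With the flag cleared, only the author-defined styles survive, so the disc author retains full control over presentation.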
  • FIG. 8 illustrates a syntax for a text subtitle stream (Text_subtitle_stream ( )) according to an example of the present invention.
  • the text subtitle stream syntax includes a syntax for a dialog style unit (dialog_style_unit ( )) including a set of information defining a set of region styles, respectively, and syntaxes for a plurality of dialog presentation units (dialog_presentation_unit ( )), where each DPU syntax includes dialog presentation information and at least one region of dialog text.
  • Each region of dialog text includes a region style identifier, one or more text strings, and inline style information, and the region style identifier identifies one of the set of region styles defined in the DSU syntax.
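The top-level stream syntax just described (one dialog style unit followed by dialog presentation units whose region style identifiers resolve against the DSU) can be sketched as a hypothetical parse. The segment layout here is invented for illustration; the actual stream is carried in MPEG2 PES packets:

```python
def parse_text_subtitle_stream(segments):
    """Split a text subtitle stream into its DSU and DPUs, checking that
    every region style identifier resolves to a style defined in the DSU."""
    if not segments or segments[0]["type"] != "DSU":
        raise ValueError("stream must begin with a dialog style unit")
    dsu = segments[0]
    dpus = [s for s in segments[1:] if s["type"] == "DPU"]
    for dpu in dpus:
        for region in dpu["regions"]:
            # Each region of dialog text links to one of the DSU's styles.
            if region["region_style_id"] not in dsu["region_styles"]:
                raise ValueError("unresolved region_style_id")
    return dsu, dpus

stream = [
    {"type": "DSU", "region_styles": {0: {}, 1: {}}},
    {"type": "DPU", "regions": [{"region_style_id": 0, "text": "Hello"}]},
]
dsu, dpus = parse_text_subtitle_stream(stream)
print(len(dpus))  # 1
```

A real decoder would additionally parse the inline style information and text strings inside each region; this sketch only shows the DSU-to-DPU linkage.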
  • FIG. 9A illustrates the syntax for a dialog style unit (dialog_style_unit ( )) included in the text subtitle stream syntax shown in FIG. 8 .
  • the dialog style unit syntax includes a syntax for a dialog style set (dialog_styleset ( )) in which a set of author-defined region styles are defined.
  • FIG. 9B illustrates the syntax for a dialog style set (dialog_styleset ( )) included in the dialog style unit syntax shown in FIG. 9A .
  • the dialog style set syntax includes a set of region style information defining a set of region styles (region_style ( )), respectively, and a data field or flag (player_style_flag) indicating whether the author permitted a player to generate its own set of styles (player styles) for a text subtitle in addition to the set of author-defined styles defined in region_style ( ).
  • the dialog style set syntax further includes a syntax for a user-changeable style set (user_changeable_styleset ( )) defining a set of user control styles.
  • region style identifications are assigned to the set of region styles (region_style ( )), respectively, and each region style information represents global style information to be applied to an entire portion of a region of dialog text.
  • the region style identifier included in a DPU for each region includes one of the region style identifications. Therefore, a region style corresponding to the region style identifier is applied when reproducing at least one region of dialog text contained in each DPU.
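The link described above between a DPU's region style identifier and the styles defined in the DSU can be sketched as a simple lookup. The dict-based representation and the function name below are illustrative assumptions for clarity; the actual stream encodes these structures in binary syntax.

```python
def resolve_region_style(dsu_region_styles, region_style_id_ref):
    """Sketch of the DPU-to-DSU link: each region of dialog text in a DPU
    carries a region style identifier that selects one of the region styles
    defined in the DSU. The dict representation here is illustrative only."""
    by_id = {s["region_style_id"]: s for s in dsu_region_styles}
    return by_id[region_style_id_ref]

# Hypothetical DSU with two author-defined region styles.
dsu_styles = [{"region_style_id": 0, "font_size": 32},
              {"region_style_id": 1, "font_size": 40}]
print(resolve_region_style(dsu_styles, 1))  # -> {'region_style_id': 1, 'font_size': 40}
```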
  • a region horizontal position specifies the horizontal address of the top left pixel of a region in a graphics plane
  • a region vertical position specifies the vertical address of the top left pixel of the region in the graphics plane.
  • a region width specifies the horizontal length of the region rectangle from the region horizontal position
  • a region height specifies the vertical length of the region rectangle from the region vertical position.
  • a region background color index specifies an index value indicating the background color of the region.
  • a text horizontal position specifies the horizontal address of an origin of text in the region
  • a text vertical position specifies the vertical address of the text origin in the region.
  • a text flow specifies at least one of character progression (left-to-right or right-to-left) and line progression (top-to-bottom or bottom-to-top) in the region.
  • a text alignment specifies alignment (left, center, or right) of rendered text in the region.
  • a line space specifies the distance between two adjacent lines of text in the region.
  • a font identification indicates the font identification specified in a clip information file.
  • a font style specifies the style of font for the text in the region, examples of which are normal, bold, italic, and bold and italic.
  • a font size specifies the size of font for the text in the region, an example of which is the vertical size of a character in units of pixels.
  • a font color index (font_color_index) specifies an index value indicating the color of the text in the region.
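Collected together, the global region style fields listed above can be sketched as a single record. The Python representation and concrete types below are assumptions for illustration; the field names mirror the identifiers described for FIG. 9B, while the actual stream encodes them as bit fields in region_style ( ).

```python
from dataclasses import dataclass

@dataclass
class RegionStyle:
    """Illustrative record of the global style fields described for FIG. 9B;
    names mirror the described identifiers, and the types are assumed."""
    region_style_id: int
    region_horizontal_position: int  # x of the region's top-left pixel in the graphics plane
    region_vertical_position: int    # y of the region's top-left pixel in the graphics plane
    region_width: int                # horizontal length from region_horizontal_position
    region_height: int               # vertical length from region_vertical_position
    region_bg_color_index: int       # index of the region background color
    text_horizontal_position: int    # x of the text origin within the region
    text_vertical_position: int      # y of the text origin within the region
    text_flow: str                   # e.g. "left-to-right" and/or "top-to-bottom"
    text_alignment: str              # "left", "center", or "right"
    line_space: int                  # distance between two adjacent lines of text
    font_id: int                     # font identification from a clip information file
    font_style: str                  # "normal", "bold", "italic", or "bold and italic"
    font_size: int                   # vertical character size in pixels
    font_color_index: int            # index of the text color

# Hypothetical example values.
style = RegionStyle(0, 100, 400, 600, 100, 0, 10, 10,
                    "left-to-right", "center", 40, 0, "normal", 32, 1)
```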
  • the player style flag (player_style_flag) shown in FIG. 9B indicates whether an author permitted a disc player to generate and/or use its own set of styles (player styles), which may be pre-stored in a local data storage of the disc player, for a text subtitle in addition to the author-defined region styles defined in an optical disc. For example, if the value of the player style flag is set to 1b, the author permits the player to generate and/or use its own set of player styles. On the other hand, if the value of the player style flag is set to 0b, the author prohibits the player from generating and/or using the set of player styles.
  • a set of user control styles are defined for each region style having a region style ID, and user style IDs (user_style_id) are assigned to the set of user control styles, respectively.
  • the maximum number of the user control styles defined for each region style may be limited to 25. Since the maximum number of the region styles defined in a dialog style set is limited to 60, the total number of the user changeable styles defined in a DSU must be less than or equal to 1500 (25×60).
  • a user control style may include a region horizontal position direction (region_horizontal_position_direction) specifying the direction of the region horizontal position's horizontal movement and a region horizontal position delta (region_horizontal_position_delta) specifying the magnitude of the horizontal movement in units of pixels.
  • the horizontal movement may be in a right direction if the horizontal position direction is set to 0 and may be in a left direction if it is set to 1.
  • a user control style may include a region vertical position direction (region_vertical_position_direction) specifying the direction of the region vertical position's vertical movement and a region vertical position delta (region_vertical_position_delta) specifying the magnitude of the vertical movement in units of pixels.
  • the vertical movement may be in a downward direction if the vertical position direction is set to 0 and may be in an upward direction if it is set to 1.
  • a user control style may include a font size change direction (font_size_inc_dec) specifying the direction of the font size change, and a font size delta (font_size_delta) specifying the magnitude of the font size change in units of pixels.
  • the font size may be increased if font_size_inc_dec is set to 0 and may be decreased if it is set to 1.
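The direction-plus-delta scheme in the preceding items can be illustrated with a short sketch: each direction bit selects a sign (0 for right/down/increase, 1 for left/up/decrease), and the delta gives the magnitude in pixels. The dict keys below follow the identifiers described for FIG. 9C, but the function itself is a hypothetical illustration, not the actual decoder logic.

```python
def apply_user_control_style(region_h, region_v, font_size, ucs):
    """Apply the direction/delta changes of a user control style (given here
    as a dict keyed by the FIG. 9C identifiers) to the original region style
    values. Direction 0 means right/down/increase; 1 means left/up/decrease."""
    h_sign = 1 if ucs["region_horizontal_position_direction"] == 0 else -1
    v_sign = 1 if ucs["region_vertical_position_direction"] == 0 else -1
    f_sign = 1 if ucs["font_size_inc_dec"] == 0 else -1
    return (region_h + h_sign * ucs["region_horizontal_position_delta"],
            region_v + v_sign * ucs["region_vertical_position_delta"],
            font_size + f_sign * ucs["font_size_delta"])

# Hypothetical user control style: move left 20 px, down 10 px, enlarge font by 4 px.
ucs = {"region_horizontal_position_direction": 1, "region_horizontal_position_delta": 20,
       "region_vertical_position_direction": 0, "region_vertical_position_delta": 10,
       "font_size_inc_dec": 0, "font_size_delta": 4}
print(apply_user_control_style(100, 400, 32, ucs))  # -> (80, 410, 36)
```

Note how the actual property value is obtained by applying the delta and direction to the original value defined in the region style, rather than being stored directly.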
  • Some of the characteristic features of the user changeable style set according to the present invention are as follows. First, a set of user control styles is defined for each of the set of region styles defined in a dialog style unit, and the number of user control styles in each set is fixed. Therefore, the numbers of user control styles defined for any two different region styles are identical, and the number of user control styles available when reproducing each region of dialog text is fixed. Second, the set of user control styles are identified by different user style IDs, respectively. Third, all the changes in the region presentation properties are defined in combination by a single user control style. For example, the region horizontal position and the font size are not changed separately by two distinct user control styles; they are changed in combination by a single user control style. Fourth, a change of a certain property is represented by its direction and magnitude rather than by an actual property value. The actual property value may be obtained by applying the magnitude (delta) and direction of the change to the original property value defined in a region style.
  • each text subtitle stream includes a DSU defining a set of dialog styles and a plurality of DPUs.
  • the set of region styles have different region style IDs.
  • the DSU further defines a set of user control styles for each region style, where the user control styles have different user style IDs.
  • Each user control style is configured to change at least one of the author-defined region presentation properties which are specified by a corresponding region style.
  • the dialog style set includes a player style flag indicating whether the author permitted a player to generate and/or use its own set of player styles for a text subtitle in addition to the author-defined style set.
  • the apparatus includes a packet identifier (PID) filter 5 for separating input streams into video streams, audio streams, graphic streams, and text subtitle streams based on their packet identifiers, a video decoding part 20 for decoding the video streams, an audio decoding part 10 for decoding the audio streams, a graphic decoding part 30 for decoding the graphic streams, and a text subtitle decoding part 40 for decoding the text subtitle streams.
  • the audio decoding part 10 , video decoding part 20 , and graphic decoding part 30 include transport buffers 11 , 21 , and 31 , respectively, for storing stream data to be decoded.
  • a video plane (VP) 23 and a graphic plane 33 are included in the video decoding part 20 and the graphic decoding part 30 , respectively, for converting decoded signals into displayable video and graphic images.
  • the graphic decoding part 30 includes a color look up table (CLUT) 34 for controlling color and transparency levels of the displayable graphic images.
  • When the text subtitle decoding part 40 receives a text subtitle stream supporting a single language from the switch 6 , an entire portion of the text subtitle stream may be preloaded into a subtitle preloading buffer (SPB) 41 at once. Alternatively, when there is more than one text subtitle stream for supporting multiple languages, all the text subtitle streams may be preloaded into the SPB 41 at once. Therefore, the size of the SPB 41 should be determined based on the total number of text subtitle stream files received from the switch 6 . For example, the size of the SPB 41 should be greater than or equal to 0.5 megabytes for preloading a 0.5 megabyte text subtitle stream file.
  • the size of the SPB 41 should be greater than or equal to 1 megabyte.
  • the size of the SPB 41 should be large enough to preload all the required text subtitle stream files at once.
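The buffer-sizing requirement above reduces to a simple sum: because every available text subtitle stream file is preloaded at once, the SPB must be at least as large as the files' total size. The sketch below is a hypothetical illustration; the function name and byte-level arithmetic are assumptions, not part of the described apparatus.

```python
def required_spb_size(stream_file_sizes_bytes):
    """Minimum subtitle preloading buffer (SPB) size in bytes: all text
    subtitle stream files are preloaded at once, so the buffer must hold
    their total size. (Illustrative only.)"""
    return sum(stream_file_sizes_bytes)

HALF_MB = 512 * 1024
print(required_spb_size([HALF_MB]))           # one 0.5 MB stream file -> 0.5 MB
print(required_spb_size([HALF_MB, HALF_MB]))  # two files (two languages) -> 1 MB
```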
  • the text subtitle decoding part 40 shown in FIG. 10 further includes a font preloading buffer (FPB) 410 for storing all the associated font files which may be included in the auxiliary data directory shown in FIG. 1 .
  • the size of the FPB 410 should be large enough to preload all the required font files at once in order to ensure seamless presentation of a text subtitle supporting one or more languages. Since all the available text subtitle stream files and related font files are preloaded, extraction and use of the preloaded data can be done in a simple manner. Also, control of the SPB 41 and the FPB 410 can be quite simple for this reason.
  • the text subtitle decoding part 40 further includes a text subtitle decoder 42 which decodes each text subtitle stream stored in the SPB 41 , a graphic plane 43 in which the decoded subtitle data are composed as displayable subtitle images, and a color look up table (CLUT) 44 controlling at least one of color and transparency levels of the converted subtitle images.
  • the text subtitle decoding part 40 further includes a local data storage 45 which stores a player style set defining a set of player styles to be selectively used when reproducing a text subtitle stream preloaded in the SPB 41 .
  • the local data storage 45 may further store a user changeable set specifying a set of user control styles to be selectively used when reproducing the text subtitle stream. This user changeable set may be similar to the user changeable set included in a DSU, an example of which is shown in FIG. 9C .
  • each player style may be configured to change at least one of region presentation properties which are initially defined by a region style defined in a DSU.
  • a player style may specify a direction and a magnitude of a change in a region horizontal position defined in the region style.
  • the player style set is similar to the user changeable set, an example of which is illustrated in FIG. 9C .
  • FIG. 11 illustrates a method of decoding a text subtitle stream recorded on an optical disc according to an example of the present invention.
  • After the text subtitle decoder 42 starts reproducing a text subtitle stream preloaded into the SPB 41 , it initially reads player_style_flag included in a DSU to determine whether the use of a player style set stored in the local data storage 45 is permitted (S110). For example, if player_style_flag is set to 0b, use of the player style set is not permitted. In this case, the text subtitle decoder 42 must use the author-defined region styles recorded on the optical disc (S111).
  • On the other hand, if player_style_flag is set to 1b, the text subtitle decoder 42 is permitted to use the player style set stored in the local data storage 45 . The text subtitle decoder 42 then independently determines whether to use any one of the set of player styles defined in the player style set (S112). For example, the text subtitle decoder 42 may compare the set of player styles with the region styles defined in the text subtitle stream and use the result of this comparison for the determination of step S112. If the set of player styles are not determined to be used in step S112, the region styles recorded on the optical disc are used (S111). On the other hand, if the set of player styles are determined to be used in step S112, the text subtitle decoder 42 may use them independently or in combination with the set of region styles recorded on the disc.
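The decision flow of steps S110 to S112 can be sketched as follows. The function and parameter names are hypothetical; in particular, use_player_styles stands in for whatever comparison logic the decoder applies in step S112, which the description leaves to the player.

```python
def select_style_set(player_style_flag, disc_region_styles, local_player_styles,
                     use_player_styles):
    """Hypothetical sketch of FIG. 11 (steps S110-S112): decide which style
    set the text subtitle decoder uses for reproduction."""
    if player_style_flag == 0:
        # S110 -> S111: the author prohibits player styles, so the decoder
        # must use the author-defined region styles recorded on the disc.
        return disc_region_styles
    if local_player_styles and use_player_styles:
        # S112: the decoder opts to use its own player styles (a real decoder
        # might also combine them with the disc's region styles).
        return local_player_styles
    return disc_region_styles  # S111: fall back to the author-defined styles

print(select_style_set(1, ["author_style"], ["player_style"], True))
```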
  • the text subtitle decoder 42 may use a region style identified by a region style identifier included in the DPU. If a user wishes to change this region style, he or she may input a command for changing the region style.
  • using a set of user control styles defined by a user-changeable style set in a DSU, at least one of the region horizontal position, the region vertical position, and the font size may be changed.
  • the apparatus shown in FIG. 10 further includes an image superimposition part 50 which superimposes the images outputted from the video decoding part 20 , the graphic decoding part 30 , and the text subtitle decoding part 40 . These combined images are displayed on a display screen, as shown in FIG. 3 .
  • the video images outputted from the VP 23 of the video decoding part 20 may be displayed as a background of the display screen, and the images outputted from the graphic decoding part 30 and/or text subtitle decoding part 40 may be superimposed over the video images in a predetermined order.
  • when the output images of the graphic decoding part 30 are presentation graphic images, these images may be initially superimposed over the video images by a first adder 52 , and the text subtitle images from the text subtitle decoding part 40 may then be superimposed over the video images by a second adder 53 .
  • when the output images of the graphic decoding part 30 are interactive graphic images, the text subtitle images from the text subtitle decoding part 40 may be initially superimposed over the video images by the first adder 52 , and the interactive graphic images may be further superimposed over the subtitle-superimposed images by the second adder 53 .
  • the apparatus shown in FIG. 10 further includes a system decoder 4 for decoding input transport streams (e.g., MPEG transport streams), and a microprocessor 3 for controlling operations of all the components of the apparatus mentioned above.
  • a plurality of user control styles are defined for each region style defined in a dialog style segment.
  • Each user control style is selectable by a user and is configured to change the region presentation properties specified by a corresponding region style. Therefore, a user can have options of selecting one of a variety of user control styles.

Abstract

At least one text subtitle stream is recorded on a recording medium. Each text subtitle stream includes a dialog style segment defining a set of region styles and at least one dialog presentation segment. Each dialog presentation segment contains at least one region of dialog text and is linked to at least one of the set of region styles. The dialog style segment further defines a set of user control styles for each region style. Each user control style is selectable by a user and is configured to change at least one of region presentation properties specified by a corresponding region style.

Description

    PRIORITY INFORMATION
  • This is a continuation application of application Ser. No. 11/033,494 filed Jan. 12, 2005, the entire contents of which are hereby incorporated by reference.
  • This application claims the benefit of U.S. Provisional Application No. 60/542,850, filed on Feb. 10, 2004; U.S. Provisional Application No. 60/542,852, filed on Feb. 10, 2004; and U.S. Provisional Application No. 60/543,328, filed on Feb. 11, 2004, the entire contents of which are hereby incorporated by reference. This application also claims the benefit of Korean Patent Application No. 10-2004-0017935, filed on Mar. 17, 2004, which is hereby incorporated by reference as if fully set forth herein.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to a recording medium and a method and apparatus for decoding a text subtitle stream recorded on a recording medium.
  • 2. Discussion of the Related Art
  • Optical discs are widely used as an optical recording medium for recording mass data. Presently, among a wide range of optical discs, a new high-density digital video disc (hereinafter referred to as “HD-DVD”), such as a Blu-ray Disc (hereafter referred to as “BD”), is under development for recording high definition video and audio data. Currently, global standard technical specifications of the BD, which is known to be the next-generation HD-DVD technology, are being established as a next-generation optical recording solution able to store data significantly surpassing that of the conventional DVD, along with many other digital apparatuses.
  • Accordingly, optical reproducing apparatuses to which the Blu-ray Disc (BD) standards are applied are also being developed. However, since the BD standards are not yet complete, there have been many difficulties in developing a complete optical reproducing apparatus. In particular, in order to effectively reproduce data from a BD, not only should the main AV data and various data required for a user's convenience, such as subtitle information as supplementary data related to the main AV data, be provided, but management information for reproducing the main data and the subtitle data recorded on the optical disc should also be systematized and provided.
  • However, in the present Blu-ray Disc (BD) standards, since the standards for the supplementary data, particularly the subtitle information, are not completely consolidated, there are many restrictions on the full-scale development of a BD-based optical reproducing apparatus. Such restrictions cause problems in providing supplementary data such as subtitles to the user.
  • SUMMARY OF THE INVENTION
  • Accordingly, the present invention is directed to a text subtitle decoder and a method for decoding text subtitle streams recorded on a recording medium that substantially obviates one or more problems due to limitations and disadvantages of the related art.
  • An object of the present invention is to provide a recording medium including a dialog style segment defining a set of user control styles, each of which is able to change at least one of region presentation properties specified by a region style.
  • Another object of the present invention is to provide a method and an apparatus for decoding a text subtitle stream by using a user control style which changes at least one of the region presentation properties specified by a region style.
  • Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
  • To achieve these objects and other advantages and in accordance with the purpose of the invention, as embodied and broadly described herein, a recording medium includes a data area storing at least one text subtitle stream, each of which includes a dialog style segment defining a set of region styles to be applied to at least one region of dialog text. Each text subtitle stream may further include at least one dialog presentation segment, each of which contains at least one region of dialog text and is linked to at least one of the set of region styles. The dialog style segment further defines a set of user control styles for each region style, where each user control style is selectable and is configured to change at least one of region presentation properties specified by a corresponding region style. For example, each user control style may specify a direction and a magnitude of a change in at least one of a region horizontal position, a region vertical position, a text horizontal position, a text vertical position, a line space, and a font size, all of which are specified in the corresponding region style.
  • In another aspect of the present invention, a method and an apparatus for decoding a text subtitle stream recorded on a recording medium are provided. A subtitle loading buffer loads the text subtitle stream, which includes a dialog style segment defining a set of region styles and at least one dialog presentation segment. Each dialog presentation segment contains at least one region of dialog text and is linked to at least one of the set of region styles. The dialog style segment further defines a set of user control styles for each region style, where each user control style is selectable and is configured to change at least one of region presentation properties specified by a corresponding region style. A text subtitle decoder is able to decode each dialog presentation segment using the linked region style and one of the set of user control styles defined in the dialog presentation segment.
  • Each user control style may specify a direction and a magnitude of a change in the region presentation properties specified by the corresponding region style. The region presentation properties include at least one of a region horizontal position, a region vertical position, a text horizontal position, a text vertical position, a line space, and a font size, which are specified in the corresponding region style.
  • It is to be understood that both the foregoing general description and the following detailed description of the present invention are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principle of the invention. In the drawings:
  • FIG. 1 illustrates a file structure of data files recorded on an optical disc according to an example of the present invention;
  • FIG. 2 illustrates data storage areas of an optical disc according to an example of the present invention;
  • FIG. 3 illustrates a text subtitle and a main image presented on a display screen when a text subtitle stream and a main AV stream are reproduced;
  • FIG. 4 is a schematic diagram illustrating reproduction control of a main AV clip and text subtitle clips by a PlayList;
  • FIG. 5A illustrates a dialog presented on a display screen according to an example of the present invention;
  • FIG. 5B illustrates regions of a dialog according to an example of the present invention;
  • FIG. 5C illustrates region and inline styles for regions of a dialog according to an example of the present invention;
  • FIG. 6A illustrates presentations of text subtitle dialogs on a display screen in presentation time stamp (PTS) intervals;
  • FIG. 6B illustrates continuities between text subtitle dialogs presented on a display screen in PTS intervals;
  • FIG. 7A illustrates a text subtitle stream file according to an example of the present invention;
  • FIG. 7B illustrates specific information contained within a DPU and a DSU included in a text subtitle stream according to an example of the present invention;
  • FIG. 8 illustrates a syntax for a text subtitle stream according to an example of the present invention;
  • FIG. 9A illustrates a syntax for a dialog style unit according to an example of the present invention;
  • FIG. 9B illustrates a syntax for a dialog style set included in a dialog style unit according to an example of the present invention;
  • FIG. 9C illustrates a syntax for a user changeable style set included in a dialog style set according to an example of the present invention;
  • FIG. 10 illustrates an example of the apparatus for decoding main AV streams and text subtitle streams according to the present invention; and
  • FIG. 11 illustrates an example of the method for decoding a text subtitle stream recorded on an optical disc according to the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
  • In this detailed description, main data represent audio/video (AV) data that belong to a title (e.g., a movie title) recorded in an optical disc by an author. In general, the AV data are recorded in MPEG2 format and are often referred to as AV streams or main AV streams. In addition, supplementary data represent all other data required for reproducing the main data, examples of which are text subtitle streams, interactive graphic streams, presentation graphic streams, and supplementary audio streams (e.g., for a browsable slideshow). These supplementary data streams may be recorded in MPEG2 format or in any other data format. They could be multiplexed with the AV streams or could exist as independent data files within the optical disc.
  • A subtitle represents caption information corresponding to video (image) data being reproduced, and it may be represented in a predetermined language. For example, when a user selects an option for viewing one of a plurality of subtitles represented in various languages while viewing images on a display screen, the caption information corresponding to the selected subtitle is displayed on a predetermined portion of the display screen. If the displayed caption information is text data (e.g., characters), the selected subtitle is often called a text subtitle. According to one aspect of the present invention, a plurality of text subtitle streams in MPEG2 format may be recorded in an optical disc, and they may exist as a plurality of independent stream files. Each text subtitle stream file includes text data for a text subtitle and reproduction control data required for reproduction of the text data. According to another aspect of the present invention, only a single text subtitle stream in MPEG2 format may be recorded in an optical disc.
  • FIG. 1 illustrates a file structure of data files recorded on an optical disc, an example of which is a Blu-ray disc (hereinafter “BD”), according to the present invention. Referring to FIG. 1, at least one BD directory (BDMV) is included in a root directory (root). Each BD directory includes an index file (index.bdmv) and an object file (MovieObject.bdmv), which are used for interacting with one or more users. For example, the index file may contain data representing an index table having a plurality of selectable menus and movie titles. Each BD directory further includes four file directories that include audio/video (AV) data to be reproduced and various data required for reproduction of the AV data.
  • The file directories included in each BD directory are a stream directory (STREAM), a clip information directory (CLIPINF), a playlist directory (PLAYLIST), and an auxiliary data directory (AUX DATA). First of all, the stream directory (STREAM) includes audio/video (AV) stream files having a particular data format. For example, the AV stream files may be in the form of MPEG2 transport packets and be named as “*.m2ts”, as shown in FIG. 1. The stream directory may further include one or more text subtitle stream files, where each text subtitle stream file includes text (e.g., characters) data for a text subtitle represented in a particular language and reproduction control information of the text data. The text subtitle stream files exist as independent stream files within the stream directory and may be named as “*.m2ts” or “*.txtst”, as shown in FIG. 1. An AV stream file or text subtitle stream file included in the stream directory is often called a clip stream file.
  • Next, the clip information directory (CLIPINF) includes clip information files that correspond, respectively, to the stream files (AV or text subtitle) included in the stream directory. Each clip information file contains property and reproduction timing information of a corresponding stream file. For example, a clip information file may include mapping information, in which presentation time stamps (PTS) and source packet numbers (SPN) are one-to-one mapped by an entry point map (EPM). Using the mapping information, a particular location of a stream file may be determined from timing information (In-Time and Out-Time) provided by a PlayItem or SubPlayItem, which will be discussed later in more detail. In the industry standard, each pair of a stream file and its corresponding clip information file is designated as a clip. For example, 01000.clpi included in CLIPINF includes property and reproduction timing information of 01000.m2ts included in STREAM, and 01000.clpi and 01000.m2ts form a clip.
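The PTS-to-SPN mapping above can be illustrated with a small lookup sketch: given a presentation time stamp, find the source packet number of the last entry point at or before it. The real clip information file format is considerably more involved; the list-of-pairs representation and function name below are illustrative assumptions only.

```python
import bisect  # binary search over the sorted PTS values

def lookup_spn(epm, pts):
    """Sketch of an entry point map (EPM) lookup: epm is a list of (PTS, SPN)
    pairs sorted by PTS; return the SPN of the last entry at or before pts."""
    pts_values = [p for p, _ in epm]
    i = bisect.bisect_right(pts_values, pts) - 1
    if i < 0:
        raise ValueError("pts precedes the first entry point")
    return epm[i][1]

# Hypothetical entry points (PTS values in 90 kHz ticks, SPNs arbitrary).
epm = [(0, 0), (90000, 120), (180000, 250)]
print(lookup_spn(epm, 135000))  # -> 120
```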
  • Referring back to FIG. 1, the playlist directory (PLAYLIST) includes one or more PlayList files (*.mpls), where each PlayList file includes at least one PlayItem which designates at least one main AV clip and the reproduction time of the main AV clip. More specifically, a PlayItem contains information designating In-Time and Out-Time, which represent reproduction begin and end times for a main AV clip designated by Clip_Information_File_Name within the PlayItem. Therefore, a PlayList file represents the basic reproduction control information for one or more main AV clips. In addition, the PlayList file may further include a SubPlayItem, which represents the basic reproduction control information for a text subtitle stream file. When a SubPlayItem is included in a PlayList file to reproduce one or more text subtitle stream files, the SubPlayItem is synchronized with the PlayItem(s). On the other hand, when the SubPlayItem is used to reproduce a browsable slideshow, it may not be synchronized with the PlayItem(s). According to the present invention, the main function of a SubPlayItem is to control reproduction of one or more text subtitle stream files.
  • Lastly, the auxiliary data directory (AUX DATA) may include supplementary data stream files, examples of which are font files (e.g., *.font or *.otf), pop-up menu files (not illustrated), and sound files (e.g., Sound.bdmv) for generating click sound. The text subtitle stream files mentioned earlier may be included in the auxiliary data directory instead of the stream directory.
  • FIG. 2 illustrates data storage areas of an optical disc according to the present invention. Referring to FIG. 2, the optical disc includes a file system information area occupying the innermost portion of the disc volume, a stream area occupying the outermost portion of the disc volume, and a database area located between the file system information area and the stream area. In the file system information area, system information for managing all the data files shown in FIG. 1 is stored. Next, AV streams and one or more text subtitle streams are stored in the stream area. The general files, PlayList files, and clip information files shown in FIG. 1 are stored in the database area of the disc volume. As discussed above, the general files include an index file and an object file, and the PlayList files and clip information files include information required to reproduce the AV streams and the text subtitle streams stored in the stream area. Using the information stored in the database area and/or stream area, a user is able to select a specific playback mode and to reproduce the main AV and text subtitle streams in the selected playback mode.
  • FIG. 3 illustrates a text subtitle and a main image presented on a display screen when a text subtitle stream and a main AV stream are reproduced. The main image and the text subtitle are simultaneously displayed on the display screen when a main AV stream and a corresponding text subtitle stream are reproduced in synchronization.
  • FIG. 4 is a schematic diagram illustrating reproduction control of a main AV clip and text subtitle clips by a PlayList. Referring to FIG. 4, a PlayList file includes at least one PlayItem controlling reproduction of at least one main AV clip and a SubPlayItem controlling reproduction of a plurality of text subtitle clips. One of text subtitle clip 1 and text subtitle clip 2 shown in FIG. 4 for English and Korean text subtitles may be synchronized with the main AV clip such that a main image and a corresponding text subtitle are displayed on a display screen simultaneously at a particular presentation time. In order to display the text subtitle on the display screen, display control information (e.g., position and size information) and presentation time information, examples of which are illustrated in FIG. 5A to FIG. 5C, are required.
  • FIG. 5A illustrates a dialog presented on a display screen according to the present invention. A dialog represents entire text subtitle data displayed on a display screen during a given presentation time. In general, presentation times of the dialog may be represented in presentation time stamps (PTS). For example, presentation of the dialog shown in FIG. 5A starts at PTS (k) and ends at PTS (k+1). Therefore, the dialog shown in FIG. 5A represents an entire unit of text subtitle data which is displayed on the display screen between PTS (k) and PTS (k+1). A dialog includes at least one line of subtitle text (characters). When there are two or more lines of subtitle text in a dialog, the entire text data may be displayed according to a style defined for the dialog. The maximum number of characters included in a dialog may be limited to about 100.
  • In addition, FIG. 5B illustrates regions of a dialog according to the present invention. A region represents a divided portion of text subtitle data (dialog) displayed on a display screen during a given presentation time. In other words, a dialog includes at least one region, and each region may include at least one line of subtitle text. The entire text subtitle data representing a region may be displayed on the display screen according to a region style (global style) assigned to the region. The maximum number of regions included in a dialog should be determined based on a desired decoding rate of the subtitle data because a greater number of regions generally results in a lower decoding rate. For example, the maximum number of regions for a dialog may be limited to two in order to achieve a reasonably high decoding rate. However, the maximum number could be greater than two for other purposes.
  • FIG. 5C illustrates style information for regions of a dialog according to the present invention. Style information represents information defining properties required for displaying at least a portion of a region included in a dialog. Some of the examples of the style information are position, region size, background color, text alignment, text flow information, and many others. The style information may be classified into region style information (global style information) and inline style information (local style information).
  • Region style information defines a region style (global style) which is applied to an entire region of a dialog. For example, the region style information may contain at least one of a region position, region size, font color, background color, text flow, text alignment, line space, font name, font style, and font size of the region. For example, two different region styles are applied to region 1 and region 2, as shown in FIG. 5C. A region style with position 1, size 1, and blue background color is applied to Region 1, and a different region style with position 2, size 2, and red background color is applied to Region 2.
  • On the other hand, inline style information defines an inline style (local style) which is applied to a particular portion of text strings included in a region. For example, the inline style information may contain at least one of a font type, font size, font style, and font color. The particular portion of text strings may be an entire text line within a region or a particular portion of the text line. Referring to FIG. 5C, a particular inline style is applied to the text portion “mountain” included in Region 1. In other words, at least one of the font type, font size, font style, and font color of the particular portion of text strings is different from the remaining portion of the text strings within Region 1.
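The precedence of inline (local) style over region (global) style described above can be sketched as follows. This is an illustrative model only: the dictionary representation and the default property values are assumptions for the example, not the binary syntax of the specification.

```python
# Illustrative sketch: a region style is a set of global presentation
# properties, and an inline style overrides a subset of them for a
# particular portion of the text strings within the region.

def apply_inline_style(region_style, inline_style):
    """Return the effective properties for a text portion: the region
    (global) style with any inline (local) overrides applied."""
    effective = dict(region_style)   # start from the global properties
    effective.update(inline_style)   # local properties take precedence
    return effective

# Region 1's global style, with an inline override for the word "mountain".
region1_style = {"font_style": "normal", "font_color": "black", "font_size": 16}
mountain_inline = {"font_style": "italic"}

effective = apply_inline_style(region1_style, mountain_inline)
print(effective["font_style"])  # overridden locally to italic
print(effective["font_color"])  # inherited from the region style
```

Any property not named by the inline style, such as the font color above, remains governed by the region style, matching the behavior described for the text portion "mountain" in Region 1.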
  • FIG. 6A illustrates presentations of text subtitle dialogs on a display screen in presentation time stamp (PTS) intervals. There are four dialogs to be displayed between PTS1 and PTS6. More specifically, Dialog # 1 has only one region and Text # 1 is displayed within this region between PTS1 and PTS2. Next, Dialog # 2 has Region 1 and Region 2 and Text # 1 and Text # 2 are displayed within Region 1 and Region 2, respectively, between PTS2 and PTS3. Thereafter, Dialog # 3 also has only one region and Text # 2 is displayed within this region between PTS3 and PTS4. There is no dialog to be presented between PTS4 and PTS5, and Text # 3 is displayed within a region of Dialog # 4 between PTS5 and PTS6. Information defining a dialog includes dialog presentation time information and dialog text data including style information and text strings to be displayed within each region of the dialog. An example of the presentation time information is a set of PTS start and PTS end, and the style information includes region (global) style information and inline (local) style information described above. It is shown in FIG. 6A that different style information sets may be applied to the dialogs.
  • FIG. 6B illustrates continuities between text subtitle dialogs being presented on a display screen in PTS intervals. Referring to FIG. 6B, the presentation end time of Dialog # 1 is identical to the presentation start time of Dialog # 2. Therefore, a continuity exists between Dialog # 1 and Dialog # 2. Display of Text # 1 in a region of Dialog # 1 is continuous with display of Text # 1 in Region 1 of Dialog # 2. In other words, PTS intervals of both dialogs are continuous and same style information (region and inline) is used when presenting Text # 1 in both regions. Similarly, another continuity exists between Dialog # 2 and Dialog # 3 because display of Text # 2 in Region 2 of Dialog # 2 is continuous with display of Text # 2 in a region of Dialog # 3. In order to ensure a continuity between two consecutive dialogs displaying same subtitle text, presentation times (PTS intervals) of the dialogs must be continuous. In addition, same region and inline style information must be used when presenting the same text in the regions, respectively. Referring back to FIG. 6B, there is no continuity between Dialog # 3 and Dialog # 4 because their PTS intervals are not continuous. An indicator (e.g., continuous_presentation_flag) may be included in presentation information of a current dialog to indicate whether the dialog is continuous with a previous dialog.
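The continuity condition described above, continuous PTS intervals plus identical style information for the same text, can be sketched as a simple predicate. The field names and the single `style_id` standing in for the region and inline style information are illustrative assumptions.

```python
# Illustrative sketch: two consecutive dialogs are continuous only when
# their PTS intervals abut and the same text is presented with the same
# style information, as required by the description of FIG. 6B.
from dataclasses import dataclass

@dataclass
class Dialog:
    pts_start: int
    pts_end: int
    text: str
    style_id: int  # stands in for the region and inline style information

def is_continuous(prev, cur):
    """A continuity exists when prev ends exactly where cur starts and
    the same style information is used to present the same text."""
    return (prev.pts_end == cur.pts_start
            and prev.text == cur.text
            and prev.style_id == cur.style_id)

d1 = Dialog(100, 200, "Text #1", style_id=1)
d2 = Dialog(200, 300, "Text #1", style_id=1)
d3 = Dialog(350, 400, "Text #1", style_id=1)  # gap in PTS: not continuous

print(is_continuous(d1, d2))  # continuity, like Dialog #1 and Dialog #2
print(is_continuous(d2, d3))  # no continuity, like Dialog #3 and Dialog #4
```

In the stream itself this result would be carried by an indicator such as continuous_presentation_flag rather than recomputed by the player.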
  • FIG. 7A illustrates a text subtitle stream file (e.g., 10001.m2ts shown in FIG. 1) according to the present invention. It may be formed of an MPEG2 transport stream including a plurality of transport packets (TP), all of which have a same packet identifier (e.g., PID=0x18xx). When a disc player receives many input streams including a particular text subtitle stream, it finds all the transport packets that belong to the text subtitle stream using their PIDs. Referring to FIG. 7A, each sub-set of transport packets forms a packetized elementary stream (PES) packet. One of the PES packets shown in FIG. 7A corresponds to a dialog style unit (DSU) defining a group of region styles. A DSU is also often referred to as a dialog style segment (DSS). All the remaining PES packets correspond to dialog presentation units (DPUs), each of which includes presentation information for a dialog having at least one region, and dialog text data including a region style indicator, inline style information, and text strings for each region. Similarly, a DPU is also often referred to as a dialog presentation segment (DPS).
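The PID-based selection of transport packets can be sketched as below. The packet layout is simplified to (pid, payload) tuples and the specific PID value is an invented example within the 0x18xx range mentioned above; real transport packets carry headers, continuity counters, and adaptation fields not modeled here.

```python
# Illustrative sketch of PID filtering: a disc player collects all
# transport packets whose PID matches the text subtitle stream's PID,
# separating that stream from the other multiplexed input streams.

SUBTITLE_PID = 0x1800  # an example PID in the 0x18xx range

def filter_by_pid(packets, pid):
    """Select the transport packet payloads belonging to one stream."""
    return [payload for (p, payload) in packets if p == pid]

packets = [
    (0x1011, b"video"),
    (SUBTITLE_PID, b"DSU"),     # one PES packet: the dialog style unit
    (0x1100, b"audio"),
    (SUBTITLE_PID, b"DPU#1"),   # remaining PES packets: presentation units
    (SUBTITLE_PID, b"DPU#2"),
]

subtitle_payloads = filter_by_pid(packets, SUBTITLE_PID)
print(subtitle_payloads)
```

The first recovered payload corresponds to the DSU and the rest to DPUs, mirroring the PES packet ordering shown in FIG. 7A.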
  • FIG. 7B illustrates specific information contained within a DPU and a DSU included in a text subtitle stream according to the present invention. A DSU contains information sets defining a group of region styles, each of which is applied to a corresponding region of a dialog. In addition, a DPU contains dialog text data and dialog presentation information for a dialog. The dialog text data includes text strings to be included in each region of the dialog, inline style information to be applied to a particular portion of the text strings, and a region style identifier indicating a region style to be applied to each dialog region. The region style identifier identifies one of the group of region styles defined in the DSU. On the other hand, the dialog presentation information includes presentation time information and palette (color) update information for a dialog. The presentation time information may include presentation start time (e.g., PTS_start) and presentation end time (e.g., PTS_end) for presenting the dialog on a display screen, and the palette update information may include an indicator (e.g., palette_update_flag) indicating whether to update display colors of the dialog and palette information (e.g., Palette for update) to be applied when updating the display colors.
  • All the data included in a text subtitle stream may be classified into three types of data based on their basic functions. For example, the data could be classified into dialog text data, composition information, and rendering information, as shown in FIG. 7B. The dialog text data include text string(s), inline style information, and a region style identifier for each region of a dialog. The composition information includes presentation time information, examples of which are presentation start and end times, position information for a dialog region, and palette update information for a dialog. Lastly, the rendering information includes information required for rendering the text strings to graphic data for presentation. Referring to FIG. 7B, the horizontal and vertical positions of each region included in the DSU are part of the composition information, and the region width, region height, font color, background color, text flow, text alignment, line space, font name, font style, and font size included in the DSU represent the rendering information.
  • A DSU includes a set of region style information (dialog style set) defining a limited number of author-defined region styles, respectively. For example, the maximum number of the region styles defined in a DSU may be limited to 60, and the region styles may be identified by their region style identifications (region_style_id). Therefore, an author stores a DSU defining only a limited number of region styles in an optical disc. The region styles are used by a disc player when reproducing text subtitle streams recorded on the optical disc. Alternatively, the disc player may use other region styles defined by an additional set of style information, which may be provided from another source. An example of the source is a local data storage included in the disc player. As a result, the subtitle regions reproduced from the text subtitle streams recorded on the optical disc can have a variety of region styles.
  • FIG. 8 illustrates a syntax for a text subtitle stream (Text_subtitle_stream ( )) according to an example of the present invention. As mentioned earlier, the text subtitle stream syntax includes a syntax for a dialog style unit (dialog_style_unit ( )) including a set of information defining a set of region styles, respectively, and syntaxes for a plurality of dialog presentation units (dialog_presentation_unit ( )), where each DPU syntax includes dialog presentation information and at least one region of dialog text. Each region of dialog text includes a region style identifier, one or more text strings, and inline style information, and the region style identifier identifies one of the set of region styles defined in the DSU syntax.
  • FIG. 9A illustrates the syntax for a dialog style unit (dialog_style_unit ( )) included in the text subtitle stream syntax shown in FIG. 8. The dialog style unit syntax includes a syntax for a dialog style set (dialog_styleset ( )) in which a set of author-defined region styles are defined. FIG. 9B illustrates the syntax for a dialog style set (dialog_styleset ( )) included in the dialog style unit syntax shown in FIG. 9A. The dialog style set syntax includes a set of region style information defining a set of region styles (region_style ( )), respectively, and a data field or a flag (player_style_flag) indicating whether the author permitted a player to generate its own set of styles (player styles) for a text subtitle in addition to the set of author-defined style defined in region_style ( ). The dialog style set syntax further includes a syntax for a user-changeable style set (user_changeable_styleset ( )) defining a set of user control styles.
  • Referring to FIG. 9B, region style identifications (region_style_id) are assigned to the set of region styles (region_style ( )), respectively, and each region style information represents global style information to be applied to an entire portion of a region of dialog text. The region style identifier included in a DPU for each region includes one of the region style identifications. Therefore, a region style corresponding to the region style identifier is applied when reproducing at least one region of dialog text contained in each DPU.
  • Reference will now be made in detail to specific region presentation properties defined in each region style (region_style ( )). A region horizontal position (region_horizontal_position) specifies the horizontal address of the top left pixel of a region in a graphics plane, and a region vertical position (region_vertical_position) specifies the vertical address of the top left pixel of the region in the graphics plane. In addition, a region width (region_width) specifies the horizontal length of the region rectangle from the region horizontal position, and a region height (region_height) specifies the vertical length of the region rectangle from the region vertical position. A region background color index (region_bg_color_index) specifies an index value indicating the background color of the region.
  • In addition, a text horizontal position (text_horizontal_position) specifies the horizontal address of an origin of text in the region, and a text vertical position (text_vertical_position) specifies the vertical address of the text origin in the region. A text flow (text_flow) specifies at least one of character progression (left-to-right or right-to-left) and line progression (top-to-bottom or bottom-to-top) in the region. A text alignment (text_alignment) specifies alignment (left, center, or right) of rendered text in the region. When a dialog has more than one region, the same text flow must be applied to all the regions in order to prevent viewer confusion. Referring back to FIG. 9B, a line space (line_space) specifies the distance between two adjacent lines of text in the region. A font identification (font_id) indicates the font identification specified in a clip information file. A font style (font_style) specifies the style of font for the text in the region, examples of which are normal, bold, italic, and bold and italic. A font size (font_size) specifies the size of font for the text in the region, an example of which is the vertical size of a character in units of pixels. Lastly, a font color index (font_color_index) specifies an index value indicating the color of the text in the region.
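The region presentation properties enumerated in the two paragraphs above can be gathered into a single record, sketched below. The field names follow the syntax element names in FIG. 9B; all concrete values are invented for illustration and carry no meaning from the specification.

```python
# Illustrative sketch: the complete set of region presentation properties
# defined by one region_style() entry, modeled as a plain record.
from dataclasses import dataclass

@dataclass
class RegionStyle:
    region_horizontal_position: int  # top-left pixel, horizontal address
    region_vertical_position: int    # top-left pixel, vertical address
    region_width: int
    region_height: int
    region_bg_color_index: int
    text_horizontal_position: int    # text origin within the region
    text_vertical_position: int
    text_flow: str                   # e.g. "left-to-right"
    text_alignment: str              # "left", "center", or "right"
    line_space: int
    font_id: int                     # refers to a clip information file
    font_style: str                  # "normal", "bold", "italic", ...
    font_size: int                   # vertical character size in pixels
    font_color_index: int

# Example values only; nothing here is taken from the specification.
style = RegionStyle(
    region_horizontal_position=100, region_vertical_position=800,
    region_width=1720, region_height=200, region_bg_color_index=0,
    text_horizontal_position=10, text_vertical_position=10,
    text_flow="left-to-right", text_alignment="center",
    line_space=4, font_id=1, font_style="normal",
    font_size=40, font_color_index=1,
)
print(style.text_alignment)
```

In the stream these properties are packed as fixed binary fields; the record form above only makes the grouping of composition versus rendering properties easier to see.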
  • The player style flag (player_style_flag) shown in FIG. 9B indicates whether an author permitted a disc player to generate and/or use its own set of styles (player styles), which may be pre-stored in a local data storage of the disc player, for a text subtitle in addition to the author-defined region styles defined in an optical disc. For example, if the value of the player style flag is set to 1b, the author permits the player to generate and/or use its own set of player styles. On the other hand, if the value of the player style flag is set to 0b, the author prohibits the player from generating and/or using the set of player styles.
  • FIG. 9C illustrates a syntax for a user changeable style set (user_changeable_styleset ( )) included in the dialog style set syntax shown in FIG. 9B. user_changeable_styleset ( ) includes a set of user control style information defining a set of user control styles (user_control_style( )), where each user control style is configured to change at least one of the region presentation properties specified by a corresponding region style. By selecting one of the set of user control styles, a user is able to change the region style of each region in a very simple manner. However, if all the properties specified by the region style were changeable by a user, the display control of a dialog by the user could be very difficult. For this reason, the region presentation properties that are changeable by a user control style may be limited to at least one of the region horizontal position, region vertical position, font size, text horizontal position, text vertical position, and line space.
  • According to FIG. 9B and FIG. 9C, a set of user control styles are defined for each region style having a region style ID, and user style IDs (user_style_id) are assigned to the set of user control styles, respectively. The maximum number of the user control styles defined for each region style may be limited to 25. Since the maximum number of the region styles defined in a dialog style set is limited to 60, the total number of the user changeable styles defined for a DPU must be less than or equal to 1500.
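The stated limits compose multiplicatively: at most 60 region styles per dialog style set, at most 25 user control styles per region style, hence at most 1500 user-changeable styles overall. A minimal check of this arithmetic:

```python
# Illustrative sketch of the stated limits: 60 region styles maximum per
# dialog style set, 25 user control styles maximum per region style, and
# therefore at most 60 * 25 = 1500 user-changeable styles in total.

MAX_REGION_STYLES = 60
MAX_USER_CONTROL_STYLES = 25

def total_user_changeable_styles(num_region_styles, num_user_styles_per_region):
    assert num_region_styles <= MAX_REGION_STYLES
    assert num_user_styles_per_region <= MAX_USER_CONTROL_STYLES
    # The same fixed number of user control styles is defined for every
    # region style, so the total is a simple product.
    return num_region_styles * num_user_styles_per_region

print(total_user_changeable_styles(60, 25))  # 1500, the stated maximum
```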
  • Referring to FIG. 9C, in order to change the region horizontal position, a user control style may include a region horizontal position direction (region_horizontal_position_direction) specifying the direction of the region horizontal position's horizontal movement and a region horizontal position delta (region_horizontal_position_delta) specifying the magnitude of the horizontal movement in units of pixels. For example, the horizontal movement may be in a right direction if the horizontal position direction is set to 0 and may be in a left direction if it is set to 1. In order to change the region vertical position, a user control style may include a region vertical position direction (region_vertical_position_direction) specifying the direction of the region vertical position's vertical movement and a region vertical position delta (region_vertical_position_delta) specifying the magnitude of the vertical movement in units of pixels. For example, the vertical movement may be in a downward direction if the vertical position direction is set to 0 and may be in an upward direction if it is set to 1. Furthermore, in order to change the font size defined by a region style with a region style ID, a user control style may include a font size change direction (font_size_inc_dec) specifying the direction of the font size change, and a font size delta (font_size_delta) specifying the magnitude of the font size change in units of pixels. For example, the font size may be increased if font_size_inc_dec is set to 0 and may be decreased if it is set to 1.
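Applying such a direction-plus-delta user control style to the original region style values can be sketched as follows. The dictionary keys follow the syntax element names above, and the direction encodings follow the examples given in the text (0 meaning right, downward, or increase; 1 meaning left, upward, or decrease); the dictionary representation itself is an illustrative assumption.

```python
# Illustrative sketch: a user control style carries only direction bits
# and deltas; the player derives the new property values by applying them
# to the original values defined in the corresponding region style.

def apply_user_control_style(region_style, ucs):
    """Return updated (horizontal position, vertical position, font size)."""
    h = region_style["region_horizontal_position"]
    v = region_style["region_vertical_position"]
    size = region_style["font_size"]
    h += ucs["region_horizontal_position_delta"] * (
        1 if ucs["region_horizontal_position_direction"] == 0 else -1)
    v += ucs["region_vertical_position_delta"] * (
        1 if ucs["region_vertical_position_direction"] == 0 else -1)
    size += ucs["font_size_delta"] * (1 if ucs["font_size_inc_dec"] == 0 else -1)
    return h, v, size

style = {"region_horizontal_position": 100,
         "region_vertical_position": 800, "font_size": 40}
ucs = {"region_horizontal_position_direction": 0,   # 0 = move right
       "region_horizontal_position_delta": 20,
       "region_vertical_position_direction": 1,     # 1 = move up
       "region_vertical_position_delta": 50,
       "font_size_inc_dec": 0,                      # 0 = increase
       "font_size_delta": 8}

print(apply_user_control_style(style, ucs))  # (120, 750, 48)
```

This matches the fourth characteristic feature discussed below: the actual property value is obtained by applying the magnitude and direction of the change to the original value, rather than being stored directly.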
  • Some of the characteristic features of the user changeable style set according to the present invention are as follows. First, a set of user control styles is defined for each of a set of region styles defined in a dialog style unit, and the number of user control styles in each set is fixed. Therefore, the numbers of user control styles defined for two different region styles, respectively, are identical, and the number of user control styles to be used when reproducing each region of dialog text is fixed. Second, the user control styles in a set are identified by different user style IDs, respectively. Third, all the changes in the region presentation properties are defined in combination by a single user control style. For example, the region horizontal position and font size are not changed separately by two distinct user control styles; they are changed in combination by a single user control style. Fourth, a change of a certain property is represented with its direction and magnitude rather than with an actual property value. The actual property value may be obtained by applying the magnitude (delta) and direction of the change to the original property value defined in a region style.
  • In conclusion, when an author records main AV streams in an optical disc, the author also records at least one text subtitle stream. Each text subtitle stream includes a DSU defining a set of region styles and a plurality of DPUs. The set of region styles have different region style IDs. The DSU further defines a set of user control styles for each region style, where the user control styles have different user style IDs. Each user control style is configured to change at least one of the author-defined region presentation properties which are specified by a corresponding region style. In addition, the dialog style set includes a player style flag indicating whether the author permitted a player to generate and/or use its own set of player styles for a text subtitle in addition to the author-defined style set.
  • Reference will now be made in detail to an apparatus for decoding main AV streams and text subtitle streams according to the present invention, an example of which is illustrated in FIG. 10. The apparatus includes a packet identifier (PID) filter 5 for separating input streams into video streams, audio streams, graphic streams, and text subtitle streams based on their packet identifiers, a video decoding part 20 for decoding the video streams, an audio decoding part 10 for decoding the audio streams, a graphic decoding part 30 for decoding the graphic streams, and a text subtitle decoding part 40 for decoding the text subtitle streams.
  • The text subtitle streams may be extracted from an optical disc or from an additional external source, as shown in FIG. 10. For this reason, the apparatus additionally includes a switch 6 which selects an input data source. Therefore, if the text subtitle streams are extracted from the optical disc, the switch 6 selects data line A connected to the PID filter 5. On the other hand, if they are inputted from the external source, the switch 6 selects line B connected to the external source.
  • Referring back to FIG. 10, the audio decoding part 10, video decoding part 20, and graphic decoding part 30 include transport buffers 11, 21, and 31, respectively, for storing stream data to be decoded. A video plane (VP) 23 and a graphic plane 33 are included in the video decoding part 20 and the graphic decoding part 30, respectively, for converting decoded signals into displayable video and graphic images. The graphic decoding part 30 includes a color look up table (CLUT) 34 for controlling color and transparency levels of the displayable graphic images.
  • When the text subtitle decoding part 40 receives a text subtitle stream supporting a single language from the switch 6, an entire portion of the text subtitle stream may be preloaded into a subtitle preloading buffer (SPB) 41 at once. Alternatively, when there is more than one text subtitle stream for supporting multiple languages, all the text subtitle streams may be preloaded into the SPB 41 at once. Therefore, the size of the SPB 41 should be determined based on the total number of text subtitle stream files received from the switch 6. For example, the size of the SPB 41 should be greater than or equal to 0.5 megabytes for preloading a 0.5 megabyte text subtitle stream file. In addition, in order to ensure seamless presentation of a text subtitle when a user switches between two 0.5 megabyte text subtitle stream files, the size of the SPB 41 should be greater than or equal to 1 megabyte. The size of the SPB 41 should be large enough to preload all the required text subtitle stream files at once.
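The buffer sizing rule above reduces to a sum over the preloaded files, sketched below. The 0.5 megabyte figure follows the example in the text; treating a megabyte as 1024 * 1024 bytes is an assumption of this sketch.

```python
# Illustrative sketch: the subtitle preloading buffer (SPB) must be large
# enough to hold every required text subtitle stream file at once, so that
# switching between subtitle streams never requires a mid-playback reload.

def required_spb_size(subtitle_file_sizes):
    """Minimum SPB capacity in bytes: the sum of all preloaded files."""
    return sum(subtitle_file_sizes)

HALF_MB = 512 * 1024  # 0.5 megabytes, per the example in the text

# One 0.5 MB stream file needs at least a 0.5 MB buffer; two such files
# need at least 1 MB so a user can switch between them seamlessly.
print(required_spb_size([HALF_MB]))
print(required_spb_size([HALF_MB, HALF_MB]))
```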
  • The text subtitle decoding part 40 shown in FIG. 10 further includes a font preloading buffer (FPB) 410 for storing all the associated font files, which may be included in the auxiliary data directory shown in FIG. 1. Similarly, the size of the FPB 410 should be large enough to preload all the required font files at once in order to ensure seamless presentation of a text subtitle supporting one or more languages. Since all the available text subtitle stream files and related font files are preloaded, extraction and use of the preloaded data can be done in a simple manner, and control of the SPB 41 and the FPB 410 can likewise be quite simple. The text subtitle decoding part 40 further includes a text subtitle decoder 42 which decodes each text subtitle stream stored in the SPB 41, a graphic plane 43 in which the decoded subtitle data are composed as displayable subtitle images, and a color look up table (CLUT) 44 controlling at least one of color and transparency levels of the composed subtitle images.
  • The text subtitle decoding part 40 further includes a local data storage 45 which stores a player style set defining a set of player styles to be selectively used when reproducing a text subtitle stream preloaded in the SPB 41. In addition, the local data storage 45 may further store a user changeable set specifying a set of user control styles to be selectively used when reproducing the text subtitle stream. This user changeable set may be similar to the user changeable set included in a DSU, an example of which is shown in FIG. 9C.
  • In a first aspect of the present invention, each player style represents a region style specifying a complete set of region presentation properties for a region of dialog text, examples of which are a region horizontal position, region vertical position, region width, region height, region background color index, text horizontal position, text vertical position, text flow, text alignment, line space, font identification, font style, font size, and font color index. In this case, the set of player styles stored in the local data storage 45 is used independently of a set of region styles defined in a DSU.
  • In a second aspect of the present invention, each player style is configured to redefine at least one of the region presentation properties which are initially defined by a region style defined in a DSU. For example, if a region style defined in the DSU defines a complete set of region presentation properties including font identification and a player style redefines the font identification, then the redefined font identification and all other properties specified by the region style are used in combination.
  • In a third aspect of the present invention, each player style may be configured to change at least one of the region presentation properties which are initially defined by a region style defined in a DSU. For example, a player style may specify a direction and a magnitude of a change in a region horizontal position defined in the region style. In this case, the player style set is similar to the user changeable set, an example of which is illustrated in FIG. 9C.
  • FIG. 11 illustrates a method of decoding a text subtitle stream recorded on an optical disc according to an example of the present invention. After the text subtitle decoder 42 starts reproducing a text subtitle stream preloaded into the SPB 41, it initially reads player_style_flag included in a DSU to determine whether the use of a player style set stored in the local data storage 45 is permitted (S110). For example, if player_style_flag is set to 0b, use of the player style set is not permitted. In this case, the text subtitle decoder 42 must use the author-defined region styles recorded on the optical disc (S111). On the other hand, if player_style_flag is set to 1b, the text subtitle decoder 42 is permitted to use the player style set stored in the local data storage 45. Then the text subtitle decoder 42 independently determines whether to use any one of a set of player styles defined in the player style set (S112). For example, the text subtitle decoder 42 may compare the set of player styles with the region styles defined in the text subtitle stream and use a result of this comparison for the determination of step S112. If the set of player styles are not determined to be used in step S112, the region styles recorded on the optical disc are used (S111). On the other hand, if the set of player styles are determined to be used in step S112, the text subtitle decoder 42 may use them independently or in combination with the set of region styles recorded on the disc.
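The two-stage decision just described, a permission gate followed by the decoder's own determination, can be sketched as follows. The decision predicate passed in is hypothetical; the specification leaves the player free to choose how step S112 is decided, such as by comparing the two style sets.

```python
# Illustrative sketch of the decision flow of FIG. 11: player_style_flag
# gates use of the player style set (S110/S111), and when permitted the
# decoder independently decides whether to actually use it (S112).

def select_style_set(player_style_flag, player_styles, disc_region_styles,
                     prefer_player_styles):
    """Return the style set the text subtitle decoder should use."""
    if player_style_flag == 0:
        # 0b: the author prohibits the player style set, so the
        # author-defined region styles from the disc must be used (S111).
        return disc_region_styles
    # 1b: permitted; the decoder determines on its own (S112), e.g. by
    # comparing the player styles against the disc's region styles.
    if prefer_player_styles(player_styles, disc_region_styles):
        return player_styles
    return disc_region_styles

disc_styles = ["author_style_1", "author_style_2"]
local_styles = ["player_style_1"]
always = lambda player, disc: True  # hypothetical S112 decision

print(select_style_set(0, local_styles, disc_styles, always))  # disc styles
print(select_style_set(1, local_styles, disc_styles, always))  # player styles
```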
  • In addition, when the text subtitle decoder 42 decodes a DPU, it may use a region style identified by a region style identifier included in the DPU. If a user wishes to change this region style, he or she may input a command for changing the region style. By selecting one of a set of user control styles, which are defined by a user-changeable style set defined in a DSU, at least one of the region horizontal position, region vertical position, and font size may be changed.
  • The apparatus shown in FIG. 10 further includes an image superimposition part 50 which superimposes the images outputted from the video decoding part 20, the graphic decoding part 30, and the text subtitle decoding part 40. These combined images are displayed on a display screen, as shown in FIG. 3. In general, the video images outputted from the VP 23 of the video decoding part 20 may be displayed as a background of the display screen, and the images outputted from the graphic decoding part 30 and/or text subtitle decoding part 40 may be superimposed over the video images in a predetermined order. For example, if the output images of the graphic decoding part 30 are presentation graphic images, these images may be initially superimposed over the video images by a first adder 52, and subsequently, the text subtitle images from the text subtitle decoding part 40 may be superimposed over the video images by a second adder 53. However, if the output images of the graphic decoding part 30 are interactive graphic images, the text subtitle images from the text subtitle decoding part 40 may be initially superimposed over the video images by the first adder 52. Thereafter, the interactive graphic images may be further superimposed over the subtitle-superimposed images by the second adder 53.
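The two superimposition orders described above can be sketched as a layer-ordering rule. The string labels are illustrative; in the apparatus the adders 52 and 53 combine plane outputs rather than named layers.

```python
# Illustrative sketch of the superimposition order in FIG. 10: video
# images form the background, and the first and second adders stack the
# remaining planes in an order that depends on the graphics type.

def compose_layers(graphics_type):
    """Return the bottom-to-top layer order produced by the two adders."""
    if graphics_type == "presentation":
        # First adder 52: presentation graphics over video;
        # second adder 53: text subtitle images on top.
        return ["video", "presentation_graphics", "text_subtitle"]
    if graphics_type == "interactive":
        # First adder 52: text subtitle images over video;
        # second adder 53: interactive graphics on top.
        return ["video", "text_subtitle", "interactive_graphics"]
    raise ValueError("unknown graphics type")

print(compose_layers("presentation"))
print(compose_layers("interactive"))
```

Interactive graphics thus always end up above the text subtitle, while presentation graphics sit below it, matching the two cases in the text.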
  • Lastly, the apparatus shown in FIG. 10 further includes a system decoder 4 for decoding input transport streams (e.g., MPEG transport streams), and a microprocessor 3 for controlling operations of all the components of the apparatus mentioned above.
  • It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit or scope of the inventions. Thus, it is intended that the present invention covers the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents.
  • According to the present invention, a plurality of user control styles are defined for each region style defined in a dialog style segment. Each user control style is selectable by a user and is configured to change the region presentation properties specified by a corresponding region style. Therefore, a user can have options of selecting one of a variety of user control styles.

Claims (21)

1. A method for reproducing at least one text subtitle stream, the method comprising:
receiving the text subtitle stream from an external source, each text subtitle stream including a style segment defining a region style to be applied to at least one region, the style segment further defining at least one set of user control styles for each region style, each set of user control styles being selectable and configured to change at least one of the region presentation properties specified by a corresponding region style; and
decoding the text subtitle stream using the style segment defining the region style and the at least one set of user control styles.
2. The method of claim 1, wherein the style segment includes a data field indicating a number of the region styles.
3. The method of claim 2, wherein the number of region styles is less than or equal to 60.
4. The method of claim 1, wherein the style segment includes a data field indicating a number of sets of user control styles defined in the style segment for each region style.
5. The method of claim 4, wherein the number of sets of user control styles defined for each region style is less than or equal to 25.
6. The method of claim 1, wherein each user control style specifies a direction and a magnitude of a change in at least one of the region presentation properties specified by the corresponding region style.
7. The method of claim 1, wherein the region presentation properties include at least one of a region horizontal position, a region vertical position, a text horizontal position, a text vertical position, a line space, and a font size.
8. The method of claim 1, wherein each user control style specifies a direction and a magnitude of a change in at least one of a region horizontal position, a region vertical position, a text horizontal position, a text vertical position, a line space, and a font size, which are specified in the corresponding region style.
9. A method for reproducing at least one text subtitle stream, the method comprising:
receiving the text subtitle stream from an external source, the text subtitle stream including a style segment defining region styles and at least one presentation segment, each presentation segment containing at least one region and being linked to at least one of the region styles, the style segment further defining at least one set of user control styles for each region style, each user control style configured to change at least one of the region presentation properties specified by a corresponding region style; and
decoding the presentation segment using the linked region style and the user control styles.
10. A method for reproducing at least one text subtitle stream, the method comprising:
selecting the text subtitle stream from an external source or a recording medium, each text subtitle stream including a style segment defining a region style to be applied to at least one region, the style segment further defining at least one set of user control styles for each region style, each set of user control styles being selectable and configured to change at least one of the region presentation properties specified by a corresponding region style; and
decoding the text subtitle stream using the style segment defining the region style and the at least one set of user control styles.
11. A method for reproducing at least one text subtitle stream, the method comprising:
selecting the text subtitle stream from an external source or a recording medium, the text subtitle stream including a style segment defining region styles and at least one presentation segment, each presentation segment containing at least one region and being linked to at least one of the region styles, the style segment further defining at least one set of user control styles for each region style, each user control style configured to change at least one region presentation property specified by a corresponding region style; and
decoding the presentation segment using the linked region style and the user control styles.
12. An apparatus for reproducing at least one text subtitle stream, the apparatus comprising:
a decoder configured to decode the text subtitle stream received from an external source, wherein the text subtitle stream includes a style segment defining a region style to be applied to at least one region, the style segment further defining at least one set of user control styles for each region style, each set of user control styles being selectable and configured to change at least one of the region presentation properties specified by a corresponding region style; and
a controller configured to control operation of the decoder to receive the text subtitle stream and decode the text subtitle stream using the style segment defining the region style and the at least one set of user control styles.
13. The apparatus of claim 12, wherein each user control style specifies a direction and a magnitude of a change in at least one of the region presentation properties specified by the corresponding region style.
14. The apparatus of claim 12, wherein the region presentation properties include at least one of a region horizontal position, a region vertical position, a text horizontal position, a text vertical position, a line space, and a font size.
15. The apparatus of claim 12, wherein each user control style specifies a direction and a magnitude of a change in at least one of a region horizontal position, a region vertical position, a text horizontal position, a text vertical position, a line space, and a font size, which are specified in the corresponding region style.
16. An apparatus for reproducing at least one text subtitle stream, the apparatus comprising:
a decoder configured to decode the text subtitle stream received from an external source, wherein the text subtitle stream includes a style segment defining region styles and at least one presentation segment, each presentation segment containing at least one region and being linked to at least one of the region styles, the style segment further defining user control styles for each region style, each user control style being selectable and configured to change at least one of the region presentation properties specified by a corresponding region style; and
a controller configured to control operation of the decoder to receive the text subtitle stream and decode each presentation segment using the linked region style and one of the user control styles in the text subtitle stream.
17. An apparatus for reproducing at least one text subtitle stream, the apparatus comprising:
a decoder configured to decode the text subtitle stream selected from an external source, wherein the text subtitle stream includes a style segment defining a region style to be applied to at least one region, the style segment further defining at least one set of user control styles for each region style, each set of user control styles being selectable and configured to change at least one of the region presentation properties specified by a corresponding region style; and
a controller configured to control operation of the decoder to decode the text subtitle stream using the style segment defining the region style and the at least one set of user control styles.
18. The apparatus of claim 17, wherein each user control style specifies a direction and a magnitude of a change in at least one of the region presentation properties specified by the corresponding region style.
19. The apparatus of claim 17, wherein the region presentation properties include at least one of a region horizontal position, a region vertical position, a text horizontal position, a text vertical position, a line space, and a font size.
20. The apparatus of claim 17, wherein each user control style specifies a direction and a magnitude of a change in at least one of a region horizontal position, a region vertical position, a text horizontal position, a text vertical position, a line space, and a font size, which are specified in the corresponding region style.
21. An apparatus for decoding at least one text subtitle stream recorded on a recording medium or received from an external source, the apparatus comprising:
a decoder configured to decode the text subtitle stream selected from an external source, wherein the text subtitle stream includes a style segment defining region styles and at least one presentation segment, each presentation segment containing at least one region and being linked to at least one of the region styles, the style segment further defining at least one set of user control styles for each region style, each set of user control styles configured to change at least one of the region presentation properties specified by a corresponding region style; and
a controller configured to control operation of the decoder to decode each presentation segment using the linked region style and one of the user control styles.
US11/633,027 2004-02-10 2006-12-04 Recording medium and method and apparatus for decoding text subtitle streams Abandoned US20070127886A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/633,027 US20070127886A1 (en) 2004-02-10 2006-12-04 Recording medium and method and apparatus for decoding text subtitle streams

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US54285204P 2004-02-10 2004-02-10
US54285004P 2004-02-10 2004-02-10
US54332804P 2004-02-11 2004-02-11
KR1020040017935A KR20050092836A (en) 2004-03-17 2004-03-17 Apparatus and method for reproducing a text subtitle stream of high density optical disc
KR10-2004-0017935 2004-03-17
US11/033,494 US7643732B2 (en) 2004-02-10 2005-01-12 Recording medium and method and apparatus for decoding text subtitle streams
US11/633,027 US20070127886A1 (en) 2004-02-10 2006-12-04 Recording medium and method and apparatus for decoding text subtitle streams

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/033,494 Continuation US7643732B2 (en) 2004-02-10 2005-01-12 Recording medium and method and apparatus for decoding text subtitle streams

Publications (1)

Publication Number Publication Date
US20070127886A1 true US20070127886A1 (en) 2007-06-07

Family

ID=34841856

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/033,494 Expired - Fee Related US7643732B2 (en) 2004-02-10 2005-01-12 Recording medium and method and apparatus for decoding text subtitle streams
US11/633,027 Abandoned US20070127886A1 (en) 2004-02-10 2006-12-04 Recording medium and method and apparatus for decoding text subtitle streams

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/033,494 Expired - Fee Related US7643732B2 (en) 2004-02-10 2005-01-12 Recording medium and method and apparatus for decoding text subtitle streams

Country Status (6)

Country Link
US (2) US7643732B2 (en)
EP (1) EP1714281A2 (en)
JP (1) JP2007522596A (en)
KR (1) KR20070028326A (en)
BR (1) BRPI0507542A (en)
WO (1) WO2005074400A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050185929A1 (en) * 2004-02-21 2005-08-25 Samsung Electronics Co., Ltd Information storage medium having recorded thereon text subtitle data synchronized with AV data, and reproducing method and apparatus therefor

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101053619B1 (en) 2003-04-09 2011-08-03 엘지전자 주식회사 Recording medium having data structure for managing reproduction of text subtitle data, recording and reproducing method and apparatus accordingly
KR100739682B1 (en) 2003-10-04 2007-07-13 삼성전자주식회사 Information storage medium storing text based sub-title, processing apparatus and method thereof
KR20050078907A (en) 2004-02-03 2005-08-08 엘지전자 주식회사 Method for managing and reproducing a subtitle of high density optical disc
KR100739680B1 (en) 2004-02-21 2007-07-13 삼성전자주식회사 Storage medium for recording text-based subtitle data including style information, reproducing apparatus, and method therefor
US7529467B2 (en) * 2004-02-28 2009-05-05 Samsung Electronics Co., Ltd. Storage medium recording text-based subtitle stream, reproducing apparatus and reproducing method for reproducing text-based subtitle stream recorded on the storage medium
KR100727921B1 (en) * 2004-02-28 2007-06-13 삼성전자주식회사 Storage medium recording text-based subtitle stream, reproducing apparatus and reproducing method thereof
BRPI0509231A (en) 2004-03-26 2007-09-04 Lg Electronics Inc recording medium, method and apparatus for reproducing text subtitle streams
DE602005018180D1 (en) 2004-03-26 2010-01-21 Lg Electronics Inc RECORDING MEDIUM AND METHOD AND DEVICE FOR REPRODUCING AND RECORDING TEXT SUBTITLE STREAMS
DE602005017878D1 (en) * 2004-03-26 2010-01-07 Lg Electronics Inc RECORDING MEDIUM AND METHOD AND DEVICE FOR REPRODUCING A TEXT SUBTITLE FLOW RECORDED ON THE RECORDING MEDIUM
WO2007086860A1 (en) * 2006-01-27 2007-08-02 Thomson Licensing Closed-captioning system and method
US20090044218A1 (en) * 2007-08-09 2009-02-12 Cyberlink Corp. Font Changing Method for Video Subtitle
JP4518194B2 (en) * 2008-06-10 2010-08-04 ソニー株式会社 Generating apparatus, generating method, and program
US8644688B2 (en) 2008-08-26 2014-02-04 Opentv, Inc. Community-based recommendation engine
US20110080521A1 (en) * 2009-10-05 2011-04-07 Sony Corporation On-screen display to highlight what a demo video is meant to illustrate
TW201426529A (en) * 2012-12-26 2014-07-01 Hon Hai Prec Ind Co Ltd Communication device and playing method thereof
EP2866436A1 (en) * 2013-10-23 2015-04-29 Thomson Licensing Method and apparatus for transmission and reception of media data
KR200484470Y1 (en) 2017-01-31 2017-09-08 대룡금속(주) Trench Cover
WO2020009709A1 (en) * 2018-07-06 2020-01-09 Google Llc User-specific text record-based format prediction

Citations (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3128434A (en) * 1960-04-28 1964-04-07 Bendix Corp Transfluxor with amplitude modulated driving pulse input converted to alternating sine wave output
US5253530A (en) * 1991-08-12 1993-10-19 Letcher Iii John H Method and apparatus for reflective ultrasonic imaging
US5467142A (en) * 1992-04-24 1995-11-14 Victor Company Of Japan, Ltd. Television receiver for reproducing video images having different aspect ratios and characters transmitted with video images
US5519443A (en) * 1991-12-24 1996-05-21 National Captioning Institute, Inc. Method and apparatus for providing dual language captioning of a television program
US5537151A (en) * 1994-02-16 1996-07-16 Ati Technologies Inc. Close caption support with timewarp
US5832530A (en) * 1994-09-12 1998-11-03 Adobe Systems Incorporated Method and apparatus for identifying words described in a portable electronic document
US5847770A (en) * 1995-09-25 1998-12-08 Sony Corporation Apparatus and method for encoding and decoding a subtitle signal
US5987214A (en) * 1995-06-30 1999-11-16 Sony Corporation Apparatus and method for decoding an information page having header information and page data
US6009234A (en) * 1995-04-14 1999-12-28 Kabushiki Kaisha Toshiba Method of reproducing information
US6128434A (en) * 1993-10-29 2000-10-03 Kabushiki Kaisha Toshiba Multilingual recording medium and reproduction apparatus
US6148140A (en) * 1997-09-17 2000-11-14 Matsushita Electric Industrial Co., Ltd. Video data editing apparatus, optical disc for use as a recording medium of a video data editing apparatus, and computer readable recording medium storing an editing program
US6173113B1 (en) * 1995-09-29 2001-01-09 Matsushita Electric Industrial Co., Ltd. Machine readable information recording medium having audio gap information stored therein for indicating a start time and duration of an audio presentation discontinuous period
US6219043B1 (en) * 1995-07-13 2001-04-17 Kabushiki Kaisha Toshiba Method and system to replace sections of an encoded video bitstream
US6222532B1 (en) * 1997-02-03 2001-04-24 U.S. Philips Corporation Method and device for navigating through video matter by means of displaying a plurality of key-frames in parallel
US6230295B1 (en) * 1997-04-10 2001-05-08 Lsi Logic Corporation Bitstream assembler for comprehensive verification of circuits, devices, and systems
US6253221B1 (en) * 1996-06-21 2001-06-26 Lg Electronics Inc. Character display apparatus and method for a digital versatile disc
US6262775B1 (en) * 1997-06-17 2001-07-17 Samsung Electronics Co., Ltd. Caption data processing circuit and method therefor
US6297797B1 (en) * 1997-10-30 2001-10-02 Kabushiki Kaisha Toshiba Computer system and closed caption display method
US6320621B1 (en) * 1999-03-27 2001-11-20 Sharp Laboratories Of America, Inc. Method of selecting a digital closed captioning service
US20010044809A1 (en) * 2000-03-29 2001-11-22 Parasnis Shashank Mohan Process of localizing objects in markup language documents
US20020004755A1 (en) * 2000-06-29 2002-01-10 Neil Balthaser Methods, systems, and processes for the design and creation of rich-media applications via the internet
US20020010924A1 (en) * 2000-05-03 2002-01-24 Morteza Kalhour Push method and system
US6393196B1 (en) * 1996-09-27 2002-05-21 Matsushita Electric Industrial Co., Ltd. Multimedia stream generating method enabling alternative reproduction of video data, and a multimedia optical disk authoring system
US20020106193A1 (en) * 2001-02-05 2002-08-08 Park Sung-Wook Data storage medium in which multiple bitstreams are recorded, apparatus and method for reproducing the multiple bitstreams, and apparatus and method for reproducing the multiple bitstreams
US20020135607A1 (en) * 2000-04-21 2002-09-26 Motoki Kato Information processing apparatus and method, program, and recorded medium
US20020151922A1 (en) * 1998-05-13 2002-10-17 Michael Hogendijk Apparatus and methods for removing emboli during a surgical procedure
US20020194618A1 (en) * 2001-04-02 2002-12-19 Matsushita Electric Industrial Co., Ltd. Video reproduction apparatus, video reproduction method, video reproduction program, and package media for digital video content
US20030039472A1 (en) * 2001-08-25 2003-02-27 Kim Doo-Nam Method of and apparatus for selecting subtitles from an optical recording medium
US20030078858A1 (en) * 2001-10-19 2003-04-24 Angelopoulos Tom A. System and methods for peer-to-peer electronic commerce
US20030085997A1 (en) * 2000-04-10 2003-05-08 Satoshi Takagi Asset management system and asset management method
US20030086690A1 (en) * 2001-06-16 2003-05-08 Samsung Electronics Co., Ltd. Storage medium having preloaded font information, and apparatus for and method of reproducing data from storage medium
US20030099464A1 (en) * 2001-11-29 2003-05-29 Oh Yeong-Heon Optical recording medium and apparatus and method to play the optical recording medium
US20030103604A1 (en) * 2000-04-21 2003-06-05 Motoki Kato Information processing apparatus and method, program and recorded medium
US20030188312A1 (en) * 2002-02-28 2003-10-02 Bae Chang Seok Apparatus and method of reproducing subtitle recorded in digital versatile disk player
US20030189571A1 (en) * 1999-11-09 2003-10-09 Macinnis Alexander G. Video and graphics system with parallel processing of graphics windows
US20030189669A1 (en) * 2002-04-05 2003-10-09 Bowser Todd S. Method for off-image data display
US20030194211A1 (en) * 1998-11-12 2003-10-16 Max Abecassis Intermittently playing a video
US20030202431A1 (en) * 2002-04-24 2003-10-30 Kim Mi Hyun Method for managing summary information of play lists
US20030206553A1 (en) * 2001-12-13 2003-11-06 Andre Surcouf Routing and processing data
US6661467B1 (en) * 1994-12-14 2003-12-09 Koninklijke Philips Electronics N.V. Subtitling transmission system
US20030235404A1 (en) * 2002-06-24 2003-12-25 Seo Kang Soo Recording medium having data structure for managing reproduction of multiple reproduction path video data for at least a segment of a title recorded thereon and recording and reproducing methods and apparatuses
US20030235402A1 (en) * 2002-06-21 2003-12-25 Seo Kang Soo Recording medium having data structure for managing reproduction of video data recorded thereon
US20030235406A1 (en) * 2002-06-24 2003-12-25 Seo Kang Soo Recording medium having data structure including navigation control information for managing reproduction of video data recorded thereon and recording and reproducing methods and apparatuses
US20040003347A1 (en) * 2002-06-28 2004-01-01 Ubs Painewebber Inc. System and method for providing on-line services for multiple entities
US20040001699A1 (en) * 2002-06-28 2004-01-01 Seo Kang Soo Recording medium having data structure for managing reproduction of multiple playback path video data recorded thereon and recording and reproducing methods and apparatuses
US20040027369A1 (en) * 2000-12-22 2004-02-12 Peter Rowan Kellock System and method for media production
US20040047605A1 (en) * 2002-09-05 2004-03-11 Seo Kang Soo Recording medium having data structure for managing reproduction of slideshows recorded thereon and recording and reproducing methods and apparatuses
US20040054771A1 (en) * 2002-08-12 2004-03-18 Roe Glen E. Method and apparatus for the remote retrieval and viewing of diagnostic information from a set-top box
US6727902B2 (en) * 1997-11-24 2004-04-27 Thomson Licensing, S.A. Process for coding characters and associated display attributes in a video system and device implementing this process
US20040081434A1 (en) * 2002-10-15 2004-04-29 Samsung Electronics Co., Ltd. Information storage medium containing subtitle data for multiple languages using text data and downloadable fonts and apparatus therefor
US6744998B2 (en) * 2002-09-23 2004-06-01 Hewlett-Packard Development Company, L.P. Printer with video playback user interface
US6747920B2 (en) * 2001-06-01 2004-06-08 Pioneer Corporation Information reproduction apparatus and information reproduction
US20040151472A1 (en) * 2003-01-20 2004-08-05 Seo Kang Soo Recording medium having data structure for managing reproduction of still pictures recorded thereon and recording and reproducing methods and apparatuses
US6792577B1 (en) * 1999-06-21 2004-09-14 Sony Corporation Data distribution method and apparatus, and data receiving method and apparatus
US20040202454A1 (en) * 2003-04-09 2004-10-14 Kim Hyung Sun Recording medium having a data structure for managing reproduction of text subtitle data and methods and apparatuses of recording and reproducing
US20040252234A1 (en) * 2003-06-12 2004-12-16 Park Tae Jin Management method of option for caption display
US20050013207A1 (en) * 2003-05-13 2005-01-20 Yasufumi Tsumagari Information storage medium, information reproduction device, information reproduction method
US20050105888A1 (en) * 2002-11-28 2005-05-19 Toshiya Hamada Reproducing device, reproduction method, reproduction program, and recording medium
US20060013563A1 (en) * 2002-11-15 2006-01-19 Dirk Adolph Method and apparatus for composition of subtitles
US20060098936A1 (en) * 2002-09-25 2006-05-11 Wataru Ikeda Reproduction device, optical disc, recording medium, program, and reproduction method
US20060156358A1 (en) * 2002-10-11 2006-07-13 Dirk Adolph Method and apparatus for synchronizing data streams containing audio, video and/or other data
US20060259941A1 (en) * 2000-08-23 2006-11-16 Jason Goldberg Distributed publishing network
US7151617B2 (en) * 2001-01-19 2006-12-19 Fuji Photo Film Co., Ltd. Image synthesizing apparatus
US7174560B1 (en) * 1999-02-25 2007-02-06 Sharp Laboratories Of America, Inc. Method of synchronizing events with a digital television audio-visual program
US7188353B1 (en) * 1999-04-06 2007-03-06 Sharp Laboratories Of America, Inc. System for presenting synchronized HTML documents in digital television receivers
US7370274B1 (en) * 2003-09-18 2008-05-06 Microsoft Corporation System and method for formatting objects on a page of an electronic document by reference
US7502549B2 (en) * 2002-12-26 2009-03-10 Canon Kabushiki Kaisha Reproducing apparatus
US7587405B2 (en) * 2004-02-10 2009-09-08 Lg Electronics Inc. Recording medium and method and apparatus for decoding text subtitle streams

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1586431A (en) 1978-04-11 1981-03-18 Philips Electronic Associated Data transmission
JPH04170875A (en) * 1990-11-05 1992-06-18 Victor Co Of Japan Ltd Recording medium and picture/character information reproduction device
JPH0817473B2 (en) * 1990-11-14 1996-02-21 凸版印刷株式会社 CD-I disk and data storage method on CD-I disk
US6400996B1 (en) 1999-02-01 2002-06-04 Steven M. Hoffberg Adaptive pattern recognition based control system and method
JP3211362B2 (en) * 1992-04-30 2001-09-25 松下電器産業株式会社 Monitoring recording and playback device
US5781687A (en) 1993-05-27 1998-07-14 Studio Nemo, Inc. Script-based, real-time, video editor
US5497241A (en) 1993-10-29 1996-03-05 Time Warner Entertainment Co., L.P. System and method for controlling display of motion picture subtitles in a selected language during play of a software carrier
US5684542A (en) 1993-12-21 1997-11-04 Sony Corporation Video subtitle processing system
CA2168641C (en) 1995-02-03 2000-03-28 Tetsuya Kitamura Image information encoding/decoding system
JPH08275205A (en) 1995-04-03 1996-10-18 Sony Corp Method and device for data coding/decoding and coded data recording medium
JP3577794B2 (en) * 1995-07-18 2004-10-13 ソニー株式会社 Data decryption device
JPH09102940A (en) * 1995-08-02 1997-04-15 Sony Corp Encoding method, encoder, decoder, recording medium and transmitting method for moving image signal
KR100276950B1 (en) 1995-11-24 2001-03-02 니시무로 타이죠 Multi-language recording media and their playback devices
JPH11252518A (en) 1997-10-29 1999-09-17 Matsushita Electric Ind Co Ltd Sub-video unit title preparing device and storing medium
JP3597690B2 (en) 1998-01-21 2004-12-08 株式会社東芝 Digital information recording and playback system
US6189064B1 (en) 1998-11-09 2001-02-13 Broadcom Corporation Graphics display system with unified memory architecture
US6542694B2 (en) 1998-12-16 2003-04-01 Kabushiki Kaisha Toshiba Optical disc for storing moving pictures with text information and apparatus using the disc
KR100297206B1 (en) 1999-01-08 2001-09-26 노영훈 Caption MP3 data format and a player for reproducing the same
JP4140745B2 (en) 1999-05-14 2008-08-27 独立行政法人情報通信研究機構 How to add timing information to subtitles
KR20010001725A (en) 1999-06-08 2001-01-05 윤종용 Method for controlling display of a caption graphic signal
KR100341444B1 (en) 1999-12-27 2002-06-21 조종태 Subtitle management method for digital video disk
KR100341030B1 (en) 2000-03-16 2002-06-20 유태욱 method for replaying caption data and audio data and a display device using the same
CN1186930C (en) 2000-04-21 2005-01-26 索尼公司 Recording appts. and method, reproducing appts. and method, recorded medium, and program
EP1178691A1 (en) 2000-07-17 2002-02-06 Deutsche Thomson-Brandt Gmbh Method and device for recording digital supplementary data
KR100363170B1 (en) 2000-12-04 2002-12-05 삼성전자 주식회사 Recording medium, reproducing apparatus, and text displaying method thereof
JP2002290895A (en) 2001-03-27 2002-10-04 Denon Ltd Optical disk reproducer
JP2003061098A (en) 2001-08-21 2003-02-28 Canon Inc Image processor, image processing method, recording medium and program
KR20030030554A (en) 2001-10-11 2003-04-18 삼성전자주식회사 Caption data transport system and method capable of editting caption data
JP4078581B2 (en) 2002-02-04 2008-04-23 ソニー株式会社 Image processing apparatus and method, recording medium, and program
US7734148B2 (en) 2002-03-20 2010-06-08 Lg Electronics Inc. Method for reproducing sub-picture data in optical disc device, and method for displaying multi-text in optical disc device
US7054804B2 (en) 2002-05-20 2006-05-30 International Buisness Machines Corporation Method and apparatus for performing real-time subtitles translation
JP3718498B2 (en) 2002-11-28 2005-11-24 シャープ株式会社 Moving image recording / playback method
JP4228767B2 (en) 2003-04-25 2009-02-25 ソニー株式会社 REPRODUCTION DEVICE, REPRODUCTION METHOD, REPRODUCTION PROGRAM, AND RECORDING MEDIUM
KR100739682B1 (en) * 2003-10-04 2007-07-13 삼성전자주식회사 Information storage medium storing text based sub-title, processing apparatus and method thereof
CN101093703B (en) 2003-10-04 2010-11-24 三星电子株式会社 Method for processing text-based subtitle
EP1721319A2 (en) 2004-01-06 2006-11-15 LG Electronics Inc. Recording medium and method and apparatus for reproducing and recording text subtitle streams

US20020194618A1 (en) * 2001-04-02 2002-12-19 Matsushita Electric Industrial Co., Ltd. Video reproduction apparatus, video reproduction method, video reproduction program, and package media for digital video content
US6747920B2 (en) * 2001-06-01 2004-06-08 Pioneer Corporation Information reproduction apparatus and information reproduction method
US20030086690A1 (en) * 2001-06-16 2003-05-08 Samsung Electronics Co., Ltd. Storage medium having preloaded font information, and apparatus for and method of reproducing data from storage medium
US20030039472A1 (en) * 2001-08-25 2003-02-27 Kim Doo-Nam Method of and apparatus for selecting subtitles from an optical recording medium
US20030078858A1 (en) * 2001-10-19 2003-04-24 Angelopoulos Tom A. System and methods for peer-to-peer electronic commerce
US20030099464A1 (en) * 2001-11-29 2003-05-29 Oh Yeong-Heon Optical recording medium and apparatus and method to play the optical recording medium
US20030206553A1 (en) * 2001-12-13 2003-11-06 Andre Surcouf Routing and processing data
US20030188312A1 (en) * 2002-02-28 2003-10-02 Bae Chang Seok Apparatus and method of reproducing subtitle recorded in digital versatile disk player
US20030189669A1 (en) * 2002-04-05 2003-10-09 Bowser Todd S. Method for off-image data display
US20030202431A1 (en) * 2002-04-24 2003-10-30 Kim Mi Hyun Method for managing summary information of play lists
US20030235402A1 (en) * 2002-06-21 2003-12-25 Seo Kang Soo Recording medium having data structure for managing reproduction of video data recorded thereon
US20030235406A1 (en) * 2002-06-24 2003-12-25 Seo Kang Soo Recording medium having data structure including navigation control information for managing reproduction of video data recorded thereon and recording and reproducing methods and apparatuses
US20030235404A1 (en) * 2002-06-24 2003-12-25 Seo Kang Soo Recording medium having data structure for managing reproduction of multiple reproduction path video data for at least a segment of a title recorded thereon and recording and reproducing methods and apparatuses
US20040003347A1 (en) * 2002-06-28 2004-01-01 Ubs Painewebber Inc. System and method for providing on-line services for multiple entities
US20040001699A1 (en) * 2002-06-28 2004-01-01 Seo Kang Soo Recording medium having data structure for managing reproduction of multiple playback path video data recorded thereon and recording and reproducing methods and apparatuses
US20040054771A1 (en) * 2002-08-12 2004-03-18 Roe Glen E. Method and apparatus for the remote retrieval and viewing of diagnostic information from a set-top box
US20040047605A1 (en) * 2002-09-05 2004-03-11 Seo Kang Soo Recording medium having data structure for managing reproduction of slideshows recorded thereon and recording and reproducing methods and apparatuses
US6744998B2 (en) * 2002-09-23 2004-06-01 Hewlett-Packard Development Company, L.P. Printer with video playback user interface
US20060098936A1 (en) * 2002-09-25 2006-05-11 Wataru Ikeda Reproduction device, optical disc, recording medium, program, and reproduction method
US20060156358A1 (en) * 2002-10-11 2006-07-13 Dirk Adolph Method and apparatus for synchronizing data streams containing audio, video and/or other data
US20040081434A1 (en) * 2002-10-15 2004-04-29 Samsung Electronics Co., Ltd. Information storage medium containing subtitle data for multiple languages using text data and downloadable fonts and apparatus therefor
US20060013563A1 (en) * 2002-11-15 2006-01-19 Dirk Adolph Method and apparatus for composition of subtitles
US20050105888A1 (en) * 2002-11-28 2005-05-19 Toshiya Hamada Reproducing device, reproduction method, reproduction program, and recording medium
US7502549B2 (en) * 2002-12-26 2009-03-10 Canon Kabushiki Kaisha Reproducing apparatus
US20040151472A1 (en) * 2003-01-20 2004-08-05 Seo Kang Soo Recording medium having data structure for managing reproduction of still pictures recorded thereon and recording and reproducing methods and apparatuses
US20040202454A1 (en) * 2003-04-09 2004-10-14 Kim Hyung Sun Recording medium having a data structure for managing reproduction of text subtitle data and methods and apparatuses of recording and reproducing
US20050013207A1 (en) * 2003-05-13 2005-01-20 Yasufumi Tsumagari Information storage medium, information reproduction device, information reproduction method
US20040252234A1 (en) * 2003-06-12 2004-12-16 Park Tae Jin Management method of option for caption display
US7370274B1 (en) * 2003-09-18 2008-05-06 Microsoft Corporation System and method for formatting objects on a page of an electronic document by reference
US7587405B2 (en) * 2004-02-10 2009-09-08 Lg Electronics Inc. Recording medium and method and apparatus for decoding text subtitle streams

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050185929A1 (en) * 2004-02-21 2005-08-25 Samsung Electronics Co., Ltd Information storage medium having recorded thereon text subtitle data synchronized with AV data, and reproducing method and apparatus therefor

Also Published As

Publication number Publication date
BRPI0507542A (en) 2007-07-03
WO2005074400A2 (en) 2005-08-18
JP2007522596A (en) 2007-08-09
EP1714281A2 (en) 2006-10-25
KR20070028326A (en) 2007-03-12
US20050207736A1 (en) 2005-09-22
WO2005074400A3 (en) 2005-10-06
US7643732B2 (en) 2010-01-05

Similar Documents

Publication Publication Date Title
US7587405B2 (en) Recording medium and method and apparatus for decoding text subtitle streams
US7643732B2 (en) Recording medium and method and apparatus for decoding text subtitle streams
US7561780B2 (en) Text subtitle decoder and method for decoding text subtitle streams
US7848617B2 (en) Recording medium, method, and apparatus for reproducing text subtitle streams
US7982802B2 (en) Text subtitle decoder and method for decoding text subtitle streams
US7756398B2 (en) Recording medium and method and apparatus for reproducing text subtitle stream for updating palette information
US20100061705A1 (en) Recording medium and method and apparatus for reproducing text subtitle stream recorded on the recording medium
US8554053B2 (en) Recording medium storing a text subtitle stream including a style segment and a plurality of presentation segments, method and apparatus for reproducing a text subtitle stream including a style segment and a plurality of presentation segments
RU2380768C2 (en) Record medium, method and device for text caption streams decoding

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE