WO2013009085A2 - Implementation method of user interface and device using same method - Google Patents

Implementation method of user interface and device using same method

Info

Publication number
WO2013009085A2
Authority
WO
WIPO (PCT)
Prior art keywords
interface
information
pattern information
predefined
scriptparamtype
Prior art date
Application number
PCT/KR2012/005484
Other languages
French (fr)
Korean (ko)
Other versions
WO2013009085A3 (en)
Inventor
임성용
차지훈
이인재
박상현
임영권
Original Assignee
한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority date
Filing date
Publication date
Priority claimed from KR1020120052257A (KR101979283B1)
Application filed by 한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority to US14/232,155 (US20140157155A1)
Publication of WO2013009085A2
Publication of WO2013009085A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/38Creation or generation of source code for implementing user interfaces

Definitions

  • the present invention relates to a method of implementing a user interface and an apparatus using the method.
  • User interaction devices have evolved in recent years. In addition to conventional devices for interacting with users, such as mice, keyboards, touch pads, touch screens, and voice recognition, new types of user interaction devices such as multi-touch pads and motion-sensing remote controllers have recently emerged.
  • An object of the present invention is to provide a method for implementing an Advanced User Interaction (AUI) interface.
  • Another object of the present invention is to provide an apparatus that performs a method for implementing an Advanced User Interaction (AUI) interface.
  • A method for implementing a user interface may include receiving Advanced User Interaction (AUI) pattern information and interpreting the received AUI pattern information based on a predefined interface.
  • the predefined interface may be an interface defining at least one type of pattern information among Point, Line, Rect, Arc, Circle, SymbolicPattern, TouchPattern, and HandPosture.
  • the predefined interface may be an interface in which at least one of the predefined pattern types is selectively used as pattern information of a widget manager.
  • the predefined interface may be an interface in which at least one of the predefined pattern types is selectively used in a W3C (World Wide Web Consortium) application.
  • A user interface implementation method for achieving another object of the present invention may include receiving input information from a scene description, interpreting the received input information based on a predefined interface, and inputting it to a data format converter.
  • the predefined interface may be an interface defining at least one type of pattern information among Point, Line, Rect, Arc, Circle, SymbolicPattern, TouchPattern, and HandPosture.
  • the predefined interface may be an interface in which at least one of the predefined pattern types is selectively used as pattern information of a widget manager.
  • the predefined interface may be an interface in which at least one of the predefined pattern types is selectively used in a W3C (World Wide Web Consortium) application.
  • A user interface device for achieving the above object of the present invention may include a data format conversion unit that generates Advanced User Interaction (AUI) pattern information and an interface unit that interprets the AUI pattern information generated by the data format conversion unit based on a predefined interface.
  • the interface unit may define at least one type of pattern information among Point, Line, Rect, Arc, Circle, SymbolicPattern, TouchPattern, and HandPosture.
  • the interface unit may be an interface unit in which at least one of the predefined pattern types is selectively used as pattern information of a widget manager.
  • the interface unit may be an interface unit in which at least one of the predefined pattern types is selectively used in a W3C (World Wide Web Consortium) application.
  • By defining a method of implementing an interface between a user interaction device and a scene description, a predetermined AUI (Advanced User Interaction) pattern generated in the user interaction device can be implemented in various applications.
  • FIG. 1 is a conceptual diagram illustrating a high-level view between MPEG-U (ISO/IEC 23007) and MPEG-V (ISO/IEC 23005) according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating a method of analyzing pattern information in MPEG-U Part 1 according to an embodiment of the present invention.
  • FIG. 3 is a conceptual diagram illustrating an operation between a widget manager and a physical device according to an embodiment of the present invention.
  • The terms first and second may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another.
  • For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component.
  • The components shown in the embodiments of the present invention are shown independently to represent different characteristic functions; this does not mean that each component consists of separate hardware or a single software unit.
  • Each component is listed separately for convenience of description; at least two of the components may be combined into one component, or one component may be divided into a plurality of components to perform its functions.
  • Integrated and separate embodiments of the components are also included within the scope of the present invention without departing from the spirit of the invention.
  • Some components may not be essential for performing the essential functions of the present invention but may be optional components merely for improving performance.
  • The present invention can be implemented with only the components essential for realizing its essence, excluding components used merely to improve performance, and a structure including only the essential components, excluding the optional performance-enhancing components, is also included in the scope of the present invention.
  • The ISO/IEC 23007 standard (MPEG-U), Information Technology - Rich Media User Interfaces, is being standardized in three parts: Part 1: Widgets, Part 2: Advanced User Interaction (AUI) interfaces, and Part 3: Conformance and reference software.
  • A widget is a tool that enables extended communication and can be defined as follows.
  • Widget: self-contained entity, with extensive communication capabilities, within a Rich Media User Interface; composed of a Manifest and associated Resources, including Scene Descriptions for the Full and Simplified Representations and Context Information.
  • An Advanced User Interaction (AUI) interface provides a medium for transmitting and receiving information for enhanced user interaction devices (e.g., a motion-sensing remote controller, a multi-touch device, etc.).
  • FIG. 1 is a conceptual diagram illustrating a high-level view between MPEG-U (ISO/IEC 23007) and MPEG-V (ISO/IEC 23005) according to an embodiment of the present invention.
  • Referring to FIG. 1, MPEG-V Part 5 (105) is used as a description tool for information input to or output from actual physical interaction devices (100).
  • That information may be transmitted, using this description tool, to the Data Format Converter (120) or to the Semantic Generator (UI Format Interpreter, 110) described below.
  • The Data Format Converter (120) may convert the physical interaction device data interpreted by MPEG-V Part 5 (105) so that it can be applied to a widget, i.e., MPEG-U Part 1 (125).
  • The Semantic Generator (UI Format Interpreter, 110) may generate syntax elements from the physical interaction device data interpreted by MPEG-V Part 5 (105) so that they can be applied to the AUI, i.e., MPEG-U Part 2 (115), or may perform UI (User Interface) format interpretation.
  • the MPEG-U Part 2 115 may serve as an interface between the Semantic Generator (UI Format Interpreter 110) and the data format converter 120 or the Scene Description 130.
  • Syntax information or interpreted UI information generated by the Semantic Generator may be interpreted by MPEG-U Part 2 (115) and input to the Scene Description (130) or the Data Format Converter (120).
  • MPEG-U Part 1 (125) defines a method of performing widget management (widget packaging, communication, and lifecycle management) based on the data converted by the data format conversion unit (120).
  • the scene description 130 may be defined as follows.
  • a scene description is a self-contained living entity composed of video, audio, 2D graphics objects, and animations.
  • a Scene Description can be a widget or a W3C application.
  • data instances created in MPEG-U part 2 115 may be transmitted by a widget communication method according to MPEG-U part 1 125.
  • Based on the input AUI pattern values, the Scene Description (130) may receive AUI pattern events without any information about the AUI device itself.
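  • To make the data flow above concrete, the following TypeScript sketch shows one way an interface unit in the sense of MPEG-U Part 1 might hand AUI pattern messages from the data format converter to a scene description; all type and method names here are illustrative assumptions, not part of the standard.
    interface AUIPatternMessage {
      name: string;                             // e.g. "PointType", "LineType" (Table 2)
      inputs: Record<string, number | string>;  // pattern fields carried by the message
    }

    type SceneDescriptionListener = (msg: AUIPatternMessage) => void;

    class AUIInterfaceUnit {
      private listeners: SceneDescriptionListener[] = [];

      // A scene description (widget or W3C application) registers for AUI pattern events.
      subscribe(listener: SceneDescriptionListener): void {
        this.listeners.push(listener);
      }

      // Called by the data format converter with an already-converted pattern message;
      // the scene description never needs to know which physical device produced it.
      deliver(msg: AUIPatternMessage): void {
        for (const listener of this.listeners) {
          listener(msg);
        }
      }
    }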
  • Table 2 below shows the input interface.
  • Table 2 defines various pattern information and information constituting the pattern information.
  • Point may be defined as the first pattern information. Point can represent a two-dimensional or three-dimensional geometric point in Euclidean space. As information included in a Point, capturedTimeStamp, userId, x, y, z, and the like may be used.
  • capturedTimeStamp represents the time at which the AUI pattern was recognized.
  • capturedTimeStamp may be expressed in milliseconds relative to 00:00 on January 1, 1970.
  • userId may indicate user information.
  • x, y, z may be input values used to indicate the location information of the point.
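  • As a minimal illustration of the Point fields just listed, they could be modeled as follows (treating userId and z as optional is an assumption for the sketch, not stated in the text):
    interface PointPattern {
      capturedTimeStamp: number; // time the AUI pattern was recognized, ms since 1970-01-01 00:00
      userId?: string;           // user information
      x: number;                 // point position (2D or 3D)
      y: number;
      z?: number;
    }

    // Example instance for a 2D pointing device.
    const tap: PointPattern = { capturedTimeStamp: Date.now(), x: 120, y: 48 };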
  • Line can be defined as another message interface.
  • Line is pattern information indicating a straight line connecting two points.
  • To represent the pattern information, the positions of both end points of the line and, optionally, the time at which the line was started together with velocity and acceleration information can be used.
  • As information for representing a Line, capturedTimeStamp, userId, firstPositionX, firstPositionY, firstPositionZ, secondPositionX, secondPositionY, secondPositionZ, startingTimeStamp, averageVelocity, and maxAcceleration may be used. capturedTimeStamp and userId have the same definitions as in Point.
  • firstPositionX, firstPositionY, and firstPositionZ can be the position of the first end point of the line.
  • secondPositionX, secondPositionY, and secondPositionZ can be the position of the second end point.
  • startingTimeStamp represents when the line was started.
  • averageVelocity represents the average velocity during line pattern generation.
  • maxAcceleration represents the maximum acceleration during line pattern generation.
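  • The sketch below shows how a data format converter might populate these Line fields from two captured points; the derivation of averageVelocity here is an assumption for illustration, since the text only names the fields.
    interface LinePattern {
      capturedTimeStamp: number;
      firstPositionX: number; firstPositionY: number; firstPositionZ: number;
      secondPositionX: number; secondPositionY: number; secondPositionZ: number;
      startingTimeStamp: number;
      averageVelocity: number;
      maxAcceleration: number;
    }

    function makeLine(
      p1: { x: number; y: number; z: number; t: number },
      p2: { x: number; y: number; z: number; t: number },
      maxAcceleration = 0
    ): LinePattern {
      const distance = Math.hypot(p2.x - p1.x, p2.y - p1.y, p2.z - p1.z);
      const elapsedMs = Math.max(p2.t - p1.t, 1); // avoid division by zero
      return {
        capturedTimeStamp: p2.t,
        firstPositionX: p1.x, firstPositionY: p1.y, firstPositionZ: p1.z,
        secondPositionX: p2.x, secondPositionY: p2.y, secondPositionZ: p2.z,
        startingTimeStamp: p1.t,
        averageVelocity: distance / elapsedMs, // units per millisecond in this sketch
        maxAcceleration,
      };
    }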
  • Rect may be defined as another pattern information. Rect is a rectangular pattern defined by the four corner positions of a rectangle and can be expressed based on either two opposite corner positions or all four corner positions. As information for representing a Rect, capturedTimeStamp, userId, TopLeftPosition, BottomRightPosition, TopRightPosition, BottomLeftPosition, firstTimeStamp, secondTimeStamp, thirdTimeStamp, and fourthTimeStamp may be used.
  • TopLeftPosition gives information about the position of the top left corner of the rectangle
  • BottomRightPosition gives information about the position of the lower right corner of the rectangle
  • TopRightPosition gives information about the position of the top right corner of the rectangle
  • BottomLeftPosition gives information about the position of the bottom left corner of the rectangle.
  • firstTimeStamp is the time at which the first corner was created when generating the rectangle pattern.
  • secondTimeStamp is the time at which the second corner was created.
  • thirdTimeStamp is the time at which the third corner was created.
  • fourthTimeStamp is the time at which the fourth corner was created.
  • Arc may be defined as another pattern information.
  • Arc represents an arc pattern, and may include position information of the start and end points of the arc, position information of the center of the circle, angular velocity, angular acceleration, and time information at which the arc pattern starts to be drawn.
  • Information for representing an Arc may include capturedTimeStamp, userId, firstPositionX, firstPositionY, firstPositionZ, secondPositionX, secondPositionY, secondPositionZ, centerPositionX, centerPositionY, centerPositionZ, startingTimeStamp, and averageAngularVelocity.
  • firstPositionX, firstPositionY, and firstPositionZ are information indicating the starting position of an arc and are expressed based on two-dimensional or three-dimensional position information.
  • secondPositionX, secondPositionY, and secondPositionZ are information indicating the position of the end point of the arc and are expressed based on two-dimensional or three-dimensional position information.
  • centerPositionX, centerPositionY, and centerPositionZ are information indicating the position of the center point of the arc, expressed based on two-dimensional or three-dimensional position information.
  • startingTimeStamp may be information indicating time information at which an arc pattern is generated, and averageAngularVelocity may indicate average angular velocity information during arc pattern formation.
  • Circle may be defined as another pattern information. Circle represents a circle pattern and may be expressed based on center position information, radius information, and average angular acceleration information of the circle.
  • Information for representing a circle may include centerPositionX, centerPositionY, centerPositionZ, startingTimeStamp, and averageAngularVelocity.
  • centerPositionX, centerPositionY, and centerPositionZ are information for indicating the center position of the circle and may indicate the center position of the circle in a two-dimensional or three-dimensional plane.
  • startingTimeStamp may indicate the time at which the circle pattern was generated, and averageAngularVelocity may represent the average angular velocity during formation of the circle pattern.
  • SymbolicPattern may be defined as another pattern information. SymbolicPattern can recognize the user's motion information as a new symbol based on the size and position of the motion information.
  • capturedTimeStamp, userId, PositionX, PositionY, PositionZ, size, and symbolType may be used as information for representing a SymbolicPattern.
  • PositionX, PositionY, and PositionZ are the position at which the SymbolicPattern is recognized, size may be used to indicate the size of the SymbolicPattern, and symbolType indicates what kind of symbol the pattern represents.
  • TouchPattern may be defined as another pattern information.
  • As pattern information for the user's touch, TouchPattern may recognize the user's motion information as a new touch pattern based on the input duration, the number of inputs, the input movement direction, and the rotation direction.
  • As information for representing a TouchPattern, capturedTimeStamp, userId, PositionX, PositionY, PositionZ, touchType, and value may be used.
  • PositionX, PositionY, and PositionZ may indicate the position at which the touch occurred.
  • touchType may be used as information indicating the type of touch.
  • value may be used to carry additional information required for the particular type of touch pattern.
  • HandPosture may be defined as another pattern information.
  • HandPosture describes the pose of the user's hand, and Posture represents the pose type of the hand.
  • As information for representing a HandPosture, capturedTimeStamp, userId, PositionX, PositionY, PositionZ, postureType, and chirality may be used.
  • PositionX, PositionY, and PositionZ may indicate the position at which the hand posture occurred, postureType may indicate the pose type of the user's hand, and chirality may indicate whether the hand is the left or right hand.
  • HandGesture may be defined as another pattern information. HandGesture represents information about the motion of the user's hand. capturedTimeStamp, userId, gestureType, and chirality may be used as information for representing a HandGesture.
  • PositionX, PositionY, and PositionZ may indicate the position at which the hand gesture occurred.
  • gestureType may indicate the motion type of the user's hand.
  • chirality may indicate whether the user's hand is the left hand or the right hand.
  • the pattern information described above in Table 2 may optionally be used. That is, at least one pattern information of the above pattern information may be selectively used as the pattern information of the widget manager.
  • the message interface of the AUI pattern described above in Table 2 may be optional, and only a part of the message interface of the AUI pattern may be used in actual implementation.
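  • Because the pattern interfaces are optional, a widget manager may implement only the subset it actually needs; the hedged sketch below handles only PointType and TouchPatternType messages and ignores the rest (the message envelope and handler name are assumptions for illustration).
    type AUIMessage =
      | { name: "PointType"; capturedTimeStamp: number; x: number; y: number; z?: number }
      | { name: "TouchPatternType"; PositionX: number; PositionY: number; touchType: string; value?: number }
      | { name: string; [field: string]: unknown }; // patterns this widget manager does not use

    function onWidgetMessage(msg: AUIMessage): void {
      switch (msg.name) {
        case "PointType":
          // e.g. move the widget focus to the reported point
          break;
        case "TouchPatternType":
          // e.g. trigger the widget action bound to this touch type
          break;
        default:
          // Pattern not selected by this widget manager; ignore it.
          break;
      }
    }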
  • the AUI pattern may be utilized in a W3C application.
  • An HTML page may also be operated based on a user's motion expressed as a geometric pattern, a touch pattern, or a symbolic pattern.
  • an interface definition language (IDL) event definition for communicating with a W3C widget is disclosed.
  • An embodiment of the present invention discloses event types and syntax elements for implementing an interface for a W3C (World Wide Web Consortium) application as follows.
  • Point can be defined as the first event type. Point can use capturedTimeStamp and Position as contextual information.
  • capturedTimeStamp is information indicating the time at which the user's interaction was captured (capturedTimeStamp has the same meaning hereinafter, and thus description is omitted in other event types), and Position indicates 2D or 3D position information on screen coordinates.
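  • A hedged sketch of how the Point event type could be delivered to a W3C page as a DOM CustomEvent follows; the event name "auiPoint" and the detail layout are assumptions, since only the context information (capturedTimeStamp and Position) is specified above.
    interface AUIPointDetail {
      capturedTimeStamp: number;                      // time the interaction was captured
      position: { x: number; y: number; z?: number }; // 2D or 3D screen coordinates
    }

    function dispatchAuiPoint(target: EventTarget, detail: AUIPointDetail): void {
      target.dispatchEvent(new CustomEvent<AUIPointDetail>("auiPoint", { detail }));
    }

    // Usage: dispatchAuiPoint(document, { capturedTimeStamp: Date.now(), position: { x: 10, y: 20 } });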
  • Line can be defined as another event type.
  • capturedTimeStamp, position, timeStamps, averageVelocity, and maxAcceleration can be used as syntax elements.
  • position is 2D or 3D coordinate information of two points forming a line
  • timeStamps is starting time information on which a line is drawn
  • averageVelocity is the average speed information while the line is drawn.
  • maxAcceleration is the maximum acceleration information while the line is drawn.
  • Rect can be defined as another event type.
  • As contextual information for defining a Rect, capturedTimeStamp, position, and timeStamps can be used as syntax elements.
  • position is 2D or 3D coordinate value information of a corner point forming a rectangle
  • timeStamps means time information on which each corner is drawn.
  • Arc can be defined as another event type. You can use capturedTimeStamp, position, timeStamps, averageVelocity, maxAccelation as contextual information to define Arc.
  • position is the 2D or 3D coordinate information indicating the position where the arc was drawn and ended and the center of the virtual circle
  • timeStamps is the start time of the user's interaction
  • averageVelocity is the average angular velocity while the arc is drawn.
  • maxAcceleration is the maximum angular acceleration information while the arc is drawn.
  • Circle may be defined as another event type.
  • As contextual information for defining a Circle, capturedTimeStamp, position, timeStamps, averageVelocity, maxAcceleration, and fValue can be used.
  • position is the coordinate information of the center of the circle.
  • timeStamps is the time at which drawing of the circle started.
  • averageVelocity is the average angular velocity while the circle is drawn.
  • maxAcceleration is the maximum angular acceleration while the circle is drawn.
  • fValue is the radius information of the circle.
  • TouchPattern may be defined as another event type. As context information for defining a TouchPattern, capturedTimeStamp, position, sType, and fValue may be used. position may be the position at which the touch occurred, sType may be a label indicating the type of touch, and fValue may be value information required by the touch pattern.
  • SymbolicPattern may be defined as another event type.
  • As context information for defining a SymbolicPattern, capturedTimeStamp, position, sType, and fValue may be used.
  • position may be position information where a symbol is generated
  • sType may be label information specifying a type of a symbol
  • fValue may be size information of a symbol.
  • HandPosture may be defined as another event type.
  • As context information for defining a HandPosture, capturedTimeStamp, position, sType, and fValue may be used.
  • the position may be information on the location of the hand posture
  • the sType may be label information indicating the type of the hand posture
  • the fValue may be size information of the hand posture.
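  • On the receiving side, a W3C application could listen for such events and read the shared context information (position, sType, fValue); the event names and detail shape below are illustrative assumptions rather than the standard's definitions.
    interface AUIPatternDetail {
      capturedTimeStamp: number;
      position: number[];   // 2D or 3D coordinates
      sType?: string;       // label: touch type, symbol type, or posture type
      fValue?: number;      // extra value such as a radius or size
    }

    document.addEventListener("auiHandPosture", (e: Event) => {
      const detail = (e as CustomEvent<AUIPatternDetail>).detail;
      console.log(`hand posture ${detail.sType ?? "unknown"} at`, detail.position);
    });

    document.addEventListener("auiSymbolicPattern", (e: Event) => {
      const detail = (e as CustomEvent<AUIPatternDetail>).detail;
      console.log(`symbol ${detail.sType ?? "?"} of size ${detail.fValue ?? 0}`);
    });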
  • FIG. 2 is a flowchart illustrating a method of analyzing pattern information in MPEG-U Part 1 according to an embodiment of the present invention.
  • An embodiment of the present invention describes an interface implementation for generating scene description information based on data input from the data format converter; conversely, an implementation of an interface that receives scene description information and transfers it to the data format converter also applies and is included in the scope of the present invention.
  • The converted data is received from the data format converter (step S200).
  • operation information of the physical interaction apparatus interpreted through the MPEG-U Part 2 and the MPEG-V Part 5 may be input to the data format converter.
  • The data format converter may convert the transmitted operation information of the physical interaction apparatus into an information format that can be input to MPEG-U Part 1.
  • the predetermined interface may be MPEG-U Part 1.
  • Information interpreted through MPEG-U Part 1 can be used in widget managers or W3C applications.
  • the interface for the widget manager can be defined. Pattern information as shown in Table 2 and input information for expressing each pattern information may be generated. That is, some pattern information of Table 2 may be used as an interface for representing motion information in the widget manager.
  • the IDL interface may be defined to transmit operation information to the W3C application based on the event information defined in Table 3.
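  • Table 3 (the IDL event definitions) is not reproduced in this text, so the TypeScript declaration below is only an approximation of what such an event delivered to a W3C application might expose; the attribute names mirror the context information discussed above and are otherwise assumptions.
    interface AUIEvent extends Event {
      readonly capturedTimeStamp: number;
      readonly position: ReadonlyArray<number>;
      readonly sType: string | null;
      readonly fValue: number | null;
    }

    function registerAuiHandler(type: string, handler: (e: AUIEvent) => void): void {
      document.addEventListener(type, (e) => handler(e as AUIEvent));
    }

    // Usage: registerAuiHandler("auiCircle", (e) => console.log("radius", e.fValue));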
  • FIG. 3 is a conceptual diagram illustrating an operation between a widget manager and a physical device according to an embodiment of the present invention.
  • Although the physical signal appears to be input directly to the interface between the widget manager and the physical device, the information actually input through the interface may additionally be a signal generated by the physical device and produced through the MPEG-V Part 5 interface and the data format conversion unit.
  • operation information (eg, information generated by the data format conversion unit 300) generated in the user interaction apparatus is input to the MPEG-U Part 1 interface 320.
  • the input operation information is transmitted to the widget manager 340 to implement the operation of the widget.
  • the widget manager 340 controls the operation of the widget based on the operation information interpreted through the MPEG-U Part 1 interface.
  • the MPEG-U Part 1 interface may receive information generated from the widget and transmit the received information to the data format conversion unit 300. That is, the reverse operation of the above-described operation may be performed to transmit information generated from the widget to the data format conversion unit 300.
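  • The reverse path just described could look like the following sketch, in which the MPEG-U Part 1 interface forwards information generated by the widget back to the data format conversion unit; the class and method names are illustrative assumptions.
    interface DataFormatConversionUnit {
      fromWidget(message: { name: string; outputs: Record<string, unknown> }): void;
    }

    class MpegUPart1Interface {
      constructor(private converter: DataFormatConversionUnit) {}

      // Called when the widget (scene description) emits an output message.
      onWidgetOutput(name: string, outputs: Record<string, unknown>): void {
        this.converter.fromWidget({ name, outputs });
      }
    }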
  • the same operation may be performed in the W3C application 340.
  • operation information generated in the user interaction apparatus is input to the MPEG-U Part 1 interface 320.
  • the input operation information is transmitted to the W3C application 340 to implement the operation of the widget.

Abstract

Disclosed are a method of implementing a user interface and a device using the same method. The method comprises a step of receiving AUI (Advanced User Interaction) pattern information and a step of interpreting the received AUI pattern information based on a predefined interface. Therefore, by defining a method of implementing an interface between a user interaction device and a scene description, a preset AUI pattern generated in the user interaction device can be implemented in various applications.

Description

Method of implementing a user interface and device using the same method
The present invention relates to a method of implementing a user interface and an apparatus using the method.
User interaction devices have evolved in recent years. In addition to conventional devices for interacting with users, such as mice, keyboards, touch pads, touch screens, and voice recognition, new types of user interaction devices such as multi-touch pads and motion-sensing remote controllers have recently emerged.
Although multimedia technologies have been studied to provide application technology for advanced user interaction devices, current user interaction standards focus mostly on basic interaction devices, such as pointing or keying, that are used in existing electronic products.
No user interaction standard has existed for the evolved new types of user interaction devices such as multi-touch pads and motion-sensing remote controllers. In addition, no user interaction standard has existed that can be applied both to basic interaction devices such as pointing or keying and to these evolved new types of user interaction devices.
An object of the present invention is to provide a method for implementing an Advanced User Interaction (AUI) interface.
Another object of the present invention is to provide an apparatus that performs a method for implementing an Advanced User Interaction (AUI) interface.
According to an aspect of the present invention for achieving the above object, a method for implementing a user interface may include receiving Advanced User Interaction (AUI) pattern information and interpreting the received AUI pattern information based on a predefined interface. The predefined interface may be an interface defining at least one type of pattern information among Point, Line, Rect, Arc, Circle, SymbolicPattern, TouchPattern, and HandPosture. The predefined interface may be an interface in which at least one of the predefined pattern types is selectively used as pattern information of a widget manager. The predefined interface may be an interface in which at least one of the predefined pattern types is selectively used in a W3C (World Wide Web Consortium) application.
According to another aspect of the present invention for achieving the above object, a user interface implementation method may include receiving input information from a scene description, interpreting the received input information based on a predefined interface, and inputting it to a data format converter. The predefined interface may be an interface defining at least one type of pattern information among Point, Line, Rect, Arc, Circle, SymbolicPattern, TouchPattern, and HandPosture. The predefined interface may be an interface in which at least one of the predefined pattern types is selectively used as pattern information of a widget manager. The predefined interface may be an interface in which at least one of the predefined pattern types is selectively used in a W3C (World Wide Web Consortium) application.
According to another aspect of the present invention for achieving the above object, a user interface device may include a data format conversion unit that generates Advanced User Interaction (AUI) pattern information and an interface unit that interprets the AUI pattern information generated by the data format conversion unit based on a predefined interface. The interface unit may define at least one type of pattern information among Point, Line, Rect, Arc, Circle, SymbolicPattern, TouchPattern, and HandPosture. The interface unit may be an interface unit in which at least one of the predefined pattern types is selectively used as pattern information of a widget manager. The interface unit may be an interface unit in which at least one of the predefined pattern types is selectively used in a W3C (World Wide Web Consortium) application.
As described above, according to the method for implementing a user interface according to an embodiment of the present invention and an apparatus using the method, by defining a method of implementing an interface between a user interaction device and a scene description, a predetermined AUI (Advanced User Interaction) pattern generated in the user interaction device can be implemented in various applications.
FIG. 1 is a conceptual diagram illustrating a high-level view between MPEG-U (ISO/IEC 23007) and MPEG-V (ISO/IEC 23005) according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a method of interpreting pattern information in MPEG-U Part 1 according to an embodiment of the present invention.
FIG. 3 is a conceptual diagram illustrating the operation between a widget manager and a physical device according to an embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In describing the embodiments of the present specification, when it is determined that a detailed description of a related well-known configuration or function may obscure the gist of the present specification, that detailed description will be omitted.
When a component is said to be "connected" or "coupled" to another component, it may be directly connected or coupled to that other component, but it should be understood that another component may exist in between. In addition, the statement that the present invention "includes" a specific configuration does not exclude other configurations; it means that additional configurations may be included in the scope of the present invention or its technical spirit.
The terms first and second may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component.
The components shown in the embodiments of the present invention are shown independently to represent different characteristic functions; this does not mean that each component consists of separate hardware or a single software unit. Each component is listed separately for convenience of description; at least two of the components may be combined into one component, or one component may be divided into a plurality of components to perform its functions. Integrated and separate embodiments of the components are also included within the scope of the present invention without departing from the spirit of the invention.
Some components may not be essential for performing the essential functions of the present invention but may be optional components merely for improving performance. The present invention can be implemented with only the components essential for realizing its essence, excluding components used merely to improve performance, and a structure including only the essential components, excluding the optional performance-enhancing components, is also included in the scope of the present invention.

The ISO/IEC 23007 standard (MPEG-U), Information Technology - Rich Media User Interfaces, is being standardized in three parts: Part 1: Widgets, Part 2: Advanced User Interaction (AUI) interfaces, and Part 3: Conformance and reference software.
A widget is a tool that enables extended communication and can be defined as follows.
Widget: self-contained entity, with extensive communication capabilities, within a Rich Media User Interface; composed of a Manifest and associated Resources, including Scene Descriptions for the Full and Simplified Representations and Context Information.
An Advanced User Interaction (AUI) interface provides a medium for transmitting and receiving information for enhanced user interaction devices (e.g., a motion-sensing remote controller, a multi-touch device, etc.).

FIG. 1 is a conceptual diagram illustrating a high-level view between MPEG-U (ISO/IEC 23007) and MPEG-V (ISO/IEC 23005) according to an embodiment of the present invention.
Referring to FIG. 1, MPEG-V Part 5 (105) is used as a description tool for information input to or output from actual physical interaction devices (100), and for transmitting that information to the Data Format Converter (120) or to the Semantic Generator (UI Format Interpreter, 110) described below.
The Data Format Converter (120) may convert the physical interaction device data interpreted by MPEG-V Part 5 (105) so that it can be applied to a widget, i.e., MPEG-U Part 1 (125).
The Semantic Generator (UI Format Interpreter, 110) may generate syntax elements from the physical interaction device data interpreted by MPEG-V Part 5 (105) so that they can be applied to the AUI, i.e., MPEG-U Part 2 (115), or may perform UI (User Interface) format interpretation.
MPEG-U Part 2 (115) may serve as an interface between the Semantic Generator (UI Format Interpreter, 110) and the Data Format Converter (120) or the Scene Description (130).
Syntax information or interpreted UI information generated by the Semantic Generator (UI Format Interpreter, 110) may be interpreted by MPEG-U Part 2 (115) and input to the Scene Description (130) or the Data Format Converter (120).
MPEG-U Part 1 (125) defines a method of performing widget management (widget packaging, communication, and lifecycle management) based on the data converted by the data format conversion unit (120).
The Scene Description (130) may be defined as follows.
A scene description is a self-contained living entity composed of video, audio, 2D graphics objects, and animations. For example, a Scene Description can be a widget or a W3C application.
Referring to FIG. 1, data instances created in MPEG-U Part 2 (115) may be transmitted by the widget communication method according to MPEG-U Part 1 (125). According to an embodiment of the present invention, based on the input AUI pattern values, the Scene Description (130) may receive AUI pattern events without any information about the AUI device itself.
A. URN of a predefined message interface
The following table shows the URN (Uniform Resource Name) of the predefined message interface.
Table 1
<mw:interface type="urn:mpeg:mpegu:schema:widgets:aui:2011">
<!--
Detailed description of interfaces to be transported
-->
</mw:interface>

Referring to Table 1, the URN "urn:mpeg:mpegu:schema:widgets:aui:2011" is newly defined and used as the URN of the AUI pattern message interface for the widget manager.
Table 2 below shows the input interface.
Table 2
<messageIn name="PointType">
<input name="capturedTimeStamp" scriptParamType="number"/>
<input name="x" scriptParamType="number"/>
<input name="y" scriptParamType="number"/>
<input name="z" scriptParamType="number"/>
</messageIn>
<messageIn name="LineType">
<input name="capturedTimeStamp" scriptParamType="number"/>
<input name="firstPositionX" scriptParamType="number"/>
<input name="firstPositionY" scriptParamType="number"/>
<input name="firstPositionZ" scriptParamType="number"/>
<input name="secondPositionX" scriptParamType="number"/>
<input name="secondPositionY" scriptParamType="number"/>
<input name="secondPositionZ" scriptParamType="number"/>
<input name="startingTimeStamp" scriptParamType="number"/>
<input name="averageVelocity" scriptParamType="number"/>
<input name="maxAcceleration" scriptParamType="number"/>
</messageIn>
<messageIn name="RectType">
<input name="capturedTimeStamp" scriptParamType="number"/>
<input name="topLeftPositionX" scriptParamType="number"/>
<input name="topLeftPositionY" scriptParamType="number"/>
<input name="topLeftPositionZ" scriptParamType="number"/>
<input name="bottomRightPositionX" scriptParamType="number"/>
<input name="bottomRightPositionY" scriptParamType="number"/>
<input name="bottomRightPositionZ" scriptParamType="number"/>
<input name="topRightPositionX" scriptParamType="number"/>
<input name="topRightPositionY" scriptParamType="number"/>
<input name="topRightPositionZ" scriptParamType="number"/>
<input name="bottomLeftPositionX" scriptParamType="number"/>
<input name="bottomLeftPositionY" scriptParamType="number"/>
<input name="bottomLeftPositionZ" scriptParamType="number"/>
<input name="firstTimeStamp" scriptParamType="number"/>
<input name="secondTimeStamp" scriptParamType="number"/>
<input name="thirdTimeStamp" scriptParamType="number"/>
<input name="fourthTimeStamp" scriptParamType="number"/>
</messageIn>
<messageIn name="ArcType">
<input name="capturedTimeStamp" scriptParamType="number"/>
<input name="firstPositionX" scriptParamType="number"/>
<input name="firstPositionY" scriptParamType="number"/>
<input name="firstPositionZ" scriptParamType="number"/>
<input name="secondPositionX" scriptParamType="number"/>
<input name="secondPositionY" scriptParamType="number"/>
<input name="secondPositionZ" scriptParamType="number"/>
<input name="centerPositionX" scriptParamType="number"/>
<input name="centerPositionY" scriptParamType="number"/>
<input name="centerPositionZ" scriptParamType="number"/>
<input name="startingTimeStamp" scriptParamType="number"/>
<input name="averageAngularVelocity" scriptParamType="number"/>
<input name="maxAngularAcceleration" scriptParamType="number"/>
</messageIn>
<messageIn name="CircleType">
<input name="capturedTimeStamp" scriptParamType="number"/>
<input name="centerPositionX" scriptParamType="number"/>
<input name="centerPositionY" scriptParamType="number"/>
<input name="centerPositionZ" scriptParamType="number"/>
<input name="radius" scriptParamType="number"/>
<input name="startingTimeStamp" scriptParamType="number"/>
<input name="averageAngularVelocity" scriptParamType="number"/>
<input name="maxAngularAcceleration" scriptParamType="number"/>
</messageIn>
<messageIn name="SymbolicPatternType">
<input name="capturedTimeStamp" scriptParamType="number"/>
<input name="PositionX" scriptParamType="number"/>
<input name="PositionY" scriptParamType="number"/>
<input name="PositionZ" scriptParamType="number"/>
<input name="size" scriptParamType="number"/>
<input name="symbolType" scriptParamType="string"/>
</messageIn>
<messageIn name="TouchPatternType">
<input name="capturedTimeStamp" scriptParamType="number"/>
<input name="PositionX" scriptParamType="number"/>
<input name="PositionY" scriptParamType="number"/>
<input name="PositionZ" scriptParamType="number"/>
<input name="touchType" scriptParamType="string"/>
<input name="value" scriptParamType="number"/>
</messageIn>
<messageIn name="HandPostureType">
<input name="capturedTimeStamp" scriptParamType="number"/>
<input name="PositionX" scriptParamType="number"/>
<input name="PositionY" scriptParamType="number"/>
<input name="PositionZ" scriptParamType="number"/>
<input name="postureType" scriptParamType="string"/>
<input name="handSide" scriptParamType="string"/>
</messageIn>

표 2에서는 다양한 패턴 정보와 패턴 정보를 구성하는 정보들에 대하여 정의한다. Table 2 defines various pattern information and information constituting the pattern information.
(1) 첫번째 패턴 정보로 Point가 정의될 수 있다. Point는 유클라디안 공간에서 이차원 또는 삼차원의 기하학적인 점을 나타낼 수 있다. Point에 포함되는 정보로서 capturedTimeStamp, userId, x, y, z 등이 사용될 수 있다.(1) Point may be defined as first pattern information. Point can represent a two-dimensional or three-dimensional geometric point in eucladian space. As information included in the Point, capturedTimeStamp, userId, x, y, z, and the like may be used.
capturedTimeStamp는 AUI 패턴이 인식되는 시간 정보를 나타내는 것으로 일반적으로 1970년 1월 1일 0시 0분을 기준으로 밀리 세컨드 단위로 표시할 수 있다.The capturedTimeStamp represents time information at which the AUI pattern is recognized. In general, capturedTimeStamp may be expressed in millisecond units based on January 1, 1970, at 0: 0.
userId는 사용자 정보를 표시할 수 있다.userId may indicate user information.
x, y, z는 point의 위치 정보를 나타내기 위해 사용되는 입력값이 될 수 있다.x, y, z may be input values used to indicate the location information of the point.
(2) 또 다른 메시지 인터페이스로 Line이 정의될 수 있다. Line은 두 점을 잇는 직선 패턴을 나타내는 패턴 정보이다. 패턴 정보를 나타내기 위해 직선의 양끝 점에 대한 위치 정보와 선택적으로 직선이 시작된 시간 정보, 속도 및 가속도 정보를 사용할 수 있다. (2) Line can be defined as another message interface. Line is pattern information indicating a straight line connecting two points. In order to indicate the pattern information, the positional information on both ends of the straight line and optionally the time information, velocity and acceleration information at which the straight line started can be used.
Line을 표현하기 위한 정보로서 capturedTimeStamp, userId, firstPositionX, firstPositionY, firstPositionZ, secondPositonX, secondPositonY, secondPositonZ, startingTimestamp, averageVelocity, maxAcceleration이 사용될 수 있다. As information for representing a line, capturedTimeStamp, userId, firstPositionX, firstPositionY, firstPositionZ, secondPositonX, secondPositonY, secondPositonZ, startingTimestamp, averageVelocity, and maxAcceleration may be used.
capturedTimeStamp, userId는 전술한 Point에서와 정의가 동일하다.capturedTimeStamp and userId have the same definitions as in the aforementioned Point.
firstPositionX, firstPositionY, firstPositionZ는 선의 첫번째 끝 점의 위치가 될 수 있고 secondPositonX, secondPositonY, secondPositonZ는 두번째 끝 점의 위치가 될 수 있다.firstPositionX, firstPositionY, firstPositionZ can be the position of the first end point of the line, and secondPositonX, secondPositonY, secondPositonZ can be the position of the second end point.
startingTimestamp는 선이 언제 시작되었는지에 대한 정보를 나타내고 averageVelocity는 라인 패턴을 생성하는 동안의 평균 속도 정보, maxAcceleration은 라인 패턴을 생성하는 동안의 최고 가속도 정보를 나타낸다.startingTimestamp represents information on when the line started, averageVelocity represents average velocity information during line pattern generation, and maxAcceleration represents highest acceleration information during line pattern generation.
(3) Rect may be defined as another piece of pattern information. Rect is a rectangle pattern composed of the four corner positions of a rectangle and may be expressed based on either two opposite corner positions or all four corner positions.
As information for representing Rect, capturedTimeStamp, userId, TopLeftPosition, BottomRightPosition, TopRightPosition, BottomLeftPosition, firstTimeStamp, secondTimeStamp, thirdTimeStamp, and fourthTimeStamp may be used.
TopLeftPosition indicates the position of the top-left corner of the rectangle, BottomRightPosition the position of the bottom-right corner, TopRightPosition the position of the top-right corner, and BottomLeftPosition the position of the bottom-left corner.
firstTimeStamp indicates the time at which the first corner was generated when the rectangle pattern was created, secondTimeStamp the time at which the second corner was generated, thirdTimeStamp the time at which the third corner was generated, and fourthTimeStamp the time at which the fourth corner was generated.
(4) Arc may be defined as another piece of pattern information. Arc represents an arc pattern and may include the positions of the start and end points of the arc, the position of the center of the circle, angular velocity, angular acceleration, and the time at which the arc pattern started to be drawn.
Information for representing Arc may include capturedTimeStamp, userId, firstPositionX, firstPositionY, firstPositionZ, secondPositionX, secondPositionY, secondPositionZ, centerPositionX, centerPositionY, centerPositionZ, startingTimeStamp, and averageAngularVelocity.
firstPositionX, firstPositionY, and firstPositionZ indicate the start position of the arc and are expressed based on two-dimensional or three-dimensional position information.
secondPositionX, secondPositionY, and secondPositionZ indicate the position of the end point of the arc and are expressed based on two-dimensional or three-dimensional position information.
centerPositionX, centerPositionY, and centerPositionZ indicate the position of the center point of the arc and are expressed based on two-dimensional or three-dimensional position information.
startingTimeStamp indicates the time at which the arc pattern started to be generated, and averageAngularVelocity may indicate the average angular velocity while the arc pattern is formed.
(5) Circle may be defined as another piece of pattern information. Circle represents a circle pattern and may be expressed based on the center position of the circle, radius information, and average angular velocity information.
Information for representing Circle may include centerPositionX, centerPositionY, centerPositionZ, startingTimeStamp, and averageAngularVelocity.
centerPositionX, centerPositionY, and centerPositionZ indicate the center position of the circle and may represent the center position of the circle on a two-dimensional or three-dimensional plane.
startingTimeStamp indicates the time at which the circle pattern started to be generated, and averageAngularVelocity may indicate the average angular velocity while the circle pattern is formed.
(6) SymbolicPattern may be defined as another piece of pattern information. SymbolicPattern may recognize the user's motion information as a new symbol based on the size and position of the motion information.
As information for representing SymbolicPattern, capturedTimeStamp, userId, PositionX, PositionY, PositionZ, size, and symbolType may be used.
PositionX, PositionY, and PositionZ are the position information at which the SymbolicPattern was recognized, size may be used as the size information of the SymbolicPattern, and symbolType may be used as information indicating which kind of symbolic pattern it is.
(7) TouchPattern may be defined as another piece of pattern information. TouchPattern is pattern information of a user's touch and may recognize the user's motion information as a new touch pattern based on the input duration, the number of inputs, the input movement direction, and the rotation direction of the motion information.
As information for representing TouchPattern, capturedTimeStamp, userId, PositionX, PositionY, PositionZ, touchType, and value may be used.
PositionX, PositionY, and PositionZ may indicate the position at which the touch occurred, touchType provides information on the type of touch, and value may be used to carry additional information required depending on the kind of touch pattern.
(8) HandPosture may be defined as another piece of pattern information. HandPosture describes the pose of the user's hand, and Posture is an element indicating the pose type of the hand.
As information for representing HandPosture, capturedTimeStamp, userId, PositionX, PositionY, PositionZ, postureType, and chirality may be used.
PositionX, PositionY, and PositionZ may indicate the position at which the hand posture occurred, postureType indicates the pose type of the user's hand, and chirality may indicate whether the user's hand is the left or the right hand.
HandGesture may be defined as another piece of pattern information. HandGesture represents information about the motion of the user's hand. As information for representing HandGesture, capturedTimeStamp, userId, gestureType, and chirality may be used.
PositionX, PositionY, and PositionZ may indicate the position at which the hand gesture occurred, gestureType indicates the gesture type of the user's hand, and chirality may indicate whether the user's hand is the left or the right hand.
The pattern information described above in Table 2 may be used selectively. That is, at least one piece of the above pattern information may be selectively used as the pattern information of the widget manager.
In other words, the message interfaces of the AUI patterns described above in Table 2 may be optional, and an actual implementation may use only some of the message interfaces of the AUI patterns, as in the non-normative sketch below.
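As a non-normative illustration only, the message interfaces of Table 2 could be modeled as plain data structures. The following minimal TypeScript sketch renders a few of the patterns described above using the field names listed in Table 2; the grouping into a discriminated union (the kind field) and the choice of which fields are optional are assumptions made for illustration, not part of the specification.

// Minimal, non-normative sketch of the Table 2 AUI pattern messages.
// The "kind" discriminator and the union grouping are assumptions.
interface AUIPatternBase {
  capturedTimeStamp: number; // milliseconds since 1970-01-01T00:00
  userId?: string;
}

interface PointPattern extends AUIPatternBase {
  kind: "Point";
  x: number;
  y: number;
  z?: number; // z is only meaningful for 3D input devices
}

interface LinePattern extends AUIPatternBase {
  kind: "Line";
  firstPositionX: number; firstPositionY: number; firstPositionZ?: number;
  secondPositionX: number; secondPositionY: number; secondPositionZ?: number;
  startingTimeStamp?: number;   // when the line was started
  averageVelocity?: number;     // average velocity while drawing the line
  maxAcceleration?: number;     // maximum acceleration while drawing the line
}

interface CirclePattern extends AUIPatternBase {
  kind: "Circle";
  centerPositionX: number; centerPositionY: number; centerPositionZ?: number;
  startingTimeStamp?: number;
  averageAngularVelocity?: number;
}

// Rect, Arc, SymbolicPattern, TouchPattern, HandPosture and HandGesture would
// follow the same scheme using the fields listed in the description above.
type AUIPattern = PointPattern | LinePattern | CirclePattern;

// Example message as it might be handed to a widget manager:
const sample: AUIPattern = {
  kind: "Line",
  capturedTimeStamp: Date.now(),
  firstPositionX: 10, firstPositionY: 20,
  secondPositionX: 110, secondPositionY: 20,
  averageVelocity: 0.4,
};

Because every field other than the pattern type itself may be optional, an implementation that only needs, for example, Point and TouchPattern can ignore the remaining message interfaces entirely.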
According to another embodiment of the present invention, the AUI patterns may also be utilized in W3C applications. For example, an HTML page may likewise implement the user's motion based on geometric patterns, touch patterns, symbolic patterns, and the like.
An embodiment of the present invention discloses IDL (Interface Definition Language) event definitions for communicating with W3C widgets.
B. IDL (Interface Definition Language) of AUI Patterns
Hereinafter, an embodiment of the present invention discloses the event types, syntax elements, and definitions of the syntax elements for implementing an interface for W3C (World Wide Web Consortium) applications as follows.

<Table 3>
interface MPEGAUIEvent : UIEvent {
    typedef float fVectorType[3];
    typedef sequence<fVectorType> fVectorListType;
    typedef sequence<float> floatListType;

    readonly attribute unsigned long long capturedTimeStamp;
    readonly attribute string userId;
    readonly attribute float averageVelocity;
    readonly attribute float maxAcceleration;

    readonly attribute string sType;
    readonly attribute string chirality;
    readonly attribute float fValue;

    readonly attribute fVectorListType positions;
    readonly attribute floatListType timeStamps;
};
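For orientation only, the following hedged sketch shows how a script in a W3C page might read the attributes of an MPEGAUIEvent once such an event has been delivered. The use of addEventListener with the event type name "Line", and the untyped event object, are assumptions made for illustration; the normative definitions of the event types follow in Table 4.

// Hedged sketch: reading MPEGAUIEvent attributes in a W3C page.
// Assumes events are delivered under their type names (here "Line"); not normative.
document.addEventListener("Line", (evt: any) => {
  // positions: list of [x, y, z] vectors (fVectorListType in the IDL above)
  const [start, end] = evt.positions as [number, number, number][];
  // timeStamps: list of floats, here the time at which the line was started
  const started = evt.timeStamps?.[0];
  console.log(`line from ${start} to ${end}, started at ${started}`,
              `avg velocity ${evt.averageVelocity}, max accel ${evt.maxAcceleration}`);
});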
<Table 4>
(Table 4 is reproduced in the original publication as appended images Figure PCTKR2012005484-appb-I000001 through Figure PCTKR2012005484-appb-I000009; it tabulates the context information of each event type, which is described in the text below.)


Referring to Table 4,
(1) Point may be defined as the first event type. Point may use capturedTimeStamp and Position as its context information. capturedTimeStamp is information indicating the time at which the user's interaction was captured (capturedTimeStamp has the same meaning hereinafter, so its description is omitted for the other event types), and Position indicates 2D or 3D position information in screen coordinates.
(2) Line may be defined as another event type. As context information for defining Line, capturedTimeStamp, position, timeStamps, averageVelocity, and maxAcceleration may be used as syntax elements. position is the 2D or 3D coordinate information of the two points forming the line, timeStamps is the time information at which the line started to be drawn, averageVelocity is the average velocity while the line is drawn, and maxAcceleration is the maximum acceleration while the line is drawn.
(3) Rect may be defined as another event type. As context information for defining Rect, capturedTimeStamp, position, and timeStamps may be used as syntax elements. position is the 2D or 3D coordinate information of the corner points forming the rectangle, and timeStamps is the time information at which each corner was drawn.
(4) Arc may be defined as another event type. As context information for defining Arc, capturedTimeStamp, position, timeStamps, averageVelocity, and maxAcceleration may be used. position is the 2D or 3D coordinate information indicating the positions where the arc started and finished being drawn together with the center position of the virtual circle, timeStamps is the time information at which the user's interaction started, averageVelocity is the average angular velocity while the arc is drawn, and maxAcceleration is the maximum angular acceleration while the arc is drawn.
(5) Circle may be defined as another event type. As context information for defining Circle, capturedTimeStamp, position, timeStamps, averageVelocity, maxAcceleration, and fValue may be used. position is the coordinate of the center point of the circle, timeStamps is the time information at which the circle started to be drawn, averageVelocity is the average angular velocity while the circle is drawn, maxAcceleration is the maximum angular acceleration while the circle is drawn, and fValue is the radius of the circle.
(6) TouchPattern may be defined as another event type. As context information for defining TouchPattern, capturedTimeStamp, position, sType, and fValue may be used. position is the position information where the touch occurred, sType is a label specifying the type of touch, and fValue is value information required by the touch pattern.
(7) SymbolicPattern may be defined as another event type. As context information for defining SymbolicPattern, capturedTimeStamp, position, sType, and fValue may be used. position is the position information where the symbol occurred, sType is label information specifying the type of the symbol, and fValue is the size information of the symbol.
(8) HandPosture may be defined as another event type. As context information for defining HandPosture, capturedTimeStamp, position, sType, and fValue may be used. position is the position information where the hand posture occurred, sType is label information specifying the type of the hand posture, and fValue is the size information of the hand posture.
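To make the roles of the context information above concrete, the following is a minimal, non-normative sketch of how a W3C application could react to a few of the eight event types. The event registration mechanism and all handler names are assumptions made for illustration; only the event type names and attributes (positions, sType, fValue) come from the description above.

// Hedged sketch: routing AUI event types (1)-(8) to application actions.
// Assumes events are dispatched under their type names; handler names are illustrative.
const auiTypes = ["Point", "Line", "Rect", "Arc", "Circle",
                  "TouchPattern", "SymbolicPattern", "HandPosture"];

function onAUIEvent(evt: any): void {
  switch (evt.type) {
    case "Point":
      movePointerTo(evt.positions[0]);             // 2D/3D position in screen coordinates
      break;
    case "Circle":
      selectRegion(evt.positions[0], evt.fValue);  // center position and radius (fValue)
      break;
    case "TouchPattern":
    case "SymbolicPattern":
    case "HandPosture":
      triggerCommand(evt.sType, evt.fValue);       // label (sType) plus pattern-specific value
      break;
    default:
      break;
  }
}

auiTypes.forEach(t => document.addEventListener(t, onAUIEvent));

// Illustrative application callbacks (assumed, not part of the specification).
function movePointerTo(p: number[]): void { /* ... */ }
function selectRegion(center: number[], radius: number): void { /* ... */ }
function triggerCommand(label: string, value: number): void { /* ... */ }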

FIG. 2 is a flowchart illustrating a method of interpreting pattern information in MPEG-U Part 1 according to an embodiment of the present invention.
Hereinafter, the embodiment of the present invention describes an interface implementation for generating Scene Description information based on data input from the data format converter; conversely, it also applies to an interface implementation for receiving Scene Description information and transferring the information to the data format converter, and this is also included in the scope of the present invention.
Referring to FIG. 2, a converted data format is received from the data format converter (step S200).
As described above, the motion information of the physical interaction device interpreted through MPEG-U Part 2 and MPEG-V Part 5 may be input to the data format converter. The data format converter may convert the transmitted motion information of the physical interaction device into an information format that can be input to MPEG-U Part 1.
The information input to the data format converter is transmitted to the Scene Description through a predetermined interface (step S210).
According to an embodiment of the present invention, the predetermined interface may be MPEG-U Part 1. The information interpreted through MPEG-U Part 1 may be used in a widget manager or a W3C application.
As shown in Table 2 above, an interface for the widget manager can be defined, and the pattern information of Table 2 together with the input information for expressing each piece of pattern information may be generated. That is, some of the pattern information in Table 2 may be used as the interface for expressing motion information in the widget manager.
In addition, by defining the IDL interface as shown in Table 3, motion information can be delivered to a W3C application based on the event information defined in Table 3.
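As a rough, non-normative illustration of steps S200 and S210, the sketch below shows a data format converter handing a pattern message to an MPEG-U Part 1 style interface, which then forwards it either toward the widget manager (Table 2 path) or toward a W3C application (Table 3 / IDL path). All type and function names in this sketch are assumptions made for illustration.

// Hedged sketch of the FIG. 2 flow: device data -> data format converter ->
// MPEG-U Part 1 interface -> widget manager or W3C application.
// All names below are illustrative assumptions, not part of the standard.
type PatternMessage = { kind: string; capturedTimeStamp: number; [field: string]: unknown };
interface DeviceSample { x: number; y: number; timestamp: number; }

// Step S200: the converter turns interpreted device data into an AUI pattern message.
function convertToPattern(sample: DeviceSample): PatternMessage {
  return { kind: "Point", capturedTimeStamp: sample.timestamp, x: sample.x, y: sample.y };
}

// Step S210: the Part 1 interface interprets the message and forwards it.
function part1Interface(pattern: PatternMessage,
                        toWidgetManager: (p: PatternMessage) => void,
                        toW3CApp: (p: PatternMessage) => void): void {
  toWidgetManager(pattern); // Table 2 path
  toW3CApp(pattern);        // Table 3 / IDL path
}

// Example wiring:
part1Interface(convertToPattern({ x: 12, y: 34, timestamp: Date.now() }),
               p => console.log("to widget manager:", p),
               p => console.log("to W3C application:", p));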

FIG. 3 is a conceptual diagram illustrating the operation between a widget manager and a physical device according to an embodiment of the present invention.
Hereinafter, for convenience of description, the embodiment of the present invention assumes that the physical signal is input directly to the interface between the widget manager and the physical device; however, the information actually input to the interface may be information generated by additionally passing the signal generated by the physical device through the MPEG-V Part 5 interface and the data format conversion unit.
Referring to FIG. 3, motion information generated by a user interaction device (for example, information generated by the data format conversion unit 300) is input to the MPEG-U Part 1 interface 320. The input motion information is delivered to the widget manager 340 to implement the operation of the widget. For example, when a gesture pointing at a specific spot is performed on a touch mat, this motion information may be interpreted through the MPEG-U Part 1 interface 320 so that it can be expressed in the widget. That is, the input motion information is interpreted based on Table 2 as defined in the interface, and the corresponding information is transmitted to the widget manager 340. The widget manager 340 controls the operation of the widget based on the motion information interpreted through the MPEG-U Part 1 interface.
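Continuing the touch-mat example, a widget manager could consume such an interpreted Point message roughly as follows; the widget API shown here (bounds and activate) is purely an assumption for illustration, since the widget manager's internal behavior is not specified by the interface.

// Hedged sketch: a widget manager reacting to a Point pattern interpreted by
// the MPEG-U Part 1 interface. The widget API below is assumed for illustration.
interface PointMessage { capturedTimeStamp: number; x: number; y: number; z?: number; }
interface Bounds { left: number; top: number; right: number; bottom: number; }
interface Widget { bounds: Bounds; activate(): void; }

class WidgetManager {
  constructor(private widgets: Widget[]) {}

  // Called by the Part 1 interface once the touch-mat input is interpreted as a Point.
  onPoint(msg: PointMessage): void {
    const hit = this.widgets.find(w =>
      msg.x >= w.bounds.left && msg.x <= w.bounds.right &&
      msg.y >= w.bounds.top && msg.y <= w.bounds.bottom);
    hit?.activate(); // the widget under the pointed position is activated
  }
}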
Conversely, the MPEG-U Part 1 interface may receive information generated by the widget and transmit the received information to the data format conversion unit 300. That is, the reverse of the above-described operation may be performed to transmit information generated by the widget to the data format conversion unit 300.
As another embodiment, as described above, the same operation may also be performed with the W3C application 340.
For example, motion information generated by a user interaction device is input to the MPEG-U Part 1 interface 320. The input motion information is delivered to the W3C application 340 to implement the corresponding operation.
Although the present invention has been described with reference to the embodiments above, those skilled in the art will understand that the present invention may be variously modified and changed without departing from the spirit and scope of the invention as set forth in the claims below.

Claims (12)

  1. A method of implementing a user interface, the method comprising:
    receiving Advanced User Interaction (AUI) pattern information; and
    interpreting the received AUI pattern information based on a predefined interface.
  2. The method of claim 1, wherein the predefined interface is an interface that defines pattern information of at least one of Point, Line, Rect, Arc, Circle, SymbolicPattern, TouchPattern, and HandPosture.
  3. The method of claim 1, wherein the predefined interface is an interface in which, as predetermined predefined pattern information, at least one piece of the pattern information is selectively used as pattern information of a widget manager.
  4. The method of claim 1, wherein the predefined interface is an interface in which, as predetermined predefined pattern information, at least one piece of the pattern information is selectively used in a W3C application.
  5. A method of implementing a user interface, the method comprising:
    receiving input information from a Scene Description; and
    interpreting the input information received from the Scene Description based on a predefined interface and inputting the interpreted information to a data format converter.
  6. The method of claim 5, wherein the predefined interface is an interface that defines pattern information of at least one of Point, Line, Rect, Arc, Circle, SymbolicPattern, TouchPattern, and HandPosture.
  7. The method of claim 5, wherein the predefined interface is an interface in which, as predetermined predefined pattern information, at least one piece of the pattern information is selectively used as pattern information of a widget manager.
  8. The method of claim 5, wherein the predefined interface is an interface in which, as predetermined predefined pattern information, at least one piece of the pattern information is selectively used in a W3C (World Wide Web Consortium) application.
  9. A user interface apparatus comprising:
    a data format conversion unit configured to generate Advanced User Interaction (AUI) pattern information; and
    an interface unit configured to interpret the AUI pattern information generated by the data format conversion unit based on a predefined interface.
  10. The apparatus of claim 9, wherein the interface unit is an interface unit that defines pattern information of at least one of Point, Line, Rect, Arc, Circle, SymbolicPattern, TouchPattern, and HandPosture.
  11. The apparatus of claim 9, wherein the interface unit is an interface unit in which, as predetermined predefined pattern information, at least one piece of the pattern information is selectively used as pattern information of a widget manager.
  12. The apparatus of claim 9, wherein the interface unit is an interface unit in which, as predetermined predefined pattern information, at least one piece of the pattern information is selectively used in a W3C (World Wide Web Consortium) application.

PCT/KR2012/005484 2011-07-12 2012-07-11 Implementation method of user interface and device using same method WO2013009085A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/232,155 US20140157155A1 (en) 2011-07-12 2012-07-11 Implementation method of user interface and device using same method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2011-0069106 2011-07-12
KR20110069106 2011-07-12
KR1020120052257A KR101979283B1 (en) 2011-07-12 2012-05-17 Method of implementing user interface and apparatus for using the same
KR10-2012-0052257 2012-05-17

Publications (2)

Publication Number Publication Date
WO2013009085A2 true WO2013009085A2 (en) 2013-01-17
WO2013009085A3 WO2013009085A3 (en) 2013-04-11

Family

ID=47506704

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2012/005484 WO2013009085A2 (en) 2011-07-12 2012-07-11 Implementation method of user interface and device using same method

Country Status (1)

Country Link
WO (1) WO2013009085A2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070130547A1 (en) * 2005-12-01 2007-06-07 Navisense, Llc Method and system for touchless user interface control
KR20080104858A (en) * 2007-05-29 2008-12-03 삼성전자주식회사 Method and apparatus for providing gesture information based on touch screen, and information terminal device including the same
US7519223B2 (en) * 2004-06-28 2009-04-14 Microsoft Corporation Recognizing gestures and using gestures for interacting with software applications
KR20100113995A (en) * 2009-04-14 2010-10-22 한국전자통신연구원 Method and apparatus for providing user interaction in laser


Also Published As

Publication number Publication date
WO2013009085A3 (en) 2013-04-11

Similar Documents

Publication Publication Date Title
KR101640863B1 (en) Sensation enhanced messaging
CN104133581B (en) Physical object detection and touch screen interaction
JP5791799B2 (en) Method and apparatus for target object recognition on machine side in human-machine dialogue
CN104144184B (en) A kind of method and electronic equipment for controlling remote equipment
US9158391B2 (en) Method and apparatus for controlling content on remote screen
Nebeling et al. XDKinect: development framework for cross-device interaction using kinect
US9420144B2 (en) Image forming device to provide preview image for editing manuscript image, display apparatus to display and edit the preview image, and methods thereof
CN105493621A (en) Terminal, server, and terminal control method
CN102637127A (en) Method for controlling mouse modules and electronic device
TW201137672A (en) System and method of haptic communication at a portable computing device
JP2020516983A (en) Live ink for real-time collaboration
CN114077411A (en) Data transmission method and device
KR20120106608A (en) Advanced user interaction interface method and device
Nor’a et al. Fingertips interaction method in handheld augmented reality for 3d manipulation
CN109559274A (en) Image processing method, device, electronic equipment and computer readable storage medium
CN113778217A (en) Display apparatus and display apparatus control method
CN105204748A (en) Terminal interaction method and device
CN110689614B (en) Electronic map-based equipment information collection method, medium, equipment and system
WO2013009085A2 (en) Implementation method of user interface and device using same method
CN104461220A (en) Information processing method and electronic device
KR101979283B1 (en) Method of implementing user interface and apparatus for using the same
KR102188363B1 (en) Method for activating a mobile device in a network, and associated display device and system
US20190096130A1 (en) Virtual mobile terminal implementing system in mixed reality and control method thereof
KR20140020641A (en) Method of providing drawing chatting service based on touch events, and computer-readable recording medium with drawing chatting program for the same
CN102983888A (en) Method of information cooperative processing between smart mobile phone and computer through near field communication (NFC)

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12810812

Country of ref document: EP

Kind code of ref document: A2

WWE Wipo information: entry into national phase

Ref document number: 14232155

Country of ref document: US

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12810812

Country of ref document: EP

Kind code of ref document: A2