CA2106891C - Ally mechanism for interconnecting non-distributed computing environment (dce) and dce systems to operate in a network system - Google Patents


Info

Publication number
CA2106891C
Authority
CA
Canada
Prior art keywords
dce
rpc
ally
component
client
Prior art date
Legal status
Expired - Fee Related
Application number
CA002106891A
Other languages
French (fr)
Other versions
CA2106891A1 (en)
Inventor
Scott A. Stein
Bruce M. Carlson
Chung S. Yen
Kevin M. Farrington
Current Assignee
Bull HN Information Systems Inc
Original Assignee
Bull HN Information Systems Inc
Priority date
Filing date
Publication date
Application filed by Bull HN Information Systems Inc filed Critical Bull HN Information Systems Inc
Publication of CA2106891A1 publication Critical patent/CA2106891A1/en
Application granted granted Critical
Publication of CA2106891C publication Critical patent/CA2106891C/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/547Remote procedure calls [RPC]; Web services

Abstract

A distributed system includes a non-distributed computing environment (DCE) computer system and at least one DCE computer system which are loosely coupled together through a communications network operating with a standard communications protocol. The non-DCE and DCE
computer systems operate under the control of proprietary and UNIX based operating systems respectively. The non-DCE computer system further includes application client software for providing access to distributed DCE service components via a remote procedure call (RPC) mechanism obtained through application server software included on the DCE computer system. A minimum number of software component modules, comprising a client RPC runtime component and an import API component included in the non-DCE computer system and an Ally component on the DCE computer system, operate in conjunction with the client and server software to provide access to DCE services by non-DCE user applications through the RPC mechanisms of both systems, eliminating the need to port the DCE software service components onto the non-DCE computer system.

Description

BACKGROUND OF THE INVENTION
Field of Use
This invention relates to data processing systems and more particularly to systems which operate in a distributed computing environment.
Prior Art
In the 1980's, computer hardware vendors responded to the need to provide users with access to UNIX based systems.
Some vendors have provided such access by interconnecting or integrating their proprietary systems with UNIX based systems through the use of separate processor boards and separate operating systems. An example of this type of system is described in United States Patent No. 5,027,271 entitled "Apparatus and Method for Alterable Resource Partitioning Enforcement in a Data Processing System having Central Processing Units using Different Operating Systems" which issued on June 25, 1991.
With the continued increases in the speed and power of computer hardware and the development of high speed local area networks, it becomes even more important to be able to combine large numbers of different vendor systems and high speed networks.
Such systems are called distributed systems, in contrast to centralized systems. They are more economical, provide greater total computing power and provide greater reliability than centralized systems.
However, there are certain problems associated with such systems in terms of the lack of distributed software, network communications and message security.
In general, the approach has been to port a substantial number of software services from the proprietary system platform to the UNIX based system platform or vice versa, or to add special hardware facilities and software for running the UNIX
based operating system in another environment. This has proved quite costly and has required the allocation of substantial resources. Additionally, these system platforms have required continuous support in order to provide timely upgrades and enhancements.
Accordingly, it is a primary object of the present invention to provide a technical solution which allows DCE
services to be provided in an incremental way on a proprietary system platform.
SUMMARY OF THE INVENTION
The above objects and advantages are achieved in a distributed computer system including a plurality of computer systems coupled together through a common communication network, a first one of said systems corresponding to a non-distributed computing environment (DCE) system which includes a first type of operating system for running non-DCE application programs on said first one of said systems and a second one of said systems corresponding to a DCE system including a second type of operating system which is compatible with said DCE system for running application programs compiled on said second system and wherein said distributed computer system further includes: an ally component and a distributed computing environment (DCE) application system installed in said second system to run in conjunction with said second type of operating system, said DCE
system including a plurality of components for providing a
plurality of basic distributed services and a remote procedure call (RPC) component for processing remote procedure calls between client and server application programs communicating through a pair of RPC stub components according to a predetermined RPC
protocol, said ally component including a plurality of management routines for enabling local requests made by said client application programs to said RPC component of said first system to be processed by accessing said plurality of distributed service components of said DCE system; and, a RPC runtime component included in said first system, said RPC runtime component including a RPC subcomponent and an application program interface (API) subcomponent operatively coupled to each other, said RPC
runtime component including a minimum number of ported routines responsive to a corresponding number of standard DCE RPC requests for determining when any local client request is to be forwarded to said ally component of said second system and said API
subcomponent including a plurality of subroutines for enabling transfer of each said local client request received by said RPC
component of said first system to said ally component of said second system using said predetermined RPC protocol established by said client and server RPC stubs for accessing a designated one of said distributed service components of said DCE system of said second system thereby eliminating the need of having to port said DCE service components to operate on said first system.
The novel features which are believed to be characteristic of the invention both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description, when
considered in connection with the accompanying drawings. It is to be expressly understood, however, that each of the drawings is given for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1a shows the major components of the distributed computing environment (DCE) architecture included in the preferred embodiment of the present invention.
Figures 1b through 1d show in greater detail the DCE
RPC service component of Figure 1a.

Figure 1e shows a prior art distributed system configured to perform a DCE RPC operation.
Figure 2a shows in block diagram form, a distributed system which incorporates the components of the present invention.
Figure 2b shows in greater detail, the ally component of Figure 2a.
Figures 3a through 3c are flow diagrams which diagrammatically illustrate some of the operations performed by the systems of Figure 2a.
Figures 4a, 4b and 4c are diagrams used to illustrate the operation of the preferred embodiment of the present invention.
GENERAL DESCRIPTION OF DCE ARCHITECTURE
Figure 1a illustrates the OSF DCE architecture used in the system of the present invention. As shown, DCE
includes a set of server components for providing distributed services that support the development, use and maintenance of distributed applications in a heterogeneous networked environment. The server components include a remote procedure call (RPC) service component including presentation service, a Naming (Directory) service component, a Security service component, a Threads service component, a Time service component and a Distributed file system service component.
The RPC service component is an access mechanism which implements the model of the simple procedure call within the client/server architecture for allowing programs to call procedures located on other systems without such message passing or I/O being at all visible to the programmer. The DCE RPC service component
enables distributed applications to be defined and built in an object oriented way.
Remote procedures are viewed as operations on objects (e. g. printers, files, computing units) rather than as calls to particular systems or server processes.
As discussed herein, it hides from the programmer various network complexities such as the means for locating and accessing host systems on which the required services are running. It also makes it easier to produce programs that are less dependent on system configurations such as host names and locations. The RPC service component design is based on the Apollo Network Computing System (NCS) which provides a clearly specified RPC protocol that is independent of the underlying transport layer and runs over either connectionless or connection oriented lower layers.
The RPC service component consists of both a development tool and a runtime system/service (RTS).
The development tool consists of a language (and its compiler) that supports the development of distributed applications following the client/server model. It automatically generates code that transforms procedure calls into network messages. The development tool is discussed later herein with reference to Figure 1d to the extent necessary for an understanding of the present invention.
The RPC runtime service/system implements the network protocols by which the client and server sides of an application communicate. Also, DCE RPC includes software for generating unique identifiers which are useful in identifying service interfaces and other resources.
The DCE Directory or Naming Service component is a central repository for information about resources in the distributed system. Typical resources are users, machines, and RPC-based services. The information consists of the name of the resource and its associated attributes. Typical attributes could include a user's home directory, or the location of an RPC-based server.
The DCE Directory Service comprises several parts: the Cell Directory Service (CDS), the Global Directory Service (GDS), the Global Directory Agent (GDA), and a directory service programming interface. The Cell Directory Service (CDS) manages a database of information about the resources in a group of machines called a DCE cell. The Global Directory Service implements an international standard directory service and provides a global namespace that connects the local DCE cells into one worldwide hierarchy. The Global Directory Agent (GDA) acts as a go-between for cell and global directory services. Both CDS and GDS are accessed using a single directory service application programming interface (API) and the X/Open Directory Service (XDS) API.
The DCE Security Service component provides secure communications and controlled access to resources in the distributed system. There are three aspects to DCE
security: authentication, secure communication, and authorization. These aspects are implemented by several services and facilities that together comprise the DCE
Security Service, including the Registry Service, the Authentication Service, the Privilege Service, the Access Control List (ACL) Facility, and the Login Facility. The identity of a DCE user or service is verified, or authenticated, by the Authentication Service. Communication is protected by the integration of DCE RPC with the Security Service: communication over the network can be checked for tampering or encrypted for privacy. Finally, access to resources is controlled by comparing the credentials conferred to a user by the Privilege Service with the rights to the resource, which are specified in the resource's Access Control List.
The Login Facility initializes a user's security environment, and the Registry Service manages the information (such as user accounts) in the DCE Security database.
The DCE Threads Service component supports the creation, management, and synchronization of multiple threads of control within a single process. This component is conceptually a part of the operating system layer, the layer below DCE. If the host operating system already supports threads, DCE can use that software and DCE Threads is not necessary. However, not all operating systems provide a threads facility, and DCE components require that threads be present, so this user-level threads package is included in DCE.
The Distributed File System Service component allows users to access and share files stored on a File Server anywhere on the network, without having to know the physical location of the file. Files are part of a single, global namespace, so no matter where in the network a user is, the file can be found using the same name. The Distributed File Service achieves high performance, particularly through caching of file system data, so that many users can access files that are located on a given File Server without prohibitive amounts of network traffic and resulting delays. DCE
DFS includes a physical file system, the DCE Local File System (LFS), which supports special features that are useful in a distributed environment. They include the ability to replicate data; log file system data, enabling quick recovery after a crash; simplify
administration by dividing the file system into easily managed units called filesets; and associate ACLs with files and directories.
The DCE Time Service component provides synchronized time on the computers participating in a Distributed Computing Environment. DTS synchronizes a DCE host's time with Coordinated Universal Time (UTC), an international time standard.
THE DCE RPC RUNTIME SYSTEM ARCHITECTURE
Figure 1b shows in greater detail, the DCE RPC runtime system (RTS). As shown, the system is divided into three major components. These are the Communication Services (CS), Naming services (NS), and Management services (MS). The CS is responsible for carrying out the actual work being done when a remote procedure is called. That is, it accesses the network and provides the means for simultaneously executing different transport and RPC protocols. As shown, the CS includes a network address family extension services section, a common communication services section, an RPC protocol machines section and a common network services section which contains network independent functions such as read, write, open and close.
The network address family extension services section provides functions for manipulating addresses within separate modules for each network service which present the same interface to the CCS. Whenever a network specific operation such as returning the endpoint in a network address needs to be performed, the CCS makes the corresponding function call causing the activation of the specific module determined during initialization time. The RPC protocol machines section provides
functions for handling different protocols. The hiding of the two supported RPC protocols, connection oriented or connectionless (datagram) based, is achieved in the same way as described above. For each RPC protocol machine, an initialization routine returns function pointer arrays which present a common interface to the CCS. The CCS uses them to make data transfer calls, collect statistics, access the relevant fields in the binding representation structure and notify the RPC protocol machine of network events.
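The function-pointer-array technique described above can be sketched in plain C. This is a hypothetical illustration, not DCE's actual code: the type and function names (rpc_pm_epv_t, pm_init and the stub send/receive routines) are invented for the example, and the bodies simply return distinguishable values so the common-interface dispatch is visible.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical entry-point vector: each RPC protocol machine exposes
 * its operations as a table of function pointers presenting one
 * common interface to the CCS. All names are invented. */
typedef struct {
    int (*send_call)(const void *buf, size_t len);
    int (*recv_result)(void *buf, size_t len);
} rpc_pm_epv_t;

/* Connection-oriented machine: echoes the length. */
static int cn_send(const void *buf, size_t len) { (void)buf; return (int)len; }
static int cn_recv(void *buf, size_t len)       { (void)buf; return (int)len; }

/* Datagram machine: returns length + 1 so dispatch is observable. */
static int dg_send(const void *buf, size_t len) { (void)buf; return (int)len + 1; }
static int dg_recv(void *buf, size_t len)       { (void)buf; return (int)len + 1; }

static const rpc_pm_epv_t cn_epv = { cn_send, cn_recv };
static const rpc_pm_epv_t dg_epv = { dg_send, dg_recv };

/* The per-machine initialization routine returns its table; the CCS
 * then calls through it without knowing which protocol is underneath. */
const rpc_pm_epv_t *pm_init(int connection_oriented)
{
    return connection_oriented ? &cn_epv : &dg_epv;
}
```

Once the table is returned at initialization time, every data transfer call goes through the same pointer slots regardless of which protocol machine is underneath.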
The NS accesses the distributed system's naming service such as an X.500 directory to locate servers by specifying an interface or an object which have to be exported by servers and imported by clients. The MS
manages RPC services either locally or remotely. Remote management functions are a subset of local ones and are made available via the RPC mechanism.
Figure 1c shows the CCS in greater detail. As shown, the CCS includes an initialization service component, a thread service component, a call service
component, a network listener service component, a binding service component, a security service component, an interface service component, an object service component, a communications management service component and a utility service component.
The initialization service component sets up the RPC environment and creates the prerequisites for communication services such as the allocation of static tables. The transport and RPC protocols to be supported are also established. A table is generated during initialization that assigns an identifier (RPC
protocol sequence id) to each combination of network, transport and RPC protocols. These types of identifiers appear later as attributes to structures that specify
interfaces. Thus, an interface representation data structure may exist several times for the same interface but with different protocol sequence identifiers associated with it. When the RTS library is built as a non-shared image, the initialization routines are called when either the application or stub first calls the CS.
The thread service component contains functions for manipulating threads such as creating and destroying them. It also manages a table that keeps track of a thread's status and relates it to the RPC context it is executing. In the DCE RPC implementation, there is a client call thread for each request an application makes. This thread executes all of the stub and CS
code. When all of the parameters are marshalled and passed to the network, the client thread is blocked until the result of the call is returned.
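The blocking behavior of the client call thread can be sketched with POSIX threads. This is a hypothetical model, not the DCE implementation: a listener thread stands in for the network side and posts the result, while the client call thread waits on a condition variable until the result arrives.

```c
#include <assert.h>
#include <pthread.h>

/* Shared state between the client call thread and the stand-in
 * listener thread. All names are invented for illustration. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static int result_ready = 0;
static int result_value = 0;

static void *listener_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    result_value = 42;           /* pretend a reply arrived off the wire */
    result_ready = 1;
    pthread_cond_signal(&done);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* The client call thread: hand off the request, then block for the result. */
int client_call(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, listener_thread, NULL);
    pthread_mutex_lock(&lock);
    while (!result_ready)
        pthread_cond_wait(&done, &lock);   /* blocked until result returns */
    pthread_mutex_unlock(&lock);
    pthread_join(tid, NULL);
    return result_value;
}
```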
The call service component provides functions for sending and receiving data. It also manipulates the call handle data structure which serves as a parameter for all the functions in the component. The important components of this structure are the binding representation structure which contains information about the remote partner, interface and operation identification, a thread identifier, dynamic information such as the employed transfer syntax, and RPC protocol specific information. The other parameter common to the call service functions is the I/O vector which is an array of buffer descriptors. The buffers pointed to in the I/O vector provide the memory for the marshalled data. The routines in the call service are used to start and end transfers or to send and receive data. Data is sent or received in fragments; for larger quantities of data these calls have to be repeated until all the data is processed. There is a special call that
is utilized to indicate to the remote RPC process that the last chunk of data belonging to the same request or response is being sent.
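The fragment-wise transfer over an I/O vector might look roughly like the following C sketch. The names (iovec_frag_t, send_all) and the in-memory "wire" are invented for illustration; a real runtime would hand each fragment to the transport and set a last-fragment indicator on the final call.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical I/O vector element: a buffer descriptor holding one
 * fragment of the marshalled data. */
typedef struct {
    const char *base;
    size_t      len;
} iovec_frag_t;

/* Stand-in for one transport send; a real runtime would flag the last
 * fragment so the remote side knows the request is complete. */
static size_t send_fragment(const iovec_frag_t *f, int is_last,
                            char *wire, size_t off)
{
    (void)is_last;
    memcpy(wire + off, f->base, f->len);
    return off + f->len;
}

/* The send call is repeated until all fragments are processed. */
size_t send_all(const iovec_frag_t *iov, int n, char *wire)
{
    size_t off = 0;
    for (int i = 0; i < n; i++)
        off = send_fragment(&iov[i], i == n - 1, wire, off);
    return off;
}
```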
The network listener service component detects events on any network and delivers them to the correct RPC protocol manager (PM). It accesses an internal table where network descriptors are related to RPC
protocol sequences (transport and RPC protocols). It manages that table (add, delete operations) as well as monitors the network itself. For example, it checks the liveliness of the remote partner in case a connectionless protocol is employed. The network listener service component further manages the network listener thread (creation, termination, notification of events).
The binding service component provides functions to manipulate the binding representation data structure or binding handle. A client makes a request to the naming service to first obtain this structure. The binding handle contains information that relates a client to the server process. It consists of the location of the server (the entire presentation address consisting of the server network address and the communication port of the server process), an object identifier, an interface identifier and further RPC protocol specific information. This structure is pointed to in the call handle which is part of every function in the call service component. The RPC security service component provides for the selection of four levels of security. These are the performance of authentication on every association establishment, the performance of authentication on every call, the enforcement of integrity on every packet,
and the enforcement of privacy on every packet. The interface service component includes functions to handle the internal interface registry table which keeps track of all of the interfaces registered with CCS. It contains interface uuids, version numbers and operation counts.
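A minimal C sketch of such an internal interface registry table follows. All names and the fixed-size table are illustrative assumptions; the real CCS table tracks more state than shown here.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical interface registry entry: a textual UUID, a version
 * number and an operation count, as kept per registered interface. */
typedef struct {
    char uuid[37];
    int  version;
    int  op_count;
} if_reg_entry_t;

#define MAX_IFS 8
static if_reg_entry_t registry[MAX_IFS];
static int registry_len = 0;

/* Register an interface; returns its slot or -1 when the table is full. */
int if_register(const char *uuid, int version, int op_count)
{
    if (registry_len == MAX_IFS)
        return -1;
    strncpy(registry[registry_len].uuid, uuid, 36);
    registry[registry_len].uuid[36] = '\0';
    registry[registry_len].version  = version;
    registry[registry_len].op_count = op_count;
    return registry_len++;
}

/* Look an interface up by its UUID string. */
const if_reg_entry_t *if_lookup(const char *uuid)
{
    for (int i = 0; i < registry_len; i++)
        if (strcmp(registry[i].uuid, uuid) == 0)
            return &registry[i];
    return NULL;
}
```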
The object service component manipulates another internal table that relates objects to the types to which they belong. Particular objects include servers, server groups and configuration profiles. The communication management service component includes functions for managing all of the CS: CCS, network services, and the RPC protocol services. This includes processing incoming calls, indicating how fault messages should be treated, allowing or barring shut-downs, and gathering statistics. The utility service component includes the functions for handling uuid, timers and buffers.
The RPC naming service component of Figure 1b provides the following groups of services: operations for manipulating binding handles, none of which imply communications between the client and server (e.g.
export, import, lookup); operations for retrieving information about entries in general (e.g. interfaces and objects exported by a server); operations on server groups (e.g. create, delete, add, remove members);
operations on profiles; and interfaces for managing applications for creating or deleting name service entries in a consistent manner.
The Management Service component of Figure 1b provides functions that can only be used by an application to manage itself locally and functions that can be called to manage remote RPC based applications.
In the case of the latter, an application to be managed has to make itself available by calling the appropriate
run-time functions. An application can control remote access to its management interface through authenticated RPC services. Examples of local management services are calls to inquire about or set time-out values for binding handles or alerts. Examples of remote management functions are to determine whether a server is listening for incoming calls, start/stop listening on a server or collect statistics.
RPC SERVICE COMPONENT ARCHITECTURE
Figure 1d shows the basic components of the DCE RPC
service component. As previously mentioned, this component consists of both a development tool and the runtime system described in connection with Figure 1c.
The development tool includes a UUID generator, the RPC
Interface Definition Language (IDL) and an IDL compiler.
The RPC runtime system includes a client or server application, an RPC stub interface, an RPC runtime interface and the RPC runtime system (RTS) which includes the previously described components.
Development Tool
The UUID generator is an interactive utility that creates UUIDs (universal unique identifiers). The significance of a given UUID depends entirely on its context and when the UUID is declared in the definition of an interface, it defines that interface uniquely from all other interfaces.
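As a rough illustration of the textual form such UUIDs take, the following C sketch formats a 128-bit value in the conventional 8-4-4-4-12 hexadecimal layout. The function name is invented, and no real generation (clock, node address, randomness) is attempted here.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical rendering of a UUID: 16 raw bytes printed in the
 * familiar 8-4-4-4-12 hexadecimal layout (36 characters plus NUL). */
void uuid_to_string(const unsigned char b[16], char out[37])
{
    sprintf(out,
            "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
            "%02x%02x%02x%02x%02x%02x",
            b[0], b[1], b[2],  b[3],  b[4],  b[5],  b[6],  b[7],
            b[8], b[9], b[10], b[11], b[12], b[13], b[14], b[15]);
}
```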
A specific RPC interface is written in IDL which is a high level descriptive language whose syntax resembles that of the ANSI C programming language. The DCE RPC
interface definition contains two basic components. One is an RPC interface header which contains an interface UUID, interface version

numbers and an interface name, conveniently chosen to identify the interface or its function. The second component is an RPC interface body which declares any application specific data types and constants as well as directives for including data types and constants from other RPC interfaces. This component also contains the operation declarations of each remote procedure to be accessed through, the interface which identifies the parameters of a procedure in terms of data types, access method and call order and declares the data type of the return value, if any.
Using IDL, the programmer writes a definition of an RPC interface for a given set of procedures. The RPC
interface can be implemented using any programming language provided that the object code is linkable with the C code of the stubs and the procedure declarations conform to the operation declarations of the RPC
interface definition and the calling sequences are compatible.
The IDL compiler processes the RPC interface definitions written in IDL and generates header files and stub object code or source code for stubs written in ANSI C. The generated code produced by the compiler from the RPC interface definition includes client and server stubs that contain the RPC interface. The compiler also generates a data structure called the interface specification which contains identifying and descriptive information about the compiled interface and creates a companion global variable, the interface handle which is a reference to the interface specification. Each header file generated by the IDL
compiler contains the reference that the application code needs to access the interface handle and allows the application code to refer to the interface specification
in calls to the RPC runtime. The runtime operations obtain required information about the interface such as its UUID and version numbers directly from the interface specification. For further information about the IDL
syntax and usage, reference may be made to the publication entitled OSF DCE Version 1.0 DCE
Application Development Guide published by the Open Software Foundation, Copyright 1991.
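What the generated header provides can be pictured with a C sketch like the one below. The structure layout, field names and the example UUID are invented for illustration; the actual interface specification is an opaque type whose details belong to the RPC runtime.

```c
#include <assert.h>

/* Hypothetical picture of compiler output: an interface specification
 * with identifying information about the compiled interface, plus the
 * companion global "interface handle" referencing it. */
typedef struct {
    const char *if_uuid;
    int         major_version;
    int         minor_version;
    int         operation_count;
} rpc_if_spec_t;

static const rpc_if_spec_t math_v1_0_ifspec = {
    "2fac8900-31f8-11ca-b331-08002b13d56d",  /* made-up interface UUID */
    1, 0,
    2                                        /* two remote operations */
};

/* The interface handle the application passes to RPC runtime calls. */
const rpc_if_spec_t *const math_v1_0_ifhandle = &math_v1_0_ifspec;
```

The runtime would then read the UUID and version numbers through the handle rather than requiring the application to supply them.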
STANDARD DCE SYSTEM FOR IMPLEMENTING RPC CALLS
Figure 1e illustrates the basic system configuration which implements typical RPC calls in a DCE system. As previously mentioned, the IDL compiler generates a pair of stub files, one for the client and one for the server, for a given application which is installed on the same system or on separate systems as shown in Figure 1e. The client and server stub files consist of RPC routines that handle all of the mechanical details of packaging (marshalling) and unpackaging (unmarshalling) data into messages to be sent over the network, the handling of messages, such as the sending and receiving of such messages, and all other details of managing network communications, all as defined by the specifications made in an .idl file which defines the set of remote operations that constitute the interface.
The server implementations of remote operations are written in C source code which is compiled and then linked to the server stub code and DCE library. The interfaces to these remote operations are defined and characterized in the IDL language in the .idl file. The client implementations of the same remote operations are
also written in C source code which is compiled and then

linked to the client stub code and DCE library. In Figure 1e, the above compilations occur on the two DCE
client and server systems.
As shown, both systems include UNIX libraries which contain the standard routines for handling socket calls (e.g. accept, ioctl, listen), file manipulation functions (e.g. open, close, read, write), standard I/O
functions (e.g. print, open, get), memory management and exception handling functions (e.g. free, malloc, realloc) as well as other operations. Also, the client and server systems of Figure 1e include a number of DCE
servers for executing different procedures or services for a user application program. The client DCE servers normally perform local services for client application programs such as naming or directory services, security services and time services. The DCE servers may perform remote services for client application programs such as print services, file services, calendar services, administrative services, etc.
The client side of the application is usually implemented as a library wherein the client side of the application consists of a call to a routine that executes (sending the request over the network and receiving the result) and then returns and continues whatever else it was doing. The server side of the application is a dedicated process that runs continuously waiting for a request, executing it and returning the answer, then waiting for the next request, etc.
As shown, the DCE servers have access to the UNIX
libraries and to the various DCE libraries, APIs and DCE
service components previously described herein (i.e., threads, DTS, naming/CDS, security and RPC). In each system, the DCE system or layer is layered on top of its

local operating system and networking software. As shown, DCE is layered over a transport level service such as UDP/IP transport service which is accessed through a transport interface such as sockets. The implementation of the DCE system of Figure 1e is dependent upon the use of the Internet Protocol (IP) and socket networking services and UNIX operating facilities.
Figure 1e also shows the control flow for an RPC
call. This operation is performed as follows. The client's application code (Client-side application) makes a remote procedure call, passing the input arguments to the stub for the called RPC interface (RPC
stub routines). The client's stub (RPC stub routines) marshalls the input arguments and dispatches the call to the client DCE RPC runtime. The client's DCE RPC
runtime transmits the input arguments over the communications network (i.e., via the socket interface, socket interface layer and UDP/IP transport service) to the server's DCE RPC runtime.
The server's DCE RPC runtime dispatches the call to the server stub (RPC stub routines) for the called RPC
interface. The server's stub (RPC routines) uses its copy of the RPC interface to unmarshall the input arguments and pass them to the called remote procedure (DCE servers). The procedure executes and returns any results (output arguments or a return value or both) to the server°s stub (RPC stub routines). The server's stub (RPC stub routines) marshalls the results and passes them to the server's DCE RPC runtimQ. The server°s DCE RPC runtime transmits the results over the communicati~ns networDc to the client°s DCE RPC runtime which dispatches the results to the client's stub (RPC
stub routines). The client's stub (RPC stub routines) ,t ~' 4 a G , ~ , uses its copy of the RPC interface to unmarshall the output arguments and pass them to the calling application (Client side application).
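The marshalling step performed by the stubs can be illustrated with a small sketch. This is not DCE's actual NDR encoding; the fixed little-endian framing (a 32-bit operation number, a 32-bit length, then the argument bytes) and the marshall_call() and unmarshall_u32() names are illustrative assumptions only:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Hypothetical framing, not DCE's actual NDR encoding: a 32-bit
   operation number, a 32-bit argument length, then the argument
   bytes, all little-endian. */
static size_t marshall_call(uint8_t *buf, uint32_t opnum, const char *arg)
{
    size_t n = 0;
    uint32_t len = (uint32_t)strlen(arg);
    for (int i = 0; i < 4; i++) buf[n++] = (uint8_t)(opnum >> (8 * i));
    for (int i = 0; i < 4; i++) buf[n++] = (uint8_t)(len >> (8 * i));
    memcpy(buf + n, arg, len);      /* argument bytes follow the header */
    return n + len;                 /* total packet length */
}

/* The unmarshalling side reads the same layout back. */
static uint32_t unmarshall_u32(const uint8_t *buf)
{
    return (uint32_t)buf[0] | ((uint32_t)buf[1] << 8) |
           ((uint32_t)buf[2] << 16) | ((uint32_t)buf[3] << 24);
}
```

The point of the sketch is only that the client stub flattens typed arguments into a byte stream and the server stub reverses the operation using its own copy of the interface description.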
SYSTEM OF PREFERRED EMBODIMENT
Figure 2a shows in block diagram form, a distributed system constructed according to the teachings of the present invention. As shown, the system includes a non-DCE system 10, a DCE (ally) system 12 and a DCE server system 14 which are loosely coupled together by a conventional communications network 16. The non-DCE system 10 contains an API import library component 10-12 and a client-only RPC runtime component 10-2 (GX-RPC) constructed according to the teachings of the present invention. As shown, the component 10-2 includes an ally API component 10-12. The ally API component 10-12 contains the special routines for accessing DCE based services on systems 12 and 14 directly or indirectly through the ally on system 12 as described herein. That is, the ally API component 10-12 enables client applications to access DCE based service components without having to implement such components on the non-DCE client system 10.
The GX-RPC runtime component 10-2 contains the necessary runtime routines for initiating RPC requests to ally system 12 through ally API 10-12 on behalf of client local requests for providing access to the DCE services available on systems 12 and 14. That is, the GX-RPC runtime component 10-2 is responsible for reliably and transparently forwarding requests which originate as local requests and receiving responses from such servers through the ally.
The non-DCE system 10 also includes a GCOS library 10-6 which contains procedures for processing standard operating system calls by an application to the operating system facilities. Also, the library 10-6 is made accessible to the ally API component 10-12 for accessing client platform facilities (e.g. configuration files) to the extent necessary. Lastly, the system 10 includes a conventional ANSI C compiler 10-8 for compiling and linking client side application programs and client RPC stub source routines as explained herein.
System 12 is a DCE system which includes all of the DCE service components of block 12-3 (e.g. DCE libraries, DCE APIs and DCE RPC runtime component), the DCE servers of block 12-7 and the UNIX libraries of block 12-5. According to the teachings of the present invention, the system 12 also includes an ally component 12-10. The ally component 12-10 operatively couples to the DCE components of block 12-3 and to the UNIX libraries of block 12-5. It is responsible for taking in the requests from the ally API routines of block 10-12 running on system 10 and then invoking the actual APIs available on ally system 12 (i.e., converts ally RPC calls into actual standard DCE RPC calls). These calls invoke the specific DCE servers of block 12-7 as explained herein in greater detail.
In the preferred embodiment, the non-DCE system 10 corresponds to a DPS system which operates under the control of the GCOS operating system and the DCE system 12 is a DPX system which operates under the control of the BOS operating system. Both of these systems are marketed by Bull HN Information Systems Inc. The system 14 can be any DCE based system which includes the standard UNIX-based operating system and DCE server application software.

ALLY COMPONENT
Figure 2b shows in greater detail, the ally component of block 12-10. As shown, the ally component 12-10 includes a plurality of subcomponents 12-100 through 12-104. These correspond to an ally request/DCE service subcomponent 12-100, an ally naming service subcomponent 12-102, an ally security service subcomponent 12-103, and an ally forwarding service subcomponent 12-104 which includes an ally security service subcomponent. Each subcomponent includes the different routines required for performing DCE services on behalf of the client application running on system 10 for requests that are normally handled locally according to the present invention.
The request subcomponent 12-100 processes requests for DCE related services such as getting binding handles and requests that the client RPC runtime 10-2 sends to the ally component for miscellaneous services (e.g., initialization, etc.). The naming subcomponent 12-102 handles naming service requests received from the client RPC runtime component 10-2. The ally security service subcomponent 12-103 handles requests for security services made by the client RPC runtime 10-2.
The ally forwarding service subcomponent 12-104 provides communication facilities for handling requests made by the client RPC runtime component 10-2 enabling access to target servers when the requests are required to be processed by the ally component due to the nature of the type of request (e.g. differences in the communications protocols supported by the target server and the client system, or the security requirement, etc.). The security subcomponent of the forwarding subcomponent 12-104 is used when the ally is acting as a proxy. This subcomponent performs the necessary operations on the request being forwarded to the target server to meet the specified level of security. The functions performed by the ally subcomponents will now be considered in greater detail.
Ally Binding Services Subcomponent
In the DCE system of Figure 1e, before any client program can call a server function, it must first establish a binding to the server. This is done by acquiring a binding handle from the DCE RPC runtime component. A binding handle contains information about the communications protocol that a server uses, the address that the server is listening on, and optionally, the endpoint that the server is listening on. A binding may also contain information about security and object-oriented features of the DCE RPC runtime component.
A binding handle is acquired through one of two techniques: string binding and the DCE naming service. String binding is used to convert a string representation of the binding handle to an actual binding handle. In the DCE system of Figure 1e, the DCE naming service is used to look up servers in the name space and return appropriate binding handles to them.
In the system of the present invention, the acquisition of a binding handle by a GCOS client program requires some interaction with the ally 12-10, to allow the ally to perform/support the security and forwarding services. By way of example, to create a binding handle from a string binding, the client program provides a string representation of the binding to the client RPC runtime component. The client GX-RPC runtime component 10-2 then extracts the appropriate information from the string, such as network type, network address, and endpoint.
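The extraction step can be sketched as follows. DCE string bindings take the general form protseq:netaddr[endpoint]; the parser below is a minimal illustration that ignores the optional object-UUID prefix and endpoint options, and the parse_string_binding() name is an assumption, not an actual GX-RPC routine:

```c
#include <assert.h>
#include <string.h>

/* Minimal sketch: parse "protseq:netaddr[endpoint]" into its three
   parts (object-UUID prefix and options omitted). Returns 1 on
   success, 0 on malformed input. Callers supply large-enough buffers. */
static int parse_string_binding(const char *s, char *protseq,
                                char *addr, char *endpoint)
{
    const char *colon = strchr(s, ':');
    const char *lb = strchr(s, '[');
    if (!colon || !lb || lb < colon) return 0;   /* malformed binding */
    const char *rb = strchr(lb, ']');
    if (!rb) return 0;
    memcpy(protseq, s, colon - s);           protseq[colon - s] = '\0';
    memcpy(addr, colon + 1, lb - colon - 1); addr[lb - colon - 1] = '\0';
    memcpy(endpoint, lb + 1, rb - lb - 1);   endpoint[rb - lb - 1] = '\0';
    return 1;
}
```

For example, "ncadg_ip_udp:10.0.0.5[2001]" would split into the protocol sequence ncadg_ip_udp, the network address 10.0.0.5 and the endpoint 2001; the runtime can then decide from the protocol sequence alone whether it can bind directly.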
It is possible to perform this operation entirely on the client system 10 without any interaction with the ally component 12-10 by including the mechanism that the client uses to create its service association with the ally component 12-10. However, this would limit the client program to using only those communications protocols supported by client system 10. This could severely restrict the client system's access to servers.
In the preferred embodiment of the present invention, the ally forwarding service subcomponent 12-104 is used to carry out the above binding operation for the client system 10. If the client system finds that the binding can not be supported by the client's GX-RPC runtime component 10-2, it calls the ally component 12-10 with the string binding. The ally 12-10 then tries to create a binding handle to the target server, using its own RPC runtime component 12-3. The ally RPC runtime component 12-3 determines whether the ally 12-10 has access to the target server, and creates a binding handle wherever possible. The proxy binding created by the ally 12-10 is provided to the forwarding service subcomponent 12-104 which returns this proxy binding handle back to the client system 10. Thereafter, any calls that the client system makes using the binding handle pass through the ally component 12-10 to the actual target server.
Another service provided by the ally which affects the binding calls is the ally security service. Again, this is a case where the ally must return a proxy binding to the client runtime system so that all traffic passes through the ally security service subcomponent 12-103. In a DCE system of Figure 1e, security levels are associated with binding handles. In system 10, when the client sets a security level on a binding handle, the client system 10 GX-RPC component 10-2 passes the binding handle to ally component 12-10. If the ally component 12-10 was not already providing a forwarding service for the particular binding, the ally component 12-10 creates a new endpoint for the client to bind to, and returns a new binding handle to the client system via the RPC runtime component 12-3. The ally component 12-10 then applies through its forwarding security subcomponent, the specified security level to any messages that are passed on the binding handle which are in turn forwarded to the target server.
Ally naming services subcomponent
In the DCE system of Figure 1e, integrated access to the cell name space is provided by integrating the Cell Directory Service (CDS) or more specifically, the CDS clerk, into the DCE RPC runtime.
These calls can be divided into two categories: binding imports and name space manipulation. The binding import API is used to import binding information from the cell name space. The name space manipulation routines are used to manage the cell name space by performing such functions as creating entries, groups or profiles, placing entries into groups, etc.
In the system of the present invention, the ally component 12-10 performs all naming API operations which are integrated into the DCE RPC runtime component. This is done since no Cell Directory Service (CDS) code (i.e., CDS clerk) is ported to the client system 10 and the ally component may supply proxy bindings to the client system 10 when performing forwarding and security services on behalf of the client system. As described herein, the resolution of performance-sensitive rpc_ns_*() functions, such as binding lookups, is carried out as fast as possible by importing such RPC naming service APIs. The following describes such API functions in greater detail.
rpc_ns_binding_* functions
In the DCE system of Figure 1e, the client runtime component uses the rpc_ns_binding_* functions to locate servers and import their binding handles from the name space. These functions pass an interface-UUID and optionally, an object-UUID, to the name space access functions and expect back one or more binding handles to servers that provide the specified interface.
In the system of the present invention, when the client runtime component 10-2 calls an rpc_ns_binding_* function, the runtime component 10-2 determines that it is unable to perform the function and issues an RPC call to the ally component 12-10 in which it provides the interface-UUID, version number, and object-UUID. Upon seeing the request from the client system 10, the ally component 12-10 issues the appropriate DCE rpc_ns_binding_* call to its RPC runtime component 12-3 (i.e., the ally has full access to the DCE service components).
When the call returns, the ally component 12-10 converts the binding handle to its string representation and returns it to the client system 10 as the result of the remote procedure call. The ally component 12-10 next sorts the contents of the binding vector table to place those handles directly accessible to the client system 10 at the front of the vector. This facilitates the client system 10 in being able to perform direct RPCs to target servers. The client system 10, upon receiving the response, attempts to form a binding handle from the string binding. This operation may require further interaction with the ally component 12-10 to have the ally provide the appropriate forwarding service. From this point on, the binding operation is handled like the string binding operation.
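The sorting of the binding vector can be sketched as a stable partition over string bindings. The is_direct() and sort_binding_vector() names are hypothetical, and the protocol-sequence prefix test stands in for the real reachability check:

```c
#include <assert.h>
#include <string.h>

/* Sketch: a binding is "direct" if its protocol sequence is one the
   client supports. The prefix test is a stand-in for the real check. */
static int is_direct(const char *binding, const char **client_protseqs, int n)
{
    for (int i = 0; i < n; i++) {
        size_t len = strlen(client_protseqs[i]);
        if (strncmp(binding, client_protseqs[i], len) == 0 &&
            binding[len] == ':')
            return 1;
    }
    return 0;
}

/* Stable partition: directly accessible bindings are moved to the
   front of the vector, order otherwise preserved. The fixed scratch
   array assumes count <= 64 for this sketch. */
static void sort_binding_vector(const char **vec, int count,
                                const char **client_protseqs, int n)
{
    const char *tmp[64];
    int out = 0;
    for (int i = 0; i < count; i++)
        if (is_direct(vec[i], client_protseqs, n)) tmp[out++] = vec[i];
    for (int i = 0; i < count; i++)
        if (!is_direct(vec[i], client_protseqs, n)) tmp[out++] = vec[i];
    memcpy(vec, tmp, count * sizeof *vec);
}
```

After the partition the client tries the front entries first, falling back to ally-forwarded proxy bindings only for the unreachable remainder.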
Security considerations may be involved in the ally's handling of rpc_ns_binding_* functions. In the DCE system, a client application program is allowed to specify a security level on an interface, and optionally, an object. This is done by passing the appropriate interface-UUID and object-UUID to the rpc_if_register_auth_info() call function. This call has the effect of automatically associating a security level to any binding handles allocated to this interface as discussed herein.
In the system of the present invention, when the ally component 12-10 detects that the requested call is for a secure interface, it returns a proxy binding handle to the client system 10 which forwards all packets through the ally forwarding security service subcomponent 12-104. The ally subcomponent 12-104 then associates the specified security level with the binding handle so that any traffic on this binding handle will have the appropriate security level applied. Since the import function can return more than one binding handle, the ally component 12-10 maps the security service's binding handle to a list of actual server binding handles. When the first RPC request comes in from the client system 10 to the proxy binding handle, the ally component 12-10 selects one of the bindings from the vector table. The selected binding is used or remains in effect until either the proxy binding handle is freed, or the endpoint is unmapped.

Ally forwarding service subcomponent
The Ally forwarding service subcomponent 12-104 is used to give the client more access to servers by acting as a forwarding agent for the client system. This subcomponent service allows the client system 10 to communicate with servers even if the client and server do not share a common communications protocol. However, the forwarding service requires that the client system and Ally component 12-10 share a common protocol and that the Ally component 12-10 and server share a common protocol. As discussed above, the Ally forwarding service is configured through the use of the Ally binding service or Ally request subcomponent 12-100.
When the client system detects that it is unable to reach a given server, it forwards the binding handle to the Ally's binding service subcomponent 12-100 to see if the Ally is able to make the connection. The Ally component 12-10 determines if it is able to communicate with the server by attempting to bind to the server. If this binding succeeds, the Ally component 12-10 creates a proxy binding to the forwarding service subcomponent 12-104 which is then returned to the client system 10.
The proxy binding represents a transport endpoint that is managed by the forwarding service subcomponent 12-104. The forwarding service subcomponent 12-104 passes any RPC packets coming into this endpoint on to the actual server. It also passes any responses, callbacks, etc. back to the client system GX-RPC runtime component 10-2. To provide this capability, the forwarding service subcomponent 12-104 is given access to the entire RPC packet header, so that it may be forwarded intact to the server. This is achieved by having the forwarding service subcomponent 12-104 connected to listen directly to a transport endpoint.
Each endpoint, or proxy binding, that the ally forwarding service subcomponent 12-104 manages will be associated to the actual binding handle of the target server. When an incoming packet from the client system is detected on the proxy binding, the ally forwarding service subcomponent will read the packet. Based upon the proxy binding that the packet came in on, the ally then determines the target address of the server, and forwards the packet to the server. Similarly, a packet coming from the server to the ally will be forwarded through this proxy binding to the client system 10.
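The association between proxy bindings and target servers can be sketched as a simple lookup table. The ally_add_proxy() and ally_route() names, the fixed-size table and the integer endpoints are illustrative assumptions, not the ally's actual data structures:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the ally's proxy-binding table: each proxy endpoint the
   forwarding service listens on maps to the real server binding. */
#define MAX_PROXY 16

struct proxy_entry {
    int  proxy_endpoint;        /* endpoint the ally listens on      */
    char server_binding[64];    /* string binding of the real server */
};

static struct proxy_entry proxy_table[MAX_PROXY];
static int proxy_count = 0;

static int ally_add_proxy(int endpoint, const char *server)
{
    if (proxy_count >= MAX_PROXY) return -1;
    proxy_table[proxy_count].proxy_endpoint = endpoint;
    strncpy(proxy_table[proxy_count].server_binding, server, 63);
    proxy_table[proxy_count].server_binding[63] = '\0';
    return proxy_count++;
}

/* Called when a packet arrives: find the target server binding for
   the proxy endpoint the packet came in on (0 if unknown). */
static const char *ally_route(int endpoint)
{
    for (int i = 0; i < proxy_count; i++)
        if (proxy_table[i].proxy_endpoint == endpoint)
            return proxy_table[i].server_binding;
    return 0;
}
```

A packet arriving on a registered proxy endpoint is simply re-sent to the looked-up server binding, and server replies travel the same mapping in reverse.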
When using the datagram service, the GX-RPC runtime component 10-2 is able to multiplex the ally to server messages over the one ally transport endpoint. This requires that the ally record the activity-UUID of every client to ally message. The forwarding service subcomponent 12-104 then uses this activity-UUID to demultiplex the incoming messages from the server so they may be forwarded to the client system 10.
For a connection-oriented service, the ally component 12-10 establishes a connection to the remote server. The forwarding service subcomponent 12-104 forwards messages between these two connections. The ally component 12-10 handles connections being dropped by either the client or server. If either connection is dropped, the ally component 12-10 terminates the corresponding connection. This signals the remote end that there has been a communications failure.
The forwarding service subcomponent 12-104 also manages endpoint mapping for the client application.
Binding handles returned from the cell name space usually only contain the network address of the node providing a given service. The actual endpoint at that node is usually not registered in the cell directory service because these entries tend to change frequently.
The DCE system supplies a server called rpcd which runs on a well-known port on every server machine. A server normally binds an endpoint and registers it with the rpcd daemon. When a client detects that it does not have an endpoint to bind to on the remote system, it contacts the endpoint mapper on that node to look up the server's endpoint.
When the ally component 12-10 sees a message from client system 10 on the forwarding service subcomponent 12-104, it checks to see if it knows the server's endpoint on the remote system. If the server's endpoint is unknown, the ally component 12-10 contacts the endpoint mapper on the server node, and by passing it the interface-UUID and object-UUID of the server, acquires the endpoint of the server. It uses the endpoint to forward the packet to the server.
The endpoint mapping service is implemented using the DCE/RPC runtime components. The ally component 12-10 imports the client stubs for the endpoint mapper interface. When the ally component 12-10 has to contact the remote endpoint mapper, it first gets a binding handle from the DCE RPC runtime component connected to the remote server's well-known port. This is done as a string binding operation. The ally then uses this binding handle to call the remote endpoint mapper. Once the server's endpoint is known, the ally then forwards the packet to the server.
Ally security service subcomponent
The ally security service subcomponent 12-103 is used to allow the client application on client system 10 to perform DCE authenticated and authorized remote procedure calls. To provide these services, the client system interacts with the DCE Authentication Service (Kerberos) and the DCE Registry Service. In a DCE
system, the client's RPC runtime component utilizes these services to provide the necessary levels of authentication and authorization, as specified by the user, on each packet sent out on a binding handle.
In the client system of the present invention, the GX-RPC runtime component 10-2 has no access to the DCE authentication and authorization services. That is, the client system depends on the ally component 12-10 to add authentication and authorization information to the RPC packets. To enable the ally to perform this service, the information about what type of authentication and authorization to use is passed from the client GX-RPC runtime component 10-2 to the ally. Any traffic associated with a binding handle that has a security level set also passes through the ally component 12-10, meaning the ally's security service is closely tied to the ally's forwarding service.
In DCE, there are two interfaces/functions that enable the client application to associate a security level with a binding handle. These functions are rpc_binding_set_auth_info() and rpc_if_register_auth_info(). The first call function takes a binding handle and sets the authorization and authentication information on it. The second call function associates a security level with an interface, and then any binding handles allocated to that interface will automatically inherit the security level. This capability is very useful when dealing with stubs using the [auto_handle] attribute, which causes the client stub to automatically import a binding handle from the name space.
Each of the above interfaces requires the client to pass in a set of authorization and authentication credentials. These credentials are acquired by the sec_login_setup_identity() and sec_login_validate_identity() calls/functions. These calls/functions are used to acquire a ticket granting ticket from Kerberos and get authorization credentials from the registry database.
These functions are exported by the Ally component 12-10. The DCE RPC security functions also require an authentication level, which specifies what type of authentication (level of trust) will be used between the client and server. These range from unauthenticated to total encryption of all data. An optional argument is the expected principal name of the server. This allows for the server to be authenticated to the client.
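The ladder of authentication levels can be sketched with an enumeration. The names mirror the DCE rpc_c_protect_level_* constants, but the numeric values and the two helper predicates are illustrative assumptions for a client that must route any protected call through the ally:

```c
#include <assert.h>

/* Illustrative ladder modeled on DCE's protection levels; the
   numeric values and helper functions below are assumptions, not
   part of the DCE API. */
enum protect_level {
    protect_level_none = 0,    /* no authentication                  */
    protect_level_connect,     /* authenticate at connect time only  */
    protect_level_call,        /* authenticate first packet of call  */
    protect_level_pkt,         /* per-packet origin check            */
    protect_level_pkt_integ,   /* per-packet integrity check         */
    protect_level_pkt_privacy  /* full encryption of all data        */
};

/* A client with no local security support must send any protected
   traffic through the ally's proxy binding. */
static int needs_ally_proxy(enum protect_level level)
{
    return level > protect_level_none;
}

/* Only the highest level implies encryption of the payload itself. */
static int needs_encryption(enum protect_level level)
{
    return level == protect_level_pkt_privacy;
}
```

In this sketch, setting any level above "none" on a binding is exactly the condition under which the ally substitutes a proxy binding for the real one.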
The ally component 12-10 handles the rpc_binding_set_auth_info() call to obtain the necessary security information for the binding. In addition, the binding handle itself (in its string representation) is passed between the client system 10 and ally component 12-10 on this call. Since security requires that all packets be forwarded to the ally component 12-10, the ally determines whether the binding handle was a true binding handle or a proxy binding handle. In other words, the ally component 12-10 sees if it was already providing a forwarding service for this binding handle. If it is, the binding remains unchanged and the ally component 12-10 simply places the security information into its database.
If the specified binding handle was bound directly to the server, the ally component 12-10 allocates a proxy binding handle. The real server address and security information is stored in the ally's database, and the proxy binding handle is returned to the client.
Upon receiving the new binding handle from the ally component 12-10, the client converts the handle from a string binding to its internal representation within the GX-RPC runtime component 10-2. In this way, all future calls on the binding handle are transparently forwarded to the ally component 12-10 for security information to be added.
Conversely, if the call to rpc_binding_set_auth_info() was being done to remove any security level from the binding handle, the ally component 12-10 then allows the client system 10 to communicate directly to the server again. This results in the ally component 12-10 removing the proxy binding, and returning the actual server's binding information to the client system.
The rpc_if_register_auth_info() function influences the way the ally component 12-10 handles the binding import requests from the name service. The call/function itself sets the security level for an interface. When any binding handles to that interface are imported from the name space using the rpc_ns_binding_* calls, the binding handles will have the specified security levels associated with them. The ally component 12-10 maintains this information and checks it whenever any rpc_ns_binding_*() calls are performed by the client system 10.
If an rpc_ns_binding_*() call tries to import a binding handle from an interface with a security level set, the ally component 12-10 creates a proxy binding handle to the security service. This is returned to the client application as the imported binding. Meanwhile, the ally component 12-10 places the actual binding, as well as the security information, into its database for the forwarding service subcomponent 12-104 to use.
Whenever a packet arrives at a proxy binding on the ally component 12-10, the forwarding service subcomponent 12-104 checks the database to determine whether the proxy binding has an associated security level. If so, the forwarding service subcomponent 12-104 passes the packet to the security service to have the relevant parts of the message encrypted according to the security information on the binding handle. The new packet is returned to the forwarding service subcomponent 12-104 to be passed on to the server.
Similarly, packets transferred from the server to the client are passed through the security service subcomponent 12-103 to decrypt any data for the client system.
The above requires a trusted link between the client system and ally. That is, all messages between the client and ally are passed in clear text, not encrypted. Also, one of the arguments to the sec_login_validate_identity() function is the user's private key (password) which is passed in cleartext to the ally component.
Request subcomponent/Ally initialization
As mentioned, the ally request subcomponent 12-100 processes initialization requests received from client system 10. As part of the initialization sequence of client system 10, a link is established to the ally component 12-10. This link is the path to the ally's DCE based API. The client system 10, upon establishing this binding, calls a function exported by the ally component 12-10. This function has two effects. First, it verifies to the client system that the binding to the ally system 12 is correct. Second, it enables the ally component 12-10 to establish a client context. The client context contains all information related to a client system that the ally system must track. For example, the client context contains all proxy bindings, security information, forwarding information, etc. The ally system returns a context handle to the client system 10. This context handle is passed on to the ally component 12-10 on all future calls by the client system 10, thus identifying the client on each successive call to the ally component 12-10.
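The client-context bookkeeping can be sketched as follows; the structure fields, table size and function names are illustrative assumptions, not the ally's actual implementation:

```c
#include <assert.h>

/* Sketch of ally client-context management: the ally allocates a
   per-client context at initialization and hands back an opaque
   handle that identifies the client on every later call. */
#define MAX_CLIENTS 8

struct client_ctx {
    int in_use;
    int n_proxy_bindings;   /* proxy bindings held for this client   */
    int n_protseqs;         /* protocol sequences the client reports */
};

static struct client_ctx contexts[MAX_CLIENTS];

/* Returns a context handle (>= 0), or -1 if the table is full. */
static int ally_establish_context(void)
{
    for (int h = 0; h < MAX_CLIENTS; h++)
        if (!contexts[h].in_use) {
            contexts[h].in_use = 1;
            contexts[h].n_proxy_bindings = 0;
            contexts[h].n_protseqs = 0;
            return h;
        }
    return -1;
}

/* Every subsequent client request presents its handle; an unknown
   or stale handle yields no context. */
static struct client_ctx *ally_lookup_context(int handle)
{
    if (handle < 0 || handle >= MAX_CLIENTS || !contexts[handle].in_use)
        return 0;
    return &contexts[handle];
}
```

The opaque integer plays the role of the context handle the text describes: the client never sees the ally's internal state, only a token that names it.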
Another initialization function that is exported by ally component 12-10 allows the client GX-RPC runtime component 10-2 to register the list of protocol sequences supported by the client system 10. This enables the ally component 12-10 to filter binding requests in an attempt to find a server binding where the client system 10 may communicate with the server without using the ally's forwarding service subcomponent 12-104. This reduces load on the ally system and increases performance by eliminating unnecessary network transfers.
The Figures 3a through 3c illustrate the control flow in processing different types of requests previously discussed made by the client system 10 which are carried out by the ally system 12, according to the present invention.
DESCRIPTION OF OPERATION
With reference to Figures 2a through 4c, the operation of the system of a preferred embodiment of the present invention will now be described in processing a specific client request, according to the teachings of the present invention. Prior to such operation, the compilation and installation of the GX-RPC component 10-2, ally API component 10-12, ally component 12-10 and the ally interface specification which produces the client and server RPC stubs will have taken place.
As discussed above, the GX-RPC component 10-2 is constructed to include the minimum amount of code, eliminating the need to port a substantial amount of code for specific DCE RPC components which normally reside locally on a DCE based system. The portions of the standard DCE RPC component which have been deleted or modified are designated by #ifdef statements as explained herein. This eliminates having to port the GX-RPC component each time to run on another platform (e.g. a different GCOS system platform). More specifically, the source code component for each of the DCE RPC runtime routines has been modified to designate which parts are no longer being used and which parts include additional code added for having a particular service request performed by the ally system. This enables the GX-RPC component to recognize when a particular service is no longer performed locally and to invoke the necessary operations to cause the service request to be performed by the ally.
Figure 4a provides an example of how such a source code component is modified for particular DCE RPC name service routines included in the DCE RPC component as nslookup.c. The RPC DCE routines contained in this component are used to create and manage contexts for binding lookup operations. According to the present invention, definition statements (i.e., #ifdef, #endif) are included in the portions of the source code file of the nslookup.c component which are to be deleted. As indicated in Figure 4a by shaded areas, such statements result in the deletion of the internals declarations for all of the locally defined routines in addition to the deletion of other parts of the source code file which correspond to routines that are no longer required to be used.
Additionally, new statements are included in the initial portions of each of the remaining RPC name service routines (e.g. rpc_ns_binding_lookup_begin, rpc_ns_binding_lookup_next and rpc_ns_binding_lookup_done) which appear near the end of the source code file of the nslookup.c component. These new statements cause the DCE request to be handed off to the ally by generating an ally request which is unique in form. The result is that the request is directed to the import API component 10-12 which directs the request to the ally.
For example, the routines mentioned call ally-specific routines designated as ally_ns_binding_lookup_begin, ally_ns_binding_lookup_next and ally_ns_binding_lookup_done.
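The shape of such a modified routine can be sketched as below. This is not the actual nslookup.c source; the GX_RPC_ALLY macro name, the simplified signatures and the stub ally routine are assumptions used only to show the #ifdef hand-off pattern:

```c
#include <assert.h>

/* Defined when building the client-only GX-RPC runtime; the macro
   name is illustrative. */
#define GX_RPC_ALLY 1

/* Stand-in for the ally-specific routine that would issue an RPC to
   the ally system; here it just signals that the request was
   forwarded. */
static int ally_ns_binding_lookup_begin(const char *entry_name)
{
    return entry_name != 0 ? 1 : 0;
}

/* Sketch of a modified name-service routine: the local CDS clerk
   code is compiled out and replaced by a hand-off to the ally. */
static int rpc_ns_binding_lookup_begin(const char *entry_name)
{
#ifdef GX_RPC_ALLY
    /* hand the request off to the ally instead of the local CDS clerk */
    return ally_ns_binding_lookup_begin(entry_name);
#else
    /* original local CDS lookup code would remain here */
    return 0;
#endif
}
```

Building the same file without GX_RPC_ALLY defined would leave the original local lookup path in place, which is what lets one source tree serve both the full DCE runtime and the client-only GX-RPC runtime.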
The routine rpc_ns_binding_lookup_begin is used to create a lookup context on the ally system to be used in a search of the namespace for a binding to a specified interface (and optionally, an object) while routine rpc_ns_binding_lookup_next is used to import the next set of binding handles from the ally system. In a similar fashion, the DCE RPC security routines (e.g. sec_login_setup_identity() and sec_login_validate_identity()) are modified. All of the above modified RPC runtime routines are compiled using the IDL compiler on ally system 12.

Additionally, the IDL descriptions of the ally interface (file ally.idl) are also compiled. This produces the RPC client and server stub code. As diagrammatically illustrated in Figure 2a, the client stub code is then transferred to the client system 10 where it is compiled and linked with routines of the GX-RPC component 10-2 including the ally API routines of component 10-12 that provide ally specific functionality and the GCOS client-side application. The object code components resulting from the above compilation process are installed on the client system 10.
The ally interface source code generated by the IDL compiler on ally system 12 (i.e., the RPC stub routines) is compiled and linked with the server side application and ally component routines.
Following the above described operations, the systems 10, 12 and 14 of Figure 2a are initialized. For the most part, the systems are initialized in a conventional manner. That is, the non-DCE system 10 is initialized in the same manner as a DPS system and the GCOS operating system are normally initialized.
However, the non-DCE client system 10, in addition to loading its system configuration file with the standard system information, also loads into the file parameters designating the address of the ally system 12. As explained herein, this information is used to establish an association with the ally component 12-10 in accordance with the present invention. The DCE based ally system 12 and the server system 14 are initialized in a conventional manner.
With reference to Figures 4b and 4c, the operation of the preferred embodiment of the present invention will now be described. Figures 4b and 4c show, in general, the operations carried out in response to a standard DCE request made by an application running on client system 10. It will be assumed that the client application makes a call to the directory name services which, in the standard DCE system of Figure 1, would have been processed locally using the DCE naming service facilities available on the system. In the case of the present invention, certain components, such as the directory name services (CDS), do not have to be ported to the client system 10. Accordingly, the client system makes a determination that such services are not available and forwards the request to the ally system 12 for processing.
As part of the initialization of the ally, the first operation that generally takes place is that the ally component 12-10 registers the ally interface with the DCE RPC runtime component 12-3 (i.e., the RPCD daemon process) via a standard DCE call (i.e., the rpc_server_register_if function). Registering the ally interface informs the DCE RPC runtime component 12-3 that the interface is available to client system 10 and server 14.
In greater detail, the ally component 12-10 registers its endpoints by placing the following information into the local endpoint map: the RPC interface identifier, which contains the interface UUID and major and minor version numbers; the list of binding handles for the interface; and the list of ally object UUIDs, if any.
The ally component and stub code establish a manager entry point vector (EPV) for the interface implemented by the ally. A manager is a set of routines that implement the operations of the ally interface, as explained herein. The EPV is a vector of pointers
(i.e., a list of addresses - the entry points of the procedures provided by the manager) to functions implemented by the manager. The manager EPV contains exactly one entry point for each procedure defined in the ally interface definition. When an ally RPC request is received, the operation identifier is used to select an element from the manager EPV. The ally component also registers the EPV manager with the DCE RPC runtime component 12-3. In accordance with the present invention, the ally manager EPV contains the following entries:
ally_v1_0_epv_t ally_v1_0_manager_epv = {
    client_establish_ally_assoc,
    a_are_you_there,
    a_load_info,
    a_rpc_binding_from_string_bind,
    a_rpc_binding_free,
    a_rpc_binding_reset,
    a_rpc_ns_binding_lookup_begin,
    a_rpc_ns_binding_lookup_next,
    a_rpc_ns_binding_lookup_done,
    a_rpc_binding_set_auth_info,
    a_rpc_binding_inq_auth_info,
    a_sec_login_setup_identity,
    a_sec_login_validate_identity,
    a_rpc_if_set_wk_endpoint
};
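A manager EPV of this kind is essentially a table of function pointers indexed by operation number. The following self-contained sketch shows the dispatch idea; the operation numbering, the return values, and the handler bodies are invented for illustration and are not the patent's implementation.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for a DCE manager entry point vector: one function
 * pointer per operation in the ally interface. */
typedef int (*ally_op_t)(void);

static int client_establish_ally_assoc(void)   { return 1; }
static int a_are_you_there(void)               { return 2; }
static int a_rpc_ns_binding_lookup_begin(void) { return 3; }

static const ally_op_t ally_manager_epv[] = {
    client_establish_ally_assoc,   /* operation 0 */
    a_are_you_there,               /* operation 1 */
    a_rpc_ns_binding_lookup_begin, /* operation 2 */
};

/* The operation identifier in an incoming ally RPC selects the EPV
 * element to invoke. */
int dispatch_ally_request(unsigned op)
{
    if (op >= sizeof ally_manager_epv / sizeof ally_manager_epv[0])
        return -1;                 /* unknown operation */
    return ally_manager_epv[op]();
}
```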
After registering the interface and manager EPV with its RPC runtime component 12-3, the ally informs the RPC runtime to use all available protocol sequences via the function rpc_server_use_all_protseqs. For each protocol combination, the RPC runtime component creates one or more binding handles. The ally uses the binding handle information to register its transport protocol and well-known endpoint with the RPC daemon process, rpcd, using the function rpc_ep_register(). This eliminates the need for the ally to listen on a well-known transport endpoint.
It will be appreciated that the client system 10 only needs to know the network (transport) address of the ally system, not the actual endpoints. Thereafter, the ally is ready to listen for service requests from client system 10. When the ally is ready to accept client requests, it initiates listening, specifying the maximum number of calls it can execute concurrently, by calling the rpc_server_listen routine.
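The start-up sequence described above (register the interface and EPV, enable all protocol sequences, register endpoints with rpcd, then listen) can be sketched as a call-order skeleton. The four routine names mirror the real DCE RPC server API, but the bodies here are stubs that only record call order for illustration.

```c
#include <assert.h>
#include <string.h>

/* Records the order of the (mocked) DCE runtime calls the ally makes
 * at start-up. */
static const char *call_log[8];
static int call_count;
static void log_call(const char *name) { call_log[call_count++] = name; }

static void rpc_server_register_if(void)      { log_call("register_if"); }
static void rpc_server_use_all_protseqs(void) { log_call("use_all_protseqs"); }
static void rpc_ep_register(void)             { log_call("ep_register"); }
static void rpc_server_listen(void)           { log_call("listen"); }

/* Start-up order described in the text. */
void ally_startup(void)
{
    rpc_server_register_if();      /* register ally interface and EPV */
    rpc_server_use_all_protseqs(); /* bindings for all protocols      */
    rpc_ep_register();             /* endpoints into rpcd's map       */
    rpc_server_listen();           /* accept client requests          */
}
```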
It is assumed by way of example that the client application desires to locate a compatible server to perform a particular name service operation (e.g. endpoint). To locate such a server, the client obtains binding information from the name service database by calling the lookup routines (rpc_ns_binding_lookup_begin, rpc_ns_binding_lookup_next and rpc_ns_binding_lookup_done) to obtain binding handles from which it can obtain a compatible server.
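The begin/next/done import pattern the client follows can be illustrated with a self-contained mock. Real code would call rpc_ns_binding_lookup_begin/next/done against the RPC runtime; the string bindings and the context layout below are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Candidate server bindings a lookup might yield (illustrative). */
static const char *bindings[] = {
    "ncadg_ip_udp:10.0.0.7",
    "ncacn_ip_tcp:10.0.0.7",
    NULL
};

typedef struct { int next; } lookup_ctx_t;

static void lookup_begin(lookup_ctx_t *ctx)  { ctx->next = 0; }
static void lookup_done(lookup_ctx_t *ctx)   { ctx->next = -1; }
static const char *lookup_next(lookup_ctx_t *ctx)
{
    return bindings[ctx->next] ? bindings[ctx->next++] : NULL;
}

/* Count the candidate bindings a client would try, using the
 * begin / next / done discipline. */
int count_candidate_bindings(void)
{
    lookup_ctx_t ctx;
    int n = 0;
    lookup_begin(&ctx);
    while (lookup_next(&ctx) != NULL)
        n++;
    lookup_done(&ctx);
    return n;
}
```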
The client application issues the standard DCE call to the GX-RPC runtime component. This invokes the rpc_ns_binding_lookup_begin function which was discussed earlier in connection with Figure 4a. This results in the determination by the RPC runtime component 10-2 that the function cannot be performed locally, but instead is to be performed by the ally component 12-10. The RPC lookup routine (nslookup.c) generates a call to the ally API component 10-12 (comally.c) via the function ally_ns_binding_lookup_begin.
As mentioned, the component 10-12 contains routines that provide the client side ally specific functionality. The component 10-12 routine ally_ns_binding_lookup_begin creates a naming service context on the ally system 12. The routine calls another function, establish_ally_assoc(&st), which determines if the client has already established an association with ally component 12-10 (it checks a context value). Assuming no association has been established (context = NULL), the ally_ns_binding_lookup_begin routine calls the internal routine/function establish_ally_assoc(&st).
This routine uses the local ally address to establish a binding to the ally component 12-10 and thereafter it obtains the "real" binding of the ally which is used in all future transactions.
The establish_ally_assoc(&st) routine performs a sequence of operations which enables the ally component 12-10 to make a decision as to how a binding handle should be returned to a client, as a result of the protocols being supported by the client system and the ally, for forwarding requests over compatible but not identical transport protocols. In the case where there is only one protocol being supported (e.g. UDP/IP), there is no need to check protocol compatibility. In such cases, this function is treated as a no-op. In this sequence, it first constructs a list of the protocol sequences that the client system knows are available to communicate over, to be transferred to ally component 12-10, which can be used when the ally is providing a forwarding service function for client system 10.
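The protocol-compatibility decision reduces to checking whether a candidate binding's protocol sequence appears in the list the client reported. A minimal sketch, assuming a simple string-list representation (the protocol names are real DCE protocol sequences; the function itself is an illustration, not the patent's implementation):

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

/* Returns 1 if the client reported support for this protocol sequence,
 * so the binding can be returned to it directly; 0 means the ally would
 * have to forward requests on the client's behalf. */
int protseq_supported(const char *protseq,
                      const char *const *client_list, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (strcmp(protseq, client_list[i]) == 0)
            return 1;   /* client can use this binding directly */
    return 0;           /* incompatible: the ally must forward */
}
```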
Next, the routine allocates memory for storing protocol information obtained from the RPC runtime component 10-2 identifying the protocol sequences supported by the client system 10 which are available for use by a client application (i.e., it uses the RPC information RPC_PROTSEQ_INQ_PROTSEQ). This information is stored in the client's RPC memory area.
The establish_ally_assoc routine assumes none of the binding information it has stored is valid and begins by loading the ally address information stored in the client system file into a file called ally_bindings.
In greater detail, the routine calls the internal function rpc_set_next_ally_binding, which is followed by a call to an external function rpc_get_wk_ally_bindings(). This function accesses the contents of the system configuration file containing the ally's address information (i.e., the well-known ally string binding) and stores it in the file ally_binding. The rpc_get_wk_ally_bindings() function, using the local path name stored by the ally API component 10-12, accesses the client system configuration file, returns the string of characters in the ally_binding file and returns control back to the establish_ally_assoc routine. The platform specific routine rpc_get_wk_ally_bindings does this by executing a sequence of standard functions using routines contained in library 10-6 (i.e., fopen, fgets and fclose) which open the file, read out the contents into the ally_binding file and close the system configuration file.
The establish_ally_assoc routine obtains the ally binding string and next converts it into an ally binding handle (using the function rpc_int_binding_from_string_binding). It then allocates a buffer area in its RPC memory to store any alternate ally binding information returned by ally component 12-10 (i.e., to establish compatible protocols).
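A DCE string binding has the general form [object-uuid@]protseq:network-address[endpoint], e.g. "ncadg_ip_udp:10.0.0.9[5001]". The conversion step can be illustrated with a simplified splitter; the real work is done by the runtime's string-binding conversion routine, and this parser (which ignores the optional object UUID) is only a sketch.

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

/* Split "protseq:addr[endpoint]" into its three parts.  Caller-supplied
 * buffers are assumed large enough for this illustration. */
int split_string_binding(const char *sb,
                         char *protseq, char *addr, char *endpoint)
{
    const char *colon = strchr(sb, ':');
    const char *lb = strchr(sb, '[');
    const char *rb = strchr(sb, ']');
    size_t n;

    if (colon == NULL || lb == NULL || rb == NULL || lb < colon || rb < lb)
        return -1;                           /* malformed string binding */

    n = (size_t)(colon - sb);
    memcpy(protseq, sb, n);      protseq[n]  = '\0';
    n = (size_t)(lb - colon - 1);
    memcpy(addr, colon + 1, n);  addr[n]     = '\0';
    n = (size_t)(rb - lb - 1);
    memcpy(endpoint, lb + 1, n); endpoint[n] = '\0';
    return 0;
}
```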
The establish_ally_assoc routine next tries to call the ally. It makes an RPC call through the client stub using the parameters (ally_v1_0_c_epv.client_establish_ally_assoc). These parameters include the ally binding string parameters and a unique context parameter. The client stub generates the RPC call to the ally system 12 and carries out the necessary operations required for transmitting the call, according to the ally interface definitions discussed above (e.g.
packaging the call parameters and transferring them to the GX-RPC component 10-2 for application to network 16 via the transport facilities).
The ally system 12 receives the client request via its RPC runtime component 12-3. As previously discussed, the ally request subcomponent 12-100 includes the table of entries which includes a client_establish_ally_assoc entry point. The receipt of this request invokes the ally client_establish_ally_assoc() routine. This routine enables the establishment of a new context with the ally. The ally either accepts the client or redirects the request. Assuming acceptance, the ally returns a context handle to the client for future client use. Additionally, the ally records in its database the client's supported protocol sequences for later use by the ally's forwarding service subcomponent.
In greater detail, the routine issues the rpc_binding_to_string_binding function call to the ally system RPC component 12-3, which converts the client supplied ally binding information into a string binding containing the ally handle. In the same manner described above, the client_establish_ally_assoc routine passes the ally binding handle information to the RPC server stub which packages and transfers the response to the RPC call over network 16. The ally routine then invokes a database routine db_create_new_client_entry which adds the new client entry to its database. Also, the routine invokes a database routine db_add_client_protocol which adds the list of protocols supported by the new client to the Ally's database.
On the client system side, the client receives the Ally context handle and retrieves a base UUID.
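The ally-side bookkeeping — accepting a client, handing back a context handle, and recording the client's protocol list for the forwarding service — can be sketched with a toy client database. The structure layout, capacity, and the use of an index as the context handle are invented for illustration.

```c
#include <assert.h>
#include <string.h>

#define MAX_CLIENTS 16

/* Toy version of the ally's client database. */
typedef struct {
    int  in_use;
    char protseqs[64];    /* protocol sequences the client reported */
} client_entry_t;

static client_entry_t client_db[MAX_CLIENTS];

/* Record a new client; the returned index stands in for the context
 * handle the ally gives back to the client. */
int db_create_new_client_entry(const char *protseqs)
{
    for (int i = 0; i < MAX_CLIENTS; i++) {
        if (!client_db[i].in_use) {
            client_db[i].in_use = 1;
            strncpy(client_db[i].protseqs, protseqs,
                    sizeof client_db[i].protseqs - 1);
            return i;                /* "context handle" for the client */
        }
    }
    return -1;                       /* database full: redirect request */
}

/* Later lookup by the forwarding service. */
const char *db_client_protseqs(int handle)
{
    return client_db[handle].protseqs;
}
```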
In the above example, upon the completion of the routine establish_ally_assoc by the client system, the client is now initialized with the ally and control is returned back to the ally API component routine ally_ns_binding_lookup_begin. Using the ally context, the client system 10 issues an RPC call to the ally (i.e., ally_v1_0_c_epv.a_rpc_ns_binding_lookup_begin).
The above RPC call is transferred to the ally system in the same manner as described above. This time, the ally request subcomponent 12-100, upon receiving the call, invokes the ally naming API routine a_rpc_ns_binding_lookup_begin defined by the entry point. This routine invokes a call to a corresponding one of the ally naming service routines (i.e., in the allyns.c file), ns_lookup_begin of naming subcomponent 12-102.
This routine creates a lookup context for an interface (e.g., a print server interface) and optionally an object. It also verifies the client context from the Ally's client database.
The ally naming service subcomponent routine ns_lookup_begin() obtains the lookup context handle, registers the lookup context handle in the client database and checks to see if the client has requested a security level for this interface specification (e.g., printer interface). Also, the routine registers the security level information with the context handle and returns the CDS lookup context handle to the client system 10.
More specifically, it first calls rpc_ns_binding_lookup_begin locally to get an NS context handle
from the CDS component/server. Here, the ally RPC
runtime component 12-3 satisfies the client lookup request through the name service (CDS) facilities which are accessible to the ally through its issuing of normal DCE calls.
Normally, the client system uses the CDS lookup context it obtained from the ally system by calling the rpc_ns_binding_lookup_next routine. This routine is another of the RPC name service routines which, when accessed, results in a call to the comally.c subcomponent routine ally_ns_binding_lookup_next using the arguments obtained by the previous ally call (i.e., lookup context, binding vector).
The routine a_rpc_ns_binding_lookup_next() returns a list of compatible string bindings for a specified interface and, optionally, an object. This is not identical to the generic rpc_ns_binding_lookup_next(), since those binding handles are actually pointers to real storage managed by the RPC runtime. The client RPC runtime, upon receipt of the string bindings, converts them back to binding handles which should be identical to those generated by the ally's RPC runtime.
In order to maintain transparency of the ally, this call returns a vector of binding handles, just as in the normal case, if there is no security involved. Otherwise, the ally caches the vector of binding handles and returns a proxy binding handle to the client. When the forwarded RPC request fails to reach the target server, the ally automatically gets the next one from the cached vector to try until it either succeeds or the list is exhausted.
Without the involvement of security, the proxy binding handle is not created until the list of binding handles is first exhausted (i.e., the ally will try the forwarding service only after all compatible binding handles have been used). After the client exhausts all the compatible binding handles, the ally retrieves and caches all binding handles through which it can communicate with the target server. The first one is then picked and marked as current, and a proxy binding handle is created and returned to the client.
When the client application finishes locating binding handles, it calls the rpc_ns_binding_lookup_done routine, a further RPC name service routine for deleting the lookup context. This routine, when accessed, results in a call to the ally API component routine ally_ns_binding_lookup_done.
From the above, it is seen how the present invention is able to provide the client system access to CDS naming service facilities via the ally system 12. This is done in such a manner as not to alter existing DCE RPC interfaces, while enabling the use of standard DCE RPC calls. The sequence of operations for each of the above lookup operations is summarized in Figures 4b and 4c.
It will be appreciated that the other ally request APIs operate in a similar manner. For example, several of the security APIs which form part of the standard DCE security APIs (e.g. sec_login_setup_identity() and sec_login_validate_identity()) are invoked by standard DCE RPC calls in the same manner as the naming APIs discussed above.
From the above, it is seen how the present invention is able to significantly reduce the number of components which have to be ported to a non-DCE system. Further, it reduces the non-DCE system based development, qualification and maintenance costs.


Additionally, future RPC extensions (e.g. migration to new protocols and services) can be made to the DCE based system, where modification is more easily achieved.
It will be appreciated by those skilled in the art that many changes may be made without departing from the teachings of the present invention. For example, the invention may be used in conjunction with different system platforms and with other server applications included on the same system as the client or on different systems. Also, the teachings of the present invention may be used in conjunction with other DCE
application programming interfaces.

Claims (10)

1. A distributed computer system including a plurality of computer systems coupled together through a common communication network, a first one of said systems corresponding to a non distributed computing environment (DCE) system which includes a first type of operating system for running non DCE application programs on said first one of said systems and a second one of said systems corresponding to a DCE system including a second type of operating system which is compatible with said DCE system for running application programs compiled on said second system and wherein said distributed computer system further includes:
an ally component and a distributed computing environment (DCE) application system installed in said second system to run in conjunction with said second type of operating system, said DCE system including a plurality of components for providing a plurality of basic distributed services and a remote procedure call (RPC) component for processing remote procedure calls between client and server application programs communicating through a pair of RPC stub components according to a predetermined RPC protocol, said ally component including a plurality of management routines for enabling local requests made by said client application programs to said RPC component of said first system to be processed by accessing said plurality of distributed service components of said DCE system; and, a RPC runtime component included in said first system, said RPC runtime component including a RPC
subcomponent and an application program interface (API) subcomponent operatively coupled to each other, said RPC
runtime component including a minimum number of ported routines responsive to a corresponding number of standard DCE RPC requests for determining when any local client request is to be forwarded to said ally component of said second system and said API subcomponent including a plurality of subroutines for enabling transfer of each said local client request received by said RPC
component of said first system to said ally component of said second system using said predetermined RPC protocol established by said client and server RPC stubs for accessing a designated one of said distributed service components of said DCE system of said second system thereby eliminating the need of having to port said DCE
service components to operate on said first system.
2. The system of claim 1 wherein said first type of operating system is a proprietary operating system which does not include facilities to support said DCE
application system.
3. The system of claim 1 wherein said second operating system is a UNIX based operating system which includes facilities to support said DCE system.
4. The system of claim 1 wherein said DCE
application system includes a number of software layers, a first DCE layer including said plurality of components for providing a plurality of basic distributed services and a second layer including said DCE RPC component.
5. The system of claim 4 wherein said ally component is also included in said second layer.
6. The system of claim 1 wherein RPC communications between said first and second systems is established according to statements defining an ally interface and wherein said second system further includes an interface definition language (IDL) compiler, said IDL compiler being used to compile said statements into said RPC stub components and said first system including a compatible compiler for compiling said client application programs, said compatible compiler compiling said RPC stubs produced by said IDL compiler before being installed on said first system.
7. The system of claim 1 wherein said ally component includes a plurality of sections for processing different types of requests received from said RPC runtime component of said first system, said plurality of sections including a requests section, a forwarding service section coupled to said request section, a naming service section coupled to said request section, and a security service section coupled to said request section.
8. The system of claim 1 wherein said ally API component of said first system includes routines for performing operations for initially establishing association with said ally component, and for having said ally carry out certain binding operations and naming service operations.
9. The system of claim 1 wherein said ally component includes routines for processing said client requests by generating standard DCE calls to said DCE RPC runtime component of said second system.
10. A method of providing a distributed computing environment (DCE) in a system which includes a plurality of computer systems for a first one of said systems which is non-DCE
computer system that does not have operating system facilities for directly supporting DCE services for application programs running in said non-DCE computer system and a second one of said systems is a DCE system that includes a DCE application system for providing said DCE services, said DCE application system containing a plurality of components for performing said DCE
services without having to port said DCE components, said method comprising the steps of:
a. coupling said first and second systems together for enabling said systems to process remote procedure (RPC) calls between client and server application programs running on said first and second systems respectively which communicate through a pair of RPC stub components;
b. installing in said first system, an RPC runtime component which includes an ally application program interface (API) component to operate in conjunction with said operating system facilities of said first system, said RPC runtime component including a number of routines responsive to standard DCE requests for determining when any client request for local services can not be performed by said first system and said API component including a plurality of subroutines for enabling transfer of said local client request to said second system using a predetermined RPC
protocol established by said client and server RPC stubs;

c. installing in said second system, an ally component to run on said second system in conjunction with said DCE components, said ally component including a plurality of routines for communicating with said RPC runtime component and for processing said client requests received from said RPC component of said first system, for performing requested DCE services using said DCE
components of said second system for those components which have not been ported to run on said first system;
d. determining by a mapping operation performed by said RPC
runtime component of said first system which local client request can not be performed locally by said RPC runtime component because of not having ported components to said first system; and, e. translating and transferring by said RPC component of said first system each client request which can not be performed locally as determined in step d into a form for receipt by said ally component for execution by said ally or by said ally and said DCE components installed on said second system.
CA002106891A 1992-09-25 1993-09-24 Ally mechanism for interconnecting non-distributed computing environment (dce) and dce systems to operate in a network system Expired - Fee Related CA2106891C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/951,069 US5497463A (en) 1992-09-25 1992-09-25 Ally mechanism for interconnecting non-distributed computing environment (DCE) and DCE systems to operate in a network system
US07/951,069 1992-09-25

Publications (2)

Publication Number Publication Date
CA2106891A1 CA2106891A1 (en) 1994-03-26
CA2106891C true CA2106891C (en) 2003-02-18

Family

ID=25491216

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002106891A Expired - Fee Related CA2106891C (en) 1992-09-25 1993-09-24 Ally mechanism for interconnecting non-distributed computing environment (dce) and dce systems to operate in a network system

Country Status (7)

Country Link
US (1) US5497463A (en)
EP (1) EP0590519B1 (en)
JP (1) JPH06214924A (en)
AU (1) AU663617B2 (en)
CA (1) CA2106891C (en)
DE (1) DE69323675T2 (en)
ES (1) ES2127774T3 (en)

US6233543B1 (en) 1996-04-01 2001-05-15 Openconnect Systems Incorporated Server and terminal emulator for persistent connection to a legacy host system with printer emulation
US6205416B1 (en) 1996-04-01 2001-03-20 Openconnect Systems Incorporated Server and terminal emulator for persistent connection to a legacy host system with direct OS/390 host interface
US6216101B1 (en) 1996-04-01 2001-04-10 Openconnect Systems Incorporated Server and terminal emulator for persistent connection to a legacy host system with client token authentication
US6205415B1 (en) 1996-04-01 2001-03-20 Openconnect Systems Incorporated Server and terminal emulator for persistent connection to a legacy host system with file transfer
US6205417B1 (en) * 1996-04-01 2001-03-20 Openconnect Systems Incorporated Server and terminal emulator for persistent connection to a legacy host system with direct As/400 host interface
US6128647A (en) 1996-04-05 2000-10-03 Haury; Harry R. Self configuring peer to peer inter process messaging system
US5752023A (en) * 1996-04-24 1998-05-12 Massachusetts Institute Of Technology Networked database system for geographically dispersed global sustainability data
US6945457B1 (en) * 1996-05-10 2005-09-20 Transaction Holdings Ltd. L.L.C. Automated transaction machine
US6233620B1 (en) * 1996-07-02 2001-05-15 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a presentation engine in an interprise computing framework system
US5893107A (en) 1996-07-01 1999-04-06 Microsoft Corporation Method and system for uniformly accessing multiple directory services
US6038590A (en) 1996-07-01 2000-03-14 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a client-server state machine in an interprise computing framework system
US6266709B1 (en) 1996-07-01 2001-07-24 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a client-server failure reporting process
US6424991B1 (en) 1996-07-01 2002-07-23 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a client-server communication framework
US6052711A (en) * 1996-07-01 2000-04-18 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a client-server session web access in an interprise computing framework system.
US5987245A (en) 1996-07-01 1999-11-16 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture (#12) for a client-server state machine framework
US5848246A (en) 1996-07-01 1998-12-08 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a client-server session manager in an interprise computing framework system
US6304893B1 (en) 1996-07-01 2001-10-16 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a client-server event driven message framework in an interprise computing framework system
US6434598B1 (en) 1996-07-01 2002-08-13 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a client-server graphical user interface (#9) framework in an interprise computing framework system
US6272555B1 (en) 1996-07-01 2001-08-07 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a client-server-centric interprise computing framework system
US5999972A (en) 1996-07-01 1999-12-07 Sun Microsystems, Inc. System, method and article of manufacture for a distributed computer system framework
US5857191A (en) * 1996-07-08 1999-01-05 Gradient Technologies, Inc. Web application server with secure common gateway interface
US5857203A (en) * 1996-07-29 1999-01-05 International Business Machines Corporation Method and apparatus for dividing, mapping and storing large digital objects in a client/server library system
GB9620196D0 (en) * 1996-09-27 1996-11-13 British Telecomm Distributed processing
US5796393A (en) * 1996-11-08 1998-08-18 Compuserve Incorporated System for integrating an on-line service community with a foreign service
US7058892B1 (en) 1996-11-08 2006-06-06 America Online, Inc. Displaying content from multiple servers
WO1998022922A1 (en) * 1996-11-15 1998-05-28 Siemens Aktiengesellschaft System for co-ordinating the activities of aircraft guidance personnel in an airport
US5918228A (en) * 1997-01-28 1999-06-29 International Business Machines Corporation Method and apparatus for enabling a web server to impersonate a user of a distributed file system to obtain secure access to supported web documents
AU5772998A (en) * 1997-01-28 1998-08-18 British Telecommunications Public Limited Company Managing operation of servers in a distributed computing environment
US5875296A (en) * 1997-01-28 1999-02-23 International Business Machines Corporation Distributed file system web server user authentication with cookies
US6115549A (en) * 1997-02-12 2000-09-05 Novell, Inc. Directory-services-based software distribution apparatus and method
US6282581B1 (en) * 1997-03-27 2001-08-28 Hewlett-Packard Company Mechanism for resource allocation and for dispatching incoming calls in a distributed object environment
US6085030A (en) * 1997-05-02 2000-07-04 Novell, Inc. Network component server
US6594689B1 (en) * 1997-05-08 2003-07-15 Unisys Corporation Multi-platform helper utilities
US6170017B1 (en) 1997-05-08 2001-01-02 International Business Machines Corporation Method and system coordinating actions among a group of servers
US6049799A (en) * 1997-05-12 2000-04-11 Novell, Inc. Document link management using directory services
US5913217A (en) * 1997-06-30 1999-06-15 Microsoft Corporation Generating and compressing universally unique identifiers (UUIDs) using counter having high-order bit to low-order bit
US5920725A (en) * 1997-07-02 1999-07-06 Adaptivity Inc. Run-time object-synthesis and transparent client/server updating of distributed objects using a meta server of all object descriptors
US6006278A (en) * 1997-07-18 1999-12-21 Electronic Data Systems Corporation Method and system for importing remote functions to a network computer
US6029201A (en) * 1997-08-01 2000-02-22 International Business Machines Corporation Internet application access server apparatus and method
US6138150A (en) * 1997-09-03 2000-10-24 International Business Machines Corporation Method for remotely controlling computer resources via the internet with a web browser
US6898591B1 (en) * 1997-11-05 2005-05-24 Billy Gayle Moon Method and apparatus for server responding to query to obtain information from second database wherein the server parses information to eliminate irrelevant information in updating databases
US6148402A (en) * 1998-04-01 2000-11-14 Hewlett-Packard Company Apparatus and method for remotely executing commands using distributed computing environment remote procedure calls
US6138269A (en) * 1998-05-20 2000-10-24 Sun Microsystems, Inc. Determining the actual class of an object at run time
US7305451B2 (en) * 1998-08-24 2007-12-04 Microsoft Corporation System for providing users an integrated directory service containing content nodes located in different groups of application servers in computer network
US6847987B2 (en) 1998-09-30 2005-01-25 International Business Machines Corporation System and method for extending client-server software to additional client platforms for servicing thin clients requests
US6219835B1 (en) 1998-10-15 2001-04-17 International Business Machines Corporation Multi-language DCE remote procedure call
US6236999B1 (en) * 1998-11-05 2001-05-22 Bea Systems, Inc. Duplicated naming service in a distributed processing system
US6571274B1 (en) 1998-11-05 2003-05-27 Bea Systems, Inc. Clustered enterprise Java™ in a secure distributed processing system
US6581088B1 (en) 1998-11-05 2003-06-17 Bea Systems, Inc. Smart stub or enterprise Java™ bean in a distributed processing system
US6609153B1 (en) 1998-12-24 2003-08-19 Redback Networks Inc. Domain isolation through virtual network machines
US6473894B1 (en) 1999-01-29 2002-10-29 International Business Machines Corporation Dynamic runtime and test architecture for Java applets
US6745332B1 (en) * 1999-06-29 2004-06-01 Oracle International Corporation Method and apparatus for enabling database privileges
US6496975B1 (en) 1999-10-15 2002-12-17 International Business Machines Corporation Method, system, and program for performing conditional program operations
US6526433B1 (en) * 1999-12-15 2003-02-25 International Business Machines Corporation Adaptive timeout value setting for distributed computing environment (DCE) applications
US7331058B1 (en) 1999-12-16 2008-02-12 International Business Machines Corporation Distributed data structures for authorization and access control for computing resources
US6728788B1 (en) * 1999-12-16 2004-04-27 International Business Machines Corporation Method and system for converting a remote procedure call to a local procedure call when the service is on the same device as the calling client
US7624172B1 (en) 2000-03-17 2009-11-24 Aol Llc State change alerts mechanism
US9736209B2 (en) 2000-03-17 2017-08-15 Facebook, Inc. State change alerts mechanism
US7451476B1 (en) * 2000-06-20 2008-11-11 Motorola, Inc. Method and apparatus for interfacing a network to an external element
US7024471B2 (en) * 2000-12-12 2006-04-04 International Business Machines Corporation Mechanism to dynamically update a windows system with user specific application enablement support from a heterogeneous server environment
US7440962B1 (en) 2001-02-28 2008-10-21 Oracle International Corporation Method and system for management of access information
US7320141B2 (en) * 2001-03-21 2008-01-15 International Business Machines Corporation Method and system for server support for pluggable authorization systems
US7020867B2 (en) * 2001-03-23 2006-03-28 S2 Technologies, Inc. System and method for automatically generating code templates for communication via a predefined communication interface
US7530076B2 (en) * 2001-03-23 2009-05-05 S2 Technologies, Inc. Dynamic interception of calls by a target device
US7552239B2 (en) * 2001-05-14 2009-06-23 Canon Information Systems, Inc. Network device mimic support
DE10137693A1 (en) * 2001-06-18 2002-05-16 Mueschenborn Hans Joachim Transparent services for communication over a network using log on services and client servers
US7024467B2 (en) * 2001-06-29 2006-04-04 Bull Hn Information Systems Inc. Method and data processing system providing file I/O across multiple heterogeneous computer systems
US7737134B2 (en) * 2002-03-13 2010-06-15 The Texas A & M University System Anticancer agents and use
US7191467B1 (en) 2002-03-15 2007-03-13 Microsoft Corporation Method and system of integrating third party authentication into internet browser code
FR2838266B1 (en) * 2002-04-05 2004-09-03 Thales Sa METHOD AND DEVICE FOR COMMUNICATING WITH A REDUNDANT SYSTEM
US7756956B2 (en) 2002-11-14 2010-07-13 Canon Development Americas, Inc. Mimic support address resolution
US8122137B2 (en) 2002-11-18 2012-02-21 Aol Inc. Dynamic location of a subordinate user
US8965964B1 (en) 2002-11-18 2015-02-24 Facebook, Inc. Managing forwarded electronic messages
US7640306B2 (en) 2002-11-18 2009-12-29 Aol Llc Reconfiguring an electronic message to effect an enhanced notification
US7428580B2 (en) 2003-11-26 2008-09-23 Aol Llc Electronic message forwarding
US8701014B1 (en) 2002-11-18 2014-04-15 Facebook, Inc. Account linking
CA2506585A1 (en) 2002-11-18 2004-06-03 Valerie Kucharewski People lists
US7899862B2 (en) * 2002-11-18 2011-03-01 Aol Inc. Dynamic identification of other users to an online user
US8005919B2 (en) 2002-11-18 2011-08-23 Aol Inc. Host-based intelligent results related to a character stream
US7590696B1 (en) 2002-11-18 2009-09-15 Aol Llc Enhanced buddy list using mobile device identifiers
US20040205127A1 (en) 2003-03-26 2004-10-14 Roy Ben-Yoseph Identifying and using identities deemed to be known to a user
US7496662B1 (en) 2003-05-12 2009-02-24 Sourcefire, Inc. Systems and methods for determining characteristics of a network and assessing confidence
US7653693B2 (en) 2003-09-05 2010-01-26 Aol Llc Method and system for capturing instant messages
US7539681B2 (en) * 2004-07-26 2009-05-26 Sourcefire, Inc. Methods and systems for multi-pattern searching
US7669213B1 (en) 2004-10-28 2010-02-23 Aol Llc Dynamic identification of other viewers of a television program to an online viewer
US7366734B2 (en) * 2004-12-25 2008-04-29 Oracle International Corporation Enabling client systems to discover services accessible by remote procedure calls (RPC) on server systems
US8495664B2 (en) * 2005-07-06 2013-07-23 International Business Machines Corporation System, method and program product for invoking a remote method
US7945677B2 (en) * 2005-09-06 2011-05-17 SAP AG Connection manager capable of supporting both distributed computing sessions and non distributed computing sessions
US7765560B2 (en) * 2005-10-26 2010-07-27 Oracle America, Inc. Object oriented communication between isolates
US8046833B2 (en) * 2005-11-14 2011-10-25 Sourcefire, Inc. Intrusion event correlation with network discovery information
US7733803B2 (en) * 2005-11-14 2010-06-08 Sourcefire, Inc. Systems and methods for modifying network map attributes
US8707323B2 (en) * 2005-12-30 2014-04-22 SAP AG Load balancing algorithm for servicing client requests
US7948988B2 (en) * 2006-07-27 2011-05-24 Sourcefire, Inc. Device, system and method for analysis of fragments in a fragment train
US7701945B2 (en) * 2006-08-10 2010-04-20 Sourcefire, Inc. Device, system and method for analysis of segments in a transmission control protocol (TCP) session
CA2672908A1 (en) * 2006-10-06 2008-04-17 Sourcefire, Inc. Device, system and method for use of micro-policies in intrusion detection/prevention
US20080201481A1 (en) * 2007-02-20 2008-08-21 Microsoft Corporation Remote interface marshalling
US8069352B2 (en) * 2007-02-28 2011-11-29 Sourcefire, Inc. Device, system and method for timestamp analysis of segments in a transmission control protocol (TCP) session
WO2008122092A1 (en) * 2007-04-10 2008-10-16 Web Evaluation Pty Ltd System and/or method for evaluating network content
US8127353B2 (en) * 2007-04-30 2012-02-28 Sourcefire, Inc. Real-time user awareness for a computer network
US8260934B2 (en) * 2007-08-31 2012-09-04 Red Hat, Inc. Multiplex transport
US8474043B2 (en) * 2008-04-17 2013-06-25 Sourcefire, Inc. Speed and memory optimization of intrusion detection system (IDS) and intrusion prevention system (IPS) rule processing
WO2010022459A1 (en) 2008-08-27 2010-03-04 Rob Chamberlain System and/or method for linking network content
US8555297B1 (en) * 2008-09-29 2013-10-08 Emc Corporation Techniques for performing a remote procedure call using remote procedure call configuration information
US8272055B2 (en) 2008-10-08 2012-09-18 Sourcefire, Inc. Target-based SMB and DCE/RPC processing for an intrusion detection system or intrusion prevention system
US8301687B2 (en) * 2009-03-31 2012-10-30 Software Ag Systems and/or methods for standards-based messaging
US8260814B2 (en) * 2009-09-17 2012-09-04 Erkki Heilakka Method and an arrangement for concurrency control of temporal data
US9141449B2 (en) * 2009-10-30 2015-09-22 Symantec Corporation Managing remote procedure calls when a server is unavailable
CA2789824C (en) 2010-04-16 2018-11-06 Sourcefire, Inc. System and method for near-real time network attack detection, and system and method for unified detection via detection routing
US8433790B2 (en) 2010-06-11 2013-04-30 Sourcefire, Inc. System and method for assigning network blocks to sensors
US8671182B2 (en) 2010-06-22 2014-03-11 Sourcefire, Inc. System and method for resolving operating system or service identity conflicts
US8601034B2 (en) 2011-03-11 2013-12-03 Sourcefire, Inc. System and method for real time data awareness
GB2490374B (en) * 2011-09-30 2013-08-07 Avecto Ltd Method and apparatus for controlling access to a resource in a computer device
JP6424499B2 (en) * 2014-07-10 2018-11-21 株式会社リコー Image forming apparatus, information processing method, and program
US10229250B2 (en) * 2015-02-16 2019-03-12 Arebus, LLC System, method and application for transcoding data into media files
US11582202B2 (en) 2015-02-16 2023-02-14 Arebus, LLC System, method and application for transcoding data into media files
EP3062142B1 (en) 2015-02-26 2018-10-03 Nokia Technologies OY Apparatus for a near-eye display
US10650552B2 (en) 2016-12-29 2020-05-12 Magic Leap, Inc. Systems and methods for augmented reality
EP4300160A2 (en) 2016-12-30 2024-01-03 Magic Leap, Inc. Polychromatic light out-coupling apparatus, near-eye displays comprising the same, and method of out-coupling polychromatic light
US10578870B2 (en) 2017-07-26 2020-03-03 Magic Leap, Inc. Exit pupil expander
CN111448497B (en) 2017-12-10 2023-08-04 奇跃公司 Antireflective coating on optical waveguides
CN115826240A (en) 2017-12-20 2023-03-21 奇跃公司 Insert for augmented reality viewing apparatus
US10755676B2 (en) 2018-03-15 2020-08-25 Magic Leap, Inc. Image correction due to deformation of components of a viewing device
WO2019232282A1 (en) 2018-05-30 2019-12-05 Magic Leap, Inc. Compact variable focus configurations
EP3803450A4 (en) 2018-05-31 2021-08-18 Magic Leap, Inc. Radar head pose localization
EP3804306B1 (en) 2018-06-05 2023-12-27 Magic Leap, Inc. Homography transformation matrices based temperature calibration of a viewing system
JP7421505B2 (en) 2018-06-08 2024-01-24 マジック リープ, インコーポレイテッド Augmented reality viewer with automated surface selection and content orientation placement
WO2020010097A1 (en) 2018-07-02 2020-01-09 Magic Leap, Inc. Pixel intensity modulation using modifying gain values
US11510027B2 (en) 2018-07-03 2022-11-22 Magic Leap, Inc. Systems and methods for virtual and augmented reality
US11856479B2 (en) 2018-07-03 2023-12-26 Magic Leap, Inc. Systems and methods for virtual and augmented reality along a route with markers
EP3821340A4 (en) * 2018-07-10 2021-11-24 Magic Leap, Inc. Thread weave for cross-instruction set architecture procedure calls
JP7426982B2 (en) 2018-07-24 2024-02-02 マジック リープ, インコーポレイテッド Temperature-dependent calibration of movement sensing devices
US11624929B2 (en) 2018-07-24 2023-04-11 Magic Leap, Inc. Viewing device with dust seal integration
US11112862B2 (en) 2018-08-02 2021-09-07 Magic Leap, Inc. Viewing system with interpupillary distance compensation based on head motion
JP7438188B2 (en) 2018-08-03 2024-02-26 マジック リープ, インコーポレイテッド Unfused pose-based drift correction of fused poses of totems in user interaction systems
EP3881279A4 (en) 2018-11-16 2022-08-17 Magic Leap, Inc. Image size triggered clarification to maintain image sharpness
EP3921720A4 (en) 2019-02-06 2022-06-29 Magic Leap, Inc. Target intent-based clock speed determination and adjustment to limit total heat generated by multiple processors
JP2022523852A (en) 2019-03-12 2022-04-26 マジック リープ, インコーポレイテッド Aligning local content between first and second augmented reality viewers
JP2022530900A (en) 2019-05-01 2022-07-04 マジック リープ, インコーポレイテッド Content provisioning system and method
JP2022542363A (en) 2019-07-26 2022-10-03 マジック リープ, インコーポレイテッド Systems and methods for augmented reality
WO2021097323A1 (en) 2019-11-15 2021-05-20 Magic Leap, Inc. A viewing system for use in a surgical environment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3614745A (en) * 1969-09-15 1971-10-19 Ibm Apparatus and method in a multiple operand stream computing system for identifying the specification of multitasks situations and controlling the execution thereof
FR2469751A1 (en) * 1979-11-07 1981-05-22 Philips Data Syst SYSTEM INTERCOMMUNICATION PROCESSOR FOR USE IN A DISTRIBUTED DATA PROCESSING SYSTEM
US5228137A (en) * 1985-10-29 1993-07-13 Mitem Corporation Method for controlling execution of host computer application programs through a second computer by establishing relevant parameters having variable time of occurrence and context
US4768150A (en) * 1986-09-17 1988-08-30 International Business Machines Corporation Application program interface to networking functions
US5027271A (en) * 1987-12-21 1991-06-25 Bull Hn Information Systems Inc. Apparatus and method for alterable resource partitioning enforcement in a data processing system having central processing units using different operating systems
US4972368A (en) * 1988-03-04 1990-11-20 Stallion Technologies, Pty. Ltd. Intelligent serial I/O subsystem
US5327532A (en) * 1990-05-16 1994-07-05 International Business Machines Corporation Coordinated sync point management of protected resources
US5280610A (en) * 1990-08-14 1994-01-18 Digital Equipment Corporation Methods and apparatus for implementing data bases to provide object-oriented invocation of applications

Also Published As

Publication number Publication date
US5497463A (en) 1996-03-05
AU4627493A (en) 1994-03-31
DE69323675D1 (en) 1999-04-08
EP0590519B1 (en) 1999-03-03
DE69323675T2 (en) 1999-11-11
AU663617B2 (en) 1995-10-12
ES2127774T3 (en) 1999-05-01
EP0590519A2 (en) 1994-04-06
JPH06214924A (en) 1994-08-05
CA2106891A1 (en) 1994-03-26
EP0590519A3 (en) 1994-05-18

Similar Documents

Publication Publication Date Title
CA2106891C (en) Ally mechanism for interconnecting non-distributed computing environment (dce) and dce systems to operate in a network system
US7051342B2 (en) Method and system for remote automation of object oriented applications
US6065043A (en) Systems and methods for executing application programs from a memory device linked to a server
Karnik et al. Agent server architecture for the ajanta mobile-agent system
US5903732A (en) Trusted gateway agent for web server programs
EP0853279A2 (en) Method and apparatus for controlling software access to system resources
US20130066935A1 (en) Method and system of mapping at least one web service to at least one osgi service and exposing at least one local service as at least one web service
WO2000056028A1 (en) A secure network
US20030055877A1 (en) Remote client manager that facilitates an extendible, modular application server system distributed via an electronic data network and method of distributing same
Berson et al. Introduction to the ABone
US20030046441A1 (en) Teamware repository of teamware workspaces
US7440992B1 (en) Cell-based computing platform where services and agents interface within cell structures to perform computing tasks
US9122686B2 (en) Naming service in a clustered environment
Tripathi et al. Ajanta-a mobile agent programming system
Gustafsson et al. Using nfs to implement role-based access control
Lang Access policies for middleware
Tripathi et al. Ajanta-A system for mobile agent programming
Conde Mobile agents in java
Kolano Mesh: secure, lightweight grid middleware using existing SSH infrastructure
Wickramasuriya et al. A middleware approach to access control for mobile concurrent objects
Agetsuma et al. Exploiting mobile code for user‐transparent distribution of application‐level protocols
Sirer et al. Improving the security, scalability, manageability and performance of system services for network computing
Burger Networking of secure systems
Zhou et al. T2 CAN: A Secure Active Network Prototype
Bakker An object-based software distribution network

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed