WO1998055920A1 - Distributed computer system - Google Patents

Distributed computer system

Info

Publication number
WO1998055920A1
WO1998055920A1 (PCT/GB1998/001668)
Authority
WO
WIPO (PCT)
Prior art keywords
components
component
systems
software
hardware
Prior art date
Application number
PCT/GB1998/001668
Other languages
French (fr)
Inventor
Job Maats
Original Assignee
Trust Eeig
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Trust Eeig filed Critical Trust Eeig
Priority to AU80262/98A priority Critical patent/AU8026298A/en
Publication of WO1998055920A1 publication Critical patent/WO1998055920A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/465 Distributed object oriented systems

Definitions

  • the present invention relates to a distributed system containing digital and human components.
  • in the case of intangible components such as software and data, and ultimately human components, very disciplined production processes are much more likely to be economically worthwhile.
  • intangible production processes are therefore expected to prove capable of being shifted effectively from the craft era to the assembly-line era.
  • the concept of the software factory is well known, and data maintenance and enhancement technologies are perceived to be increasingly critical with data warehousing and analysis and the use of data as part of integrated systems.
  • Even in the human arena significant progress has been achieved in using assembly-line technologies as education progresses from the craft era's one-on-one tutors towards physical and virtual universities.
  • the assembly-line technologies are thus well known for intangibles, but it is the social and distribution systems and the legal and financial technologies which are currently inhibiting the roll out of the process.
  • Encryption and other related security technologies have seen enormous development from the Second World War onwards. Significant competition and mutual challenges between East and West in the form of espionage and counter-espionage and mutual code-breaking and eavesdropping activities have resulted in ever further refinements of the principles involved in protecting all system components, whether human, data, algorithmic or physical assets.
  • connection costs are now no longer primarily determined by the physical cost of installing and maintaining the backbone copper and fiber, but by the cost of the switching equipment.
  • circuit switching is becoming uncompetitive relative to packet switching in terms of communication cost.
  • Outdated obligations on Recognised Operating Agencies to provide "perfection" in voice communications, relatively independent of the marginal value and well in excess of known human limits to benefit from service quality improvements, will mean that circuit switching will nevertheless remain a dominant technology for Public Operator voice telecommunications for a considerable period of time.
  • Treaties and security considerations, as well as the control over the cable consortia owning the undersea cables, which are characterised by extremely anti-competitive terms and conditions restraining aggregate supply between the former monopolists which controlled all international telecommunications, mean that arbitrage at the margin in the international arena is constrained. Reliable sources anticipate that it will take another decade, if not more, before these constraints become totally ineffective.
  • Distribution of components to fully realise the differential terms of trade existing around the world, based on natural advantages relative to the cost of conveyancing of digital data, is however already fully feasible in a large number of commercial applications, even based on current tariffs and taking into account many of the inefficiencies imposed by regulations and the anti-competitive actions of dominant suppliers. Distribution is rapidly becoming feasible for yet more applications as international tariffs show yet further decline with the impact of deregulation associated with the successful completion of the WTO negotiations in February 1997.
  • the invention provides a distributed system containing digital and human components with explicitly defined aggregatable quality and trust, communicating over digital and analog connections with explicitly graded telecommunications channel security, based on intelligent transmission path selection of inter-component communications reflecting the globally relevant jurisdictional metrics.
  • trust is intended to include such issues as data integrity, reliability, the degree of confidence in the reliability and security from external access possessed by the system or any component part of the system, the likelihood and cost associated with unauthorised decoding and other invasions of privacy, and a range of other culturally defined metrics and less well codified criteria.
  • no universally applicable standard needs to be set for trust.
  • the invention can accommodate any metric for trust as long as it is reasonably explicit and reasonably stable. Each and every social and corporate community can thus select its own rules on what it considers proper and what it requires to maintain trust in communication amongst parties in an era where face to face communication is increasingly going to be the exception rather than the rule.
  • the invention addresses the essential problem that all global, nation-state and corporate infrastructures maintain a mental model which presumes neat and precise boundaries reflecting the legal and political systems. Inter-connection between systems and dynamic and feedback effects inherently require connectionist approaches for realistic modelling. Reality is much more closely mirrored by the complex food web than by the neat organisation and linear structures of accountants. Realism in modelling can be materially improved by looking at primary producers and maximising efficiency there rather than attempting to establish order through higher trophic levels.
  • the inter-nexus is inherently a global effort in the nature of a global commons which, once set up, will grow as long as it is not overgrazed.
  • the exchange of information benefits both sides and, with an appropriate metering mechanism, can be leveraged to self-perpetuate its growth.
  • the inter-nexus broker allows systems to be designed to be (1) Technology independent, (2) Platform independent, (3) Operating System independent, (4) Client independent, (5) Communications System independent, (6) Bandwidth independent, (7) Data format independent, (8) Circuit/packet independent, (9) Encryption protocol independent, (10) Network trust level independent, and (11) Enterprise and Network Management system independent.
  • the invention is based on the innovation of splitting and disciplined compartmentalising of a system into components.
  • the mythical man-month problem is the generic problem of software writing. It has to date limited software projects in terms of scale and time frame; anything beyond that scale has proven undeliverable.
  • a system is rigorously divided into trusted components, each component being essentially a "black-box" to the other components, and being identified by a unique code or number.
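By way of illustration only, the black-box discipline described above might be sketched as follows in Python; all names, signatures and the placeholder processing are hypothetical and are not taken from the specification:

```python
import hashlib
import hmac


class BlackBoxComponent:
    """Illustrative 'black box' component: internal state and functions
    are private; only a narrow, enumerated interface is exposed,
    together with a unique, immutable identifying code."""

    def __init__(self, class_code: str, instance_number: int, secret: bytes):
        self._class_code = class_code            # fixed at creation
        self._instance_number = instance_number  # unique within the class
        self._secret = secret                    # known only inside the box

    @property
    def identity(self) -> str:
        # Other components may learn the identity, but nothing else.
        return f"{self._class_code}:{self._instance_number}"

    def prove_identity(self, challenge: bytes) -> bytes:
        # Challenge-response proof of identity without revealing state,
        # usable in the certification-agent handshakes described later.
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

    def invoke(self, function_name: str, payload: bytes) -> bytes:
        # Sole entry point: only explicitly enumerated functions exist.
        if function_name != "process":
            raise PermissionError("function not in the public enumeration")
        return self._process(payload)

    def _process(self, payload: bytes) -> bytes:
        # Internal function, inaccessible from outside the component.
        return payload[::-1]  # placeholder processing
```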
  • the invention can thus deal with Discrete Components, with a brokerage/interpreter component to switch efficiently between components and clusters of components.
  • the discrete approach first of all allows specialisation within each of the four generic components and then within each of the generic families it allows individuals or small teams to progress in fine-tuning individual components.
  • Components in any system may be divided into four categories of components. All systems are a combination of individual variations from these four categories of components:
  • the software for processing the data: the software may be divided into a number of components, each having assigned predetermined functions. No more functionality than is appropriate for the intended application should be accessible;
  • Electronic Commerce shows very significant commercial promise but inherently requires integration of components at various levels of granularity for realisation of its commercial objective and, as dictated by security, requires distribution of these components across the world.
  • the extended enterprise associated with the natural progression of Electronic Commerce will lead to such enterprises becoming members of ever more clusters of suppliers and affinity groups of clients.
  • the cluster trend which has proven so inevitable in real estate development is expected to show an equivalent development in cyber real estate and reflects the inherent benefit of loyalty technologies and the high cost of initial customer acquisition relative to the cost of customer maintenance.
  • the logical expectation is therefore that more and more overlapping webs will be proliferating as a natural consequence of Electronic Commerce.
  • the essential element of this innovation is therefore to provide a system capable of distributing components around the world and decoupling end-to-end trust both from the individual components, the cluster or webs of trust, and the elements of the transmission networks used, while being capable of accommodating any and all public policy standards that might exist from time to time in any one of the end-point nation states or any of the transit-nation-states (defined as the nation states in whose jurisdiction components of the intermediate systems through which the connection passes may be located.)
  • a distributed computer system consisting of isolatable webs of trust, the system comprising a multiplicity of components, including at least directly or indirectly one of each of the categories of human, data, software, hardware, and each component having a desired level of trust assigned thereto:
  • Each and every component is to know its state and to possess a unique identity and identifying code which precisely defines a class with its own unique number. With the exception of human components, which are inherently analog, this means that for digital and substantially hardware components the components within a class are perfect substitutes for each other and are only different to the extent of their unique number, which is incontrovertibly and permanently associated with them, unchangeable inside the black box, and absolutely required for the black box's own autonomous and internal operations.
  • Each component class has a unique number which is a function of the generic component, the specific type and version, as well as the specific link to a metering/copyright agent authority. The component itself would specifically have a unique number relative to its certification agent, which is capable of being changed only in coordination with the certification agent. In a protection sense this is thus conceptually not much different from a physical dongle.
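One possible, purely hypothetical encoding of this numbering scheme in Python; the field widths and category codes are invented for illustration:

```python
from dataclasses import dataclass

# The four generic component categories named in the specification.
GENERIC_CATEGORIES = {"hardware": 1, "software": 2, "data": 3, "human": 4}


@dataclass(frozen=True)  # frozen: the identity cannot change once assigned
class ComponentClassNumber:
    category: str        # one of the four generic categories
    type_code: int       # specific component type within the category
    version: int         # version of the component
    metering_agent: int  # link to the metering/copyright agent authority

    def as_code(self) -> str:
        return (f"{GENERIC_CATEGORIES[self.category]:01d}"
                f"-{self.type_code:04d}-{self.version:03d}"
                f"-{self.metering_agent:05d}")


@dataclass(frozen=True)
class ComponentIdentity:
    class_number: ComponentClassNumber
    instance_number: int      # unique relative to the certification agent
    certification_agent: int  # only this agent may coordinate a change


# Two instances of the same class are perfect substitutes, differing
# only in their instance number.
cls = ComponentClassNumber("software", type_code=42, version=3, metering_agent=7)
a = ComponentIdentity(cls, instance_number=1001, certification_agent=9)
b = ComponentIdentity(cls, instance_number=1002, certification_agent=9)
assert a.class_number == b.class_number and a != b
```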
  • the component's enumeration of capabilities needs to be explicit. Ideally this functionality should be subject to independent verification of the source code prior to compilation, thereby confirming that no redundant code is present which could cause extra functionality to be accessed which might otherwise not be known to integrators, which would only have access to the public descriptions of the components and the compiled version of the black box.
  • Multiple independent testing teams reviewed by multiple third party auditors would allow cumulative endorsement at relatively insignificant incremental costs for truly critical components where extreme levels of trust are required and where the limits of insurable capacity are being reached.
  • the inter-nexus broker controls the number of components in each cluster and checks the cumulative logic in terms of whether the requisite number of components with the specified unique identities are present.
  • Acceptable standards for TTP technologies are in the public domain.
  • Equally acceptable technologies and processes for encryption of communication between the TTP and the components are in the public domain.
  • Applets and applications should only have the number of components identified by the application's certification authority; no more and no fewer components of any specified particular class should be present. Alterations or substitutions to the applications should at all times require first a handshake with a typically remote TTP, acting for the application's certification authority, to establish trust; only then can the application's components be altered and the changed components be introduced into the application. At start-up of the application a new integration test would be performed with full public/private key exchange between all the components and the application's certification authority to establish the integrity of all components present. After that, exchanges between components can take place with session encryption at the requisite level.
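A hedged sketch of such a start-up integration test, reusing the hypothetical BlackBoxComponent sketched earlier; verify_proof stands in for the certification authority's TTP verification service, which the specification does not detail:

```python
import os
from collections import Counter


def startup_integration_test(declared_classes: Counter,
                             components: list,
                             verify_proof) -> bool:
    """Check that exactly the declared number of components of each
    class is present, and that each proves its identity to the
    certification authority before the application may run."""
    found = Counter(c.identity.split(":")[0] for c in components)
    if found != declared_classes:
        return False  # wrong count for some component class
    for c in components:
        challenge = os.urandom(32)
        if not verify_proof(c.identity, challenge, c.prove_identity(challenge)):
            return False  # component failed the TTP handshake
    return True  # components may now negotiate session encryption
```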
  • a cluster is defined by a dedicated TTP ring.
  • This cluster is defined as the web of trust.
  • the trust in any message emanating from this web is by definition never higher than the trust which can be placed in the TTP itself and those controlling access to that TTP.
  • the TTP performs the function of certification agent at time of start up of any applet or application or systems or data centre or entry control into physical facilities such as cars or buildings.
  • the present invention contemplates the use of brokerage components.
  • Each web of trust is ideally to have one gateway component, the inter-nexus, through which the web communicates with the outside world, itself defined as either another web of trust or an untrusted party.
  • This component performs the designated screening functions and typically contains a VALVE operation enabling a higher level controller of the TTP to instruct the gateway to fully or partially close down in certain directions.
  • the brokerage component allows introduction of low level and fully tailored security mechanisms. Variations in security between different portions of the systems, with possibly different and varying encryption technologies and rules dictated by differing requirements, can thus be achieved. These brokers are also the natural "gates" for metering locations. To increase the universality of the components, and to keep up-front co-ordination costs down, brokerage components can exist in addition to the presence of the certification agents.
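The gateway's VALVE and its role as a metering gate might be sketched as follows; the states and the authorisation flag are illustrative simplifications of the TTP control described above:

```python
from enum import Enum


class ValveState(Enum):
    OPEN = "open"
    INBOUND_ONLY = "inbound_only"
    OUTBOUND_ONLY = "outbound_only"
    CLOSED = "closed"


class GatewayBroker:
    """Hypothetical inter-nexus gateway: the sole point through which a
    web of trust talks to the outside world. A higher-level controller,
    acting through the TTP, can turn the VALVE to fully or partially
    close the gateway in a given direction."""

    def __init__(self):
        self.valve = ValveState.OPEN
        self.meter = 0  # natural metering point: count external accesses

    def set_valve(self, state: ValveState, ttp_authorised: bool):
        if not ttp_authorised:
            raise PermissionError("only the TTP controller may turn the valve")
        self.valve = state

    def pass_outbound(self, message: bytes) -> bytes:
        if self.valve in (ValveState.CLOSED, ValveState.INBOUND_ONLY):
            raise ConnectionError("valve closed in the outbound direction")
        self.meter += 1  # metering at the gate
        return message   # screening/encryption of the message would go here
```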
  • the development of the component and overall system by a number of people acting independently means that data formats may be different from time to time and that the high-level data dictionary might from time to time get out of synch. (This can obviously be handled by making the data dictionary part of the high level architecture and making this document an interactive and continuously updated directory.) This reference resource can be permanently maintained up to date for any and all parties participating in component creation.
  • the brokerage components are the logical blocks to perform the metering functions to measure how many accesses have been made by other parts of the systems to the functionality in a particular web controlled by a specific certification agent.
  • a brokerage component and a certification agent, if not integrated, are by definition always paired to ensure that external influences can be controlled with a very precise valve at the entry point into the structure.
  • Components will inevitably come to be distributed based on super-distribution models, where components will typically be free for the first couple of thousand instantiations to allow a developer to test the functionality and to build and test the basic integrated system. Only when the components commence being used by the ultimate customers in daily and intense use will the component's clock typically progress beyond the free metering period.
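A minimal sketch of such a metering clock, taking the "first couple of thousand instantiations" literally as an illustrative threshold and inventing a nominal tariff:

```python
class MeteringClock:
    """Sketch of the super-distribution model: the first N instantiations
    are free so developers can test and integrate; beyond that the clock
    accrues billable use. N and the tariff are illustrative values."""

    FREE_INSTANTIATIONS = 2000  # "free for the first couple of thousand"

    def __init__(self, tariff_per_use: float = 0.001):
        self.count = 0
        self.tariff = tariff_per_use

    def record_use(self) -> float:
        self.count += 1
        if self.count <= self.FREE_INSTANTIATIONS:
            return 0.0  # still within the free metering period
        return self.tariff  # billable once in daily, intense customer use
```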
  • the brokerage components have one more benefit. They allow the setting of an extra trapdoor if the IPR structure is compromised.
  • the code in individual components might be disassembled, but if certain data formats inside the components require encrypted data to be sent for it to work, then disassembling one component has no benefit. As this is a low level security issue, this only needs to be known between the components in the cluster and the brokerage component, i.e. for the rest of the systems it is believed that there is nothing special about the data formats for that particular cluster. (Externally it looks normal, as the coding/decoding is fully internal to the combined broker/cluster universe.)
  • the brokerage component can conveniently allow significant constraining of the theoretical functionality of the components in the web of trust, with the broker setting limits on certain transactions which can be passed directly to the web, or allowing certain transactions through with a simultaneous alert going out.
  • This type of security functionality can be programmed by parties wholly unrelated to the ones designing the component family and thus allows organisations to limit the impact of individual events. (Although most of the components in the world are very generic, the ability to innovate in terms of security and how they are interconnected through the use of the broker is infinite.)
  • the brokerage components have benefits not only for the protection of a certain web of components, but also to protect against certain types of program trading implications in compromised systems. Breaking through one web could create problems in another web as a result of interconnections between the clusters, which can logically only be controlled by the valves and the algorithms set in the brokers. Brokers could easily agree that certain data flowing between brokerage components can only be passed in a particular encrypted format to be able to be processed. If someone were to break the generic encryption applying to the pipes running between the individual components and between the components and the brokers, then this would still not be adequate to cause a major impact if and when the brokers exchange certain data formats only in yet further encrypted form.
  • the present invention envisages a certification agent or authority (CA) running inside each cluster, and then from there onwards further CAs controlling every other component cluster (hardware, software, data, humans as actuators/sensors) in the network, to control peer or hierarchical communication with TTP handshake processes with individual components before components can communicate with each other.
  • CA: certification agent or authority
  • the certification agent running inside a cluster effectively controls access to a cluster of components, with the application only capable of being run or changed if and when a web of trust has been established, i.e. all the components with the requisite specific IDs for that component instantiation are present, as is the metering object, and it will have been checked that it still has reasonable run time on it.
  • the certification agent obviously will have different security levels, with different groupings/clusters required to activate the ordinary running of the application and a significantly higher level of trust to make change requests to the live components. Sign-offs will frequently be required from other parties in other webs to ensure that all other relevant webs have been fully prepared to accommodate those data elements which could have been impacted by the changes in one of the webs. All coding and decoding contains the seeds for code-breaking: not so much because of the power of any individual machine to break the code by application of raw processing power to test combinations, but most frequently because the opponent is inadvertently given an insight into the architecture and can thus possibly anticipate a certain type of human or system error which will either occur naturally or in some clever way can be invoked. Errors provide the opponent with information, which then causes a problem previously not solvable within a human lifetime to suddenly become manageable in days. Even raw power attacks are possibly going to have very different cost dimensions in the near future.
  • the classic example could be the use of the processing power of the net to give anyone relatively low cost access to the equivalent of a Cray within the next year by accessing unused processing cycles.
  • Limiting insight about system designs and creating explicit audit traps is the only way to create time to respond when the inevitable security infractions will happen.
  • the broker component is conveniently capable of slowing down infestation and the general spreading of viruses by a controller turning the valve remotely through the certification agent. It should be presumed that at some point in time it will be uncovered that errors have been made, or that mathematical encryption assumptions are no longer valid, or that a breakthrough in terms of processing power is made somewhere using parallel computing or incredibly efficient distribution of tasks across a number of processors. No component and no cluster can ever be presumed to be totally secure. This dictates the requisite levels of compartmentalisation and granularity of the applets in order to be able to jettison compromised clusters and the need to build in redundancies all over the place.
  • the webs of trust combined with the loose coupling of components mean that branded experiences can be created which hide the plumbing from the user while meeting the explicit security and other quality standards through a series of intelligent agents making the requisite decisions. From a marketplace perspective the ability to package inherently complex systems design decisions and choices is generally considered to be a material marketing advantage, as it is the result which is purchased, not the process of getting there.
  • the inherent structure allows complexity to be decoupled from the expertise of the user and to accommodate preferences or inherent needs as they may exist from time to time.
  • connectionist and isolationist principles in the design of the webs of trust mean that administration can be readily simplified through relatively simple contextual logic rules and that trust becomes a transitory concept.
  • the TTP structure allows service authorisation designations to be set regarding the rights that specific components can acquire with respect to other components within the web and in terms of what can be passed externally to another web of trust or to a designated party.
  • the encryption levels can be re-agreed, from the generally prevailing standards within the system, between individual components within a certain software applet once the identity of both components has been checked and the communication has therefore been cleared between the components. It is of course critical that this inter-component communication is encrypted in order that an outside observer cannot learn about either of the components from what is going back and forth between them.
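As one possible reading, such a re-agreement could resemble an ephemeral Diffie-Hellman exchange between the two cleared components. The sketch below uses deliberately toy parameters and is illustrative only; it is neither production cryptography nor a protocol specified by the patent:

```python
import hashlib
import secrets

# Toy parameters: a Mersenne prime modulus and a small generator chosen
# purely for illustration; real deployments would use vetted groups.
P = 2**127 - 1
G = 3


def session_key(my_secret: int, peer_public: int) -> bytes:
    # Derive a fresh symmetric session key from the shared DH value.
    shared = pow(peer_public, my_secret, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()


a = secrets.randbelow(P - 2) + 2           # component A's ephemeral secret
b = secrets.randbelow(P - 2) + 2           # component B's ephemeral secret
A_pub, B_pub = pow(G, a, P), pow(G, b, P)  # exchanged over the cleared channel
# Both sides arrive at the same key; an observer of A_pub and B_pub learns
# nothing directly about the traffic subsequently encrypted under it.
assert session_key(a, B_pub) == session_key(b, A_pub)
```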
  • the TTP structure and the audit processes which can be readily associated with the passing of data through the gateways mean that forensic accounting is facilitated whenever security is "breached". Frequently breaches will be permitted on purpose in order to observe the hostile parties and to allow them to be trapped as they enter further and further into and through webs of trust.
  • the architecture allows fake webs of trust to be operated apparently as real life systems but in reality only providing "dummies" to capture hostile parties and to waste their efforts and confuse them about their relative success in penetrating into and compromising systems.
  • the invention allows for a central repository to contain, amongst others, explicitly codified information on the location of help services, where FAQs can be obtained, under what conditions an upgrade of the component is required, processes for handling upgrades in the components based on new operating systems becoming available, processes for handling systemic problems, such as bugs, and the parties responsible for addressing systemic events and the time period they in turn have committed to resolve these problems.
  • the repository could contain information on how the potential users of the component can be sure that they will receive information to rectify systemic problems.
  • the systems allow enhanced audit for systems, as the time and place of entry and exit from the integrated web of trust can be determined and at all times the identity of the parties is known with an explicit level of trust about the truth of that identity.
  • the systems allow for the natural bubbling up of problems, as the infrastructure allows for systemic events experienced by others to be conveniently shared with parties using the same class of components.
  • individual operators' experiences of systemic events will be a fraction of those experienced by users of custom coded systems written from scratch, or by users of less distributed systems, who will not realise errors as they are less likely to be trapped, since codification needs to be inherently less rigorous in geographically non-dispersed systems.
  • the intensity of effective testing of the robustness of code, together with the generalised nature of events, will readily allow rating services to emerge both for publishers/authors and for maintenance and other organisations, which over time will allow for all dimensions of quality to be reduced to purely financial aspects which are inherently easier to manipulate mentally. Problems can thus be fixed in a systematic way through the normal effect of markets, which will drive those who deliver less than the standards out of the marketplace over time and will reward those who excel through premiums reflecting the ultimate benefits of quality systems to their users.
  • the proposed system remedies the artificial separation between telecommunications and computing and between human and computer systems. Any and all systems are hybrid, and it is pretty much inconceivable that webs of trust at the highest levels would not virtually always contain all of the elements and a number of hybrids in terms of intermediate components. Although hybrids are inherently superior to either pure automated systems or pure human systems, the systems also allow explicitly for overrides beyond the margins where either humans or systems become superior.
  • overrides can specifically be set in a cumulative way in order to address mental overload issues at times of crisis, thus ensuring that data and inputs are reduced and heuristic rules allow scarce human or system resources to be focused on what is most critical or most valuable, dropping where necessary explicitly both traffic and activities which are less valuable or which need to be jettisoned to protect the larger integrated infrastructure or at least elements thereof.
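A hypothetical rendering of such cumulative overrides as a load-shedding rule; the decile thresholds and the "value" score are invented for illustration:

```python
def shed_traffic(queue: list, overload_level: int) -> list:
    """Drop the least valuable traffic first: each overload level
    cumulatively sheds a further (illustrative) ten percent, keeping
    scarce human or system attention on the most critical items."""
    if not queue or overload_level <= 0:
        return list(queue)
    ranked = sorted(queue, key=lambda item: item["value"], reverse=True)
    keep = max(1, int(len(ranked) * (1 - 0.1 * min(overload_level, 9))))
    return ranked[:keep]


# Example: at overload level 3, the lowest-valued ~30% is jettisoned.
traffic = [{"id": i, "value": i} for i in range(10)]
print([t["id"] for t in shed_traffic(traffic, overload_level=3)])
```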
  • the proposed system addresses the specific problem of the proven irresponsibility and negligence of certain system houses which have knowingly been delivering systems with material defects since the early eighties. It equally addresses the common-sense irresponsibility and negligence of systems houses which have delivered non-CDC code during the nineties.
  • the time frames of many system suppliers have proven to be either very short in terms of harvesting the returns from their activity, or they have callously recognised the inability of the legal and financial infrastructure to address issues of warranty and fitness for purpose, and the inability of their relatively unsophisticated clients to uncover from a systems perspective the material defects, and have thus presumed an inability to collect from the original supplier the cost of remedying code for CDC defects.
  • the world at large is expected to have to waste $600 billion alone in terms of payment for critical human resources for one single occurrence of systemic digital risk, the CDC event.
  • the CDC event, together with the consequential damages of non-remedied CDC problems for which users have paid twice, once when they bought the components and again when they paid to remedy the CDC material defect which should have been under warranty, is generally expected to cause a phase change in the systems-component marketplace.
  • the proposed system explicitly allows specialisation within the development of systems and components thereof and eliminates any benefit from vertical integration. This thus introduces the possibility of incremental participation in systems development by SMEs all over the world, likely to lead to a surge in creativity and innovation and an ever further reduction in monopoly power by any parties attempting to control markets through critical choke points in the value chain of integrated systems.
  • the proposed systems' ability to codify the classes and number of users also means that the financial flows from software and other components can become objectively determinable and that derivative securities can thus be developed, allowing authors of software components to arrange for convenient working capital financing of the ultimate fruits of their labour and permitting them to share future risks and rewards and to redistribute the maturity profile of such rewards in ways that meet the needs of the contributors of financial capital and intellectual capital co-producing the requisite components and cluster families.
  • the proposed system is recursive in that the proposed system is inherently well suited both for Electronic Commerce applications and for applications associated with delivering the components required for building Electronic Commerce systems.
  • the negligible cost associated with digital conveyancing and the independence from distance ensures that the markets for components are likely to be global from inception, thus eliminating opportunities for discrimination regarding any characteristics of the authors which are not reflected objectively in the output products of such authors.
  • the proposed systems and the universal presence of the gateway object conveniently allow for superior distribution and metering of software based on usage and value to the user, rather than being based on pure time-based or number-of-users-based licensing.
  • the proposed systems also allow inherently for virtual and real modelling and integration and stress testing in a completely segregated manner from the main live systems.
  • Electronic Commerce systems, in particular financial transaction processing systems and mission critical systems, require high levels of "trust". This feature is also particularly important in that, in the absence of such models, technological engineering-type perfection inevitably becomes the goal.
  • explicit decisions can be made in a responsible manner to build a total infrastructure which falls well short of perfection but which optimises total returns. This is particularly important in the area of fraud, which has to be presumed as a constant in any and all systems design. It is extremely unlikely that no fraud is the optimal desired position, as it tends to also impose significant cost on both organisation and clients, and thus total elimination is not economically optimal.
  • the proposed systems permit objective standards for network management and create the ability to produce explicit diagnostic components which can run inside any web of trust, thus further enhancing security and facilitating systems administration. Any and all network elements can thus be explicitly interconnected and enhanced with other data elements, thus allowing the use of scientific techniques along a range of metrics for optimisation.
  • the proposed system presumes all channels are tapped, but is not concerned since all the tappers get is encoded traffic.
  • One's home government can of course compel one to break security by requesting the TTP identities and by asking to be provided with the decryption keys. This is however precisely defined in TTP legislation and is thus an explicit exposure which can only be remedied through business relocation and electing a new sovereign.
  • the proposed systems explicitly allows for mixtures of structured and unstructured systems and permits increasing degrees of codification thus shifting hybrid systems continuously in the direction where the value added of human resources at the margin increases with time.
  • the proposed systems specifically meet the requirements of the issuers of Directors' and Officers' liability cover in the post-CDC era, where large numbers of Boards are expected to be subject to litigation based on either negligence in reclaiming the cost of CDC work from suppliers, negligence relating to third party certification of their and their interconnected parties' systems for CDC compliance, or the failure to anticipate that certain parties which failed to remedy would not be able to meet their obligations for the resulting consequential damages. Only those wholesale adopting the invention are likely to be able to handle the complexity in the post-millennium D&O world.
  • the proposed system provides investors with the possibility of directly investing in codified knowledge assets which are the closest proxy to investing directly in human assets.
  • the maintainable nature of components also avoids the inherent exposure associated with the current vogue of investing in counter-cyclical year 2000 specific hedges.
  • Many of the target companies reportedly providing such a hedge have no demonstrable skills in converting their business proposition beyond 2000 into sustainable digital risk reduction businesses.
  • codified knowledge assets can be rolled out on a global basis for integration and local mass customisation and personalisation, and thus do represent a hedge which furthermore is very difficult to expropriate through retroactive taxation or unilateral cancellation by certain sovereigns, whether acting individually or as a concert party, in violation of a treaty or of long established sovereign monopoly rights such as trademarks and patents.
  • the market-based approach and the ability to provide real time information on the quality of the accepted knowledge asset, the cumulative experiences with the knowledge asset, and the cumulative and outstanding experiences which challenge the knowledge asset, mean that the price of the derivative is likely to be established from inception in a relatively efficient global market, while the owners/publishers can independently maximise the price of the knowledge assets from the financial investors.
  • the shift in economic contribution from agriculture and industry into information businesses can be expected to impose a trust deficit discount on many large enterprises and depress surplus physical assets as wealth holding preferences adjust.
  • FIG. 1 is a schematic block diagram of a group of nation-state specific components functioning via an inter-nexus to provide communications between the various nation states in the world.
  • communications between 233 nation states are provided by a single web of trust 2.
  • the web of trust is capable of accepting calls in any of the known formats, with routing control codes entered for example by means of a touch-tone key pad, or voice instructions in any language.
  • the web of trust 2 comprises four software components 4,6,8,10 installed on a single hardware platform component 12. Each component is assigned a unique identifying code 14 and each component has a communicating link 16 with the other components, which link is encrypted, employing the respective codes for the encryption.
  • Each component functions as a black box, only the respective component knowing its own functions and capabilities.
  • the sole means of communicating and interfacing with each component is via a respective applications programming interface 17.
  • Each component has a certification authority 18, linked to a trusted third party control 20 external to the system. The authority 18 checks upon start-up the integrity of the respective component.
  • Component 4 functions as the inter-nexus, having an interface 22 which can accept encrypted communications in any of the prescribed telecommunications formats from a user 24 in any country in the world, who may have a handset including a touch tone key pad 26 and a microphone 28.
  • Component 4 in addition includes a metering authority 30 for recording the use made of the inter-nexus.
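For illustration, the Figure 1 arrangement can be expressed as a small data sketch; the reference numerals follow the figure, while the structures themselves are hypothetical:

```python
import itertools

# Four software components 4, 6, 8, 10 on a single hardware platform 12,
# each assigned a unique identifying code 14.
platform = 12
components = {ref: f"code-14-{ref}" for ref in (4, 6, 8, 10)}

# Encrypted communicating links 16 between the components, each link
# employing the respective identifying codes for its encryption; the
# sole interface to a component is its API 17.
links = {
    pair: ("encrypted", components[pair[0]], components[pair[1]])
    for pair in itertools.combinations(components, 2)
}

internexus = 4           # component 4: accepts external calls via interface 22
certification = 18       # per-component authority, linked to external TTP 20
metering_authority = 30  # records the use made of the inter-nexus

for pair, (mode, code_a, code_b) in links.items():
    print(f"link 16 {pair}: {mode} using {code_a} and {code_b}")
```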
  • the combination of systems and financial technologies in this invention allows complexity to be reduced and allows the market to substitute and replace many of the functions historically fulfilled by the state and large corporations.
  • the firm will be able to be reduced to the proverbial box of explicit contracts, while remaining residual exposures to implicit events and risks can increasingly with time be converted into explicit contracts and events.
  • the invention will allow event-specific-trust to be translated into a probability.
  • the specifics of an event, combined with proven actuarial technology, allow for translation of the probabilities into financial expectations with a distribution range. Complex technological choices can thus be abstracted and, both individually and at any aggregated level, can be collapsed into market based pricing decisions for TRUST.
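A worked example of this collapse of trust into price, with purely illustrative figures:

```python
def expected_loss(p_compromise: float, impact: float) -> float:
    """Actuarially fair price of residual (un)trust: the probability
    that event-specific trust fails, times the financial impact."""
    return p_compromise * impact


# A component trusted at 99.99% (probability of compromise 0.0001)
# guarding a $10m exposure prices its residual risk at $1,000 per period.
print(expected_loss(0.0001, 10_000_000))  # -> 1000.0
```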
  • the invention, when combined with triage, will thus explicitly create the possibility of technologically and financially ring-fencing and creating specific secure islands/webs which can be the safe harbours from which an organisation can be rebuilt after experiencing hostile attacks or the consequences of its own making derived from using unproven and unwarranted components.

Abstract

The present invention provides a distributed computer system, the system comprising: a) a multiplicity of components, including at least one of each of the categories of human, data, software, hardware, and each component having a desired level of trust assigned thereto, b) each of the software and hardware components including no more functionality than is required to carry out the desired task, and the internal functions of each of the hardware and software components not normally being accessible externally of the respective component; c) each component having a unique identifying code associated therewith, and one or more communication links between the components through predetermined interfaces requiring the use of the associated identifying codes, including means for encrypting communications between the components.

Description

DISTRIBUTED COMPUTER SYSTEM
FIELD OF THE INVENTION
The present invention relates to a distributed system containing digital and human components.
BACKGROUND ART
With the advent of distributed computer systems, the emphasis in systems development has shifted from the writing of the algorithm, which was previously the most important operation, to managing availability and complexity. Complexity is managed by standardising building blocks and viewing these blocks at levels of aggregation appropriate to the parties involved. Continuous improvement of the quality of building blocks themselves, particularly the software blocks, is generally achieved using object orientation, which at the same time addresses the maintenance and scalability/availability issues if correctly designed. By keeping the granularity of components down, the code can be kept simple and elegant, and thus quality is within reach by using industry specialist domain experts to write these reusable components and through independently testing and financially certifying the associated standardised warranties.
More complex software components can be treated in the same way that the processor manufacturers in essence just modify an existing chip-line by adding go-faster stripes rather than completely new chips. Complexity is thus not really increased and quality can be predictably improved in a sustainable manner. Where complex software components are written from scratch, the only long-term realistic solution is longer development cycles and more extensive testing, i.e. intensive investment in quality up front rather than relying on the current trial and error processes using the marketplace as a guinea pig and delivering product not fit for purpose. On balance most engineers consider that quality has to be designed into a product and that it cannot be added as an afterthought. Trial and error, quite apart from the legal exposures it creates, appears not to be a realistic long-term process substitute for designed quality.
The economics of quality component production, due to the requisite documentation and testing and financial warranty processes required for distributed systems, are going to result in components a quantum, and possibly even several quanta, more expensive than the more casual processes used historically, with both components and the packaged megalithic end-user product manufactured in-house in a fully vertically integrated operation.
The software industry suffers from the absence of efficient intermediaries, and has to build products from scratch in-house without relying on internal and international markets. It thus needs to control sourcing all the way through to intermediate products, and sometimes even the distribution and logistical aspects, as markets do not perform their functions reliably. Vertical integration typically prevents realistic allocation of cost between stages in the production process, thereby yet further distorting any rudimentary markets which might emerge. The excuse, or legitimate fear, of cost imposed by having to rectify shoddy purchased-in products naturally limits third party purchases, or makes them prohibitively expensive, both in an administrative sense and in terms of the excessive quality specifications demanded for relatively low volume production runs. This phenomenon has long been known in the defence and aerospace industry. The cost of certain garden-variety items in these industries is a multiple of their production cost in the civilian world as a result of over-specification and unwillingness to modify designs to be able to use standardised components. The consequences of a faulty gasket in a rocket make it clear why high specification processes are used.
The extra cost imposed by very disciplined high-standard component production can typically only be justified for components which are tightly integrated in high value end products and where falling short on these components could materially reduce total quality of the integrated end-product. Most physical products have relatively low levels of inter-connection with other physical products, and systemic quality savings have therefore rarely proven recoverable through commercial pricing premiums. This has resulted in a natural downward quality trend to levels which the average customer will pay for, rather than quality based on the marginal contribution of the product concerned to the overall safety of an integrated system. This could well be changing as more and more physical components in systems are now being deployed remotely in locations where service personnel or spares might not be readily available or economical. Under those circumstances higher standards for physical products are likely to emerge if the current trend towards unified control centres capable of running integrated systems on a global basis continues.
In the case of intangible components such as software and data, and ultimately human components, very disciplined production processes are much more likely to be economically worthwhile. At the margin, delivery of quality products does not cost more, it costs less, as maintenance and user instruction costs are by far the most material costs in any intangible roll out. Intangible production processes are therefore expected to prove capable of being shifted effectively from the craft era to the assembly line era. The concept of the software factory is well known, and data maintenance and enhancement technologies are perceived to be increasingly critical with data warehousing and analysis and the use of data as part of integrated systems. Even in the human arena significant progress has been achieved in using assembly-line technologies as education progresses from the craft era's one-on-one tutors towards physical and virtual universities. The assembly-line technologies are thus well known for intangibles, but it is the social and distribution systems and the legal and financial technologies which are currently inhibiting the roll out of the process.
SYSTEM DESIGN TRENDS & NATIONAL SECURITY LIMITATIONS
With the advent of digital technology in general, quantum reductions in interaction and transaction costs have become theoretically within reach. Limits on communication shielding have limited these benefits primarily to cases where all relevant components are contained in tightly defined geographic locations in order to meet and maintain security and data integrity.
Encryption and other related security technologies have seen enormous development from the Second World War onwards. Significant competition and mutual challenges between East and West in the form of espionage and counter-espionage and mutual code-breaking and eavesdropping activities have resulted in ever further refinements of the principles involved in protecting all system components, whether human, data, algorithmic or physical assets.
The threat of geographically focused hostile acts necessitated distribution of systems to secure availability and reliability of systems. The assumption which became the standard operating stress test was that even when significant portions of the connection routes were at risk or actually compromised as a result of hostile actions, communication should still be achievable with minimum delay through alternate routes. The Internet in that context was a direct consequence of these types of needs and as such was the US Military's logical consequential implementation. Aspects of technologies associated with the development of the "Internet" have been released into the public domain for use by academia and latterly commerce. The inter-play between sectors has allowed a certain amount of exchange and mutual learning about network and security technologies, although many still consider the US Military and security services to have significant superiority relative to the global commercial community as a result of their natural head start, their ability to maintain a head start by suppressing really important innovations through secrecy orders, thus reserving the benefits of such innovations exclusively to the military, and of course from having been in charge of the design processes. Encryption technology is considered in the US to be the technological equivalent of military products and arms. Export and other US domestic security driven restrictions have generally constrained the use of the Internet for intensely reliable and secure communication between components. Certain low-cost systems innovations which significantly impact on end-to-end security and reliability have not been applied. Many suspect that this is not for technology-based reasons but for reasons of security, in the same way that commercial GPS applications are purposefully made inaccurate to maintain a military lead. Limits on release of security technologies for the Internet are wholly consistent with maintaining the NSA's overall lead in eavesdropping capability and the requisite maintenance of general cryptology leadership by the world's largest concentrated cryptologist population. The enormous diplomatic and political efforts dedicated to the OECD encryption negotiations during 1997 equally indicate that the US continues not to recognise that in the near future commercial needs for security will be in the same league as what historically has been required for defence purposes. For spies, information is the business. For the digital business world, information in its various codified forms is equally the business, and hostile acts in defiance of Intellectual Property protection set by international treaties should be expected from outcasts in signatory nations and from pirates operating from nations which are not signatories to these types of treaties.
The limits on release of security technology, the lack of cost-effective commercial availability of security technologies and the limited public domain technologies have meant that secure distributed commercial systems have thus near universally had to be built using dedicated circuits, largely encrypted with proprietary technologies and procedures, with minimal standardisation in terms of hardware and even software components and without generally accepted codified "best practice" procedures for systems security at global levels. Unlike most technology areas, where transfer and exchange of opinion within the scientific community and through that with commerce is quite high, the systems security domain has been one of the great global exceptions to the generally applicable knowledge exchange processes.
With the advent of the end of the cold war, secure technologies relating to the use of encryption have started to become commercially available from non-US locations, particularly those which have historically been "non-aligned". Non-aligned nations had to invest significant resources during the cold war to develop their own proprietary technologies in security. Reduced public sector demand after the cold war has spurred an interest from these nations in commercialising these technologies in order to realise a benefit from their many years of cumulative human investments. As a consequence of COCOM regulation, another direct outgrowth of the US view that certain technologies constitute the mental equivalent of arms, historically powerful microprocessors have seen extremely poor, and certainly delayed and higher cost, availability in many locations outside the US, which has meant that materially different approaches between the US and the rest of the world have evolved. Mathematicians from former East Bloc/satellite/affiliate state locations have had to be particularly elegant with their algorithms in view of the disadvantageous terms of trade between human and hardware components.
IMPACT OF TELECOMS REGULATION ON DISTRIBUTED SYSTEMS
With the advent of deregulation and increasing digitalisation of telecommunications systems and the surge in the transmission capacity of fiber, connection costs are now no longer primarily determined by the physical cost of installing and maintaining the backbone copper and fiber, but by the cost of the switching equipment. These developments have led to circuit switching becoming uncompetitive relative to packet switching in terms of communication cost. Outdated obligations on Recognised Operating Agencies to provide "perfection" in voice communications, relatively independent of the marginal value and well in excess of known human limits to benefit from service quality improvements, will mean that circuit-switching will nevertheless be a dominant technology for Public Operator voice telecommunications for a considerable period of time.
Regulations have been or are in the process of being lifted in most telecommunications areas but are primarily expected to still remain in the arena of circuit switching, as this historically has been associated with voice communications. The association with universal service obligations and public interest issues will naturally slow down technologically feasible, but regulatory-wise not yet permitted, changes. Telecommunications competition has always been about incumbents acting in symbiosis with regulators. Active use of legislation and litigation by both incumbents and new entrants has been dominant, particularly in the US. The primary impact of the involvement of regulators, litigation and threat of litigation and never ending investigations into cartels, trusts etc. has been an overall slowing down of new technologies to preserve the power of dominant historical monopolists and specifically to blunt the challenge posed by the changing economics of telecommunications in the global circuit switched arena.
This development has led to enormous divergence between the cost of circuit switched telecommunication and packet switched communication. In an efficient market this distortion would not be expected to last beyond the replacement of Central Office Exchange technologies. In the domestic telecommunications business, arbitrage frequently already levels prices efficiently between different technologies, with alternative competitive strengths depending on geographic conditions and local interconnect regulations.
Treaties and security considerations, as well as the control over the cable consortia owning the undersea cables, which are characterised by extremely anti-competitive terms and conditions restraining aggregate supply between the former monopolists which controlled all international telecommunications, mean that arbitrage at the margin in the international arena is constrained. Reliable sources anticipate that it will take another decade, if not more, before these constraints become totally ineffective.
Distribution of components to fully realise the differential terms of trade existing around the world, based on natural advantages relative to the cost of conveyancing of digital data, is however already fully feasible in a large number of commercial applications, even based on current tariffs and taking into account many of the inefficiencies imposed by regulations and the anti-competitive actions of dominant suppliers. Distribution is rapidly becoming feasible for yet more applications as international tariffs show yet further decline with the impact of deregulation associated with the successful completion of the WTO negotiations in February 1997.
The regulatory bias, the accounting rate economics for circuit switched capacity and the still relatively expensive circuit-switching equipment now favour development of new technologies. Wholesale shifts from circuit switching to packet switching can be expected in applications historically reserved for circuit-switching. Voice is one such classic application which has seen a lot of controversial discussion. Many at the leading edge have commenced packetizing voice to allow it to be transmitted over channels not subject to the voice tariffication associated with the ITU accounting regulations.
In the same way that voice at the margin is being moved away from traditional conveyancing technologies and channels, communications between components in an integrated system requiring high levels of trust and security are another natural group of applications interested in achieving the superior economics available by migrating away from dedicated circuit switching. Electronic Commerce extends the corporate value chain by including both suppliers and customers and frequently captures whole affinity groups, thus legitimising many structures within the concept of a private end-to-end group capable of being handled within Virtual Private Networks. Both the boundaries of corporations and those of nation-states are becoming less and less relevant in defining the integrated webs. At the leading edge, technologies have now been developed which allow high speed (ATM-level), low latency encryption of all traffic between components and aggregated components, i.e. applets, applications, data centres etc., within a system.
The advent of the Intel Pentium II chip family has made such high-speed encryption not only technologically feasible, at 25 Mb per Pentium II, but has also made the economics of full and total encryption of all communication within one integrated system economically feasible.
The minimal transmission delays on fiber through an efficient globally packet-switched network, when combined with non-blocking switches, have made the distinction between data and voice (technologically, largely the distinction between real-time and non-real-time) commercially irrelevant. Packet switching has now become near enough real-time. Full encryption imposes extra delays measurable only in nanoseconds, relating to coding and decoding at the end points of the encrypted network before the traffic either breaks out into the public networks or terminates on dedicated devices directly connected to the private network.
Specifically, new technologies now exist which ensure that, even where packets are dropped or other sequencing problems exist, the impairment is not detectable by the human ear and eye, and perceived quality without material deterioration relative to circuit-switched transmission is within reach. Reservation protocols, whose generalised application is imminent, particularly with the advent of Internet Protocol version 6, are likely to further remedy any remaining disadvantages for applications requiring extreme degrees of perfection in terms of on-time and in-sequence receipt of packets.
For many applications, software technologies and specialised algorithms running on ever more powerful and lower-cost processors are likely to become an efficient mechanism to compensate for any problems that may occur from time to time with the timely transmission of packets. The value of such algorithms and their ubiquitous use will be further enhanced when packet-switched networks are from time to time combined with judicious use of the circuit-switched network to make up for any temporary quality problems in the packet-switched network. Hybrids are thus likely to become the standard. Arbitrage at the margin between circuit and packet switching can reasonably be expected to increase, ultimately leading to a collapse of the cartel, but with major distortion in the meantime causing technologies to be developed which ensure that virtually any communication can be handled fully through packet-switched systems. Economic path-dependency means that once the intellectual effort has been expended on creating the libraries of code which allow full independence between circuit and packet switching, it is unlikely to be reversed at a later stage, even if superior circuit switches for certain applications should be developed.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide secure and trusted systems for operation in a distributed manner, where all communications between components have a real-time gradable degree of end-to-end trust associated with each and every communication event. Trust can thus become a purely economic criterion which can be set and which will decide the communication channels used from time to time for segments of the communication network.
The invention provides a distributed system containing digital and human components with explicitly defined aggregatable quality and trust, communicating over digital and analog connections with explicitly graded telecommunications channel security, based on intelligent transmission path selection for inter-component communications reflecting the globally relevant jurisdictional metrics.
Definitions of trust will vary from time to time based on the application and on the "community" in which the communications take place. For the purposes of the present specification, "trust" is intended to include such issues as data integrity, reliability, the degree of confidence in the reliability and security from external access possessed by the system or any component part of the system, the likelihood and cost associated with unauthorised decoding and other invasions of privacy, and a range of other culturally defined metrics and less well codified criteria. For the purposes of the invention no universally applicable standard needs to be set for trust. The invention can accommodate any metric for trust as long as it is reasonably explicit and reasonably stable. Each and every social and corporate community can thus select its own rules on what it considers proper and what it requires to maintain trust in communication amongst parties, in an era where face-to-face communication is increasingly going to be the exception rather than the rule.
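By way of illustration only, the following sketch (in Python, with hypothetical channel names, trust grades and costs) shows how trust, once reduced to an explicit and reasonably stable metric, can operate as a purely economic criterion for selecting the transmission path of a communication event:

    # A minimal sketch, assuming hypothetical channels and community-defined
    # trust grades; the selector picks the cheapest path meeting the level
    # of trust set for the message.
    from dataclasses import dataclass

    @dataclass
    class Channel:
        name: str           # e.g. "encrypted-packet", "dedicated-circuit"
        trust_grade: int    # community-defined metric, higher = more trusted
        cost_per_mb: float  # the economic criterion

    def select_path(channels: list, required_trust: int) -> Channel:
        # Return the cheapest channel meeting the required end-to-end trust.
        eligible = [c for c in channels if c.trust_grade >= required_trust]
        if not eligible:
            raise RuntimeError("no channel satisfies the requested trust level")
        return min(eligible, key=lambda c: c.cost_per_mb)

    channels = [
        Channel("public-packet", trust_grade=2, cost_per_mb=0.01),
        Channel("encrypted-packet", trust_grade=5, cost_per_mb=0.03),
        Channel("dedicated-circuit", trust_grade=8, cost_per_mb=0.40),
    ]
    print(select_path(channels, required_trust=5).name)  # encrypted-packet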
The invention addresses the essential problem that all global, nation-state and corporate infrastructures maintain a mental model which presumes neat and precise boundaries reflecting the legal and political systems. Inter-connection between systems, and dynamic and feedback effects, inherently require connectionist approaches for realistic modelling. Reality is much more closely mirrored by the complex foodweb than by the neat organisation and linear structures of accountants. Realism in modelling can be materially improved by looking at primary producers and maximising efficiency there, rather than attempting to establish order through higher trophic levels. The feedback loops unleashed by tampering with economic systems beyond the primary-producer level make outcomes inherently unpredictable, and the conversion and loss rates are likely to be quite high in a world where traditional intermediaries add only very little value, as consumers can connect directly to producers through reintermediators who organise themselves around affinity groupings rather than around some product definition likely to lose meaning in a convergent digital world. Through the use of an inter-nexus, interchange can be universal between a wide range of disciplines and systems while allowing the isolationist fiction to prevail in the interregnum between the nation-state model and more globalised approaches. The inter-nexus relies on the principle that, through extraction, parsing and translation, it is virtually always possible for any system capable of being subjected to mathematical codification to map back and forth between one inter-nexus and multiple variations on the inter-nexus.
The inter-nexus is inherently a global effort in the nature of a global commons which, once set up, will grow as long as it is not overgrazed. The exchange of information benefits both sides and, with appropriate metering mechanisms, can be leveraged to perpetuate its own growth.
Component approaches require a holistic framework to ensure that the whole is more than the sum of the parts, that extension is feasible within reason, and that change can be accommodated, thus making component development suitable. The inter-nexus broker allows systems to be designed to be (1) technology independent, (2) platform independent, (3) operating system independent, (4) client independent, (5) communications system independent, (6) bandwidth independent, (7) data format independent, (8) circuit/packet independent, (9) encryption protocol independent, (10) network trust level independent, (11) enterprise and network management system independent, (12) TTP independent, (13) nation-state independent, (14) natural language independent, (15) industry independent, (16) income tax independent, (17) sales tax and VAT independent, (18) IPR clearing and metering systems independent, (19) quality standard independent, (20) time zone independent, (21) speed of response independent, (22) output and input format independent, (23) categorisation of standard codes independent, (24) GUI independent, (25) database independent, (26) transaction processing and queuing system independent, (27) help systems independent, (28) numbering and directory system independent, (29) analytics independent, (30) brokerage glue, converter and translator/extractor independent, (31) legal system independent, (32) trading location independent, (33) logistics independent, (34) brand and affinity independent, (35) customer development path independent, (36) human skills level independent, (37) risk definition independent, and (38) accounting and value metric independent, while meeting all relevant standards, conventions and rules and complying with the OECD transfer pricing guidelines, which are likely to apply to a greater or lesser degree as political forces deal with the adjustment towards a more interconnected world. The invention is thus independent of the speed of collapse between a number of disciplines and of the breaking down of barriers between industries, corporations, nation-states, languages etc. as digital convergence collapses hitherto separated economic activities.
The invention is based on the innovation of splitting and disciplined compartmentalising of a system into components. (The mythical man-month problem is the generic problem of software writing. It has to date limited software projects in terms of scale and time frame; anything beyond that scale has proven undeliverable.) Information about the what and the how should never reside within the same component, and the whys and the holistic infrastructure should be known only to very few. Compartmentalising is achieved through distribution, temporally and geographically, and through erecting Chinese Walls between the human components which develop, test, use and modify. In the present invention, a system is rigorously divided into trusted components, each component being essentially a "black box" to the other components and being identified by a unique code or number. Thus, in order to communicate with a component, it is first necessary to verify its identity and to pre-control the channel of communication in order to hide the identity code. The communication session will then be encrypted, for example with a public/private key system, to define a unique session. The invention can thus deal with discrete components, with a brokerage/interpreter component to switch efficiently between components and clusters of components. A total and exclusive focus on APIs (Application Programming Interfaces) rather than on the how of the black boxes ensures that mental effort can be distributed between parties. This is probably one of the most critical aspects of the approach. The discrete approach first of all allows specialisation within each of the four generic components, and then, within each of the generic families, it allows individuals or small teams to progress in fine-tuning individual components.
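A minimal sketch of the identity-verification step described above follows, assuming a hypothetical registry of shared registration secrets; a production system would employ public/private key pairs held with a TTP rather than the HMAC challenge-response shown here:

    import hmac, hashlib, secrets

    # Hypothetical registry: component ID -> registration secret shared with
    # the verifier. A real deployment would use public/private keys via a TTP.
    REGISTRY = {"COMP-4711": secrets.token_bytes(32)}

    def respond(component_id: str, nonce: bytes) -> bytes:
        # Computed inside the component's black box from its own secret.
        return hmac.new(REGISTRY[component_id], nonce, hashlib.sha256).digest()

    def open_session(component_id: str) -> bytes:
        nonce = secrets.token_bytes(16)              # fresh challenge
        answer = respond(component_id, nonce)
        expected = hmac.new(REGISTRY[component_id], nonce,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(answer, expected):
            raise PermissionError("identity verification failed")
        # A fresh key defining this unique session; all subsequent traffic
        # between the two components is encrypted under it.
        return secrets.token_bytes(32)

    session_key = open_session("COMP-4711")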
Components in any system may be divided into four categories. All systems are a combination of individual variations from these four categories of components (a minimal data model is sketched after this list):
(a) the human operator: the operator will have a degree of skill and reliability appropriate to the task and will operate on a need-to-know basis, knowing no more than is necessary to carry out the task;
(b) the data to be processed: the data being validated and defined in accordance with a rigorous data dictionary, with no superfluous data;
(c) the software for processing the data: the software may be divided into a number of components, each having assigned predetermined functions. No more functionality than is appropriate for the intended application should be accessible;
(d) the hardware, providing amongst other things a platform for the process. Hardware for computers is now near-universally constructed as open systems comprising a large number of individual components.
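The four generic categories might be modelled as follows; the component names, class codes and trust levels are purely illustrative:

    from enum import Enum
    from dataclasses import dataclass

    class Category(Enum):
        HUMAN = "human operator"
        DATA = "data"
        SOFTWARE = "software"
        HARDWARE = "hardware"

    @dataclass
    class Component:
        category: Category
        class_id: str     # unique class code (hypothetical values below)
        trust_level: int  # the desired level of trust assigned to it

    system = [
        Component(Category.HUMAN, "OP-ENTRY-1.0", trust_level=3),
        Component(Category.DATA, "LEDGER-REC-2.1", trust_level=7),
        Component(Category.SOFTWARE, "FX-CALC-4.2", trust_level=7),
        Component(Category.HARDWARE, "HSM-NODE-1.3", trust_level=9),
    ]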
With codified knowledge and intellectual capital increasingly the key value driver, rather than financial capital, the legal anachronism of paying the residual returns exclusively to financial investors will naturally lead to entropy of large firms. The legitimacy of firms will benefit from the gradual limitation of the mandates of firms which will tend to occur as a natural result of specialisation.
Commingled risks within the firm will thus be reduced and, on a product-by-product analysis, can be kept at a minimum, thus ensuring that specific risks can be calculated based on the codified path associated with certain very specific transaction types and based on the incidence of their use of certain specific components which are explicitly warranted. Events can be triggered chronologically, or based on exogenous inputs, or based on endogenous variables within the system. The explicit definition of events and their explicit codification allows actuarial technologies to be applied both ex-ante and ex-post.
Within each generic class, infinite theoretical variations exist, but the benefits of customisation need to be traded off against the social benefits of standardisation of categorisation, such as the ability to communicate using common languages and the associated reduction of complexity.
In terms of hardware components, the benefits of standardisation have already resulted in universal components with ever lower prices and ever better performance.
Similar benefits have been achieved in data formats, where standards such as ASCII, Adobe's PostScript, MPEG and others have become widely recognised.
Reduction of complexity, in a world with rapidly collapsing information storage, retrieval and processing costs, has now become the holy grail. Reusable components with reasonably clearly defined functionality, capable of being understood by the common man, while maintaining adequate granularity in terms of facilitating relatively low-cost substitution, are considered a critical step towards achieving significant reduction in complexity while allowing integrated systems to be maintained and innovated in a cost-effective manner.
This challenge of reuse and categorisation of components has to date largely escaped the software industry. Many cultural variables inherent in a first-generation industry can explain this. At a more practical level, the absence of significant reuse can be attributed to the relatively poor availability of widely accepted and acknowledged superior tools, together with only recent broad standardisation of mainstream operating systems.
The core technology associated with reuse, object orientation, has now been known for more than twenty-five years. Objects are becoming increasingly accepted as the only known technology allowing the building of robust systems from software components.
The technological and regulatory challenges in telecommunications, combined with the inherent challenges resulting from first-generation computer systems, have to date limited the emergence of a market for software components capable of communicating securely with each other and with other systems' components. The challenges referred to earlier are further compounded by multi-disciplinary challenges from a legal liability perspective.
The intangible nature of software has resulted in significant problems for the legal profession and the overall legislative infrastructure in terms of defining whether software is goods or services, whether such goods have to be fit for purpose and under which conditions, and whether the supplier is merely responsible for remedying material defects or also bears responsibility under law for consequential damages. Many of these issues have been further compounded by the absence of clear legislation and international treaties relating to the ownership rights associated with Intellectual Property. Although much confusion remains on the exact status of software around the world, many of the issues are being resolved at breakneck speed, benefiting from the impetus generated by the recent agreements in the WIPO, which are expected to be particularly beneficial in resolving international disputes.
Reusable components are thus now considered to have become economically feasible, with the emergence of a market in largely unwarranted JAVA components an early indication of the existence of market needs. Substantial legal issues still remain relating to warranties of components and the specific responsibilities of the various parties involved in integrating components into systems for which they might not have been explicitly or implicitly warranted. Issues in this arena have to date been largely the domain of specialists, with a very limited number of true public-domain decisions and resolutions on the public record. Early emergence of these issues primarily related to the use of hardware and software components in the fields of life-critical applications, such as emergency services, military applications, and use of components in medical applications where a loss-of-life risk exists.
The essence of integration of components from many suppliers is the cumulative nature of legal liability and the ability to technologically and financially trust parties for the consequences of any negligence, for the remedying of material defects and for the assumption of responsibility for consequential damages.
Other than in the areas already mentioned above, the legal challenges have to date been largely reserved to specialised areas of the financial world, such as those associated with the running of stock exchanges and financial money transfer systems, where inherently a need has existed to integrate components.
Electronic Commerce shows very significant commercial promise but inherently requires integration of components at various levels of granularity, both for realisation of its commercial objective and, as dictated by security, requires distribution of these components across the world. The extended enterprise associated with the natural progression of Electronic Commerce will lead to such enterprises becoming members of ever more clusters of suppliers and affinity groups of clients. The cluster trend which has proven so inevitable in real estate development is expected to show an equivalent development in cyber real estate, reflecting the inherent benefit of loyalty technologies and the high cost of initial customer acquisition relative to the cost of customer maintenance. The logical expectation is therefore that more and more overlapping webs will proliferate as a natural consequence of Electronic Commerce.
The legal issues with component integration can therefore be expected to proliferate as a result of the combined impact of more geographically distributed systems requiring secure inter-component communication. These types of applications are likely to be amongst the most highly valued and mission-critical types of systems, with particularly high uptake expected within the financial markets. Security has tended to carry certain defence-type connotations. It might be appropriate to clarify that the protection of IPR against competitive intelligence efforts increasingly introduces into commercial organisations the same types of issues which have historically prevailed in military and defence-type organisations. This is particularly important in that, in most countries, the legal system is not sufficiently well advanced to recognise that hacking and attacking a commercial system is theft and destruction of property. As such, prevention through the threat of criminal sanctions is just not realistic for another couple of decades. The perpetrators are typically sufficiently well isolated from the commercial masters motivating the attack that it becomes very unlikely that damages will ever be collected, even if the crime is ultimately proven.
The importance of Electronic Commerce from a legal and systems perspective is further compounded, as systemic events due to human errors will frequently have been replicated in many places around the world. The infinitesimal cost of replication of digital algorithms and code favours multiple sales, owing to the typically very high marginal revenue contribution from the sale of code once the work has been professionally completed. Events associated with systems problems and defects in code will near-universally turn out to be systemic events. When systemic events occur in systems, a global centre for the reporting of events is required. In the case of humans, the global Centre for Disease Control is alerted, and reasonable containment technologies have as a result evolved with time. The frequency of sexual and other interaction between individuals and clusters of individuals has typically limited the speed with which viruses and contagious diseases are spread. In the case of digital events with unexpected consequences, these events equally need to be identified and codified globally in the shortest possible time. This then ensures that the global best can apply their minds to their resolution, allowing global coping with certain systemic events. Once codified resolutions are finalised, they can be distributed without delay, thus avoiding events experienced in one location in one particular system causing damage to others. With financial responsibility in the case of interconnected systems ever more explicit, all responsible parties will desire to apply diligence in not imposing costs on other organisations by using the same materially defective components and ultimately having to reimburse third parties for consequential damages as well as incurring the cost of remedying internal problems.
In the case of Electronic Commerce the systemic risks are particularly severe, as the nature of business design means that relatively few components accommodate all the requisite variety of commerce. Dynamic coupling of functionality between loosely coupled components allows not only existing systems demand to be met but also enormous variations thereon. Systemic risks in Electronic Commerce are thus likely to be inherently concentrated amongst relatively few system components. This is further compounded because it is inherent in the logic of Electronic Commerce that systems are part of integrated and overlapping webs. Unless systemic events are arrested at a very early stage, the risk exists that the systematic transmission of the initially localised systemic events results in the impact becoming widespread. This is generally referred to as the systematic implication of digital risk. Under those circumstances, in a very short period of time the sheer complexity of forensically unravelling the chains of evidence and figuring out "who did what to whom" would make what started out as systemic risks rapidly become systematic through the interconnections, and would then translate into a legal/financial problem well beyond the capabilities of today's legal and financial technologies at the nation-state level.
It is therefore particularly critical for future distributed systems that systemic events with possible systematic implications become capable of being addressed very soon after their occurrence. Resources need to be capable of being effectively channelled towards their resolution. Only explicit mutualisation can avoid the free-rider problem. Systematic problems need to be contained before they spread across multiple organisations and multiple nation-states. This requires warranties to be made explicit, and interconnections between overlapping webs need to be capable of being closed in any direction to avoid infecting others or being infected by parties without adequate or meaningfully enforceable warranties. Webs of trust are the only known solution to this challenge.
The essential element of this innovation is therefore to provide a system capable of distributing components around the world and decoupling end-to-end trust from the individual components, from the cluster or webs of trust, and from the elements of the transmission networks used, while being capable of accommodating any and all public policy standards that might exist from time to time in any of the end-point nation-states or any of the transit-nation-states (defined as the nation-states in whose jurisdiction components of the intermediate systems through which the connection passes may be located).
In accordance therefore with the first aspect of the invention, there is provided a distributed computer system consisting of isolatable webs of trust, the system comprising a multiplicity of components, including, directly or indirectly, at least one of each of the categories of human, data, software and hardware, each component having a desired level of trust assigned thereto:
Each and every component is to know its state and to possess a unique identity and identifying code which precisely defines a class with its own unique number. With the exception of human components, which are inherently analog, this means, for digital and substantially for hardware components, that the components within a class are perfect substitutes for each other and differ only in their unique number, which is incontrovertibly and permanently associated with them, unchangeable inside the black box, and absolutely required for the black box's own autonomous and internal operations. Each component class has a unique number which is a function of the generic component, the specific type and version, as well as the specific link to a metering/copyright agent authority. The component itself would specifically have a unique number relative to its certification agent, which is capable of being changed only in coordination with the certification agent. In a protection sense this is thus conceptually not all that different from a physical dongle.
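A minimal sketch of the class/instance numbering described above follows; the field layout, the hash construction and the coupling to the certification agent are illustrative assumptions rather than a prescribed format:

    import hashlib

    def class_number(generic: str, type_: str, version: str,
                     metering_agent: str) -> str:
        # Class code as a function of the generic component, the specific
        # type and version, and the metering/copyright agent authority link.
        material = f"{generic}|{type_}|{version}|{metering_agent}".encode()
        return hashlib.sha256(material).hexdigest()[:16]

    class Component:
        def __init__(self, class_no: str, instance_no: str):
            self.class_no = class_no
            # In a real deployment the instance number would be sealed inside
            # the black box and changeable only in coordination with the
            # certification agent; Python name mangling merely stands in for
            # that property here.
            self.__instance_no = instance_no

        def identity(self):
            return self.class_no, self.__instance_no

    cls = class_number("software", "fx-calculator", "4.2", "METER-AGENT-09")
    comp = Component(cls, "SN-000042")
    print(comp.identity())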
Each and every component is to know its functionality: the inputs it can react to, the operations it can perform on those inputs, and the communication handshakes required for the outputs for which it carries responsibility to be passed on. Each component is to hide its internal algorithms within a black box which is properly sealed in such a way that alterations can be readily detected. The component's enumeration of capabilities needs to be explicit. Ideally this functionality should be subject to independent verification of the source code prior to compilation, thereby confirming that no redundant code is present which could cause extra functionality to be accessed which might otherwise not be known to integrators, who would have access exclusively to the public descriptions of the components and the compiled version of the black box. Multiple independent testing teams, reviewed by multiple third-party auditors, would allow cumulative endorsement at relatively insignificant incremental cost for truly critical components where extreme levels of trust are required and where the limits of insurable capacity are being reached.
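An illustrative sketch of an explicit capability enumeration together with a hash "seal" over the compiled black box, so that alterations can be readily detected, follows; the manifest fields and values are assumptions for illustration:

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class Manifest:
        inputs: list             # the inputs the component can react to
        operations: list         # operations it may perform on those inputs
        output_handshakes: list  # handshakes required to pass its outputs on

    def seal(compiled_blob: bytes) -> str:
        # Published alongside the manifest; any alteration of the black box
        # changes the digest and is thus readily detected.
        return hashlib.sha256(compiled_blob).hexdigest()

    blob = b"...compiled black box..."   # stand-in for the shipped binary
    manifest = Manifest(
        inputs=["quote_request"],
        operations=["price_fx_forward"],
        output_handshakes=["signed_quote/ack"],
    )
    published_seal = seal(blob)
    assert seal(blob) == published_seal   # the integrator's tamper check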
By breaking down the high-level specification logically, the coordination problem disappears. It is truly no longer necessary for a specialist to understand the black box for which he is responsible beyond the specific data structure in terms of inputs from other components and the outputs of the components for which he is responsible. It thus becomes feasible to describe relatively low-granularity components in great detail. The specialist's work can therefore be checked, and a financially substantial party can then confirm accuracy through more or less automated testing procedures with reusable virtual rigs. Another benefit of the discreteness of the individual components is that, in terms of the high-level specification itself, only a handful of people need to be able to understand, or be given insight into, the high-level picture. The rest of the team merely needs to understand webs/clusters of components at ever lower levels of granularity, down to the individual trusted components. The most critical systems security issue, compartmentalisation, is thereby addressed. The presumption is that no one individual can ever be trusted completely; as more people need to be taken into confidence, a project's security is exponentially compromised. By having webs/clusters of components which overlap and interconnect at different levels, each having their own controlling TTP structures, activation and the making of changes can be distributed and spread amongst several pairs of eyes. Trust then becomes a group-type activity, and the ability emerges to preserve anonymity within the "team" as to who might be uncomfortable about certain aspects. This makes a web of trust stronger than its weakest part, rather than the classic chain which is only as strong as its weakest link.
The discreteness of components can also be expressed by saying that there is loose coupling between the components. This ensures that changes to the components can take place independently of the rest of the system. The impact of changes can be relatively isolated and individually anticipated. The function of upgrading a component can thus become a task which can be managed at cluster level: all of the impact can be limited to the cluster, as the impact on the rest of the system is either zero or so minimal that coordination tasks with other clusters stay manageable. The small granular components also mean that the testing of trusted components as a function of development time can be very short. The description will be reasonably accurate, and the goals motivating the specification will not have been lost with time, as the impetus for a change and its implementation are closely spaced together. This thus leads naturally to truly user-determined functionality. Furthermore, ideally the compiled code inside the black box should be enhanced by explicitly differentiating, in terms of authorisation access levels, between viewing and storing/printing events, while scrambling and watermarking/glyph incorporation should be applied near-routinely to increase the chances of tracing any tampering and forensically confirming such tampering. Each and every component's identity is to be subject to verification by an independent Trusted Third Party designated with the responsibility to verify, for and on behalf of any other component, that the identity of the component is indeed as claimed.
Substantially this is the same function as performed by banks verifying signatures for and on behalf of third parties, which then rely on the bank's diligence and professionalism in the maintenance of the signature cards and the verification processes against the original to satisfy themselves that they can proceed, based on the trust they have in the bank, prior to proceeding with execution of the contract. The inter-nexus broker controls the number of components in each cluster and checks the cumulative logic in terms of the requisite number of components with specific unique identities being present. Acceptable standards for TTP technologies are in the public domain. Equally, acceptable technologies and processes for encryption of communication between the TTP and the components are in the public domain. Specialised container technologies allowing instantaneous substitution between encryption approaches are in the twilight zone: generally available, but possibly subject to certain patent protections. Applets and applications should have only the number of components identified by the application's certification authority, and no more or fewer components of any specified particular class should be present. Alterations or substitutions to the applications should at all times require first a handshake with a typically remote TTP, for the application's certification authority to establish trust; only then can the application's components be altered and the changed components be introduced into the application. At start-up of the application, a new integration test would be performed, with full public/private key exchange between all the components and the application's certification authority, to establish the integrity of all components present. After that, exchanges between components can take place with session encryption at the requisite level.
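The start-up integration test might be sketched as follows, with a hypothetical certified roster of component identities; the full public/private key exchange which would follow the roster check is indicated only as a comment:

    # Hypothetical certified roster for one application instantiation.
    CERTIFIED_ROSTER = {
        "FX-CALC-4.2/SN-000042",
        "LEDGER-2.1/SN-000007",
        "METER-1.0/SN-000099",
    }

    def integration_test(present: set) -> None:
        # No more and no fewer components than certified may be present.
        if present != CERTIFIED_ROSTER:
            missing = CERTIFIED_ROSTER - present
            extra = present - CERTIFIED_ROSTER
            raise RuntimeError(f"roster mismatch: missing={missing} "
                               f"extra={extra}")
        # Here the full public/private key exchange between all components
        # and the certification authority would follow, after which exchanges
        # proceed with session encryption at the requisite level.

    integration_test({"FX-CALC-4.2/SN-000042", "LEDGER-2.1/SN-000007",
                      "METER-1.0/SN-000099"})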
Components are to be clustered in such a way that a cluster is defined by a dedicated TTP ring. This cluster is defined as the web of trust. The trust in any message emanating from this web is by definition never higher than the trust which can be placed in the TTP itself and in those controlling access to that TTP. The TTP performs the function of certification agent at the time of start-up of any applet, application, system or data centre, or of entry control into physical facilities such as cars or buildings. Once the application has been started up, and as long as the codified transactions pre-authorised for the system are the only ones performed, there might not be a need to maintain relatively processing-intensive asymmetric encryption technologies, and reasonable security compromises can be achieved with symmetric session-type controls.
The present invention contemplates the use of brokerage components. Each web of trust is ideally to have one gateway component, the inter-nexus, through which the web communicates with the outside world, itself defined as either another web of trust or an untrusted party. (Many names can be assigned to this gateway component, ranging from interpreters, valves, bridges, trading posts and gateways to glue providing linkages between components, between clusters of components, between components and clusters of components, that is, applets and applications, and between components and the outside world or legacy systems.) This component performs the designated screening functions and typically contains a VALVE operation allowing a higher-level controller of the TTP to instruct the gateway to fully or partially close down in certain directions.
The brokerage component allows the introduction of low-level and fully tailored security mechanisms. Variations in security between different portions of the systems, with possibly different and varying encryption technologies and rules dictated by differing requirements, can thus be achieved. These brokers are also the natural "gates" for metering locations. To increase the universality of the components, and to keep up-front coordination costs down, brokerage components can exist in addition to the certification agents. The development of the components and the overall system by a number of people acting independently means that data formats may differ from time to time and that the high-level data dictionary might from time to time get out of synch. (This can obviously be handled by making the data dictionary part of the high-level architecture and making this document an interactive and continuously updated directory. This reference resource can be permanently maintained up to date for any and all parties participating in component creation. An explicit responsibility can be imposed by the component publisher that the author checks this repository before formats are added or decided on.) Format inconsistencies can however be allowed, as this also facilitates integration with legacy components and thus extends the functionality of the component systems to everything that has gone before. (The explicit proviso is that any interconnection with components which are not trusted obviously introduces security risks and breakdown risks. By having such components enter the systems through brokerage components, it becomes effectively possible to create very precisely defined entry blocks where security risks can be isolated and become capable of being closed down, replaced and upgraded in an incremental way.)
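A minimal sketch of the gateway/brokerage component and its VALVE operation follows, assuming hypothetical "open", "partial" and "closed" states per direction, settable remotely through the certification agent:

    class Valve:
        # States per direction, settable remotely by a higher-level
        # controller through the certification agent / TTP.
        def __init__(self):
            self.state = {"inbound": "open", "outbound": "open"}

        def set_state(self, direction: str, state: str) -> None:
            self.state[direction] = state   # "open" | "partial" | "closed"

    class GatewayBroker:
        def __init__(self, valve: Valve):
            self.valve = valve

        def pass_message(self, direction: str, message: dict) -> dict:
            state = self.valve.state[direction]
            if state == "closed":
                raise ConnectionRefusedError(f"{direction} valve fully closed")
            if state == "partial" and message.get("priority") != "high":
                raise ConnectionRefusedError("partial closure: blocked")
            return self.screen(message)

        def screen(self, message: dict) -> dict:
            # The designated screening / translation functions would run here.
            return message

    valve = Valve()
    broker = GatewayBroker(valve)
    valve.set_state("outbound", "partial")   # remote controller instruction
    broker.pass_message("outbound", {"priority": "high", "body": "settle"})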
Within webs of trust it is critical that security violations can be isolated by jettisoning certain parts of the system and closing them down until it becomes possible to rebuild with webs in which trust has been recertified.
The brokerage components are the logical blocks to perform the metering functions, measuring how many accesses have been made by other parts of the systems to the functionality in a particular web controlled by a specific certification agent. (A brokerage component and a certification agent, if not integrated, are by definition always paired to ensure that external influences can be controlled with a very precise valve at the entry point into the structure.) Components will inevitably come to be distributed based on super-distribution models, where components will typically be free for the first couple of thousand instantiations to allow a developer to test the functionality and to build and test the basic integrated system. Only when the components commence being used by the ultimate customers, in daily and intense use, will the component's clock typically progress beyond the free metering period. At that point in time, value will have to be digitally conveyed in encrypted form to load the meter. (The certification agent obviously checks for the presence of all the functional components in the cluster and the brokerage/metering component. If the metering component does not have adequate value loaded for a normal production run (say 24 hours, a week, a month or whatever is specified), then the application will not start up.)
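The metering behaviour described above might be sketched as follows; the free-instantiation threshold and run costs are illustrative assumptions:

    # A minimal sketch of super-distribution metering: the first few thousand
    # instantiations are free; thereafter the meter must hold enough value
    # for the specified production run or the application refuses to start.
    FREE_INSTANTIATIONS = 2000

    class Meter:
        def __init__(self):
            self.instantiations = 0
            self.value = 0.0     # digitally conveyed, encrypted in transit

        def load(self, amount: float) -> None:
            self.value += amount

        def authorise_run(self, run_cost: float) -> bool:
            self.instantiations += 1
            if self.instantiations <= FREE_INSTANTIATIONS:
                return True      # developer test/integration period
            if self.value < run_cost:
                return False     # certification agent blocks start-up
            self.value -= run_cost
            return True

    meter = Meter()
    meter.load(10.0)
    assert meter.authorise_run(run_cost=1.0)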
The brokerage components have one more benefit: they allow the setting of an extra trapdoor if the IPR structure is compromised. The code in individual components might be disassembled, but if certain data formats inside the components require encrypted data to be sent for them to work, then disassembling one component has no benefit. As this is a low-level security issue, it only needs to be known between the components in the cluster and the brokerage component; to the rest of the system it appears that there is nothing special about the data formats for that particular cluster. (Externally it looks normal, as the coding/decoding is fully internal to the combined broker/cluster universe.)
If someone from a different cluster were nevertheless to attempt to access a component in another cluster directly and for whatever reason succeed (which should be impossible because of the TTP, the source code component encryption, and the pipes between the components within the cluster all being encrypted), then sooner or later this problem would be discovered: an inconsistent data format would trigger an audit alarm which could not possibly be anticipated by the hacker. Related to this type of security, the brokerage component can conveniently allow significant constraining of the theoretical functionality of the components in the web of trust, with the broker setting limits on certain transactions which can be passed directly to the web, or allowing certain transactions through with a simultaneous alert going out.
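A minimal sketch of the cluster-internal format trap follows: components inside the cluster exchange data carrying a private marker known only to the cluster and its broker, and anything else fires an audit alarm. The marker and alarm mechanism are illustrative assumptions:

    # Marker known only to the cluster and its broker; an attacker accessing
    # a component directly cannot anticipate it.
    CLUSTER_MAGIC = b"\xC1\x05"

    def audit_alarm(reason: str) -> None:
        # In a real deployment this would alert the certification agent.
        print(f"AUDIT ALARM: {reason}")

    def receive(payload: bytes) -> bytes:
        if not payload.startswith(CLUSTER_MAGIC):
            audit_alarm("inconsistent data format: possible direct access")
            raise PermissionError("payload rejected")
        return payload[len(CLUSTER_MAGIC):]

    receive(CLUSTER_MAGIC + b"order#17")   # accepted; anything else alarms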
This type of security functionality can be programmed by parties wholly unrelated to the ones designing the component family and thus allows organisations to limit the impact of individual events. (Although most of the components in the world are very generic, the ability to innovate in terms of security and in how components are interconnected through the use of the broker is infinite.)
The brokerage components have benefits not only for the protection of a certain web of components, but also as protection against certain types of program-trading implications in compromised systems. Breaking through one web could create problems in another web as a result of interconnections between the clusters, which can logically only be controlled by the valves and the algorithms set in the brokers. Brokers could easily agree that certain data flowing between brokerage components can only be processed if passed in a particular encrypted format. If someone were to break the generic encryption applying to the pipes running between the individual components and between the components and the brokers, this would still not be adequate to cause a major impact if and when the brokers exchange certain data formats only in yet further encrypted form. By performing encryption for a particular destination based on that destination, the systemic benefits of an attack are limited, as the value of having a small element revealed is likely to be only very marginal for the opponent, thus reducing the chance that the code-breaking investment will be made. (All security is always relative. By being hard, one ensures the criminal will go for soft parts, such as outright blackmail of individuals or outright terrorism, or alternatively go for a softer target in another organisation.)
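Destination-bound encryption between brokers might be sketched as follows, deriving a distinct key per destination from a shared inter-broker secret, so that revealing one link's key discloses nothing about traffic bound elsewhere; the key-derivation scheme is an illustrative assumption:

    import hashlib, hmac

    # Hypothetical shared secret, provisioned between brokers out of band.
    INTER_BROKER_SECRET = b"provisioned-out-of-band"

    def destination_key(destination_id: str) -> bytes:
        # Each destination yields a distinct key; compromising one link's
        # key reveals nothing about traffic bound for other destinations.
        return hmac.new(INTER_BROKER_SECRET, destination_id.encode(),
                        hashlib.sha256).digest()

    key_for_tokyo = destination_key("BROKER-TOKYO")
    key_for_zurich = destination_key("BROKER-ZURICH")
    assert key_for_tokyo != key_for_zurich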
The present invention envisages a certification agent or authority (CA) running inside each cluster, and from there onwards further CAs controlling every other component cluster (hardware, software, data, humans as actuators and sensors) in the network, to control peer or hierarchical communication, with TTP handshake processes with individual components before components can communicate with each other. The certification agent running inside a cluster effectively controls access to a cluster of components, with the application only capable of being run or changed if and when a web of trust has been established, i.e. all the components with the requisite specific IDs for that component instantiation are present, as is the metering object, and it has been checked that it still has reasonable run time on it.
The certification agent will obviously have different security levels, with different groupings/clusters required to activate the ordinary running of the application and a significantly higher level of trust required to make change requests to the live components. Sign-offs will frequently be required from other parties in other webs to ensure that all other relevant webs have been fully prepared to accommodate those data elements which could be impacted by the changes in one of the webs. All coding and decoding contains the seeds of code-breaking: not so much because of the power of any individual machine to break the code by application of raw processing power to test combinations, but most frequently because the opponent is inadvertently given an insight into the architecture and can thus possibly anticipate a certain type of human or system error which will either occur naturally or can in some clever way be invoked. Errors provide the opponent with information, which can cause a problem previously not solvable within a human lifetime to suddenly become manageable in days. Even raw-power attacks are likely to have very different cost dimensions in the near future.
The classic example could be the use of the processing power of the net to give anyone relatively low-cost access to the equivalent of a Cray within the next year, by accessing unused processing cycles. Limiting insight into system designs and creating explicit audit traps is the only way to create time to respond when the inevitable security infractions happen. The broker component is conveniently capable of slowing down infestation and the general spreading of viruses by a controller turning the valve remotely through the certification agent. It should be presumed that at some point in time it will be uncovered that errors have been made, or that mathematical encryption assumptions are no longer valid, or that a breakthrough in terms of processing power has been made somewhere using parallel computing or incredibly efficient distribution of tasks across a number of processors. No component and no cluster can ever be presumed to be totally secure. This dictates the requisite levels of compartmentalisation and granularity of the applets, in order to be able to jettison compromised clusters, and the need to build in redundancies throughout.
The requisite need for this approach is best explained through a negative: it is the reverse of the Smartie (hard wall/soft centre) strategy, or the firewall strategy, which is basically the systems-world equivalent of the Smartie strategy. Compromise should be expected anywhere and monitored at every cluster and at every brokerage object. The randomness of a problem and the theoretical chances can be extreme. Most systems to date have not allowed for the chance of being affected by a cosmic ray triggering a memory location, or, for that matter, more wholesale destructive attacks from opponents using electronic blasts. With switching by satellites in the sky, the incidence of these types of events is much higher than it naturally is at lower ionosphere levels, and risks which initially had impossibly low probabilities suddenly materialise as very real when the number of transactions grows to extreme numbers: chances of a certain event happening measured at one in a trillion are no longer capable of being disregarded. This is one of the great challenges of building the type of scalable systems to which this invention specifically caters, where the possibility of billions of transactions a day has to be allowed for. These types of issues thus need to be anticipated henceforth.
Distribution of the components around the globe needs to be made in such a way that the systems continue to be available for most of the world even if parts of the system are wiped out. The opponents are predictable; the derivative impact of a one-nation-state attack can be readily simulated. It then becomes purely a matter of design and cost to decide how much of the system can be attacked, and in how many locations, before the stress on the overall system reaches a level where performance is truly impacted. The actuarial and stress-testing technologies associated with this type of approach are obviously tough problems and very leading-edge, but the principles involved are relatively easy, and it is widely understood that security of this type can only be achieved with totally distributed systems, with multiple components running in multiple places, fully backed up, and capable of running with explicit primary and proxy data with on-the-fly handover if any element is compromised.
The webs of trust, combined with the loose coupling of components, mean that branded experiences can be created which hide the plumbing from the user while meeting the explicit security and other quality standards through a series of intelligent agents making the requisite decisions. From a marketplace perspective, the ability to package inherently complex systems design decisions and choices is generally considered a material marketing advantage, as it is the result which is purchased, not the process of getting there. The inherent structure allows complexity to be decoupled from the expertise of the user and accommodates preferences or inherent needs as they may exist from time to time.
The paths through the webs of trust are codifiable, and the level of trust can thus be set explicitly for certain critical tasks and can reflect the specific needs of certain communities and the hierarchies existing therein. The inherent combination of connectionist and isolationist principles in the design of the webs of trust means that administration can be readily simplified through relatively simple contextual logic rules and that trust becomes a transitory concept.
The TTP structure allows service authorisation designations to be set regarding the rights that specific components can acquire with respect to other components within the web, and in terms of what can be passed externally to another web of trust or to a designated party. The encryption levels can be re-agreed, from the generally prevailing standards within the system, between individual components within a certain software applet once the identity of both components has been checked and the communication has therefore been cleared between the components. It is of course critical that this inter-component communication is encrypted in order that an outside observer cannot learn about either of the components from what is going back and forth between them. (In traditional espionage this is the equivalent of watching a public building and deriving conclusions from who is going in and out of the building.) The components themselves need to be protected from being challenged by a disassembler. Thus compiled code is in turn typically encrypted and/or appropriately watermarked in order that, should someone ultimately succeed with a disassembler, one can trace the origin.
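Service authorisation designations might be represented as a simple rights table consulted before any inter-component call or external pass-through; the component identifiers and rights shown are illustrative assumptions:

    # Rights a component holds with respect to other components within the
    # web, and what the gateway may pass externally to another web of trust.
    RIGHTS = {
        ("FX-CALC-4.2", "LEDGER-2.1"): {"read"},
        ("LEDGER-2.1", "FX-CALC-4.2"): {"read", "write"},
        ("GATEWAY", "EXTERNAL-WEB-B"): {"pass-summary"},  # external grant
    }

    def authorise(source: str, target: str, right: str) -> bool:
        return right in RIGHTS.get((source, target), set())

    assert authorise("FX-CALC-4.2", "LEDGER-2.1", "read")
    assert not authorise("FX-CALC-4.2", "LEDGER-2.1", "write")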
The TTP structure, and the audit processes which can be readily associated with the passing of data through the gateways, mean that forensic accounting is facilitated whenever security is "breached". Frequently, breaches will be permitted on purpose in order to observe the hostile parties and to allow them to be trapped as they enter further and further into and through webs of trust. The architecture allows fake webs of trust to operate apparently as real live systems while in reality only providing "dummies" to capture hostile parties, to waste their efforts and to confuse them about their relative success in penetrating and compromising systems.
The creation of a class category of components allows any party desiring to make their class commercially available to be explicit about the full "terms and conditions" package, in every relevant jurisdiction, surrounding the use of their components and the limitations, if any, on warranties, and to be explicit about the degree of testing by independently insurable and warranted third-party testing organisations. Equally, this process ensures that at any point in time prospective users of such components can readily verify the financial substance of the component supplier and/or the backing enhancements from this supplier or his publisher.
Equally, the invention allows for a central repository to contain, amongst other things, explicitly codified information on the location of help services, where FAQs can be obtained, under what conditions an upgrade of the component is required, processes for handling upgrades of components as new operating systems become available, processes for handling systemic problems such as bugs, and the parties responsible for addressing systemic events and the time period within which they in turn have committed to resolve these problems. In addition, the repository could contain information on how potential users of the component can be sure that they will receive information to rectify systemic problems.
The system allows enhanced audit, as the time and place of entry into and exit from the integrated web of trust can be determined and at all times the identity of the parties is known, with an explicit level of trust about the truth of that identity.
The system allows for the natural bubbling-up of problems, as the infrastructure allows systemic events experienced by others to be conveniently shared with parties using the same class of components. In view of the relatively small number of classes relative to the number of components in all applications globally, it can be expected that individual operators' experiences of systemic events will be a fraction of those experienced by users of custom-coded systems written from scratch, or by users of less distributed systems, who will be less likely to realise errors, as errors are less likely to be trapped where codification is inherently less rigorous, as in geographically non-dispersed systems. The intensity of effective testing of the robustness of code, together with the generalised nature of events, will readily allow rating services to emerge, both for publishers/authors and for maintenance and other organisations, which over time will allow all dimensions of quality to be reduced to purely financial aspects, which are inherently easier to manipulate mentally. Problems can thus be fixed in a systematic way through the normal effect of markets, which will drive those who deliver less than the standards out of the marketplace over time and will reward those who excel through premiums reflecting the ultimate benefits of quality systems to their users.
The proposed system remedies the artificial separation between telecommunications and computing and between human and computer systems. Any and all systems are hybrid, and it is practically inconceivable that a web of trust at the highest levels would not virtually always contain all of the elements and a number of hybrids as intermediate components. Although hybrids are inherently superior to either purely automated systems or purely human systems, the system also allows explicitly for overrides beyond the margins where either humans or systems become superior. These overrides can specifically be set in a cumulative way in order to address mental-overload issues at times of crisis, thus ensuring that data and inputs are reduced and heuristic rules allow scarce human or system resources to be focused on what is most critical or most valuable, dropping explicitly, where necessary, both traffic and activities which are less valuable or which need to be jettisoned to protect the larger integrated infrastructure or at least elements thereof.
The proposed system addresses the specific problem of the proven irresponsibility and negligence of certain systems houses which have knowingly been delivering systems with material defects since the early eighties. It equally addresses the common-sense irresponsibility and negligence of systems houses which have delivered non-CDC-compliant (Century Date Change, i.e. year 2000) code during the nineties. The time frames of many system suppliers have proven to be either very short in terms of harvesting the returns from their activity, or they have callously recognised the inability of the legal and financial infrastructure to address issues of warranty and fitness for purpose, and the inability of their relatively unsophisticated clients to uncover the material defects from a systems perspective, and have thus presumed an inability to collect from the original supplier the cost of remedying code for CDC defects. The world at large is expected to have to waste $600 billion in payments for critical human resources for one single occurrence of systemic digital risk, the CDC event. The CDC event, together with the consequential damages of non-remedied CDC problems for which users have paid twice (once when they bought the components and again when they paid to remedy the CDC material defect which should have been under warranty), is generally expected to cause a phase change in the systems-component marketplace.
Mental phase changes are frequently experienced in networks and connectionist systems when awareness of certain issues reaches critical interconnected levels. Paradigm shifts take a long time to build, but can then become widely accepted amongst the cognoscenti and the larger public in a very short amount of time. (The tobacco litigation is a classic example: cigarette companies read the environment and realised it was time to settle; the market had already fully discounted the effect through the 50-percent-plus litigation discount.) The degree of interconnection of systems integration with multiple political and other trends is likely to make this phase change amongst the most violent social changes experienced by the world to date, with intense discounting of companies which are late in addressing the issues. Few buyers of systems are expected to be able to escape charges of negligence if they do not follow minimum procedures in terms of clarifying warranties, and the financial substance behind those warranties, regarding fitness for purpose and explicit responsibility for remedying material defects. Under those circumstances the proposed system is expected to provide a structured and institutionalised approach to the handling of many of these complex warranty issues, allowing clear data to be accumulated on the aggregate amount of financial responsibility incurred by certain suppliers and allowing objective determination of their ability to meet obligations should systemic events not be addressed on a timely basis and thus cause significant consequential damages. The discrimination impact, specifically against the smallest and the largest corporations, requires explicit technologies, such as those associated with the current invention, to avoid triggering wholesale allegations of antitrust and tie-in pricing. Cause and effect are concepts which become meaningless in a heavily integrated world operating close enough to real time, moving such mental models beyond practical application.
The proposed systems will promote the development of generic component markets (the types of products which have fairly universal applications, where no patents or other sovereign-granted monopoly rights any longer remain, and where the scientific basis for the algorithms is capable of being explicitly quantified). For generic components, purity of the components will be critical, and the cumulative amount of testing, reducing the actuarial risks associated with warranty obligations, will be one of the key value drivers allowing the economics of generics to become attractive over time as cumulative experience in the markets conveys meaningful information benefits.
The proposed systems' explicit codification of terms and conditions relating to individual components will subsequently allow actuarial calculations to take place to determine aggregated risks, which will thus allow system suppliers to compute risks on a professional basis with insurers. Agency functions and certification choices on classifying integrated systems in a particular risk category can thus be performed, allowing boards to delegate those responsibilities in the systems area for which they professionally might not have had adequate training.
The proposed system explicitly allows specialisation within the development of systems and their components and eliminates any benefit from vertical integration. This introduces the possibility of incremental participation in systems development by SMEs all over the world, which is likely to lead to a surge in creativity and innovation and an ever further reduction in the monopoly power of any parties attempting to control markets through critical choke points in the value chain of integrated systems. The proposed systems' ability to codify the classes and number of users also means that the financial flows from software and other components can become objectively determinable, and that derivative securities can thus be developed, allowing authors of software components to arrange convenient working-capital financing of the ultimate fruits of their labour, to share future risks and rewards, and to redistribute the maturity profile of such rewards in ways that meet the needs of the contributors of financial capital and intellectual capital co-producing the requisite components and cluster families.
The proposed system is recursive in that it is inherently well suited both to Electronic Commerce applications and to applications associated with delivering the components required for building Electronic Commerce systems. The negligible cost associated with digital conveyancing, and its independence from distance, ensure that the markets for components are likely to be global from inception, thus eliminating opportunities for discrimination regarding any characteristics of the authors which are not reflected objectively in their output products.
The proposed systems, through the universal presence of the gateway object, conveniently allow for superior distribution and metering of software based on usage and value to the user, rather than on pure time-based or number-of-users-based licensing.
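A minimal sketch of such usage-and-value metering follows; the gateway interface, the tariff and the event format are illustrative assumptions rather than features prescribed by the specification.

```python
# Sketch of a gateway-style metering agent: the charge reflects recorded
# usage and a per-call value weight, not flat time- or seat-based licences.
import time

class MeteringAgent:
    def __init__(self, component_id: str, unit_price: float):
        self.component_id = component_id
        self.unit_price = unit_price
        self.events: list[tuple[float, str, float]] = []

    def record(self, user: str, value_weight: float = 1.0) -> None:
        """Log one metered use; value_weight scales the charge to user benefit."""
        self.events.append((time.time(), user, value_weight))

    def invoice(self) -> float:
        """Total owed: the sum of value-weighted uses at the unit price."""
        return sum(weight for _, _, weight in self.events) * self.unit_price

meter = MeteringAgent("component-4", unit_price=0.10)
meter.record("user-24", value_weight=2.0)  # high-value call
meter.record("user-24")                    # ordinary call
print(round(meter.invoice(), 2))           # 0.3
```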
The proposed systems also inherently allow for virtual and real modelling, integration and stress testing in a manner completely segregated from the main live systems. Electronic Commerce systems, in particular financial transaction processing systems and mission-critical systems, require high levels of "trust". This feature is also particularly important in that, in the absence of such models, technological engineering-type perfection inevitably becomes the goal. With clear modelling technologies, explicit decisions can be made in a responsible manner to build a total infrastructure which falls well short of perfection but which optimises total returns. This is particularly important in the area of fraud, which has to be presumed as a constant in any and all systems design. It is extremely unlikely that zero fraud is the optimal desired position, as its pursuit tends to impose significant costs on both the organisation and its clients; total elimination is therefore not economically optimal. In addition, some fraud is a necessary minimum condition for ensuring that truly catastrophic problems are avoided, which are likely to happen when incremental improvements become too far spaced apart and control centres become comfortable and sleepy. The proposed systems permit objective standards for network management and create the ability to produce explicit diagnostic components which can run inside any web of trust, thus further enhancing security and facilitating systems administration. Any and all network elements can thus be explicitly interconnected and enhanced with other data elements, allowing the use of scientific techniques along a range of metrics for optimisation. The proposed system presumes all channels are tapped, but is not concerned, since all the tappers get is encoded traffic. One's home government can of course compel one to break security by requesting the TTP identities and by asking to be provided with the decryption keys. This is however precisely defined in TTP legislation and is thus an explicit exposure which can only be remedied through business relocation and electing a new sovereign.
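The "all channels are tapped" posture can be illustrated with a short sketch in which inter-component traffic is symmetrically encrypted so that a passive tapper observes only ciphertext; the key handling below is a toy stand-in for the TTP-mediated arrangements described above, and the availability of the third-party cryptography package is assumed.

```python
# Sketch: assume every channel is tapped; encrypt all inter-component
# traffic so that tappers obtain only encoded traffic.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in the proposed system, keys would sit with TTPs
channel = Fernet(key)

plaintext = b"component 4 -> component 6: routing update"
wire_bytes = channel.encrypt(plaintext)  # this is all a tapper ever observes
print(wire_bytes != plaintext)           # True: only ciphertext leaks

# Only a holder of the key (or a sovereign compelling its disclosure via
# the TTP legislation noted above) can recover the message.
print(channel.decrypt(wire_bytes))       # b'component 4 -> component 6: ...'
```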
The proposed systems explicitly allow for mixtures of structured and unstructured systems and permit increasing degrees of codification, thus shifting hybrid systems continuously in the direction where the value added of human resources at the margin increases with time.
The proposed systems specifically meet the requirements of the issuers of Directors' and Officers' liability insurance in the post-CDC era, where large numbers of Boards are expected to be subject to litigation based on negligence in reclaiming the cost of CDC work from suppliers, on negligence relating to third-party certification of their own and their interconnected parties' systems for CDC compliance, or on the failure to anticipate that certain parties which failed to remedy would not be able to meet their obligations for the resulting consequential damages. Only those adopting the invention wholesale are likely to be able to handle the complexity of the post-millennium D&O world.
The proposed system provides investors with the possibility of directly investing in codified knowledge assets, which are the closest proxy to investing directly in human assets. The maintainable nature of components also avoids the inherent exposure associated with the current vogue of investing in counter-cyclical year 2000 specific hedges. Many of the target companies reportedly providing such a hedge have no demonstrable skills in converting their business proposition beyond 2000 into sustainable digital-risk-reduction businesses. Codified knowledge assets, however, can be rolled out on a global basis for integration and local mass customisation and personalisation, and thus do represent a hedge; one which is furthermore very difficult to expropriate through retroactive taxation or through unilateral cancellation by certain sovereigns, acting either individually or as a concert party, in violation of a treaty or of long-established sovereign monopoly rights such as trademarks and patents. The market-based approach and the ability to provide real-time information on the quality of the accepted knowledge asset, the cumulative experiences with the knowledge asset, and the cumulative and outstanding challenges to the knowledge asset mean that the price of the derivative is likely to be established from inception in a relatively efficient global market, while the owners/publishers can independently maximise the price of the knowledge assets from the financial investors. The shift in economic contribution from agriculture and industry into information businesses can be expected to impose a trust-deficit discount on many large enterprises and to depress surplus physical assets as wealth-holding preferences adjust.
BRIEF DESCRIPTION OF DRAWINGS
A preferred embodiment of the invention will now be described with reference to the accompanying single figure of drawings, which is a schematic block diagram of a group of nation-state-specific components functioning via an internexus to provide communications between the various nation states in the world.
DESCRIPTION OF THE PREFERRED EMBODIMENT
In the drawing, communications between 233 nation states are provided by a single web of trust 2. The web of trust is capable of accepting calls in any of the known formats, with routing control codes entered for example by means of a touch-tone key pad, or voice instructions in any language. Thus the number of telecommunications links required is 233 plus 1 for the service provider, whereas previously individual connections were required between individual states, making 233 × 232 = 54056 necessary links. The web of trust 2 comprises four software components 4, 6, 8, 10 installed on a single hardware platform component 12. Each component is assigned a unique identifying code 14, and each component has a communicating link 16 with the other components, which link is encrypted, employing the respective codes for the encryption. Each component functions as a black box, only the respective component knowing its own functions and capabilities. The sole means of communicating and interfacing with each component is via a respective applications programming interface 17. Each component has a certification authority 18, linked to a trusted third party control 20 external to the system. The authority 18 checks the integrity of the respective component upon start-up.
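The arithmetic behind this reduction can be checked directly; the following fragment is purely illustrative.

```python
# Link counts for 233 nation states: a full point-to-point arrangement needs
# one directed link per ordered pair, whereas the web of trust acts as a hub.
n = 233
print(n * (n - 1))  # 54056 links for full point-to-point interconnection
print(n + 1)        # 234 links via the web of trust (one per state, plus one
                    # for the service provider)
```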
Component 4 functions as the internexus, having an interface 22 which can accept encrypted communications in any of the prescribed telecommunications formats from a user 24 in any country in the world, who may have a handset including a touch-tone key pad 26 and a microphone 28. Component 4 in addition includes a metering authority 30 for recording the use made of the internexus.
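A minimal sketch of this component pattern follows: a unique identifying code, a startup integrity check standing in for the certification authority 18, a concealed internal state, and access only through the applications programming interface 17. The class name and the hash-based check are illustrative assumptions, not features the embodiment prescribes.

```python
# Sketch of the embodiment's black-box component: unique code, certified
# startup integrity check, internals reachable only through a narrow API.
import hashlib
import uuid

class BlackBoxComponent:
    def __init__(self, code_body: bytes, certified_digest: str):
        self.identifying_code = uuid.uuid4().hex  # unique identifying code 14
        # Stand-in for the certification authority 18: refuse to start if the
        # component body no longer matches the digest certified by the TTP 20.
        if hashlib.sha256(code_body).hexdigest() != certified_digest:
            raise RuntimeError("integrity check failed; component not started")
        self._state: dict[str, str] = {}          # internal, never exposed

    def api_call(self, request: str) -> str:
        """The sole means of interfacing with the component (interface 17)."""
        self._state["last_request"] = request
        return f"handled:{request}"

body = b"...component object code..."
component = BlackBoxComponent(body, hashlib.sha256(body).hexdigest())
print(component.api_call("route +44 call"))  # handled:route +44 call
```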
MULTI-DISCIPLINARY NATURE OF THE SYSTEMS INVENTION
The combination of systems and financial technologies in this invention allows complexity to be reduced and allows the market to substitute for and replace many of the functions historically fulfilled by the state and by large corporations. At the extreme, the firm will be able to be reduced to the proverbial box of explicit contracts, while the remaining residual exposures to implicit events and risks can increasingly with time be converted into explicit contracts and events. The invention will allow event-specific trust to be translated into a probability. The specifics of an event, combined with proven actuarial technology, allow the probabilities to be translated into financial expectations with a distribution range. Complex technological choices can thus be abstracted, and both individually and at any aggregated level can be collapsed into market-based pricing decisions for TRUST.
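A minimal sketch of this collapse from event-trust to price follows; the probability, the loss distribution and its parameters are illustrative assumptions.

```python
# Sketch: translate an event probability plus a loss distribution into a
# financial expectation with a distribution range, by simple simulation.
import random
import statistics

def price_event_trust(p_event: float, loss_mean: float, loss_sd: float,
                      trials: int = 100_000) -> tuple[float, float, float]:
    """Return the unconditional expected loss and the 5th and 95th
    percentiles of the loss given that the event occurs."""
    given_event = sorted(max(random.gauss(loss_mean, loss_sd), 0.0)
                         for _ in range(trials))
    expected = p_event * statistics.fmean(given_event)
    return expected, given_event[int(0.05 * trials)], given_event[int(0.95 * trials)]

# A 1% event with losses around 1,000,000: expectation near 10,000, with a
# loss-given-event range of roughly 670,000 to 1,330,000.
print(price_event_trust(p_event=0.01, loss_mean=1_000_000, loss_sd=200_000))
```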
Managers universally desire certainty in terms of trusted systems, which neither science nor systems technologies can offer or are ever likely to be able to offer. (Current theoretical knowledge indicates that certainty is impossible to achieve.) The marketplace at large tends to put a premium value on instantaneous answers in order to be able to make on-the-spot decisions regarding certain specific events. Event-based trust ratings thus provide financial certainty, thereby meeting these generally accepted critical managerial needs, in addition to transparency and ease of decision making, while uncoupling these both from the component production decisions and from the specific integrated systems design.

TRANSITION TOWARDS WEBS OF TRUST & EVENT-TRUST
An event-based approach to risk and trust allows for the scientific use of triage, with codified models creating an explicit basis for priority-based decision making on software upgrading.
This is considered particularly important in the transition from integrated systems built with unproven in-house produced components to building systems using industrial-strength components. These are inherently gradual processes. Tough choices are likely to be faced by many corporations to address the CDC event and to limit the associated specific Y2K event-risk.
Some risks can be reduced by immediately substituting off-the-shelf industrial-strength components from substantial or insured parties for in-house produced components in core areas; some risks can be shifted to financially substantial parties; and some risks might need to be retained, either permanently or temporarily.
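This triage over reduce, shift and retain can be made explicit in a short sketch; the scoring inputs and thresholds below are illustrative assumptions, not rules taught by the invention.

```python
# Sketch of codified triage: each component is assigned one of the three
# dispositions named above -- substitute, shift, or retain.
def triage(criticality: float, remediation_cost: float, insurable: bool) -> str:
    """Classify one component for priority-based upgrade decisions."""
    if criticality > 0.8:
        return "substitute"  # replace now with an industrial-strength component
    if insurable and remediation_cost > 100_000:
        return "shift"       # move the risk to a financially substantial party
    return "retain"          # keep the risk, explicitly and on the record

inventory = {
    "payment-switch": (0.95, 500_000, True),
    "report-writer":  (0.40, 150_000, True),
    "office-macros":  (0.10, 5_000, False),
}
for name, (criticality, cost, insurable) in inventory.items():
    print(name, "->", triage(criticality, cost, insurable))
# payment-switch -> substitute, report-writer -> shift, office-macros -> retain
```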
Even where risks are retained, it will be rare that integrated global corporations can afford to make this decision implicitly. Arbitrary decisions on the incidence and location of implicitly assuming responsibility for the cost of fixing material defects in software products delivered from non-local suppliers, or deployed beyond the local nation-state, are likely to lead to enormous tax problems in years to come.
Whenever the benefits of fixing apply to globally integrated systems, issues of cost allocation are inevitably going to emerge, and anything short of a scientific approach, such as through explicit arm's-length insurance programs, is likely to lead to conflict with tax authorities in some location at some point down the road.
Equally, some type of contingency insurance is likely to be available in most countries to finance the legal costs and the claim-recovery process against the software suppliers liable for these costs; failure to act will be difficult to justify to the tax authorities (quite apart from shareholders) and will tend to raise connected-party or transfer-pricing issues. Given the ability to recover the cost of fixing at minimal cost relative to the total amount of the claims (the cost of fixing material defects, interest, expenses associated with claiming, and post-2000 consequential damages for incomplete fixes), it is unlikely that many tax authorities would allow the deduction of amounts which do not properly constitute an expense for the corporation and are more in the nature of a grant to the software industry; such a grant is likely to be deductible only in the relatively few jurisdictions sympathetic to the cause of this much-maligned group.
Accounting and tax treatment can only be responsibly addressed up front wherever competing national tax authorities are involved and expenses are incurred relating to fixing software deployed around the world. Failure to address these types of issues up front, for example by failing to make explicit which risks are retained and in which legal persona they are retained, is likely to plague any and all of those who naively think they can simply deduct the $600 billion CDC bill wherever this expense is incurred.
The OECD transfer-pricing guidelines are applied by all leading-edge international companies to profit allocations for production decisions. It is rare to see the same discipline applied to avoiding tax challenges in the systems area. This introduces the classic tax problem universally experienced by multinationals in association with "Head Office Allocated Expenses". These types of expenses routinely get challenged by national authorities desiring a larger domestic share of global corporate cakes. The financial problems relating to fixing just one specific digital event risk, the CDC change, will thus not end on New Year's Day 2000. Technologically the work will mostly be finished by the end of this millennium, except where the cure has proven inadequate. Financially it will play out for another decade or so, as the open tax years of the 20th century are negotiated around the world, and as the consequential damages relating to those who did not fix the Y2K problem in time and thus imposed costs on others, along with the first- and third-party consequential damage and material defect claims from those who thought they had fixed the problems, are financially and legally sorted out.
The invention, when combined with triage, will thus explicitly create the possibility of technologically and financially ring-fencing and creating specific secure islands/webs which can be the safe harbours from which an organisation can be rebuilt after experiencing hostile attacks or the consequences of its own making derived from using unproven and unwarranted components.

Claims

1. A distributed computer system, the system comprising:-
(a) a multiplicity of components, including at least one of each of the categories of human, data, software, and hardware, and each component having a desired level of trust assigned thereto;
(b) each of the software and hardware components including no more functionality than is required to carry out the desired task, and the internal functions of each of the hardware and software components not normally being accessible externally of the respective component;
(c) each component having a unique identifying code associated therewith, and one or more communication links between the components through predetermined interfaces requiring the use of the associated identifying codes, including means for encrypting communications between the components.
2. A system according to claim 1, wherein, for at least the software components, the logical state or condition is known only to the respective component, and/or the functionality of the component is known to the respective component.
3. A distributed computer system, the system comprising:-
(a) a multiplicity of components, including at least, directly or indirectly, one of each of the categories of human, data, software and hardware, and each component having a desired level of trust assigned thereto;
(b) for at least the software components, the logical state or condition being known only to the respective component, the functionality of the component being known to the respective component, and the internal algorithms of the component being concealed from exterior access; and
(c) each component having assigned thereto a unique identification code or number.
4. A system according to any preceding claim, including one or more groups of a plurality of components which are isolated from the outside world except that at least one component functions as an internexus or broker to provide a communication link with the outside world.
5. A system according to any preceding claim, wherein the internexus includes an applications programming interface for accepting data in predetermined formats and for converting such formats to a form suitable for use by the components of the group.
6. A system according to any preceding claim, wherein the identity of each component is verified by an independent trusted third party by a public key/private key system.
7. A system according to any preceding claim, wherein each component includes a certification authority for verifying the integrity of the component.
8. A system according to claims 6 and 7, wherein the trusted third party functions as a certification authority.
9. A system according to any preceding claim, wherein each component includes a metering agent for recording the use of the component.
10. A system according to claims 4, 6 and 7, wherein the metering agent, certification authority, and internexus are provided by a single function.
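The trusted-third-party verification recited in claim 6 can be illustrated as follows; the sketch assumes the third-party cryptography package and an Ed25519 key pair, the claim itself prescribing no particular algorithm.

```python
# Sketch of claim 6: an independent trusted third party vouches for a
# component's identity using a public key / private key system.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

ttp_key = Ed25519PrivateKey.generate()          # held only by the TTP
component_identity = b"component-id:4"
certificate = ttp_key.sign(component_identity)  # TTP signs the identity

def verify_identity(ttp_public_key, identity: bytes, cert: bytes) -> bool:
    """Any party holding the TTP's public key can check a component's identity."""
    try:
        ttp_public_key.verify(cert, identity)
        return True
    except InvalidSignature:
        return False

print(verify_identity(ttp_key.public_key(), component_identity, certificate))  # True
print(verify_identity(ttp_key.public_key(), b"component-id:6", certificate))   # False
```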
PCT/GB1998/001668 1997-06-06 1998-06-08 Distributed computer system WO1998055920A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU80262/98A AU8026298A (en) 1997-06-06 1998-06-08 Distributed computer system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9711787.3 1997-06-06
GBGB9711787.3A GB9711787D0 (en) 1997-06-06 1997-06-06 Distributed computer system

Publications (1)

Publication Number Publication Date
WO1998055920A1 true WO1998055920A1 (en) 1998-12-10

Family

ID=10813726

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1998/001668 WO1998055920A1 (en) 1997-06-06 1998-06-08 Distributed computer system

Country Status (3)

Country Link
AU (1) AU8026298A (en)
GB (1) GB9711787D0 (en)
WO (1) WO1998055920A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5224163A (en) * 1990-09-28 1993-06-29 Digital Equipment Corporation Method for delegating authorization from one entity to another through the use of session encryption keys
EP0623876A2 (en) * 1993-04-30 1994-11-09 International Business Machines Corporation Method and apparatus for linking object managers for cooperative processing in an object oriented computing environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TRIPATHI A ET AL: "TYPE MANAGEMENT SYSTEM IN THE NEXUS DISTRIBUTED PROGRAMMING ENVIRONMENT", PROCEEDINGS OF THE ANNUAL INTERNATIONAL COMPUTER SOFTWARE AND APPLICATIONS CONFERENCE. (COMPSAC), CHICAGO, 5 - 7 OCTOBER, 1988, no. CONF. 12, 5 October 1988 (1988-10-05), KNAFL G J, pages 170 - 177, XP000284835 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999013631A1 (en) * 1997-09-11 1999-03-18 Trust Eeig Improvements to telecommunications

Also Published As

Publication number Publication date
GB9711787D0 (en) 1997-12-24
AU8026298A (en) 1998-12-21

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1999501884

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase