WO2002029733A1 - Method of monitoring the assembly of a product from a workpiece - Google Patents

Method of monitoring the assembly of a product from a workpiece

Info

Publication number
WO2002029733A1
WO2002029733A1 (PCT/GB2001/004408)
Authority
WO
WIPO (PCT)
Prior art keywords
test
specific property
stations
value
station
Prior art date
Application number
PCT/GB2001/004408
Other languages
French (fr)
Inventor
David Swords
Original Assignee
Ipr Industries Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ipr Industries Limited filed Critical Ipr Industries Limited
Priority to AU2001292074A priority Critical patent/AU2001292074A1/en
Publication of WO2002029733A1 publication Critical patent/WO2002029733A1/en

Classifications

    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C3/00Registering or indicating the condition or the working of machines or other apparatus, other than vehicles
    • G07C3/14Quality control systems
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G05B19/41875Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM] characterised by quality surveillance of production
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/32Operator till task planning
    • G05B2219/32177Computer assisted quality surveyance, caq
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/32Operator till task planning
    • G05B2219/32178Normal and correction transferline, transfer workpiece if fault
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/32Operator till task planning
    • G05B2219/32181Monitor production, assembly apparatus with multiple sensors
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/32Operator till task planning
    • G05B2219/32182If state of tool, product deviates from standard, adjust system, feedback
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/32Operator till task planning
    • G05B2219/32191Real time statistical process monitoring
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • This invention relates to assembly lines used for the manufacture of one or more products.
  • The assembly lines may be manual, partly automated or fully automated. More particularly, the invention relates to methods for the real-time asynchronous monitoring and analysis, by computer, of test measurements during the assembly of the product.
  • Each testing station will result in a number of rejects, which are visible in a reject bin; comparing the number of rejects with the number of items which have passed through the test station successfully gives the yield at that test station, usually expressed as the proportion or percentage of items entering the test station which successfully exit from it.
  • A yield of 90% therefore means that 10% of the items entering the test station were rejected.
  • Yield and other data are recorded over time, and historical analysis of the test results is undertaken to measure the performance of the previous manufacturing stages.
  • The number of rejects can typically be counted automatically, and the yield calculated and displayed at the test station.
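By way of illustration only (the patent discloses no source code), the yield calculation described above can be sketched in Python; the function name is ours, not the patent's:

```python
def yield_percent(passed: int, entered: int) -> float:
    """Yield at a test station: workpieces passing the test, as a
    percentage of all workpieces entering the station."""
    if entered == 0:
        return 0.0  # no items have entered yet; no yield to report
    return 100.0 * passed / entered

# 90 of 100 entering items pass: yield is 90%, i.e. 10% were rejected.
station_yield = yield_percent(90, 100)
```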
  • A manager of the factory may inspect the test station periodically and use the yield information for a particular test station to take remedial action to improve the yield.
  • This monitoring or inspection may take the form of viewing a particular readout from a test station or analyzing a historical yield profile, or may even be simply physically observing an increase in the reject rate.
  • Monitoring of the manufacturing process is carried out in the assembly industry by means of repeated active testing of each assembled item.
  • Each workpiece is subjected to one or more discrete, active tests at each testing stage, and each test generates a discrete test result.
  • WO 91/01528 to Intaq discloses a system for identifying faulty components or assembly operations on an assembly line in real time. Manual and automatic testing stations are linked to a real time data capture and analysis system, and faults in components or assembly operations are identified by inspectors and recorded on interactive screens by means of light pens.
  • The test data which are generated are growing in volume and complexity.
  • As systems such as those discussed above are developed to monitor these data so as to identify faults in the assembled product, testing systems are coming to play an increasingly significant role in the assembly industry.
  • The efficient operation of automatic and manual testing stations is therefore essential to the efficiency of the assembly process.
  • A method of monitoring the assembly of a product from a workpiece at at least one test station, comprising:
  • A specific property is defined as an aggregate property of the assembly process, derived from the aggregation of a plurality of discrete test results.
  • A specific property therefore relates to the process of testing and manufacture, as distinct from the properties of the product which is being assembled and tested.
  • The present invention also provides for the nature of each test to be identified, enabling the production manager to correlate any change in a specific property with changes in the product under test.
  • The specific properties which may be calculated and displayed include yield, test station utilisation, retest or rework, average test time, average tested per hour, and failed-to-process values, as described hereafter.
  • The value of any specific property may be calculated from a selected subset of test results, comprising for example the results of all tests carried out on a particular product.
  • The invention also provides for the specification of system parameters such as bin size, no-data timeout, and other parameters as discussed below, allowing the time period and data population size over which the yield or other specific property value is calculated to be varied by the user.
  • The invention further provides for the specific property values to be calculated at different levels of data aggregation.
  • The test results from each of the test stations are aggregated to produce a combined specific property for a group of test stations.
  • The group test specific properties may be aggregated to produce a line test specific property for a number of groups of test stations, and the line test specific properties may be aggregated to produce a site test specific property for a number of lines of test stations.
  • The site test specific properties may be aggregated to produce a multi-site specific property for a number of sites of test stations. In this way the invention makes possible the efficient and responsive management of complex and extensive testing systems and assembly operations.
  • The lines or sites may be in mutually remote locations.
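The multi-level aggregation described above can be sketched as follows (the function name and the (passed, entered) tuple shape are illustrative assumptions, not taken from the patent): raw pass/enter counts roll up identically from stations to stages, lines, sites and multi-site figures.

```python
def aggregate_yield(counts):
    """Combine (passed, entered) counts from lower-level units -
    stations, stages, lines or sites - into one overall yield %."""
    passed = sum(p for p, _ in counts)
    entered = sum(e for _, e in counts)
    return 100.0 * passed / entered if entered else 0.0

# Two stations roll up to a test stage; stage counts would roll up
# to a line, lines to a site, and sites to a multi-site figure.
stage_yield = aggregate_yield([(90, 100), (45, 50)])
```

Aggregating the raw counts, rather than averaging percentages, keeps the combined figure correct when stations test different numbers of workpieces.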
  • Test results can be processed to calculate and display a breakdown of failed tests, with the most frequently failed tests displayed first.
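A minimal sketch of such a failure breakdown (our own illustration; the test names are examples from the figures, not a prescribed format):

```python
from collections import Counter

def top_failures(failed_test_names, n=5):
    """Break down failed tests, most frequently failed first (top n)."""
    return Counter(failed_test_names).most_common(n)

# Each entry is the name of a test that a workpiece failed.
log = ["Rx Acoustic Level", "Tx Power", "Rx Acoustic Level"]
ranking = top_failures(log)  # Rx Acoustic Level ranks first, with 2 failures
```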
  • A change in the visual indication of the yield occurs when the test results change, compared with a previous test result, by more than a predetermined threshold amount, which may be set by the user.
  • Fig. 1 shows a flowchart representing the general arrangement of an assembly process.
  • Fig. 2 shows a diagrammatic representation of a computer network for communicating and processing test results.
  • Fig. 3 shows a flow diagram representing the flow of information in the network.
  • Figures 4 to 14 show respectively eleven user interfaces presenting information relating to specific properties of the assembly process and the products, at different levels of detail.
  • Figure 4 shows information relating to three lines.
  • Figure 5 shows information relating to one line.
  • Figure 6 shows information relating to one line as a time series.
  • Figure 7 shows information relating to a number of test stages in one line.
  • Figures 8 and 9 show information relating to a number of test stations in one test stage.
  • Figure 10 shows information relating to a number of test stations in one test stage as a time series.
  • Figure 11 shows information relating to a number of tests at a number of test stations in one test stage.
  • Figure 12 shows information relating to one test station.
  • Figure 13 shows information relating to a number of tests at one test station.
  • Figure 14 shows information relating to a number of tests at one test station as a time series.
  • Figures 15 to 18 show respectively a further four user interfaces allowing an operator to predefine system parameters.
  • Figure 19 shows a sixteenth user interface presenting information relating to failed tests.
  • Figure 20 shows a seventeenth user interface allowing individuals to be designated to receive automatic alarm calls in response to the value of a specific property falling outside a predefined limit.
  • Figure 21 shows an eighteenth user interface allowing information relating to the assembly process and the user interfaces to be entered into the computer network.
  • Figure 22 shows a nineteenth user interface allowing information relating to the status of different parts of the assembly process to be produced and presented.
  • Figure 23 shows a twentieth user interface wherein information is presented as a statistical distribution.
  • Referring to Fig. 1, a flowchart shows the general arrangement of a typical assembly process acting on workpieces which, during the process, are assembled into products.
  • The term "workpiece" shall be used to describe any partly or fully assembled product, at any point in the manufacturing process, which is subject to a test of any kind.
  • The manufacturing process in this embodiment is shown as comprising a company 1 carrying out assembly operations at two sites 2, 2', each site having a number of lines 3 for assembling one or more products, each line having a number of test stages 4 at which specific properties of the products are tested either manually or by automatic testing equipment (referred to hereinafter as "ATE"), each test stage comprising one or more test stations, and each test station carrying out measurements associated with one or more tests.
  • Each test comprises the measurement of one or more specific properties of the workpiece or of the process; the nature of the test will depend on the product being manufactured and will be understood by the person skilled in the art.
  • Optical or electrical tests may be carried out during the assembly of products such as video players and cameras, DVD players, computers and other consumer electronics items. Numerous other tests may be used, employing techniques such as laser, ultrasonic, X-ray or AOI (Automated Optical Inspection) testing, and many different types of product may be tested - for example, packaging, pharmaceuticals, and consumer goods of all kinds.
  • A computer network 5 collects and processes the test results and provides output information, alarms and actions as discussed hereinafter. For simplicity, only a representative selection of the parts of the network is shown. Almost any general purpose digital computer - for example, a commercially available network server - can be adapted for use in the present system.
  • Each test station produces test station data representing the test results.
  • Other information may be included in the test station data, such as the identity of the individual workpiece, the type of product being assembled, the time and date of the test and the identity of the testing station.
  • The test results and other information may be processed by the test station, and the test station data presented as test log files containing information aggregated over time and otherwise manipulated.
  • The test station data from different test stations may also comprise information encoded according to the different standards of data encoding and communication adopted by the manufacturer of each test station.
  • The test station data are communicated over a data link 8 to a yield server computer 9, which processes the test station data in real time and aggregates them according to a program to produce output information.
  • Data link 8 may be a conventional cable link, or any other convenient means such as the Internet, and yield server computer 9 may be located at the company or site, or remotely therefrom.
  • Yield server computer 9 may alternatively be a remote data processing facility, linked to manual data entry means 6 and ATEs 7 via an Internet link 8.
  • The output information is stored and manipulated to produce both real time and time series information, which is communicated 10 to a number of output computers 11, and is also used to trigger alarms and actions as discussed hereinafter.
  • The output computers may be located remote from each other and from the site, for example in a manager's home, enabling the manager at all times to receive information relating to the assembly process.
  • Data links 10 may be conventional cable links, Internet connections, or any other convenient means.
  • The output information is displayed on the screen of each output computer by means of a number of user interfaces which contain links to one another and commands facilitating the display of information in graphical, time series and other configurations.
  • Successive figures show user interfaces displaying output information in real time at increasingly high levels of detail, corresponding respectively to a site, lines within a site, test stages within a line, test stations within a test stage, and tests within a test station.
  • Output information may also be processed in real time by means of statistical techniques and algorithms to produce, for example, the real time statistical distribution of test results shown in Figure 23.
  • The specific property may be calculated from a subset of test results relating to a particular product under test. This information will help, for example, to identify which of a large number of altered components has caused a change in the value of a specific property.
  • The total yield can be monitored in real time by the user, who would typically be the production manager.
  • The total yield can be investigated further by the user to examine group data, that is site or line data, and further still to examine individual test data.
  • The user can thus take whatever remedial action is necessary.
  • The entire manufacturing process can be monitored in real time.
  • There may be more than one user interface, so that a number of users may have access to the yield data for the manufacturing process. This may be useful if consultation with individuals with particular expertise is required before particular action is taken. It may also be desirable to monitor the whole process from different locations, depending on the time of day, by means of a connection to an external network.
  • The process may be monitored by managers located in different parts of the world, in different time zones, so that 24-hour management of the process can be maintained without any user working unsociable hours.
  • The different sites may correspondingly be located in different time zones, but this need not necessarily be the case.
  • The user interface may also be located remotely, at the home of the manager, so that the system may be managed on an "on-call" basis without the user/production manager having to travel to the location of the site concerned.
  • The output information represents specific properties of the product and process, including the following:
  • The yield at a test station is the proportion of the total number of workpieces entering the test station which pass the test, expressed as a percentage.
  • The range of yield values for one test station is shown at 121 in Figure 12, and the yield for each test station in a test stage is shown at 91 in Figure 9.
  • The product type being tested is shown for each test station at 94 in Figure 9.
  • The yields for each test station are aggregated to produce the yield for each test stage, shown as percentages 71 in Figure 7.
  • The yields for each test stage within a line are aggregated to produce the yield for the line, shown as a percentage 51 in Figure 5.
  • The yields for each line may similarly be aggregated to produce the yield for the site.
  • This interface is configured so that a click on the percentage display 51 will reveal the yield information for the test stages at the next level of detail. Clicking the Yield History button 52 will show the line yield as a time series 62 as shown in Figure 6. The Yield may be calculated separately for each product type.
  • The yield indicators in each interface are continuously updated in real time.
  • The yield will also have associated with it a range within which the yield value for the particular process, or part of the process, would normally be expected to lie; a value outside this range indicates a problem needing remedial action.
  • The range for each test station, stage and line can be set directly by the process manager using the windows 151, 152, 153 provided in the user interfaces shown in Figures 15 to 18.
  • Information representing the yield may alternatively be presented as the proportion of workpieces which pass or fail each of the tests which are performed on them, calculated and presented if desired by product type. This information is presented in Figure 14 as the most commonly failed tests at one test station during each period of production at the site. The information is aggregated to produce the most commonly failed tests at each test station in a test stage, shown in Figure 11.
  • Clicking the Compare Failures button 95 in Figure 9 will show the top n reasons for failure as shown in Figure 11.
  • The number n can be determined by the user; in the embodiment shown the user has selected the top 5 reasons.
  • Clicking the failure button 93 in Figure 9 will show the top reasons for failure at that test station as shown in Figure 13.
  • The information in Figure 13 can be displayed for different product types by selecting the required product type from the menu 132. Clicking anywhere on the graph in Figure 13 gives the display shown in Figure 14.
  • At each test station there may be a number of different tests.
  • The tests that result in the most failures are shown.
  • In the example shown, the top failing test is the test for Rx Acoustic Level.
  • A yield display 131 shows the current yield percentage for the selected test station in real time, continuously updated as the test station performs tests on the products being manufactured. The population over which the most frequently occurring failures are calculated can be preset and reset by the user in order to achieve the most accurate results. Clicking on the graph itself will produce a time series plot of the top five failures, as shown in Figure 14.
  • The interfaces of Figures 4, 5, 7 and 9 include time series buttons 41, 52, 72, 92 for each of the yield displays, which when clicked reveal the historic yield for that display over a previous time period, in the form of a graph of the percentage yield over time, so that the user can see the evolution of the yield for the test station(s) concerned.
  • The yield or other specific property values relating to one product type, test station, group, line or site may be displayed and compared with the corresponding specific property values relating to other product types, test stations, groups, lines or sites.
  • The Test Station Utilisation of a given test station is the proportion of the test station's capacity for carrying out tests which is used in any given time period.
  • The Test Station Utilisation for one test station is shown at 122 in Figure 12, and for each test station in a test stage at 81 in Figure 8. Alternatively, this information may be presented graphically as a time series.
  • The Re-Test Value is the proportion of the workpieces entering a test station which pass the tests at that test station after passing through the test station more than once within a predetermined time period, which time period is defined by the user.
  • The Re-Test Value for one test station is shown at 123 in Figure 12, and for each test station in a test stage at 82, 82' in Figure 8.
  • The Re-Work Value is the proportion of the workpieces entering a test station which pass the tests at that test station after passing through the test station more than once, outside the predetermined time period.
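The distinction between the Re-Test and Re-Work values rests on the user-defined time window, and can be sketched as follows (function and parameter names are our own; the off-line-repair reading of re-work is our assumption from the context):

```python
def classify_repeat_pass(first_entry_time, pass_time, window_seconds):
    """A workpiece that passes only after more than one trip through
    the station counts toward the Re-Test Value if the pass came
    within the user-defined window, and toward the Re-Work Value
    (assumed: repaired off-line and returned later) if it came
    outside that window."""
    elapsed = pass_time - first_entry_time
    return "re-test" if elapsed <= window_seconds else "re-work"
```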
  • The Average Test Time is the time taken to test a predefined population of workpieces at a test station, divided by the size of the population.
  • The Average Test Time for one test station is shown at 124 in Figure 12.
  • The Average Tested per Hour is the number of workpieces tested at a test station in a given one-hour period, shown at 125 in Figure 12.
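The Average Test Time definition reduces to a single division; a trivial sketch (illustrative only):

```python
def average_test_time(total_test_seconds, population_size):
    """Time taken to test a predefined population of workpieces at a
    test station, divided by the size of the population."""
    return total_test_seconds / population_size

# e.g. 1200 s spent testing 400 workpieces gives a 3 s average test time
```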
  • The Failed to Process figure is the number of test results from a given population which the central computer processor failed to process; it is shown for a single test station, both as a total and as a percentage, at 126 in Figure 12. This figure provides a means of indicating problems, such as electrical faults in data-carrying cables, which might otherwise go unrecognised.
  • Each test station may be monitored by testing a sample workpiece having precisely determined properties, and comparing the test station data with a previously stored sample of values relating to that workpiece.
  • Configuration interfaces enable an operator to configure the system and to predefine system parameters which determine the way in which test results are processed, output information is presented, and alarms and actions are produced, as discussed hereinafter.
  • System parameters include the following:
  • Fig. 15 shows a window 154 wherein the bin size may be specified, this being the number of test results or aggregated test results forming a sample population over which a further aggregated test result is computed.
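A rolling bin of this kind can be sketched with a fixed-length buffer (the class name and pass/fail representation are our assumptions):

```python
from collections import deque

class YieldBin:
    """Rolling yield over the most recent `bin_size` pass/fail
    results, mirroring the bin-size parameter described above."""
    def __init__(self, bin_size):
        self.results = deque(maxlen=bin_size)  # oldest results fall out

    def add(self, passed):
        self.results.append(bool(passed))

    def value(self):
        if not self.results:
            return None  # no data received yet
        return 100.0 * sum(self.results) / len(self.results)

bin_ = YieldBin(4)
for r in (True, True, False, True):
    bin_.add(r)          # yield over this bin is now 75%
```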
  • Window 172 in Figure 17 allows the maximum time interval between consecutive test station data transmissions to be set; if this interval is exceeded, the yield display (91 in Figure 9) for that test station changes colour, indicating that the test station is not working.
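The no-data timeout check amounts to a single comparison per station (a sketch; names and the string status values are illustrative):

```python
def station_status(last_transmission, now, no_data_timeout):
    """Flag a test station as not working when the interval since its
    last data transmission exceeds the configured no-data timeout."""
    return "ok" if now - last_transmission <= no_data_timeout else "not working"
```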
  • Figure 17 shows a user interface providing windows 174, 174' wherein there may be defined a given proportion of failing test results, out of a given population, which will result in an alert condition. Any aspect of the output information may be configured so as to trigger an alert. An alert is triggered by the value of a specific property of the process or product falling outside a predefined limit or range.
  • The alert condition may be indicated by a change in colour of a particular part of a user interface, indicating the part of the process causing the alert.
  • Figure 21 shows a window wherein a display colour may be specified which will indicate an alert.
  • a user interface may also be configured to automatically display information indicating an alert as soon as an alert occurs.
  • Alerts may be triggered by yield variations outside the threshold values set by the operator in windows 155, 155' in Figure 15, 161, 161 ' in Figure 16, 171, 171 ' in Figure 17 and 181, 181 ' in Figure 18. For example, where the yield rises above the threshold value, the yield display may turn green; where the yield falls below the threshold value, the yield display may turn red.
  • A range of threshold values provides for a range of alert responses.
  • A given frequency of failure in any test may be configured to trigger an alert response, by setting in windows 182, 182' in Figure 18 the triggering number of failures and the size of the population within which they must occur (for example, 5 failures out of any 10 consecutive workpieces passing through any one test).
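The n-failures-out-of-m-consecutive rule can be sketched with a sliding window (class and method names are ours; the patent specifies only the rule):

```python
from collections import deque

class FailureAlert:
    """Raise an alert when at least `max_failures` of the last
    `population` consecutive results are failures - e.g. 5 failures
    out of any 10 consecutive workpieces passing through one test."""
    def __init__(self, max_failures, population):
        self.max_failures = max_failures
        self.window = deque(maxlen=population)  # sliding window of results

    def record(self, failed):
        """Record one result; return True if the alert condition holds."""
        self.window.append(bool(failed))
        return sum(self.window) >= self.max_failures
```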
  • Alerts may also be triggered by a comparison facility. For example, the value of a given specific property at one test stage may be compared with the value of the same specific property at a second test stage, and an alert triggered if there is a difference of, for example, more than 5% between the two values. Similarly, the yield for a first line may be compared with the yield for a second line, and an alarm triggered where the two values differ by more than a predefined percentage.
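The comparison facility reduces to an absolute-difference test (a sketch; the 5-point default follows the example above and is otherwise arbitrary):

```python
def comparison_alert(value_a, value_b, max_difference=5.0):
    """Trigger an alert when the same specific property measured at two
    stages (or the yields of two lines) differs by more than a
    predefined amount - 5 percentage points in the example above."""
    return abs(value_a - value_b) > max_difference
```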
  • A given range of values for a given specific property may be configured to trigger a visual indicator, such as a product change label 61, which appears on the time series information to indicate the point at which the predefined value change occurred.
  • The content of the label is determined by the user when defining the values by which it is triggered.
  • A change such as, for example, the use of a new component in the assembly process may be identified in a test by the range of values of a specific property associated with that component, and the point at which the new component was introduced will then be clearly indicated on all relevant time series information.
  • A change in the assembly process, or in an ambient environmental condition such as humidity, may be similarly identified and labelled.
  • An alert may also trigger an alarm, which may be an audible or visible warning device.
  • The computer network may be configured to carry out an action, such as intervening in the assembly process.
  • Figure 20 shows a user interface wherein contact information may be specified enabling the computer network to telephone, email or otherwise contact a designated person in the event of an alert.
  • An alert may also trigger the production of a management report showing details of the situation triggering the alert.
  • Management reports may be generated on demand, or automatically in predefined circumstances, such as at a particular time of day.
  • The specific properties measured may include non-product-related data, including environmental conditions such as, for example, measurements of ambient temperature, humidity, or external RF interference; these data may be correlated with changes in the measured values of the specific properties of the product or products under test.
  • Figure 22 shows a further user interface indicating the status of various parts of the assembly process, including the test stations.
  • An indicator 220 for each part shows whether it is functioning or not.
  • The time series buttons enable an operator to demand and view a time series presentation of the status information, presenting a historical record of the functioning of each part of the assembly process. The operator may thus instantly assess the status and downtime of each critical stage in the assembly process, and immediately identify any problems as soon as they occur.

Abstract

A method of monitoring the assembly of a product from a workpiece comprises the collection of test data from a plurality of automatic or manual testing stations (6, 7) which perform tests on the workpiece at different stages of assembly. The testing stations are controlled and operated asynchronously and connected together by a network (5) to a yield server (9). The test data are aggregated and analysed in real time to calculate and display the values of specific properties of the assembly process, such as the yield (71, 51, 91) and the test station utilisation value (122), at any level of aggregation selected by the user. The method enables the user to monitor the performance of an entire assembly process comprising numerous sites (2), lines (3), groups of test stations (4), individual test stations and individual tests.

Description

Method of monitoring the assembly of a product from a workpiece
Field of the Invention
This invention relates to assembly lines used for the manufacture of one or more products. The assembly lines may be manual, partly automated or fully automated. More particularly, the invention relates to methods for the real-time asynchronous monitoring and analysis, by computer, of test measurements during the assembly of the product.
Background to the Invention
In many manufacturing processes it is necessary and advantageous to apply a test to the worked product to determine that it has been made within the desired specification. Examples of such products are consumer electronics items, such as video players and cameras, DVD players and computers. However, it will be appreciated that the invention may be applied to the manufacture of any product in any manufacturing process requiring the assembly of two or more parts, including packaging, pharmaceuticals, and consumer goods of all kinds.
Worked products which fail the test are rejected and retested, which means that they are subjected to a repeat test, or reworked, which means that they are repaired and returned to the assembly line to be retested later. Those which pass are allowed to continue to the next stage of the manufacturing process. This is advantageous because it prevents products which are out of specification from being delivered. Furthermore, the earlier a fault can be detected the better, since subsequent operations are wasted if the worked-on product is in any event a reject. Thus it is advantageous to test the worked product at as many stages of the manufacturing process as required, both to avoid faulty products undergoing further operations unnecessarily and to identify the manufacturing stages at which the worked product is falling out of specification.
Numerous tests exist and typically these will be either optical or electrical, but may be more sophisticated, including for example visual, laser, ultrasonic, X-ray or AOI (Automatic Optical Inspection), or any other test based on any measurable property of the article being assembled.
Typically each testing station will result in a number of rejects which are visible in a reject bin, and comparison of the number of rejects with the number of items which have passed through the test station successfully gives the yield at that test station, usually expressed as the proportion or percentage of items entering the test station, which successfully exit therefrom. A yield of 90% therefore means that 10% of those items entering the test station were rejected.
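By way of illustration only, and forming no part of the disclosure, the yield calculation described above may be sketched as follows; the function name is an arbitrary choice:

```python
def yield_percentage(passed: int, rejected: int) -> float:
    """Yield: the proportion of items entering a test station which
    successfully exit from it, expressed as a percentage."""
    entered = passed + rejected
    if entered == 0:
        return 0.0  # no items have entered the station yet
    return 100.0 * passed / entered
```

Thus a station which passes 90 of 100 entering items reports a yield of 90%, corresponding to the 10% reject rate in the example above.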
Typically the yield and other data are recorded over time and historical analysis of the test results to measure the performance of the previous manufacturing stages is undertaken.
The number of rejects can typically be automatically counted and the yield can be calculated and displayed at the test station. Thus a manager of the factory may inspect the test station periodically and use the information of the yield at a particular test station to take remedial action to improve the yield. This monitoring or inspection may take the form of viewing a particular readout from a test station or analyzing a historical yield profile, or may even be simply physically observing an increase in the reject rate.
All of these methods entail the disadvantage that there is a delay in seeing the test information. If a particular test shows a sudden increase in failures and decrease in the yield the manager may happen to see this during a periodic inspection of the test results or he may be warned of it by an operator of the test station of a proximate manufacturing stage. In either case a significant amount of time will have passed before it is brought to the manager's attention. During this time a significant amount of production may have been lost.
Similarly, historical yield profiles, which are useful in determining where improvements in the manufacturing process can be made, are produced only after a delay following the last test result. During this delay significant production may be lost.
In the process industry, systems are known for analysing and displaying in real time the continuous output of analogue or digital data from sensors which monitor a production process. For example, US 4,718,025 to Minor and Matheny discloses a system for displaying the output of process sensors as a graphic representation in real time. This enables the conditions of the production process, such as the temperature and volume of a liquid in a boiler, to be continuously monitored, controlled and recorded.
In the assembly industry, it has similarly been proposed to monitor the output of an assembly line in real time so as to address the problem discussed above by identifying faults in the assembled product in time to save lost production.
However, in contrast to the process industry, monitoring of the manufacturing process is carried out in the assembly industry by means of repeated active testing of each assembled item. Each workpiece is subjected to one or more discrete, active tests at each testing stage, and each test generates a discrete test result.
Complex assembly processes can require a large number of tests, and the results of these tests must be collected and analysed in real time in order to identify faults as they emerge.
Various systems have therefore been developed for this purpose. For example WO 91/01528 to Intaq discloses a system for identifying faulty components or assembly operations on an assembly line in real time. Manual and automatic testing stations are linked to a real time data capture and analysis system, and faults in components or assembly operations are identified by inspectors and recorded on interactive screens by means of light pens.
Due to the increasing sophistication of assembled products, and hence the increasing complexity of assembly operations, the test data which are generated are growing in volume and complexity. As systems such as those discussed above are developed to monitor these data so as to identify faults in the assembled product, testing systems are therefore coming to play an increasingly significant role in the assembly industry. The efficient operation of automatic and manual testing stations is therefore essential to the efficiency of the assembly process.
There is therefore an increasing need to find a way of monitoring the operation of the testing system itself, and it is accordingly the object of the present invention to provide an improved means of monitoring the results of tests carried out during product assembly.
According to the invention therefore there is provided a method of monitoring the assembly of a product from a workpiece at at least one test station comprising:
a. receiving a first test result from a first measurement at a first test station;
b. receiving at least one second test result from a corresponding second measurement;
c. communicating the first and second test results to a central computer processor;
d. processing the test results to calculate the value of a specific property of the test results; and
e. providing an output display of the value of the specific property in real-time.
In this specification, a specific property is defined as an aggregate property of the assembly process, derived from the aggregation of a plurality of discrete test results. A specific property therefore relates to the process of testing and manufacture, as distinct from the properties of the product which is being assembled and tested. However the present invention also provides for the nature of each test to be identified, enabling the production manager to correlate any change in a specific property with changes in the product under test.
The specific properties which may be calculated and displayed include yield, test station utilisation, retest or rework, average test time, average tested per hour, and failed to process values, as described hereafter. The value of any specific property may be calculated from a selected subset of test results, comprising for example the results of all tests carried out on a particular product. The invention also provides for the specification of system parameters such as bin size, no data timeout, and other parameters as discussed below, allowing the time period and data population size over which the yield or other specific property value is calculated to be varied by the user.
The invention further provides for the specific property values to be calculated for different levels of data aggregation. Preferably the test results from each of the test stations are aggregated to produce a combined specific property for a group of test stations. The group test specific properties may be aggregated to produce a line test specific property for a number of groups of test stations and the line test specific properties may be aggregated to produce a site test specific property for a number of lines of test stations. The site test specific properties may be aggregated to produce a multi-site specific property for a number of sites of test stations. In this way the invention makes possible the efficient and responsive management of complex and extensive testing systems and assembly operations. The lines or sites may be in mutually remote locations. Preferably the test results can be processed to calculate and display a breakdown of failed tests, with the most frequently failed tests displayed first. Preferably for each test station or number of test stations, a change in the visual indication of the yield occurs when the test results change compared with a previous test result by more than a predetermined threshold amount, which may be determined by the user.
Brief Description of the Drawings
The invention will best be understood from the claims when read in conjunction with the detailed description and drawings wherein:
Fig. 1 shows a flowchart representing the general arrangement of an assembly process.
Fig. 2 shows a diagrammatic representation of a computer network for communicating and processing test results.
Fig. 3 shows a flow diagram representing the flow of information in the network.
Figures 4 to 14 show respectively eleven user interfaces presenting information relating to specific properties of the assembly process and the products, at different levels of detail.
Figure 4 shows information relating to three lines.
Figure 5 shows information relating to one line.
Figure 6 shows information relating to one line as a time series.
Figure 7 shows information relating to a number of test stages in one line.
Figures 8 and 9 show information relating to a number of test stations in one test stage.
Figure 10 shows information relating to a number of test stations in one test stage as a time series.
Figure 11 shows information relating to a number of tests at a number of test stations in one test stage.
Figure 12 shows information relating to one test station.
Figure 13 shows information relating to a number of tests at one test station.
Figure 14 shows information relating to a number of tests at one test station as a time series.
Figures 15 to 18 show respectively a further four user interfaces allowing an operator to predefine system parameters.
Figure 19 shows a sixteenth user interface presenting information relating to failed tests.
Figure 20 shows a seventeenth user interface allowing individuals to be designated to receive automatic alarm calls in response to the value of a specific property falling outside a predefined limit.
Figure 21 shows an eighteenth user interface allowing information relating to the assembly process and the user interfaces to be entered into the computer network.
Figure 22 shows a nineteenth user interface allowing information relating to the status of different parts of the assembly process to be produced and presented.
Figure 23 shows a twentieth user interface wherein information is presented as a statistical distribution.
Referring to Figure 1, a flowchart shows the general arrangement of a typical assembly process acting on workpieces which during the process are assembled into products. In this specification the term workpiece shall be used to describe any partly or fully assembled product at any point in the manufacturing process which is subject to a test of any kind.
The manufacturing process in this embodiment is shown as comprising a company 1 carrying out assembly operations at two sites 2, 2', each site having a number of lines 3 for assembling one or more products, each line having a number of test stages 4 at which specific properties of the products are tested either manually or by automatic testing equipment (referred to hereinafter as "ATE"), each test stage comprising one or more test stations, each test station carrying out measurements associated with one or more tests. For simplicity, the test stages are shown in only one line. Each test comprises the measurement of one or more specific properties of the workpiece or of the process; the nature of the test will depend on the product being manufactured and will be understood by the person skilled in the art.
For example, optical or electrical tests may be carried out during the assembly of products such as video players and cameras, DVD players, computers and other consumer electronics items. Numerous other tests may be used, using techniques such as laser, ultrasonic, x-ray or AOI (Automated Optical Inspection) testing, and many different types of product may be tested - for example, packaging, pharmaceuticals, and consumer goods of all kinds.

Referring to Figures 2 and 3, a computer network 5 collects and processes the test results and provides output information, alarms and actions as discussed hereinafter. For simplicity, only a representative selection of the parts of the network is shown. Almost any general purpose digital computer - for example, a commercially available network server - can be adapted for use in the present system.
The results of manual tests are input into a manual data entry means 6, such as a box with pushbuttons, by the person performing the test. The manual data entry means and the ATEs 7 produce test station data representing the test results. Other information may be included in the test station data, such as the identity of the individual workpiece, the type of product being assembled, the time and date of the test and the identity of the testing station. The test results and other information may be processed by the test station and the test station data presented as test log files containing information aggregated over time and otherwise manipulated. The test station data from different test stations may also comprise information encoded according to the different standards of data encoding and communication adopted by the manufacturer of each test station. Once running in real time the system captures the test station data and passes them 8 to import directories for real time processing by a yield server computer 9 which processes the test station data in real time and aggregates them according to a program to produce output information. Data link 8 may be a conventional cable link, or any other convenient means such as the Internet, and yield server computer 9 may be located at the company or site, or remotely therefrom. For example, yield server computer 9 may be a remote data processing facility, linked to manual data entry means 6 and ATEs 7 via an Internet link 8.
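The manner in which the heterogeneous test station data are normalised is not prescribed. As a minimal sketch, assuming a simple comma-separated record format (the field layout and field names here are illustrative assumptions only), each record might be parsed into a common form as follows:

```python
import csv
import io
from datetime import datetime

def parse_record(line: str) -> dict:
    """Parse one comma-separated test log record into a common form.

    Assumed field order: station id, workpiece id, product type,
    test name, PASS/FAIL result, ISO timestamp.
    """
    station, workpiece, product, test, result, stamp = next(
        csv.reader(io.StringIO(line)))
    return {
        "station": station,
        "workpiece": workpiece,
        "product": product,
        "test": test,
        "passed": result.strip().upper() == "PASS",
        "time": datetime.fromisoformat(stamp.strip()),
    }

record = parse_record(
    "ST-1,WP-0042,DVD-Player,RxAcousticLevel,PASS,2001-10-03T09:15:00")
```

In practice each manufacturer's encoding standard would require its own such normaliser before the yield server aggregates the data.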
The output information is stored and manipulated to produce both real time and time series information, which is communicated 10 to a number of output computers 11, and also used to trigger alarms and actions as discussed hereinafter. The output computers may be located remote from each other and from the site, for example, in a manager's home, enabling the manager at all times to receive information relating to the assembly process. Again, data links 10 may be conventional cable links, Internet connections, or any other convenient means.
Referring to Figures 4 to 14, the output information is displayed on the screen of each output computer by means of a number of user interfaces which contain links to one another and commands facilitating the display of information in graphical, time series and other configurations. Successive figures show user interfaces displaying output information in real time at increasingly high levels of detail, corresponding respectively to a site, lines within a site, test stages within a line, test stations within a test stage, and tests within a test station.
Output information may also be processed in real time by means of statistical techniques and algorithms to produce, for example, the real time statistical distribution of test results shown in Figure 23. The specific property may be calculated from a subset of test results relating to a particular product under test. This information will help for example to identify which of a large number of altered components has caused a change in the value of a specific property.
It can be seen how by means of the invention the total yield can be monitored in real time by the user, who would typically be the production manager. The total yield can be investigated further by the user to investigate group data, that is site or line data, and further still to investigate individual test data. The user can thus take whatever remedial action is necessary. Thus from a single location, the entire manufacturing process can be monitored in real time. It will be appreciated that there may be more than one user interface so that a number of users may have access to the yield data for the manufacturing process. This may be useful if consultation with individuals with particular expertise is required before particular action is taken. Also it may be desirable to monitor the whole process from different locations depending on the time of day by means of a connection to an external network. For example the process may be monitored by managers located in different parts of the world in different time zones so that the 24-hour management of the process can be maintained without the user working unsociable hours. The different sites may correspondingly be located in the different time zones, but this need not necessarily be the case. The user interface may also be located remotely at the home of the manager so that the system may be managed on an "on-call" basis without the requirement for the user/production manager to travel to the location of the site concerned.
The output information represents specific properties of the product and process, including the following:
1. Yield.
The yield at a test station is the proportion of the total number of workpieces entering the test station which pass the test, expressed as a percentage. The range of yield values for one test station is shown 121 in Figure 12, and the yield for each test station in a test stage is shown 91 in Figure 9. The product type being tested is shown for each test station at 94 in Figure 9. The yields for each test station are aggregated to produce the yield for each test stage, shown as percentages 71 in Figure 7. The yields for each test stage within a line are aggregated to produce the yield for the line, shown as a percentage 51 in Figure 5. The yields for each line may similarly be aggregated to produce the yield for the site. This interface is configured so that a click on the percentage display 51 will reveal the yield information for the test stages at the next level of detail. Clicking the Yield History button 52 will show the line yield as a time series 62 as shown in Figure 6. The yield may be calculated separately for each product type.
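This roll-up of yields from station to stage, line and site may be sketched as a simple aggregation of pass/fail counts over a hierarchy; the class and attribute names below are illustrative only and form no part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A test station, stage, line or site in the aggregation hierarchy."""
    name: str
    passed: int = 0
    failed: int = 0
    children: list = field(default_factory=list)

    def totals(self):
        """Roll up pass/fail counts from this node and all its children."""
        p, f = self.passed, self.failed
        for child in self.children:
            cp, cf = child.totals()
            p, f = p + cp, f + cf
        return p, f

    def yield_pct(self) -> float:
        p, f = self.totals()
        return 100.0 * p / (p + f) if (p + f) else 0.0

station_a = Node("ST-1", passed=95, failed=5)
station_b = Node("ST-2", passed=90, failed=10)
stage = Node("Stage-1", children=[station_a, station_b])
line = Node("Line-1", children=[stage])
```

The stage yield here aggregates both stations (185 passes out of 200, i.e. 92.5%), and the same roll-up produces the line and site figures.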
The yield indicators in each interface — for example, box 51 in Figure 5, boxes 101 and 102 in Figure 10, and boxes 131, 133, and 134 in Figure 13 — are continuously updated in real time. The yield will also have associated with it a range within which one would normally expect the yield value for the particular process or part of the process to lie, failing which one would know there was a problem which needed remedial action. The range for each test station, stage and line can be set directly by the process manager using the windows 151, 152, 153 provided in the user interfaces shown in Figures 15 to 18.
Information representing the yield may alternatively be presented as the proportion of workpieces which pass or fail each of the tests which are performed on them, calculated and presented if desired by product type. This information is presented in Figure 14 as the most commonly failed tests at one test station during each period of production at the site. The information is aggregated to produce the most commonly failed tests at each test station in a test stage, shown in Figure 11.
Clicking the Compare Failures button 95 in Figure 9 will show the top n reasons for failure as shown in Figure 11. The number n can be determined by the user; in the embodiment shown the user has selected the top 5 reasons. Clicking the failure button 93 in Figure 9 will show the top reasons for failure at that test station as shown in Figure 13. The information in Figure 13 can be displayed for different product types by selecting the required product type from the menu 132. Clicking anywhere on the graph in Figure 13 gives the display shown in Figure 14.
It will be appreciated that at each test station there may be a number of different tests. At the test station of this embodiment the tests that result in the most failures are shown. In the example shown the top failure test is the test for Rx Acoustic Level. A yield display 131 shows the current yield percentage for the selected test station in real time, continuously updated as the test station performs the tests on the products being manufactured. The population over which the most frequently occurring failures are calculated can be preset and reset by the user in order to achieve the most accurate results. Clicking on the graph itself will produce a time series plot of the top five failures as shown in Figure 14.
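The breakdown of the most frequently failed tests reduces to a frequency count over the selected population; a minimal sketch, assuming test records with illustrative "test" and "passed" fields (not prescribed by the system):

```python
from collections import Counter

def top_failures(results, n=5):
    """Return the n most frequently failed tests, most frequent first.

    results: iterable of dicts with "test" and "passed" keys.
    """
    counts = Counter(r["test"] for r in results if not r["passed"])
    return counts.most_common(n)
```

With n = 5, this reproduces the "top 5 reasons for failure" display of the embodiment described above.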
The interfaces of Figures 4, 5, 7 and 9 include time series buttons 41, 52, 72, 92 for each of the yield displays, which when clicked will reveal the historic yield for that display over a previous time period in the form of a graph of the percentage yield concerned over time, so the user can see the evolution of the yield for the test station(s) concerned.
The yield, or other specific property values relating to one product type, test station, group, line or site may be displayed and compared with the corresponding specific property values relating to other product types, test stations, groups, lines or sites.
2. Test Station Utilisation.
The Test Station Utilisation of a given test station is that proportion of the capacity of the test station for carrying out tests, which is used in any given time period. The Test Station Utilisation for one test station is shown 122 in Figure 12, and for each test station in a test stage as 81 in Figure 8. Alternatively, this information may be presented graphically as a time series.
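For illustration only, the Test Station Utilisation for a period reduces to a single ratio; in practice the capacity figure would be derived from the station's cycle time, an assumption not prescribed here:

```python
def utilisation_pct(tested: int, capacity: int) -> float:
    """Proportion of the station's testing capacity used in a period."""
    return 100.0 * tested / capacity if capacity else 0.0
```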
Similarly, the utilisation of other production equipment such as SMD, flash programmers and rework equipment may be calculated and presented.
3. Re-Test or Re-Work Value.
The Re-Test Value is that proportion of the workpieces entering a test station which pass the tests at that test station after passing through the test station more than once in a predetermined time period, which time period is defined by the user. The Re-Test Value for one test station is shown 123 in Figure 12, and for each test station in a test stage as 82, 82' in Figure 8.
The Re-Work Value is that proportion of the workpieces entering a test station which pass the tests at that test station after passing through the test station more than once outside the predetermined time period.
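The distinction between the Re-Test and Re-Work Values may be sketched by classifying repeat passes according to whether they fall inside or outside the user-defined time period; the tuple layout and function name are illustrative assumptions:

```python
from datetime import datetime, timedelta

def retest_rework(events, window=timedelta(hours=1)):
    """Classify workpieces which pass only after more than one attempt.

    events: (workpiece id, time, passed) tuples for one test station.
    Returns (retest, rework): ids whose repeat pass falls inside,
    respectively outside, the predetermined time period.
    """
    first_seen = {}
    retest, rework = set(), set()
    for wp, t, passed in sorted(events, key=lambda e: e[1]):
        if wp in first_seen and passed:
            (retest if t - first_seen[wp] <= window else rework).add(wp)
        first_seen.setdefault(wp, t)
    return retest, rework
```

Counting the two sets against the total number of workpieces entering the station then yields the two proportions defined above.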
4. Average Test Time
The Average Test Time is the time taken to test a predefined population of workpieces at a test station, divided by the size of the population. The Average Test Time for one test station is shown at 124 in Figure 12.
5. Average Tested per Hour
The Average Tested per Hour is the number of workpieces tested at a test station in a given one-hour period, shown at 125 in Figure 12.
6. Failed to Process
The Failed to Process figure is the number of test results from a given population which the central computer processor failed to process, and is shown for a single test station both as a total and as a percentage at 126 in Figure 12. This figure provides a means of indicating problems such as electrical faults in data carrying cables which might otherwise go unrecognised.
The performance of each test station may be monitored by testing a sample workpiece having precisely determined properties, and comparing the test station data with a previously stored sample of values relating to that workpiece.

Referring to Figures 15 to 18, configuration interfaces enable an operator to configure the system and to predefine system parameters which determine the way in which the test results are processed, output information is presented, and alarms and actions are produced as discussed hereinafter.
System parameters include the following:
1. Bin size.
Fig. 15 shows a window 154 wherein the bin size may be specified, being the number of test results or aggregated test results forming a sample population over which a further aggregated test result is computed.
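As an illustrative sketch only, the bin may be modelled as a fixed-size window of recent results over which the aggregate is recomputed; the class name is an arbitrary choice:

```python
from collections import deque

class Bin:
    """Fixed-size sample population over which an aggregate is computed.

    When the bin is full, each new result displaces the oldest one.
    """
    def __init__(self, size: int):
        self.results = deque(maxlen=size)

    def add(self, passed: bool):
        self.results.append(passed)

    def yield_pct(self) -> float:
        if not self.results:
            return 0.0
        return 100.0 * sum(self.results) / len(self.results)
```

Varying the bin size thus varies the data population over which the yield or other specific property value is calculated, as provided for above.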
2. No Data Timeout
Window 172 in Figure 17 allows the maximum time interval between consecutive test station data transmissions to be determined; if this interval is exceeded, the yield display (91 in Figure 9) for that test station changes colour, indicating that the test station is not working.
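A minimal check for this no-data condition, assuming timestamps expressed in seconds (an illustrative choice):

```python
import time

def station_stalled(last_seen: float, timeout_s: float,
                    now: float = None) -> bool:
    """True if the station has not transmitted within the timeout period."""
    if now is None:
        now = time.time()
    return (now - last_seen) > timeout_s
```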
3. Test Station Exclusion
The data from a given test station or group of test stations may be excluded from the calculation of aggregated specific properties by placing a tick in box 173 of Figure 17 or box 183 of Figure 18. This facilitates, for example, the use of specified test stations for temporary operations which will not be included in the overall calculations.
4. Alert values.
Fig. 17 shows a user interface providing windows 174, 174' wherein there may be defined a given proportion of failing test results out of a given population, which will result in an alert condition. Any aspect of the output information may be configured so as to trigger an alert. The alert is triggered by the value of a specific property of the process or product falling outside a predefined limit or range.
The alert condition may be indicated by a change in colour of a particular part of a user interface, indicating the part of the process causing the alert. Figure 21 shows a window wherein a display colour may be specified which will indicate an alert. A user interface may also be configured to automatically display information indicating an alert as soon as an alert occurs.
Alerts may be triggered by yield variations outside the threshold values set by the operator in windows 155, 155' in Figure 15, 161, 161' in Figure 16, 171, 171' in Figure 17 and 181, 181' in Figure 18. For example, where the yield rises above the threshold value, the yield display may turn green; where the yield falls below the threshold value, the yield display may turn red. A range of threshold values provides for a range of alert responses.
Clicking the Failure button 93 in Figure 9 or 73 in Figure 7 gives a top failures screen as shown in Figure 13, which shows all the failures at a given test station over a given population. A given frequency of failure in any test may be configured to trigger an alert response by setting the triggering number of failures and the size of the population within which they must occur in order to trigger an alert (for example, 5 failures out of any 10 consecutive workpieces passing through any one test) in windows 182, 182' in Figure 18.
Alerts may also be triggered by a comparison facility. For example, the value of a given specific property at one test stage may be compared with the value of the given specific property at a second test stage, and an alert triggered if there is a difference of for example more than 5% between the two values. Similarly, the yield for a first line may be compared with the yield for a second line, and an alarm triggered where the two values differ by more than a predefined percentage.
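The comparison facility reduces to a threshold test on the difference between two specific property values; a sketch, taking the 5% figure of the example above as an illustrative default:

```python
def comparison_alert(value_a: float, value_b: float,
                     limit: float = 5.0) -> bool:
    """True if two specific property values differ by more than the limit."""
    return abs(value_a - value_b) > limit
```

The same test applied to the yields of two lines gives the line comparison alarm described above.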
Referring to Figure 6, a given range of values for a given specific property may be configured to trigger a visual indicator such as a product change label 61, which appears on the time series information to indicate the point at which the predefined value change occurred. The content of the label is determined by the user when defining the values by which it is triggered. In this way a change such as, for example, the use of a new component in the assembly process, may be identified in a test by the range of values of a specific property associated with that component, and the point at which the new component was introduced will then be clearly indicated on all relevant time series information. Alternatively a change in the assembly process or in an ambient environmental condition, for example, humidity, may be similarly identified and labelled.
An alert may also trigger an alarm, which may be an audible or visible warning device. Alternatively the computer network may be configured to carry out an action, such as intervening in the assembly process. Figure 20 shows a user interface wherein contact information may be specified enabling the computer network to telephone, email or otherwise contact a designated person in the event of an alert.
An alert may also trigger the production of a management report showing details of the situation triggering the alert. Alternatively management reports may be generated on demand, or automatically in predefined circumstances, such as at a particular time of the day.
The specific properties measured may include non product related data, including environmental conditions such as, for example, measurements of ambient temperature, humidity, or external RF interference, and these data may be correlated with changes in the measured values of the specific properties of the product or products under test.
In a further embodiment of a further aspect of the invention, Figure 22 shows a further user interface indicating the status of various parts of the assembly process, including the test stations. An indicator 220 for each part shows whether it is functioning or not. The time series buttons enable an operator to demand and view a time series presentation of the status information, presenting the operator with a historical record of the functioning of each part of the assembly process. The operator may thus instantly assess the status and downtime of each critical stage in the assembly process, and immediately identify any problems as soon as they occur.
Several embodiments of the invention have now been described in detail. It is to be noted, however, that these descriptions of specific embodiments are merely illustrative of the principles underlying the inventive concept. It is contemplated that various modifications of the disclosed embodiments, as well as other embodiments of the invention will, without departing from the spirit and scope of the invention, be apparent to persons skilled in the art.

Claims

1. A method of monitoring the assembly of a product from a workpiece at at least one test station (6, 7) comprising:
a. receiving a first test result from a first measurement at a first test station;
b. receiving at least one second test result from a corresponding second measurement;
c. communicating (8) the first and second test results to a central computer processor (9);
d. processing the test results to calculate the value of a specific property of the test results; and
e. providing an output display (51, 71, 122, 123) of the value of the specific property in real-time.
2. A method according to claim 1, characterized in that the test results are stored in a computer memory.
3. A method according to claim 2, characterized in that at least the first test result is compared with subsequent or previous test results.
4. A method according to claim 3, characterized in that the test results are processed to calculate and display (62) the change over time of the value of the specific property.
5. A method according to any preceding claim, characterized in that the test results are processed according to a statistical algorithm to provide a statistical analysis of one or more values (51, 71, 122, 123) of one or more specific properties.
6. A method according to any preceding claim, characterized in that there is further performed at least one additional measurement of an environmental condition, the results of the additional measurement being processed and correlated with the test results.
7. A method according to any preceding claim, characterised in that the value of the specific property is calculated from a subset of test results selected (132) according to the type of product being assembled.
8. A method according to any preceding claim, characterized in that the specific property is the yield (91, 62) which is the proportion of workpieces having a positive result of the total passing through a test station, usually expressed as a percentage.
9. A method according to any preceding claim, characterized in that the specific property is the test station utilization (122) which is the proportion of workpieces passing through a test station compared to the total that could pass through the same test station, in a given time period.
10. A method according to any preceding claim, characterized in that the specific property is the re-test (123) or re-work value, wherein the re-test value is the proportion of workpieces which have a positive result at the test station having been passed through the same test station (6,7) more than once within a predetermined time period, and the re-work value is the proportion of workpieces which have a positive result at the test station having been passed through the same test station more than once outside the predetermined time period.
11. A method according to any preceding claim, characterized in that the test results from each of the test stations, provided in a group (4) of test stations, are aggregated to produce a group specific property for the group of test stations.
12. A method according to claim 11, characterized in that the group specific properties are aggregated to produce a line specific property (51) corresponding to a number of groups (4) of test stations arranged together in a production line (3).
13. A method according to claim 12, characterized in that the line specific properties are aggregated to produce a site specific property for a number of lines of test stations arranged at a particular site (2).
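The station-to-group-to-line-to-site aggregation of claims 11 to 13 can be sketched as pooling pass/total counts at each level. This is an illustrative assumption, not the patent's implementation: the site layout, names, and counts below are invented, and pooling counts (rather than averaging percentages) is one reasonable way to weight each level's yield by throughput.

```python
def aggregate_yield(station_results):
    """Pool (passed, total) counts so each level's yield is weighted by
    throughput rather than being a simple average of percentages."""
    passed = sum(p for p, t in station_results)
    total = sum(t for p, t in station_results)
    return 100.0 * passed / total if total else 0.0

# Hypothetical site layout: site -> lines -> groups -> stations,
# each station reporting a (passed, total) pair.
site = {
    "line1": {"group1": [(90, 100), (45, 50)], "group2": [(30, 40)]},
    "line2": {"group3": [(70, 100)]},
}

# Claim 11: group specific property from the group's stations.
group_yields = {g: aggregate_yield(stations)
                for line in site.values() for g, stations in line.items()}

# Claim 12: line specific property from the line's groups.
line_yields = {ln: aggregate_yield([s for grp in groups.values() for s in grp])
               for ln, groups in site.items()}

# Claim 13: site specific property from all lines.
site_yield = aggregate_yield([s for groups in site.values()
                              for grp in groups.values() for s in grp])
```

Claim 15's exclusion of selected stations would then amount to filtering the (passed, total) pairs before aggregation.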
14. A method according to any of claims 11 to 13, characterized in that the value of a specific property relating respectively to a test station (6, 7), group of test stations (4), line of test stations (3) or site (2) is compared with the corresponding value of that specific property relating respectively to other test stations, groups of test stations, lines of test stations or sites.
15. A method according to any of claims 11 to 14, characterized in that a user may elect to exclude (173) the test results from one or more test stations from said aggregated test results.
16. A method according to any preceding claim, characterized in that the test results are processed to calculate and display a breakdown of failed tests, with the most frequently failed tests displayed first.
17. A method according to any preceding claim, characterized in that for each test station or number of test stations, a user may define a visual indicator (61) which is displayed when a specific property change occurs such that the specific property value falls outside a predetermined range for that value, the visual indicator being displayed on a graphical presentation of the processed results (62) adjacent the point corresponding to the specific property change.
18. A method according to any preceding claim, characterized in that for each test station or number of test stations, an alarm is triggered when a specific property change occurs such that the value of a given specific property of x out of the y most recently tested workpieces falls outside a predetermined range for that value, where x and y are numbers determined (182, 182') by the user.
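The x-out-of-y alarm of claim 18 can be sketched as a sliding window over the most recent measurements. This is a minimal illustration, not the patent's implementation; the class name, the range bounds, and the example parameters are all hypothetical.

```python
from collections import deque

class XofYAlarm:
    """Claim 18: trigger an alarm when at least x of the y most recently
    tested workpieces fall outside the allowed range for a property."""
    def __init__(self, x, y, low, high):
        self.x, self.low, self.high = x, low, high
        self.recent = deque(maxlen=y)  # sliding window of the last y outcomes

    def observe(self, value):
        # Record True when the value is out of range; the deque's maxlen
        # discards the oldest outcome once y observations are held.
        self.recent.append(not (self.low <= value <= self.high))
        return sum(self.recent) >= self.x

# Example: alarm when 3 of the last 5 values leave the range 9.5..10.5.
alarm = XofYAlarm(x=3, y=5, low=9.5, high=10.5)
```

Requiring x out of y, rather than a single excursion, suppresses alarms from isolated outliers while still reacting quickly to a sustained drift.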
19. A method according to any preceding claim, characterized in that the number of measurements over which the value of the specific property is calculated may be varied (154) by the user.
20. A method according to any preceding claim, characterized in that the at least one first test result is produced according to a first test result format and at least one second test result is produced according to a second different test result format.
21. A method according to any preceding claim, characterized in that the working condition of at least two test stations is measured and communicated continuously to the computer processor and displayed (220) on a screen in real time.
22. A method according to any preceding claim, characterized in that the working condition of at least two test stations is recorded over time and stored in a computer memory to provide a history of the working condition of one or more test stations.
PCT/GB2001/004408 2000-10-06 2001-10-04 Method of monitoring the assembly of a product from a workpiece WO2002029733A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001292074A AU2001292074A1 (en) 2000-10-06 2001-10-04 Method of monitoring the assembly of a product from a workpiece

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0024635.5A GB0024635D0 (en) 2000-10-06 2000-10-06 Method of monitoring the manufacture of a product from a workpiece
GB0024635.5 2000-10-06

Publications (1)

Publication Number Publication Date
WO2002029733A1 true WO2002029733A1 (en) 2002-04-11

Family

ID=9900883

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2001/004408 WO2002029733A1 (en) 2000-10-06 2001-10-04 Method of monitoring the assembly of a product from a workpiece

Country Status (3)

Country Link
AU (1) AU2001292074A1 (en)
GB (2) GB0024635D0 (en)
WO (1) WO2002029733A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8555206B2 (en) * 2007-12-21 2013-10-08 Fisher-Rosemount Systems, Inc. Methods and apparatus to present recipe progress status information

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4718025A (en) 1985-04-15 1988-01-05 Centec Corporation Computer management control system
WO1991001528A1 (en) 1989-07-18 1991-02-07 Intaq, Inc. Method and apparatus for data collection of testing and inspection of products made on a production assembly line
DE3930551A1 (en) * 1989-08-09 1991-02-14 Egm Entwicklung Montage Manufacturing product monitoring for properties w.r.t. pressure medium - subjecting it to medium and measuring pressure parameters taking account of leakage near test device
US5134574A (en) * 1990-02-27 1992-07-28 The Foxboro Company Performance control apparatus and method in a processing plant
FR2698470A1 (en) * 1992-11-26 1994-05-27 Kodak Pathe Continuous monitoring in real-time of complex fabrication process e.g. photographic film mfr - uses local monitoring of process data to send data to central station for conversion to frequency variation to allow testing.
US5440478A (en) * 1994-02-22 1995-08-08 Mercer Forge Company Process control method for improving manufacturing operations
US5631839A (en) * 1994-07-18 1997-05-20 Eastman Kodak Company Device for controlling the parameters of a manufacturing process
US5717456A (en) * 1995-03-06 1998-02-10 Champion International Corporation System for monitoring a continuous manufacturing process

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5440475A (en) * 1993-11-08 1995-08-08 Energy Savings, Inc. Electronic Ballast with low harmonic distortion

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1591966A3 (en) * 2004-04-30 2006-02-01 Omron Corporation Quality control apparatus and control method of the same, and recording medium recorded with quality control program
US7324862B2 (en) 2004-04-30 2008-01-29 Omron Corporation Quality control apparatus and control method of the same, and recording medium recorded with quality control program
EP2256696A1 (en) * 2004-04-30 2010-12-01 Omron Corporation Quality control apparatus and control method of the same, and recording medium recorded with quality control program
GB2417072A (en) * 2004-08-13 2006-02-15 Mv Res Ltd A machine vision inspection system and method
WO2018220373A1 (en) * 2017-06-01 2018-12-06 Renishaw Plc Production and measurement of workpieces
CN110691955A (en) * 2017-06-01 2020-01-14 瑞尼斯豪公司 Production and measurement of workpieces
CN110691955B (en) * 2017-06-01 2022-12-23 瑞尼斯豪公司 Production and measurement of workpieces
US11693384B2 (en) 2017-06-01 2023-07-04 Renishaw Plc Production and measurement of workpieces

Also Published As

Publication number Publication date
GB0123891D0 (en) 2001-11-28
GB0024635D0 (en) 2000-11-22
GB2372610A (en) 2002-08-28
AU2001292074A1 (en) 2002-04-15

Similar Documents

Publication Publication Date Title
US6295478B1 (en) Manufacturing process change control apparatus and manufacturing process change control method
KR100216066B1 (en) Control system and control method for ic test process
CN100383780C (en) Machine management system and message server used for machine management
US5339257A (en) Real-time statistical process monitoring system
US20050246119A1 (en) Event occurrence graph
EP3270250B1 (en) Method and system for remote monitoring of power generation units
JP6756374B2 (en) Process error status diagnostic device and error status diagnosis method
KR101233264B1 (en) Plant and Building Facility Monitoring Apparatus and Technique based on Sector Graphs
CN112925279A (en) Fault comprehensive analysis system based on MES system
KR20080070543A (en) Early warning method for estimating inferiority in automatic production line
US7555405B2 (en) Computerized method for creating a CUSUM chart for data analysis
JP2000259223A (en) Plant monitoring device
JP5621967B2 (en) Abnormal data analysis system
CN114721352B (en) State monitoring and fault diagnosis method and system of DCS (distributed control system)
US7023337B2 (en) Spray gun control operator interface
JP2009070052A (en) Monitoring device and program
US8014972B1 (en) Computerized method for creating a CUSUM chart for data analysis
CN109101398A (en) AOI wire body monitoring method and system
CN113468022B (en) Automatic operation and maintenance method for centralized monitoring of products
US20200401596A1 (en) Test data integration system and method thereof
CN115037603A (en) Diagnosis evaluation method, device and system of electricity consumption information acquisition equipment
US20100324700A1 (en) Facilities control device and facilities control method
WO2002029733A1 (en) Method of monitoring the assembly of a product from a workpiece
WO2014164610A1 (en) Analyzing measurement sensors based on self-generated calibration reports
CN115640860B (en) Electromechanical equipment remote maintenance method and system for industrial cloud service

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP