WO2000067168A2 - Account fraud scoring - Google Patents

Account fraud scoring

Info

Publication number
WO2000067168A2
WO2000067168A2 PCT/GB2000/001669
Authority
WO
WIPO (PCT)
Prior art keywords
account
alarms
fraud
alarm
score
Prior art date
Application number
PCT/GB2000/001669
Other languages
French (fr)
Other versions
WO2000067168A3 (en)
Inventor
Ben Grady
Philip William Hobson
Graham Jolliffe
Original Assignee
Nortel Networks Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nortel Networks Limited
Priority to CA002371730A priority Critical patent/CA2371730A1/en
Priority to EP00925506A priority patent/EP1224585A2/en
Priority to AU44227/00A priority patent/AU4422700A/en
Priority to IL14637300A priority patent/IL146373A0/en
Publication of WO2000067168A2 publication Critical patent/WO2000067168A2/en
Publication of WO2000067168A3 publication Critical patent/WO2000067168A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q99/00 Subject matter not provided for in other groups of this subclass

Definitions

  • the first column simply assigns a number to each of the main alarm types listed in column 2. Rows having no explicitly named alarm type relate to the same alarm type as appears most closely above.
  • Column 11 shows the effect of applying the sub-capability factor, velocity factor and bucket factor to each basic alarm capability factor.
  • Column 12 is blank, indicating that all the accounts listed in columns 15-32 are considered in this example to be well-established accounts, with a default account age factor of 1.0. In the case of newly opened accounts a higher account age factor, for example 1.2, might be employed.
  • Columns 15-32 show nine examples of account fraud score calculations for separate accounts. Each successive pair of columns shows how many of each kind of alarm have been raised against that account, alongside the fraud score associated with that alarm.
  • a base account fraud score is shown (being the maximum fraud score computed for any alarm raised against that account) along with the total number of alarms raised against that account.
  • the resulting account fraud scores range from 60.25 on account 7 to 90.65 on account 6.
  • Too many elements in the scoring equation tend to make it very volatile, with a higher probability of algorithmic inaccuracies and an increased risk of any such errors causing a ricochet effect through the fraud scoring engine.
  • the margin for error in configuring the scoring mechanism, and indeed the parameters for the rules and thresholds themselves, is also reduced as the number of elements increases since they are the building blocks on which scoring is based.
  • the Alarm Capability Factor indicates the hierarchical position of the risk associated with a given alarm relative to the risks associated with other alarms.
  • the Sub-Capability Factor gives a further refinement of the indication of the hierarchical position of the risk associated with a given alarm relative to the risks associated with other alarms.
  • Bucket Factor is a measure of the volume of the potential fraud.
  • Velocity Factor is a measure of the rate at which the fraud is being perpetrated.
  • Account Age Factor is a measure of how old the account is: a new account's behaviour may be less predictable than older, established usage patterns, and more susceptible to fraud.
  • the Account Fraud Score created should accurately reflect the level of risk associated with the course of events causing the production of an alarm. This calculation should primarily consider the speed with which money is and may be defrauded, and the volume of revenue defrauded, as these indicate loss to the telecommunications company concerned; questions of cost are always paramount. For example if a criminal has used $5,000 worth of traffic over 4 hours, this is more significant than if the same individual had done so over 8 hours.
  • the Sub-Capability Factor is added to increase or decrease the risk associated with specific types of alarm.
  • Many alarm types have a finer level of granularity as appropriate to that specific alarm.
  • Many alarm types are sub-divided, for example, into different sub-types of alarms for different call destinations as the inherent risk is different for different destinations. For example international calls are more often associated with fraud than calls to mobile telephones.
  • Trigger Value divided by Threshold Value accurately and expeditiously alarms any account where there is a large sudden increase in traffic for that customer. This is because, for example, the 1 hour bucket will always have the lowest threshold for a given capability and therefore any increase in traffic will proportionately increase the fraud score more in any 1 hour bucket than in a corresponding longer period.
  • a single extra unit of traffic represents a 2% rise to the 1 hour bucket but only a 1% rise for the 4 hour bucket.
  • an account age factor may be applied to increase the risk score associated with new accounts. Over time, the account operator's knowledge of each customer will improve as more data (such as payment information, bank details, and call patterns) is received about normal usage patterns and, as a consequence, it will become less likely that the customer will attempt to perpetrate a fraud.
  • an account age factor of 1.2 might be applied, whilst an established account may have a factor of 1.
  • performance of certain confirmatory functions by the account owner may be required after certain time periods and, if the account owner fails to perform these, the account will be suspended.
  • a bucket is a time duration over which an alarm has been raised.
  • the velocity factor (Trigger Value / Threshold Value) and bucket factor are both superfluous in conjunction with the above alarm types (though for simplicity they may be assigned nominal values of 1, which when applied will have a null modifying effect); the only true modifier is Account Age.
  • the score resulting directly from the combinations of factors listed above may exceed reasonable bounds, for example in cases where many factors each have a high value individually indicative of high fraud risk. This may give rise to fraud scores well outside normal range. Whilst such scores may be left unamended, since their high value will clearly stand out relative to other scores, it is also reasonable to take the approach that score values beyond a given threshold all be treated equally since, with such high scores all indicative of high fraud risk, there is little benefit in differentiating between them: at those score levels the difference in score is more likely to be an artefact of the scoring system than the actual differentiation of fraud risk. The same approach may be applied to very low scores. In such cases then, scores may be normalised to lie within fixed bounds: scores lying above or below those bounds being amended to the maximum or minimum bound as appropriate. In practice such a situation should not be common due to the accuracy of the various factor figures given.
  • an Account Fraud Score may be normalised within the calculation to ensure that a normalised score between 0 and 100 is produced. All scores under or equal to 0 will be mapped to 0; all scores over or equal to 100 will be mapped to 100.
  • a situation may occur where multiple alarms are raised for one account in one poll, and it is desirable to cater for this in determining an Account Fraud Score. The decision on how to treat multiple alarm breaches is based on an assessment of whether there is a greater chance of fraud in an account with multiple threshold breaches or alarms.
  • the risk associated with an alarm of type A and an alarm of type B together may be less than, equal to, or greater than the risk associated with one alarm of type C.
  • time slot, in isolation or combined with Account Type, will add an extra dimension to the calculation of Account Fraud Score. Different frauds may be perpetrated at different times of day, with certain traffic types representing a greater risk at night or at the weekend.
  • the percentage confidence calculated by the neural network is used as the alarm capability factor and processed as per other alarms.
  • the confidence given by the neural network must be integral to the score given for that alarm, since the confidence is a statement as to the probability that an account is exhibiting fraudulent behaviour.
  • the confidence should be the basis for any calculation and accordingly is used as the prime factor in calculating the Account Fraud Score: the alarm capability factor. Furthermore, the alarm confidence for fraudulent neural network alarms must pass unaffected from alarm confidence to individual alarm capability factor, except for a standardisation factor which converts the percentage into an alarm priority proportionate to the other alarm priorities and to its value in terms of assessing and quantifying risk. In short, the figure should be adjusted only to ensure it is relative to other alarm capability factors; changing it more than minimally would detract from the value of the neural network's confidence calculation.
  • Alarm Capability Factor = AlarmConfidence(NN(F)) / X (4) where AlarmConfidence(NN(F)) is the Neural Network Fraudulent Alarm Confidence and X is a standardisation factor for Neural Network Fraudulent Alarms.
  • Neural Network Fraudulent alarms must be assessed alongside all other alarms generated, or persisting, for an account in order to ensure that the alarm, and the account, posing the most risk is prioritised above the remainder.
  • This proposed “clean” processing keeps the ordering by Account Fraud Scoring as pure as possible; the assigned confidence is not adjusted by other factors outside the neural network although it is integrated within the scoring process.
  • the Alarm Capability Factor is a fixed figure for Neural Network Expected Alarms and Threshold alarms, while for Neural Network Fraudulent Alarms the confidence is standardised to associate a relational and reasonable level of significance.
  • the method takes different alarms or other types of information, homogenises them by scoring the risk embodied in each element of the mechanism, takes the highest scored alarm for each account at any one time, and then adds an extra value to the score dependent upon the number of alarms raised. The resulting value is the account fraud score.
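The proportionate effect of bucket size on the velocity factor (a single extra unit of traffic raising a short bucket's score more than a long bucket's) can be sketched numerically. The threshold values of 50 and 100 units below are illustrative assumptions only; the document does not fix them.

```python
# Velocity Factor = Trigger Value / Threshold Value.
def velocity_factor(trigger_value, threshold_value):
    return trigger_value / threshold_value

# Illustrative thresholds (assumed, not from the document): 50 units of
# traffic for the 1 hour bucket, 100 units for the 4 hour bucket.
one_hour_threshold = 50
four_hour_threshold = 100

# One extra unit of traffic on top of an exactly met threshold:
rise_1h = velocity_factor(one_hour_threshold + 1, one_hour_threshold) - 1.0
rise_4h = velocity_factor(four_hour_threshold + 1, four_hour_threshold) - 1.0

print(f"1 hour bucket rise: {rise_1h:.0%}")  # 1 hour bucket rise: 2%
print(f"4 hour bucket rise: {rise_4h:.0%}")  # 4 hour bucket rise: 1%
```

Because the 1 hour bucket always carries the lowest threshold for a given capability, the same traffic increase lifts its velocity factor, and hence the fraud score, proportionately more.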
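The normalisation of Account Fraud Scores into the stated 0 to 100 range amounts to a simple clamp, sketched minimally below.

```python
def normalise_score(score, lower=0.0, upper=100.0):
    """Clamp an Account Fraud Score into the fixed [lower, upper] range.

    Scores at or below the lower bound map to the lower bound; scores at
    or above the upper bound map to the upper bound; all other scores
    pass through unchanged.
    """
    return max(lower, min(upper, score))

print(normalise_score(-12.5))  # 0.0
print(normalise_score(64.2))   # 64.2
print(normalise_score(137.0))  # 100.0
```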
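Equation (4), the standardisation of neural network confidence into an Alarm Capability Factor, can be sketched as follows. The value X = 10.0 is purely an illustrative assumption; the document does not give the standardisation factor's value.

```python
# Equation (4): Alarm Capability Factor = AlarmConfidence(NN(F)) / X.
# X standardises the percentage confidence against the fixed capability
# factors of other alarm types; 10.0 here is an illustrative assumption.
X = 10.0

def nn_alarm_capability_factor(confidence_percent, standardisation=X):
    return confidence_percent / standardisation

# A 95% confident NN(F) alarm maps to a capability factor of 9.5,
# on the same scale as the fixed factors of other alarm types.
print(nn_alarm_capability_factor(95.0))  # 9.5
```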

Abstract

A method and apparatus for prioritising alarms in an account fraud detection system. The method involves assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account, and computing a fraud score for that alarm responsive to those numeric weights. Numeric bounds may be imposed on the score, and a term may be added dependent on the number of alarms raised on the account.

Description

ACCOUNT FRAUD SCORING
FIELD OF THE INVENTION
The present invention relates to a method and apparatus for account fraud scoring and a system incorporating the same.
BACKGROUND TO THE INVENTION
In recent years there has been a rapid increase in the number of commercially operated telecommunications networks in general and wireless telecommunication networks in particular. Associated with this proliferation of networks is a rise in fraudulent use of such networks, the fraud typically taking the form of gaining illicit access to the network and then using the network in such a way that the fraudulent user hopes subsequently to avoid paying for the resources used. This may for example involve misuse of a third party's account on the network, so that the perpetrated fraud becomes apparent only when the third party is charged for resources which he did not use.
In response to this form of attack on the network, fraud detection tools have been developed to assist in the identification of such fraudulent use. Such a fraud detection tool may, however, produce thousands of alarms in one day. In the past these alarms have been ordered either chronologically according to when they have occurred, or in terms of their importance, or a combination of both. Alarm importance provided a rudimentary order based on the significance of the alarm raised, although it has many failings: such a system takes no account of how alarms interact.
Since fraudulent use of a single account can cost a network operator a large sum of money within a short space of time, it is important that the operator be able to identify and deal with the most costly forms of fraud at the earliest possible time. The existing methods of chronological ordering and alarm importance ordering are, however, inadequate in that regard.
OBJECT OF THE INVENTION
The invention seeks to provide an improved method and apparatus for classifying and prioritising identified instances of potential account fraud.
SUMMARY OF THE INVENTION
According to a first aspect of the present invention there is provided a method of prioritising alarms in an account fraud detection system comprising the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account; computing a fraud score for said alarm responsive to said numeric weights.
Advantageously, the score gives a meaningful representation of the seriousness of a potential fraud associated with the raised alarm.
Preferably, said step of computing comprises the step of: forming a product of a plurality of said numeric weights.
According to a further aspect of the present invention there is provided a method of prioritising alarms in an account fraud detection system comprising the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account; computing a fraud score for each of said one or more alarms responsive to said numeric weights; computing an account fraud score responsive to said one or more fraud scores.
Preferably, said step of computing a fraud score comprises the step of: forming a product of a plurality of said numeric weights.
Preferably, said step of computing an account fraud score comprises the step of: selecting a largest of said one or more fraud scores.
Preferably, said step of computing an account fraud score comprises the step of: imposing a numeric bound on the value of said account fraud score.
Preferably, said step of computing an account fraud score for each of said one or more alarms comprises the step of: adding a term dependent on the number of alarms raised.
Preferably, said step of computing an account fraud score comprises the steps of: selecting a largest of said fraud scores; adding a term dependent on the number of alarms raised.
Advantageously, this prioritises accounts according to the seriousness of potential fraud associated with them.
According to a further aspect of the present invention there is provided a method of prioritising alarms in an account fraud detection system comprising the steps of: performing the method of claim 3 on a plurality of accounts whereby to compute an account fraud score for each of said accounts; providing a sorted list of accounts responsive to said account fraud scores.
The method may also comprise the step of: displaying said sorted list of accounts.
Advantageously, this allows an operator to rapidly identify high risk account usage and hence concentrate resources on those high risk, potentially high cost frauds.
Preferably, the step of displaying said sorted list of accounts comprises the step of: displaying with each account an indication of its associated account fraud score.
In a preferred embodiment, said characteristics include one or more characteristics drawn from the set consisting of: alarm capability, alarm sub-capability, velocity, bucket size, and account age.
The invention also provides for a system for the purposes of fraud detection which comprises one or more instances of apparatus embodying the present invention, together with other additional apparatus.
According to a further aspect of the present invention there is provided an apparatus arranged for prioritising alarms in an account fraud detection system comprising: first apparatus arranged to assign a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account; second apparatus arranged to compute a fraud score for said alarm responsive to said numeric weights.
According to a further aspect of the present invention there is provided an apparatus arranged for prioritising alarms in an account fraud detection system comprising: first apparatus arranged to assign a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account; second apparatus arranged to compute a fraud score for each of said one or more alarms responsive to said numeric weights; third apparatus arranged to compute an account fraud score responsive to said one or more fraud scores.
According to a further aspect of the present invention there is provided software on a machine readable medium arranged for prioritising alarms in an account fraud detection system and arranged to perform the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account; computing a fraud score for said alarm responsive to said numeric weights.
According to a further aspect of the present invention there is provided software on a machine readable medium arranged for prioritising alarms in an account fraud detection system and arranged to perform the steps of: assigning a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account; computing a fraud score for each of said one or more alarms responsive to said numeric weights; computing an account fraud score responsive to said one or more fraud scores.
The preferred features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to show how the invention may be carried into effect, embodiments of the invention are now described below by way of example only and with reference to the accompanying figures in which:
Figure 1 shows a schematic diagram of an account fraud scoring apparatus in accordance with the present invention.
Figure 2 shows a schematic diagram of an account fraud prioritising apparatus in accordance with the present invention.
Figures 3(a)-(d) show successive columns of a table giving examples of account fraud score calculations in accordance with the present invention.
DETAILED DESCRIPTION OF INVENTION
Referring to Figure 1, there is shown a schematic diagram of a system arranged to perform account fraud scoring. In particular the system shown relates to telecommunications system account fraud scoring and comprises a source 100 of Call Detail Records (CDRs) arranged to provide CDRs to a plurality of fraud detectors 110, 120. In this specific embodiment, a first detector 110 is a neural network whilst a second detector 120 is arranged to apply thresholds (and/or rules) to the received CDRs.
The neural network fraud detector 110 is arranged to receive a succession of CDRs and to provide in response a series of outputs indicating either a Neural Network Fraudulent Alarm (NN(F)), a Neural Network Expected Alarm (NN(E)), or a third category not indicative of an alarm. (The third category may be implemented by the neural network not generating an output.)
Each NN(E) alarm provided by the neural network 110 is then mapped 111 to an associated Alarm Capability Factor (ACF) which is a numeric value indicative of the importance or risk associated with the alarm.
Each NN(F) provided by the neural network 110 is mapped 112 to a confidence level indicative of the confidence with which the neural network predicts that the account behaviour which raised the alarm is fraudulent. This confidence level may then be normalised with respect to the Alarm Capability Factors arising from NN(E)'s and Threshold alarms (described below) to provide an Alarm Capability Factor for each NN(F).
The threshold detector 120 is arranged to receive a succession of CDRs from the CDR source 100 and to provide in response a series of outputs indicative of whether the series of CDRs to date has exceeded any of one or more threshold values associated with different characteristics of the CDR series, any one of which might be indicative of fraudulent account usage.
Fraud score 140 is then calculated 130 from the Alarm Capability Factors (ACF), Velocity Factors (VF), and Bucket Factor (BF) which are described in detail below. In a preferred embodiment, the score is calculated as a product:
Fraud Score = Alarm Capability Factor x Velocity Factor x Bucket Factor (1)
In a preferred embodiment, a further factor, a sub-capability factor, is added to the equation to cater for variations of risk within a given broad category of alarms associated with the alarm capability factor.
Fraud Score = Alarm Capability Factor x Velocity Factor x Bucket Factor x Alarm Sub-Capability Factor (2)
Fraud scores are computed for each alarm type raised against a given account and the highest of these scores is taken as the base account fraud score.
An additional term is then added which takes into account the fact that multiple alarms on the same account may be more indicative of a potential fraud risk than a single alarm. In a most preferred embodiment a fixed multiple alarm factor is determined and a multiple of this factor is added to the base account fraud score to give a final account fraud score. The multiple used is simply the number of alarms on the account.
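The calculation just described, equation (2) for each alarm followed by the maximum per-alarm score plus the multiple alarm term, can be sketched as follows. The factor values and the fixed multiple alarm factor of 0.5 are illustrative assumptions, not figures from the description.

```python
# Per-alarm score, equation (2):
#   Fraud Score = Alarm Capability Factor x Velocity Factor
#                 x Bucket Factor x Alarm Sub-Capability Factor
def alarm_fraud_score(acf, velocity, bucket, sub_capability=1.0):
    return acf * velocity * bucket * sub_capability

# Fixed multiple alarm factor (illustrative assumption), added once per
# alarm raised on the account, on top of the highest per-alarm score.
MULTIPLE_ALARM_FACTOR = 0.5

def account_fraud_score(alarm_scores, multiple_alarm_factor=MULTIPLE_ALARM_FACTOR):
    if not alarm_scores:
        return 0.0
    base = max(alarm_scores)  # base account fraud score
    return base + multiple_alarm_factor * len(alarm_scores)

scores = [
    alarm_fraud_score(acf=50.0, velocity=1.25, bucket=1.5),  # 93.75
    alarm_fraud_score(acf=40.0, velocity=1.0, bucket=1.0),   # 40.0
]
print(account_fraud_score(scores))  # 93.75 + 0.5 * 2 = 94.75
```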
Details of these specific factors and others are given in more detail below.
Turning now to Figure 2, the account fraud scoring system 1 of Figure 1 typically forms part of a fraud detection system.
The CDR data 100 provided to the scoring mechanism 210 described above is obtained from the telecommunications network 200.
The resulting account fraud scores calculated per account may then be sorted (220) so as to identify those accounts most suspected of being used fraudulently. This information may then be presented to an operator via, for example, a Graphical User Interface (GUI) 230, either simply by listing the accounts in order of fraud likelihood, or by also showing some indication of the associated account fraud score (for example by displaying the actual account fraud score), or by any other appropriate means.
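The sort-and-present step can be sketched as follows; the account identifiers and score values are invented for illustration.

```python
# Ranking accounts by account fraud score, highest risk first, as the
# GUI step (230) would present them to a fraud analyst.
accounts = {"acct-1": 60.25, "acct-2": 90.65, "acct-3": 74.0}

ranked = sorted(accounts.items(), key=lambda kv: kv[1], reverse=True)

for name, score in ranked:
    print(f"{name}: {score}")
```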
Referring now to the table shown in Figures 3(a)-(d), an example is given of the numerical values assigned to the various account characteristics.
The first column simply assigns a number to each of the main alarm types listed in column 2. Rows having no explicitly named alarm type relate to the same alarm type as appears most closely above.
Column 4 similarly lists alarm sub-types where applicable whilst column 9 indicates bucket size for two applicable alarm types.
Columns 3, 5, 8, and 10 respectively list the alarm capability factors, sub-capability factors, velocity factors, and bucket factors associated with each alarm variant.
In the table shown no specific traffic values and threshold values are shown, since these are specific to a particular account at a particular time. Instead, typical resulting velocity factor values (e.g. 1, 1.35) are shown in column 8 for illustrative purposes.
Column 11 shows the effect of applying the sub-capability factor, velocity factor and bucket factor to each basic alarm capability factor.
Column 12 is blank, indicating that all the accounts listed in columns 15-32 are considered in this example to be well-established accounts, with a default account age factor of 1.0. In the case of newly opened accounts a higher account age factor, for example 1.2, might be employed.
Column 13 shows the effect of applying the account age factor to the product of preceding factors shown in column 11.
Columns 15-32 show nine examples of account fraud score calculations for separate accounts. Each successive pair of columns shows how many of each kind of alarm have been raised against that account, alongside the fraud score associated with that alarm.
At the foot of each pair of columns, a base account fraud score is shown (being the maximum fraud score computed for any alarm raised against that account) along with the total number of alarms raised against that account. These two figures, in conjunction with the fixed multiple alarm fraud factor, set in this example at 0.65, are used to compute the final account fraud score in each case by adding to the base account fraud score a term being the fixed multiple alarm fraud factor times the number of alarms raised.
In the example shown, the resulting account fraud scores range from 60.25 on account 7 to 90.65 on account 6.
The selection of precise values for the various factors used in the calculation is a matter of experience and experiment and will vary according to the field of application. In the example shown, sub-capability factors, velocity factors, and bucket factors all fall approximately in the range 1-1.5, whilst the basic alarm capability factors range from 30 to 90.
To achieve the desired scoring, one associates with each alarm a level of risk that is factored by a number of related elements. With each increase in the number of such related elements, there is an increase in the level of granularity in the scoring mechanism and a consequent potential increase in precision and efficiency of the scoring mechanism.
Too many elements in the scoring equation, however, tend to make it very volatile, with a higher probability of algorithmic inaccuracies, and an increased risk of any such errors causing a ricochet effect through the fraud scoring engine. The margin for error in configuring the scoring mechanism, and indeed the parameters for the rules and thresholds themselves, is also reduced as the number of elements increases, since they are the building blocks on which scoring is based.
In short, too few factors result in a robust but insufficiently accurate system whilst too many factors produce an initially more labour intensive set-up with the potential for being highly accurate, although if configured incorrectly, the opposite could be true. The solution is a compromise between the two extremes: the system needs to be durable yet accurate. In the most preferred embodiment therefore, five significant factors are employed:
• Alarm Capability Factor
• Sub-Capability Factor
• Bucket Factor
• Velocity Factor
• Account Age Factor
The Alarm Capability Factor indicates the relative hierarchical position of the risk associated with a given alarm relative to risks associated with other alarms.
The Sub-Capability Factor gives a further refinement of the indication of the hierarchical position of the risk associated with a given alarm relative to risks associated with other alarms.
Bucket Factor is a measure of the volume of the potential fraud.
Velocity Factor is a measure of the rate at which the fraud is being perpetrated.
Account Age Factor is a measure of how old the account is: new accounts behaviour may be less predictable than older established usage patterns, and more susceptible to fraud.
All neural network and threshold alarm capabilities are apportioned a figure upon which further calculations are made, increasing or decreasing the score commensurate with the risk present. The Account Fraud Score created should accurately reflect the level of risk associated with the course of events causing the production of an alarm. This calculation should primarily consider the speed with which money is and may be defrauded, and the volume of revenue defrauded, as these indicate loss to the telecommunications company concerned; questions of cost are always paramount. For example, if a criminal has used $5,000 worth of traffic over 4 hours, this is more significant than if the same individual had done so over 8 hours.
The Sub-Capability Factor is added to increase or decrease the risk associated with specific types of alarm. Many alarm types have a finer level of granularity as appropriate to that specific alarm. Many alarm types are sub-divided, for example, into different sub-types of alarms for different call destinations as the inherent risk is different for different destinations. For example international calls are more often associated with fraud than calls to mobile telephones.
The longer that an account is in operation fraudulently, the greater the cost will be, so a good fraud management system will aim to detect fraud as early as possible. Thus the analyst wishes, ideally, to see all alarms after the shortest time period, in order that he may stop the illegal action at the earliest opportunity.
The problem is addressed by calculating a ratio between a) the quantity of traffic pertinent to the particular alarm type within a poll and b) a threshold value for the alarm. Trigger Value divided by Threshold Value accurately and expeditiously alarms any account where there is a large sudden increase in traffic for that customer. This is because, for example, the 1 hour bucket will always have the lowest threshold for a given capability and therefore any increase in traffic will proportionately increase the fraud score more in a 1 hour bucket than in a corresponding longer period. In the example in table 1 below, a single extra unit of traffic represents a 2% rise for the 1 hour bucket but only a 1% rise for the 4 hour bucket:
Table 1 (illustrative threshold values consistent with the percentages quoted above):

Bucket     Threshold    Traffic (threshold + 1 unit)    Proportional rise
1 hour     50           51                              2%
4 hour     100          101                             1%
This then gives an additional factor, namely rate of change of traffic relative to given thresholds, whereby to allow the account fraud scoring system to prioritise alarms so that the high velocity frauds can be investigated earlier than slower, and hence potentially less costly, examples of fraud.
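The velocity factor calculation can be sketched as below. The thresholds of 50 units (1 hour) and 100 units (4 hours) are illustrative assumptions chosen to match the 2% and 1% rises discussed above.

```python
# Velocity factor as the ratio of observed (trigger) traffic to the
# bucket's threshold. Because the 1 hour bucket has the lowest
# threshold, one extra unit of traffic moves its ratio proportionately
# further, so high-velocity frauds surface first.

def velocity_factor(trigger, threshold):
    return trigger / threshold

one_hour = velocity_factor(51, 50)     # one unit over a threshold of 50
four_hour = velocity_factor(101, 100)  # one unit over a threshold of 100
```

The same extra unit of traffic yields a larger velocity factor on the 1 hour bucket than on the 4 hour bucket, which is exactly the prioritisation behaviour sought.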
In addition to the above, an account age factor may be applied to increase the risk score associated with new accounts. Over time, the account operator's knowledge of each customer will improve as more data (such as payment information, bank details, and call patterns) is received about normal usage patterns and, as a consequence, it will become less likely that the customer will attempt to perpetrate a fraud.
For example, for new accounts, an account age factor of 1.2 might be applied, whilst an established account may have a factor of 1.
Furthermore, performance of certain confirmatory functions by the account owner may be required after certain time periods and, if the account owner fails to perform these, the account will be suspended.
As well as considering the volume or momentum of the fraud, it is also relevant to consider the immediate volume of potential fraud present in any given situation. Therefore a factor indicative of increases in the bucket size associated with the alarm can be applied to ensure that a measure of the quantity of fraud is directly represented in the resulting fraud score, independent of a factor representative of the velocity. A bucket is a time duration over which an alarm has been raised.
In the normal course of events, the 1 hour bucket alarms will be alarmed first because they have the smallest thresholds assigned to them. In the unlikely event that a fraudster manages to perpetrate fraud over a longer period without triggering an alarm on such a small bucket, then it is desirable to generate an indication at the earliest opportunity should an alarm on a larger bucket be triggered.
Therefore if a 168 hour (1 week) alarm is raised, this is of considerable significance and should be weighted accordingly. Consequently, it is appropriate to increase the weighting applied to larger time buckets. The aim is to ensure that such a larger bucket alarm would be proportionately more prominent dependent upon the size of the time bucket and the associated risk.
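The weighting of larger time buckets can be sketched as a simple lookup; the specific factor values below are invented for illustration and only the increasing trend reflects the text.

```python
# Illustrative bucket factors: larger time buckets receive a heavier
# weighting, since an alarm on a 168 hour (1 week) bucket that escaped
# the low 1 hour threshold is itself of considerable significance.
BUCKET_FACTOR = {1: 1.0, 4: 1.1, 24: 1.25, 168: 1.5}

def bucket_factor(bucket_hours):
    # Unknown bucket sizes default to a neutral factor of 1.0
    return BUCKET_FACTOR.get(bucket_hours, 1.0)
```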
Some alarms do not lend themselves directly to thresholds, but are concerned simply with whether a specific event has occurred.
For example, in a telecommunications network account system, the Neural Network Fraudulent, Neural Network Expected, Hot A Numbers, Hot B Numbers, Overlapping Calls, Single IMEI/Multiple IMSI and Single IMSI/Multiple IMEI alarms, by their very nature, do not lend themselves to thresholds. In these cases the only significance is that a particular CDR has been involved in a particular kind of call or that the profile has exhibited a particular form of suspect behaviour.
The velocity factor (Trigger Value / Threshold Value) and Bucket Factor are both superfluous in conjunction with the above alarm types (though they may for simplicity be assigned nominal values of 1, which when applied will have a null modifying effect) and the only true modifier is the Account Age Factor. This is not a serious issue since Hot A & B Numbers, Single IMEI/Multiple IMSI, and Single IMSI/Multiple IMEI alarms will typically be allocated a high basic Alarm Capability Factor, since these kinds of alarm will certainly need to be examined as priorities by a reviewing fraud analyst.
This approach serves once again to achieve the overall aim that the risk associated with an alarm be accurately reflected in the final score allocated to that alarm.
In some cases it is possible that the score resulting directly from the combinations of factors listed above may exceed reasonable bounds, for example in cases where many factors each have a high value individually indicative of high fraud risk. This may give rise to fraud scores well outside normal range. Whilst such scores may be left unamended, since their high value will clearly stand out relative to other scores, it is also reasonable to take the approach that score values beyond a given threshold all be treated equally since, with such high scores all indicative of high fraud risk, there is little benefit in differentiating between them: at those score levels the difference in score is more likely to be an artefact of the scoring system than the actual differentiation of fraud risk. The same approach may be applied to very low scores. In such cases then, scores may be normalised to lie within fixed bounds: scores lying above or below those bounds being amended to the maximum or minimum bound as appropriate. In practice such a situation should not be common due to the accuracy of the various factor figures given.
For example, an Account Fraud Score may be normalised within the calculation to ensure that a normalised score between 0 and 100 is produced. All scores under or equal to 0 will be mapped to 0; all scores over or equal to 100 will be mapped to 100.
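The normalisation described above amounts to clamping the score to fixed bounds; a minimal sketch:

```python
# Clamp an account fraud score into the normalised 0-100 range:
# anything at or below the lower bound maps to 0, anything at or
# above the upper bound maps to 100, and in-range scores pass through.
def normalise(score, lo=0.0, hi=100.0):
    return max(lo, min(hi, score))
```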
A situation may occur where multiple alarms are raised for one account in one poll, and it is desirable to cater for this in determining an Account Fraud Score. The decision on how to treat multiple alarm breaches is based on an assessment of whether there is a greater chance of fraud in an account with multiple threshold breaches or alarms.
It is inappropriate to aggregate the scores produced by multiple alarms since the increase in risk signaled by multiple alarms is not normally proportional to the increase in score that would be created by aggregation of the scores produced.
It is reasonable however to assume that there would be an increase in the risk associated with an account if another alarm were added to an already present alarm: that is, for example, the risk associated with a given alarm is less than the risk associated with two or more of those alarms.
It is also reasonable however to assume that two separate alarms of different types may or may not be as significant a concern as one other alarm. The level of concern must be translated to the Account Fraud Score and should not be influenced by the number of alarms arbitrarily.
That is, the risk associated with an alarm of type A and an alarm of type B together may be less than, equal to, or greater than the risk associated with one alarm of type C.
This means that the Account Fraud Score should be increased for multiple alarms but the risk associated with the highest risk alarm generated must first be considered. Accordingly, a fixed addition is made to the score dependent upon the number of alarms as described below:
Number of Alarms x Fixed Multiple Alarm Factor (3)

It is beneficial to be able to assign different factors for determination of fraud scores for each account type, as the increase or decrease in the level of risk is not uniform for all account types. For example, a business account calling PRS might indicate a greater risk compared to a residential customer, whereas a business calling the USA would be of less concern than in a residential account.
In isolation or if combined with Account Type, time slot will add an extra dimension to the calculation of Account Fraud Score. Different frauds may be perpetrated at different times of day with certain traffic types representing a greater risk at night or the weekend.
We now consider how to incorporate the neural network alarms in the Account Fraud Scoring mechanism. With neural network alarms, a confidence is calculated as to the accuracy of each decision.
The percentage confidence calculated by the neural network is used as the alarm capability factor and processed as per other alarms. The confidence given by the neural network must be integral to the score given for that alarm, since the confidence is a statement as to the probability that an account is exhibiting fraudulent behaviour.
The confidence should be the basis for any calculation and accordingly is used as the prime factor in calculating the Account Fraud Score: the alarm capability factor. Furthermore, the alarm confidence for fraudulent neural network alarms must be unaffected in the calculation from alarm confidence to individual alarm capability factor, except for a standardisation factor which converts the percentage into an alarm priority proportionate to the other alarm priorities and proportionate to its value in terms of assessing and quantifying risk. In short, the figure should be adjusted to ensure it is relative to other alarm capability factors. It would again detract from the value of the neural network confidence calculation process if it were changed more than minimally.
The method for converting the confidence into an Alarm Capability Factor is as described below:
Alarm Capability Factor = AlarmConfidence(NN(F)) / X (4)

where AlarmConfidence(NN(F)) is the Neural Network Fraudulent Alarm Confidence and X is a standardisation factor for Neural Network Fraudulent Alarms.
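Equation (4), together with the fixed-factor treatment of other alarm types described later, can be sketched as follows. The standardisation factor X and the fixed capability values are invented for illustration.

```python
# Equation (4): the fraudulent-neural-network alarm's percentage
# confidence, divided by a standardisation factor X, becomes its Alarm
# Capability Factor, so the network's own judgement flows through to
# the score with only a scaling applied.

def acf_nn_fraudulent(confidence_pct, x=1.25):
    """X (here 1.25, an assumed value) scales the 0-100% confidence
    into the same range as the other alarm capability factors."""
    return confidence_pct / x

# Other alarm types use fixed Alarm Capability Factors instead
# (values invented for illustration).
ACF_FIXED = {"nn_expected": 40, "hot_b_number": 90}

acf = acf_nn_fraudulent(95)  # a 95% NN(F) confidence
```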
Neural Network Fraudulent alarms must be assessed with all other alarms generated, or persisting, for an account in order to ensure that the alarm, and the account, posing the most risk is prioritised above the remainder.
This proposed "clean" processing keeps the ordering by Account Fraud Scoring as pure as possible; the assigned confidence is not adjusted by other factors outside the neural network although it is integrated within the scoring process.
One has thought through the elements to be included within the Account Fraud Scoring mechanism, why they are to be included, how they represent risk and the appropriate method of dealing with each alarm type. The conclusion is that all alarms are processed through the scoring mechanism in the same fashion; only the prime figure, the Alarm Capability Factor, is a fixed figure for Neural Network Expected alarms and Threshold alarms, while for Neural Network Fraudulent alarms the confidence is standardised to associate a relational and reasonable level of significance.
For Neural Network Expected alarms, the confidence values will be 0-20% as opposed to a range of 0-100% for fraudulent neural network alarms. These expected alarms tend to indicate behaviour which is suspicious or unusual although not immediately identifiable as fraud. By their very nature, they will alert the user to areas of uncertainty. There is no suggestion that the expected behavioural neural network alarms are not valid; quite the opposite, since it is important that this task be performed.
The idea that small deviations in the neural network's confidence can be meaningfully interpreted is a little spurious, because the neural network is judging the extent to which it does not recognise the behaviour being presented to it.
Thus there is more to be lost, in terms of complication and processing, than would be gained by allowing the percentage confidence to affect the Alarm Capability factor. Indeed it might also prove misleading, reducing the accuracy of the alarm generation engine. Use of a fixed value for the Alarm Capability factor, as opposed to a variable level, resolves this issue. So for Neural Network Fraudulent alarms the percentage confidence is normalised and integrated into the scoring mechanism; for Neural Network Expected alarms a fixed Alarm Capability factor is used, as for threshold alarms.
In summary then, the method takes different alarms or other types of information, homogenises them by scoring the risk embodied in each element of the mechanism, takes the highest scored alarm for each account at any one time and then adds an extra value to the score dependent upon the number of alarms raised. The resulting value is the account fraud score.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person having an understanding of the teachings herein.

Claims

1. A method of prioritising alarms in an account fraud detection system comprising the steps of:
assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account;
computing a fraud score for said alarm responsive to said numeric weights.
2. A method according to claim 1 wherein said step of computing comprises the step of:
forming a product of a plurality of said numeric weights.
3. A method of prioritising alarms in an account fraud detection system comprising the steps of:
assigning a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account;
computing a fraud score for each of said one or more alarms responsive to said numeric weights;
computing an account fraud score responsive to said one or more fraud scores.
4. A method according to claim 3 wherein said step of computing a fraud score for each of said one or more alarms comprises the step of:
forming a product of a plurality of said numeric weights.
5. A method according to claim 3 wherein said step of computing an account fraud score comprises the step of:
selecting a largest of said one or more fraud scores.
6. A method according to any one of claims 3 - 5 wherein said step of computing an account fraud score comprises the step of: imposing a numeric bound on the value of said account fraud score.
7. A method according to any one of claims 3 - 6 wherein said step of computing an account fraud score for each of said one or more alarms comprises the step of:
adding a term dependent on the number of alarms raised.
8. A method of prioritising alarms in an account fraud detection system comprising the steps of:
performing the method of any one of claims 3 - 7 on a plurality of accounts whereby to compute an account fraud score for each of said accounts;
providing a sorted list of accounts responsive to said account fraud scores.
9. A method according to claim 8 additionally comprising the step of:
displaying said sorted list of accounts.
10. A method according to claim 9 wherein the step of displaying said sorted list of accounts comprises the step of:
displaying with each account an indication of its associated account fraud score.
11. A method according to any one of claims 3 - 10 wherein said characteristics include one or more characteristics drawn from the set consisting of: alarm capability, alarm sub-capability, velocity, bucket size, and account age.
12. Apparatus arranged for prioritising alarms in an account fraud detection system comprising: first apparatus arranged to assign a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account;
second apparatus arranged to compute a fraud score for said alarm responsive to said numeric weights.
13. Apparatus arranged for prioritising alarms in an account fraud detection system comprising:
first apparatus arranged to assign a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account;
second apparatus arranged to compute a fraud score for each of said one or more alarms responsive to said numeric weights;
third apparatus arranged to compute an account fraud score responsive to said one or more fraud scores.
14. Software on a machine readable medium arranged for prioritising alarms in an account fraud detection system and arranged to perform the steps of:
assigning a numeric weight to each of a plurality of behavioural characteristics of an alarm raised against an account;
computing a fraud score for said alarm responsive to said numeric weights.
15. Software on a machine readable medium arranged for prioritising alarms in an account fraud detection system and arranged to perform the steps of:
assigning a numeric weight to each of a plurality of behavioural characteristics of each of one or more alarms raised against an account;
computing a fraud score for each of said one or more alarms responsive to said numeric weights;
computing an account fraud score responsive to said one or more fraud scores.
PCT/GB2000/001669 1999-04-30 2000-04-28 Account fraud scoring WO2000067168A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CA002371730A CA2371730A1 (en) 1999-04-30 2000-04-28 Account fraud scoring
EP00925506A EP1224585A2 (en) 1999-04-30 2000-04-28 Account fraud scoring
AU44227/00A AU4422700A (en) 1999-04-30 2000-04-28 Account fraud scoring
IL14637300A IL146373A0 (en) 1999-04-30 2000-04-28 Account fraud scoring

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9910111.5 1999-04-30
GBGB9910111.5A GB9910111D0 (en) 1999-04-30 1999-04-30 Account fraud scoring

Publications (2)

Publication Number Publication Date
WO2000067168A2 true WO2000067168A2 (en) 2000-11-09
WO2000067168A3 WO2000067168A3 (en) 2002-04-25

Family

ID=10852648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2000/001669 WO2000067168A2 (en) 1999-04-30 2000-04-28 Account fraud scoring

Country Status (6)

Country Link
EP (1) EP1224585A2 (en)
AU (1) AU4422700A (en)
CA (1) CA2371730A1 (en)
GB (1) GB9910111D0 (en)
IL (1) IL146373A0 (en)
WO (1) WO2000067168A2 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997003533A1 (en) * 1995-07-13 1997-01-30 Northern Telecom Limited Detecting mobile telephone misuse
WO1997037486A1 (en) * 1996-03-29 1997-10-09 British Telecommunications Public Limited Company Fraud monitoring in a telecommunications network
WO1998032086A1 (en) * 1997-01-21 1998-07-23 Northern Telecom Limited Monitoring and retraining neural network
US5819226A (en) * 1992-09-08 1998-10-06 Hnc Software Inc. Fraud detection using predictive modeling


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7606721B1 (en) 2003-01-31 2009-10-20 CDR Associates, LLC Patient credit balance account analysis, overpayment reporting and recovery tools
US7835921B1 (en) 2003-01-31 2010-11-16 ASC Commercial Solutions, Inc. Patient credit balance account analysis, overpayment reporting and recovery tools
US7774842B2 (en) * 2003-05-15 2010-08-10 Verizon Business Global Llc Method and system for prioritizing cases for fraud detection
US7783019B2 (en) 2003-05-15 2010-08-24 Verizon Business Global Llc Method and apparatus for providing fraud detection using geographically differentiated connection duration thresholds
US8638916B2 (en) 2003-05-15 2014-01-28 Verizon Business Global Llc Method and apparatus for providing fraud detection using connection frequency and cumulative duration thresholds
US7817791B2 (en) 2003-05-15 2010-10-19 Verizon Business Global Llc Method and apparatus for providing fraud detection using hot or cold originating attributes
US7971237B2 (en) 2003-05-15 2011-06-28 Verizon Business Global Llc Method and system for providing fraud detection for remote access services
US8015414B2 (en) 2003-05-15 2011-09-06 Verizon Business Global Llc Method and apparatus for providing fraud detection using connection frequency thresholds
US8374634B2 (en) 2007-03-16 2013-02-12 Finsphere Corporation System and method for automated analysis comparing a wireless device location with another geographic location
US9603023B2 (en) 2007-03-16 2017-03-21 Visa International Service Association System and method for identity protection using mobile device signaling network derived location pattern recognition
US11405781B2 (en) 2007-03-16 2022-08-02 Visa International Service Association System and method for mobile identity protection for online user authentication
US10776791B2 (en) 2007-03-16 2020-09-15 Visa International Service Association System and method for identity protection using mobile device signaling network derived location pattern recognition
US8831564B2 (en) 2007-03-16 2014-09-09 Finsphere Corporation System and method for identity protection using mobile device signaling network derived location pattern recognition
US9420448B2 (en) 2007-03-16 2016-08-16 Visa International Service Association System and method for automated analysis comparing a wireless device location with another geographic location
US9432845B2 (en) 2007-03-16 2016-08-30 Visa International Service Association System and method for automated analysis comparing a wireless device location with another geographic location
US8280348B2 (en) 2007-03-16 2012-10-02 Finsphere Corporation System and method for identity protection using mobile device signaling network derived location pattern recognition
US9848298B2 (en) 2007-03-16 2017-12-19 Visa International Service Association System and method for automated analysis comparing a wireless device location with another geographic location
US9922323B2 (en) 2007-03-16 2018-03-20 Visa International Service Association System and method for automated analysis comparing a wireless device location with another geographic location
US10354253B2 (en) 2007-03-16 2019-07-16 Visa International Service Association System and method for identity protection using mobile device signaling network derived location pattern recognition
US10669130B2 (en) 2007-03-16 2020-06-02 Visa International Service Association System and method for automated analysis comparing a wireless device location with another geographic location
US10776784B2 (en) 2007-03-16 2020-09-15 Visa International Service Association System and method for automated analysis comparing a wireless device location with another geographic location
US8116731B2 (en) 2007-11-01 2012-02-14 Finsphere, Inc. System and method for mobile identity protection of a user of multiple computer applications, networks or devices
WO2010118057A1 (en) * 2009-04-06 2010-10-14 Finsphere Corporation System and method for identity protection using mobile device signaling network derived location pattern recognition

Also Published As

Publication number Publication date
WO2000067168A3 (en) 2002-04-25
IL146373A0 (en) 2002-07-25
GB9910111D0 (en) 1999-06-30
AU4422700A (en) 2000-11-17
EP1224585A2 (en) 2002-07-24
CA2371730A1 (en) 2000-11-09

Similar Documents

Publication Publication Date Title
US7457401B2 (en) Self-learning real-time prioritization of fraud control actions
US6535728B1 (en) Event manager for use in fraud detection
US6597775B2 (en) Self-learning real-time prioritization of telecommunication fraud control actions
US7117191B2 (en) System, method and computer program product for processing event records
US7783019B2 (en) Method and apparatus for providing fraud detection using geographically differentiated connection duration thresholds
US7971237B2 (en) Method and system for providing fraud detection for remote access services
US8340259B2 (en) Method and apparatus for providing fraud detection using hot or cold originating attributes
EP0890256B1 (en) Fraud prevention in a telecommunications network
JP2002510942A (en) Automatic handling of fraudulent means in processing-based networks
WO2000067168A2 (en) Account fraud scoring
US20050222806A1 (en) Detection of outliers in communication networks
EP0890255B1 (en) Fraud monitoring in a telecommunications network
KR102200253B1 (en) System and method for detecting fraud usage of message
EP1396141A1 (en) Variable length called number screening
Kang et al. Toll Fraud Detection of VoIP Services via an Ensemble of Novelty Detection Algorithms.
US6466778B1 (en) Monitoring a communication network
EP1427244A2 (en) Event manager for use in fraud detection
MXPA98007770A (en) Monitoring fraud in a telecommunication network

Legal Events

Date Code Title Description
AK Designated states
Kind code of ref document: A2
Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents
Kind code of ref document: A2
Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the EPO has been informed by WIPO that EP was designated in this application

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)

ENP Entry into the national phase
Ref document number: 2371730
Country of ref document: CA
Kind code of ref document: A
Format of ref document f/p: F

WWE WIPO information: entry into national phase
Ref document number: 2000925506
Country of ref document: EP

REG Reference to national code
Ref country code: DE
Ref legal event code: 8642

WWP WIPO information: published in national office
Ref document number: 2000925506
Country of ref document: EP

NENP Non-entry into the national phase
Ref country code: JP