In an effort to reduce losses due to fraud, financial services companies have been fairly successful in establishing fraud detection analytics, based on abnormal behavior identification, which identify financial transactions that seem out of the norm for a particular financial services customer. For example, credit card companies acting on this information will contact cardholders to validate anomalous behavior or, if the costs are high and the cardholder is unavailable, freeze the account until the anomaly is investigated. In this way, they can curtail losses due to prolonged invalid use of a credit card. Fraud detection algorithms (based on user behavior models) and procedures immediately set off account alarms and/or deny additional transactions after they have detected a fraudulent or suspicious transaction. Depending upon the fraud method (e.g., automated gasoline purchases), they may not always block the first fraudulent transaction on a given card.
In online banking, financial institutions employ similar behavioral models to monitor the size and destinations of financial transfers and/or online transactions (such as change of address or payee) and will delay transfers until the customer can be reached to confirm the transactions and/or provide additional authentication. Despite the use of the best available behavior modeling and monitoring, financial institutions continue to sustain significant financial loss from fraud. Can the field of fraud detection (and cybersecurity in general) be improved by new technology and approaches?
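As a rough illustration of the kind of rule described above, the short Python sketch below holds a transfer for confirmation when it is unusually large or goes to a destination the customer has never paid before. The function name, inputs, and threshold are assumptions made for illustration, not any institution's actual policy.

# Minimal sketch, assuming hypothetical inputs: hold transfers that are
# unusually large or go to a never-before-seen destination until the
# customer confirms them.
def review_transfer(amount: float, destination: str,
                    known_destinations: set[str],
                    typical_max_amount: float) -> str:
    if destination not in known_destinations or amount > typical_max_amount:
        return "hold_pending_customer_confirmation"
    return "release"

# Example: a large transfer to a new payee is held for confirmation.
print(review_transfer(9500.00, "new-payee-123", {"utility-co", "landlord"}, 2000.00))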
Fraud detection works on the assumption that malicious fiscal behavior is a subset of abnormal behavior – if the fraudulent user mimics the financial behavior of the authorized user, these methods do not work. Detection methods do not assume that malicious behavior is automatically distinguishable from unusual behavior on the part of authorized users. The fraud detection algorithms use the financial services customer’s history to build a profile of “normal” transactions and develop thresholds for unusual behavior. The volume of transactions allows for reasonable thresholds to be established. Fraud detection methods rely on strong models of normal behavior, or known criminal behavior characteristics. The development of many of these models is aided by the fact that the value of a transaction is numeric and allows sets of values to be analyzed with well understood algorithms. For example, credit card purchases have relatively small and fixed semantics: store names are typed, businesses are categorized, relationships among businesses and purchases by card users are fairly easy to establish (e.g., people who buy plane tickets may also purchase luggage, or may eat out more when they are away, or may spend more in general while traveling). These models enable gradual change in behavior to be learned and help drive down false alerts.
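The profile-and-threshold approach described above can be made concrete with a small sketch. The Python below is a minimal illustration under assumed fields (amount and merchant category) and an arbitrary z-score threshold; real fraud models are far richer, but the structure is the same: learn a per-customer norm from transaction history and flag transactions that deviate from it.

from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    amount: float
    merchant_category: str

class CustomerProfile:
    """Per-customer model of 'normal' spending, learned from transaction history."""
    def __init__(self, history: list[Transaction]):
        amounts = [t.amount for t in history]
        self.mean_amount = mean(amounts)
        self.std_amount = stdev(amounts) if len(amounts) > 1 else 1.0
        self.known_categories = {t.merchant_category for t in history}

    def anomaly_score(self, t: Transaction) -> float:
        # How many standard deviations the amount sits from the customer's norm.
        z = abs(t.amount - self.mean_amount) / (self.std_amount or 1.0)
        # A purchase in a never-before-seen category adds to the score.
        if t.merchant_category not in self.known_categories:
            z += 1.5
        return z

def is_suspicious(profile: CustomerProfile, t: Transaction, threshold: float = 3.0) -> bool:
    return profile.anomaly_score(t) >= threshold

# Example: a customer who normally makes small grocery and fuel purchases.
history = [Transaction(42.0, "grocery"), Transaction(55.0, "grocery"), Transaction(38.0, "fuel")]
profile = CustomerProfile(history)
print(is_suspicious(profile, Transaction(2500.0, "jewelry")))  # True: far outside the norm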
Many cyber intrusion detection techniques, or insider threat detection techniques, aim to achieve similar results by using abnormal behavior detection as a starting point. Yet, it is an open question whether these techniques can expect to attain the same broad-based success when applied in the broader cyber security domain. The domains share an adversarial dynamic that might indicate that similar analyses could be effective. But do the assumptions of the relationship between malicious and normal behavior hold true? Can we establish a solid footing in terms of models of normal transaction semantics and transaction value? Does the real time nature of cyber decision making, and the ease of dynamic changes in the criminal’s attack signature, present insurmountable challenges for behavioral techniques?
Topic: Using Abnormal Behavior Detection to Identify Malicious Actors
Research Problem
Here the principal investigator (PI) examines the widespread issue of fraud at financial institutions, using credit card and online banking examples in which some detection processes are already in place to deter greater losses. The PI goes on to explain that, despite the sophisticated fraud detection measures financial institutions employ, heavy losses still occur, and poses several questions about the problem throughout the narrative. As to whether the field of fraud detection (and cybersecurity in general) can be improved by new technology and approaches, the answer is clearly yes. This is best done by treating cybersecurity like any other science, in which growth and improvement are always possible; innovating and thereby staying ahead of the malicious user is imperative.
Although the assumed relationship between malicious and abnormal behavior may not always hold, abnormal behavior detection remains an important starting point for analyzing potentially malicious activity. Where improvement is needed is in the granularity of the analysis itself, so that abnormal behavior that is not malicious can be examined and ruled out efficiently; a sketch of this kind of triage follows below.
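One way to picture that finer granularity is a two-stage triage: score the anomaly first, then check for benign explanations before raising an alert. The sketch below reuses the hypothetical CustomerProfile from the earlier example; the specific context flags (a travel notice, step-up authentication) are assumptions chosen for illustration rather than a prescribed design.

# Illustrative two-stage triage, reusing the hypothetical CustomerProfile above.
def triage(profile: CustomerProfile, transaction: Transaction, context: dict) -> str:
    score = profile.anomaly_score(transaction)
    if score < 3.0:
        return "allow"   # within the customer's normal range
    # Abnormal, but look for a known benign explanation before alerting.
    if context.get("travel_notice_active"):
        return "allow"   # customer told the bank they are traveling
    if context.get("step_up_auth_passed"):
        return "allow"   # customer confirmed via a second factor
    return "alert"       # abnormal and unexplained: escalate for investigation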
A solid footing for models of normal transaction semantics and value can be established, but it must sit at a high enough level of abstraction that the modeling assumptions do not themselves open security holes. No level can be considered perfectly safe, but workable standards can be achieved.
Although the real-time nature of cyber decision making and the ease with which criminals change their attack signatures may appear to present insurmountable challenges for behavioral detection techniques, this is not the case. The methods by which we confront these attacks must be agile and never allowed to become outdated. Combating an ever-changing landscape of signatures and the skilled attackers behind them will take equal, if not greater, skill from those who defend against such threats.
Research Purpose
The purpose of this research is to establish fraud detection analytics based on abnormal behavior identification, that is, to use abnormal behavior detection systems to identify malicious actors.
Research Questions
What abnormal behavior detection techniques can be used to identify different types of fraud?
Which techniques can be expected to attain success when applied in the broader cybersecurity domain?
Can the field of fraud detection (and cybersecurity in general) be improved by new technology and approaches?
How does the system determine whether observed behavior is normal or malicious?
Does the real time nature of cyber decision making, and the ease of dynamic changes in the criminal’s attack signature, present insurmountable challenges for behavioral techniques?