
Dr. Fraudster & the Billing Anomaly Continuum

This month’s member’s lecture on Medicare and Medicaid Fraud triggered a couple of Chapter member requests for more specifics about how health care fraud detection analytics work in actual practice.

It’s a truism within the specialty of health care billing analytics that the harder you work on the front end, the more successful you’ll be in producing information that generates useful results on the back end. Indeed, with the output of health care analytics applications, fraud examiners and health care auditors now have a set of increasingly powerful tools to use in the audit and investigation of fraud generally and of health care fraud specifically; I’m referring, of course, to analytically supported analysis of what’s called the billing anomaly continuum.

The use of the anomaly continuum in the general investigative process starts with detection, proceeds to investigation and mitigation, and then (depending on the severity of the case) can lead to the follow-on phases of prevention, response and recovery. We’ll discuss only the first three phases here, as the most relevant to the fraud examination process, and leave prevention, response and recovery for a later post.

Detection is the discovery of clues within the data. The process involves taking individual data segments related to the whole health care process (from the initial provision of care by the provider all the way to the billing and payment for that care by the insurer) and blending them into one data source for seamless analysis. Any anomalies in the data can then be noted, and the output evaluated either for response or for follow-up investigation. The anomalies identified at the end of the present investigation will, in turn, feed the detection database for future analysis.
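As a minimal sketch of what blending two such data segments and flagging an anomaly can look like, consider the following. Everything here is invented for illustration: the record layouts, the field names, and the idea of matching claims against a provider travel segment (one of the anomaly types discussed below).

```python
from datetime import date

# Hypothetical data segments from two different source systems:
# provider out-of-country travel windows, and submitted claims.
travel_windows = {"DR1001": [(date(2015, 3, 1), date(2015, 3, 14))]}
claims = [
    {"claim_id": "C1", "provider_id": "DR1001", "service_date": date(2015, 3, 5)},
    {"claim_id": "C2", "provider_id": "DR1001", "service_date": date(2015, 4, 2)},
]

def flag_travel_anomalies(claims, travel_windows):
    """Blend the two segments: flag claims whose service date falls inside
    a window when the billing provider was recorded as out of the country."""
    flagged = []
    for claim in claims:
        for start, end in travel_windows.get(claim["provider_id"], []):
            if start <= claim["service_date"] <= end:
                flagged.append(claim["claim_id"])
    return flagged

print(flag_travel_anomalies(claims, travel_windows))  # ['C1']
```

The flagged claim IDs would then be written to the detection database for evaluation and follow-up.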

As an example drawn from an actual Medicare case, let’s say we have a health care provider whom we’ll call Dr. Fraudster, some of whose billing data reveal a higher-than-average percentage of complicated (and costly) patient visits. It also seems that Dr. Fraudster apparently generated some of these billings while travelling outside the country. There were also referred patient visits to chiropractors, acupuncturists, massage therapists, nutritionists and personal trainers at a local gym, whose services were billed under Dr. Fraudster’s tax ID number as well as under standard MD Current Procedural Terminology (CPT) visit codes. In addition, a Dr. Outlander, an unlicensed practitioner on Dr. Fraudster’s staff, was billed out at $5 an hour. Besides Outlander, a Dr. Absent was noted as billing out of Dr. Fraudster’s clinic even though he was no longer associated with it.

First off, in the initial detection phase, it seems Dr. Fraudster’s high-volume activity tripped an edit function that tracks an above-average practice growth rate without the addition of new staff on the claim form. Another anomaly picked up was the appearance of wellness services presented as illness-based services. The billed provision of services while travelling abroad is, of course, also anomalous.

The investigation phase that follows involves ascertaining whether various activities or statements are true. In Dr. Fraudster’s case, evidence to collect regarding his on-staff associate, Dr. Outlander, might include confirmation of license status (if any), educational training, clinic marketing materials and payroll records. The high percentage of complicated visits and the foreign travel issues need to be broken down, and each activity analyzed separately in full detail. If Dr. Fraudster truly has a patient population with a high rate of complications, most likely those patients would be receiving some type of prescription regime. The lack of a diagnosis requirement associated with prescriptions in this case limited the scope of the real-life investigation. Was Dr. Fraudster prescribing medications with no basis? If he uses an unlicensed doctor on his staff, presents wellness services as illness-related services, and sees himself (perhaps) as a caring doctor getting reluctant insurance companies to pay for alternative health treatments, what other alternative treatments might he be providing with prescribed medications? Also, Dr. Fraudster had to know that the bills submitted during his foreign travels were false. Statistical analysis, in addition to clinical analysis of the medical records by actual provider and of the travel records, would provide a strong argument that the doctor intended to misrepresent his claims.

The mitigation phase typically builds on issues noted within the detection and investigation phases. Mitigation is the process of reducing or making a certain set of circumstances less severe. In the case of Dr. Fraudster, mitigation occurred in the form of prosecution. Dr. Fraudster was convicted of false claims and removed from the Medicare network as a licensed physician, thereby preventing further harm and loss. Other applicable issues that came forward at trial were evidence of substandard care and medically unbelievable billing patterns (CPT codes billed that made no sense except to inflate the billing). What made this case even more complicated was tracking down Dr. Fraudster’s assets. Ultimately, the real-life Dr. Fraudster did receive a criminal conviction; civil lawsuits were initiated, and he ultimately lost his license.

From an analytics point of view, mitigation does not stop at the point of conviction of the perpetrator. The findings regarding all individual anomalies identified in the case should be followed up with adjustment of the insurance company’s administrative adjudication and edit procedures (Medicare was the third-party claims payer in this case). What this means is that the findings from every fraud case should be fed back into the analytics system. Incorporating the patterns of Dr. Fraudster’s fraud into the Medicare Fraud Prevention Model will help prevent or minimize similar future occurrences, help find similar on-going schemes elsewhere with other providers, and reduce the time it takes to discover those other schemes. A complete mitigation process also feeds detection by reducing the amount of investigative time required to make the existence of a fraud known.

As practicing fraud examiners, we are provided by the ACFE with an examination methodology quite powerful in its ability to extend and support all three phases of the health care fraud anomaly identification process presented above.  There are essentially three tools available to the fraud examiner in every health care fraud examination, all of which can significantly extend the value of the overall analytics based health care fraud investigative process.  The first is interviewing – the process of obtaining relevant information about the matter from those with knowledge of it.  The second is supporting documents – the examiner is skilled at examining financial statements, books and records.   The examiner also knows the legal ramifications of the evidence and how to maintain the chain of custody over documents.  The third is observation – the examiner is often placed in a position where s/he can observe behavior, search for displays of wealth and, in some instances, even observe specific offenses.

Dovetailing the work of the fraud examiner with that of the healthcare analytics team is a win for both parties to any healthcare fraud investigation and represents a considerable strengthening of the entire long term healthcare fraud mitigation process.

Where the Money Is

One of the followers of our Central Virginia Chapter’s group on LinkedIn is a bank auditor heavily engaged in his organization’s analytics based fraud control program.  He was kind enough to share some of his thoughts regarding his organization’s sophisticated anti-fraud data modelling program as material for this blog post.

Our LinkedIn connection reports that, in his opinion, getting fraud data accurately captured, categorized, and stored is the first, vitally important challenge in using data-driven technology to combat fraud losses. This might seem relatively easy to those not directly involved in the process, but experience quickly reveals that storing fraud-related data reliably, over a long period of time and in a readily accessible format, represents a significant challenge requiring a systematic approach at all levels of any organization serious about the effective application of analytically supported fraud management. The idea that any single piece of data may be of potential importance to addressing a problem is a relatively new concept in the history of banking and of most other types of financial enterprise.

Accumulating accurate data starts with an overall vision of how the multiple steps in the process connect to affect the outcome. It’s important for every member of the fraud control team to understand how important each pre-defined process step is in capturing the information correctly — from the person responsible for risk management in the organization, to the people who run the fraud analytics program, to the person who designs the data layout, to the person who enters the data. Even a customer service analyst or a fraud analyst failing to mark a certain type of transaction correctly as fraud can have an on-going impact on developing an accurate fraud control system. It really helps to establish rigorous processes of data entry on the front end and to explain to all players exactly why those specific processes are in place. Process without communication and communication without process are both unlikely to produce desirable results. For everyone to understand the importance of recording fraud information correctly, management must communicate a general understanding of how a data-driven detection system (whether based on simple rules or on sophisticated models) is developed.

Our connection goes on to say that even after an organization has implemented a fraud detection system that is based on sophisticated techniques and that can execute effectively in real time, it’s important for the operational staff to use the output recommendations of the system effectively. There are three ways that fraud management can improve results within even a highly sophisticated system like that of our LinkedIn connection.

The first strategy is never to allow operational staff to second-guess a sophisticated model at will. Very often, a model score of 900 (let’s say this is an indicator of very high fraud risk), whether combined with some decision keys or on its own, can perform extremely well as a fraud predictor. It’s good practice to use scores in this high-risk range generated by a tested model as is, and not to allow individual analysts to adjust them further. This policy has to be completely understood and controlled at the operational level. Using a well-developed fraud score as is, without watering it down, is one of the most important operational strategies for the long-term success of any model. Application of this rule also makes it simpler to identify instances of model scoring failure by keeping them free of any subsequent analyst adjustments.

Second, fraud analysts will have to be trained to use the scores and the reason codes (reason codes explain why the score is indicative of risk) effectively in operations. Typically, this is done by writing some rules in operations that incorporate the scores and reason codes as decision keys. In the fraud management world, these rules are generally referred to as strategies. It’s extremely important to ensure strategies are applied uniformly by all fraud analysts. It’s also essential to closely monitor how the fraud analysts are operating using the scores and strategies.
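A strategy of this kind can be sketched as a small decision function that combines the score and a reason code into one uniform action. The cutoff of 900 comes from the example above; the secondary band, the reason code, and the action names are invented for this sketch, and real strategies are tuned to each portfolio.

```python
# Hypothetical strategy: route a scored transaction to a uniform action.
HIGH_RISK_CUTOFF = 900  # from the tested model; not adjustable by analysts

def apply_strategy(score, reason_code):
    """Return the action every analyst should take for this score and
    reason code, so strategies are applied uniformly in operations."""
    if score >= HIGH_RISK_CUTOFF:
        return "decline_and_review"          # high-risk scores used as is
    if score >= 700 and reason_code == "GEO_MISMATCH":
        return "step_up_authentication"      # score + reason code as decision keys
    return "approve"

print(apply_strategy(925, "VELOCITY"))       # decline_and_review
print(apply_strategy(750, "GEO_MISMATCH"))   # step_up_authentication
```

Because the logic lives in one place rather than in each analyst’s head, monitoring how the scores and strategies are being used reduces to monitoring this rule set.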

Third, it’s very important to train the analysts to mark transactions that are confirmed or reported to be fraudulent by the organization’s customers accurately in their data store.

All three of these strategies may seem very straightforward to accomplish, but in practical terms they are not that easy without a lot of planning, time, and energy. A superior fraud detection system can be rendered almost useless if it is not used correctly. It is extremely important to allow the right level of employee to exercise the right level of judgment. Again, individual fraud analysts should not be allowed to second-guess the efficacy of a fraud score that is the result of a sophisticated model. Similarly, planners of operations should take into account all practical limitations when coming up with fraud strategies (fraud scenarios). Ensuring that all of this gets done the right way, with the right emphasis, ultimately leads the organization to good, effective fraud management.

At the heart of any fraud detection system is a rule or a model that attempts to detect a behavior that has been observed repeatedly, in various frequencies, in the past, and to classify it as fraud or non-fraud with a certain rank ordering. We would like to figure out this behavior scenario in advance and stop it in its tracks. What we observe from historical data and our experience needs to be converted into some sort of rule that can be systematically applied to the data in real time in the future. We expect that these rules or models will improve our chance of detecting aberrations in behavior and help us distinguish between genuine customers and fraudsters in a timely manner. The goal is to stop the bleeding of cash from the account, and to accomplish that as close to the start of the fraud episode as we can. If banks can accurately identify early indicators of on-going fraud, significant losses can be avoided.

In statistical terms, what we define as a fraud scenario would be the dependent variable, or the variable we are trying to predict (or detect) using a model. We would try to use a few independent variables (so called even though, in real life, many of the variables used in a model have some dependency on each other) to detect fraud. Fundamentally, at this stage we are trying to model the fraud scenario using these independent variables. Typically, a model attempts to detect fraud as opposed to predicting it. We are not trying to say that fraud is likely to happen on this entity in the future; rather, we are trying to determine whether fraud is likely happening at the present moment, and the goal of the fraud model is to identify this as close to the time the fraud starts as possible.
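A toy version of such a model makes the terminology concrete. The independent variables and their weights below are invented for the sketch; a production model would estimate them from labeled historical data rather than hard-code them.

```python
import math

# Toy logistic model of one fraud scenario (the dependent variable).
# Variable names and weights are illustrative only.
WEIGHTS = {"txn_amount_z": 1.2, "new_device": 0.9, "velocity_1h": 0.7}
BIAS = -3.0

def fraud_probability(features):
    """Combine the independent variables into a probability that the
    fraud scenario is occurring on this entity right now."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A quiet transaction scores low; a large amount on a new device with
# high one-hour velocity scores high.
print(round(fraud_probability({}), 3))
print(round(fraud_probability({"txn_amount_z": 2.0,
                               "new_device": 1.0,
                               "velocity_1h": 3.0}), 3))
```

Rank-ordering transactions by this probability is what lets operations act on the riskiest activity first, as close to the start of the episode as possible.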

In credit risk management, we try to predict whether there will likely be serious delinquency or default risk in the future, based on the behavior the entity exhibits today. With respect to detecting fraud, not having accurate fraud data during the model-building process is akin to not knowing where the target is on a shooting range. If a model or rule is built on data that is only 75 percent accurate, the model’s accuracy and effectiveness will be suspect as well. There are two sides to this problem. Suppose we mark 25 percent of the fraudulent transactions inaccurately as non-fraud, or good, transactions. Not only do we miss out on learning from a significant portion of fraudulent behavior; by misclassifying it as non-fraud, we also lead the model to assume the behavior is actually good behavior. Hence, misclassification of data affects both sides of the equation. Accurate fraud data is fundamental to addressing the fraud problem effectively.
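The two-sided cost of that 25 percent mislabeling can be shown with back-of-the-envelope arithmetic. The transaction counts below are invented for illustration.

```python
# Suppose 400 transactions are truly fraudulent, but 25 percent of them
# are recorded as "good" in the data store. (Counts are illustrative.)
true_fraud = 400
mislabel_rate = 0.25

labeled_fraud = int(true_fraud * (1 - mislabel_rate))  # examples the model can learn from
polluting_good = true_fraud - labeled_fraud            # fraud contaminating the "good" class

# Side 1: the model sees only 300 of 400 fraud examples.
# Side 2: 100 fraudulent transactions teach the model that their
#         behavior pattern is "good" behavior.
print(labeled_fraud, polluting_good)  # 300 100
```

The same 100 mislabeled transactions hurt twice: once as missing fraud examples, and once as false "good" examples.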

So, in summary, collecting accurate fraud data is not the responsibility of just one set of people in any organization. The entire mind-set of the organization should be geared toward collecting, preserving, and using this valuable resource effectively. Interestingly, our LinkedIn connection concludes, the fraud data challenges faced by a number of other industries are very similar to those faced by financial institutions such as his own. Banks are probably further along in fraud management and can provide a number of pointers to other industries but, fundamentally, the problem is the same everywhere. Hence, the techniques he details here are applicable across industries, even though most of his experience is bank based. As fraud examiners and forensic accountants, we will no doubt witness the impact of the application of analytically based fraud risk management across an ever-multiplying range of client industries.

Informed Analytics

by Michael Bret Hood,
21st Century Learning & Consulting,
LLC, University of Virginia, Retired FBI

I recently had a conversation with an old friend who is an accounting professor at a large southern university.

We were discussing my impending retirement and the surprising difficulty I am having in finding a corporate fraud investigation position. One of the things we discussed was the recent trend to hire mathematicians and statisticians as directors of fraud detection and risk control programs. Knowing that I could be biased, I asked the professor if he had seen the same thing. He replied that he had and then uttered, “What a foolish mistake!”

While neither of us harbors any ill will toward the community of mathematicians and statisticians (they are probably a lot smarter and way more technologically gifted than we are), fraud detection and fraud prevention are so much more than the numbers and related informational sub-sets. Sun Tzu in The Art of War said, “Know your enemy and know yourself and you can fight a hundred battles without disaster.” What every fraud contains, and what data analytics can never account for, is the human behavior element. Unless the analytical process directly involves someone with the expertise of knowing how fraudsters operate, as well as someone who understands victimology, significant weaknesses will almost always be introduced into the analysis. Matt Asay, in his InformationWeek article ‘8 Reasons Big Data Projects Fail’, understands this inherent flaw: “Too many organizations hire data scientists who might be math and programming geniuses but who lack the most important component: domain knowledge.”

The current perception is that data analytics somehow causes the fraudulent patterns in organizations suddenly to become exposed. Unfortunately, the pattern algorithms and programs created by data scientists are not magic elixirs. Just as the old gold miners did, someone has to sort through the data to ensure the patterns are both relevant and valid. In his article ‘What Data Can’t Do’, author David Brooks says the following: “Data is always constructed to someone’s predispositions and values. The end result looks disinterested, but in reality, there are value choices all the way through, from construction to interpretation.” The old adage that inferior input equals inferior output certainly applies with equal force to data analytics today.

Data analytics has certainly had its successes. Credit card companies have been able to stem losses based on intricate and real-time analysis of current trends, which unaided human reviewers would certainly be unable to produce manually. In other cases, data analytics have failed. “Google Flu Trends unsuccessfully predicted flu outbreaks in 2013 when it ended up forecasting twice as many cases of flu as the Centers for Disease Control and Prevention reported.” Data analytics as applied by even the best data scientists can’t always quantify the human element in their computations. No one should say that data analytics are not useful tools to be leveraged in any fraud investigation; in fact, they are most beneficial. However, implementing data analytics does not place an impenetrable anti-fraud fortress around your data and/or your money. Sometimes it takes a combination of data analytics and experienced professionals to produce the best results.

In one business, data analytics were deployed using Benford’s Law in such a way that an insider-led tax refund fraud scheme was uncovered, saving the company millions of dollars. This data set, however, would never have been chosen for analysis were it not for forensic accountants who noticed a variance in the numbers they sampled. Fraud is a crime that always includes an unmistakable human element, represented in the actions and reactions of both the perpetrator and the victim. Data analytics, although extremely useful, will never be able to take into account the full dynamic range of emotions and decision making of which human beings are capable. Businesses have started to realize this problem, as evidenced by a recent Gartner survey in which the author claims, “Big data was so yesterday; it’s all about algorithms today.”
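For readers unfamiliar with the technique, a generic Benford’s Law test (not the specific deployment in the case above) can be sketched in a few lines: the law predicts that in many naturally occurring sets of amounts, the leading digit 1 appears about 30% of the time, and a large deviation of observed leading-digit frequencies from that distribution is a flag worth a human’s closer look. The sample amounts are illustrative, and the digit extraction is not hardened for edge cases like scientific notation.

```python
import math

def benford_expected(d):
    """Expected frequency of leading digit d (1-9) under Benford's Law."""
    return math.log10(1 + 1 / d)

def leading_digit_freq(amounts):
    """Observed leading-digit frequencies for a list of positive amounts."""
    digits = [int(str(a).lstrip("0.")[0]) for a in amounts if a > 0]
    return {d: sum(1 for x in digits if x == d) / len(digits)
            for d in range(1, 10)}

# Compare observed vs. expected; big gaps merit forensic follow-up.
observed = leading_digit_freq([123.45, 19.99, 2045, 888, 1.50, 1234, 104])
for d in range(1, 10):
    print(d, round(observed[d], 3), round(benford_expected(d), 3))
```

Note that this is exactly the kind of test where the human element matters: the tool surfaces a variance, but only an examiner can decide whether it reflects fraud, a legitimate business pattern, or noise.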

While it may cost organizations a little more in salary to engage the services of experienced fraud investigators such as myself, the resulting return far outweighs the cost of the investment.

You Can’t Prevent What You Can’t See

The long, rainy Central Virginia Fourth of July weekend gave me a chance to review the ACFE’s latest Report to the Nations, and I was struck by what the report had to say about proactive data analytics as an element of internal control, especially as applicable to small business fraud prevention.

We’re all familiar with the data analytics performed by larger businesses, of which proactive data analytic tests form only a part. This type of analysis is accomplished with the use of sophisticated software applications that comb through massive volumes of data to find weak spots in the control system. By analyzing data in this manner, large companies can prevent fraud from happening or detect an ongoing fraud scheme. The Report to the Nations reveals, among other things, that, of the anti-fraud controls analyzed, proactive data monitoring and analysis appears to be the most effective at limiting the duration and cost of fraud schemes. By performing proactive data analysis, companies detect fraud schemes sooner, limiting the total potential loss. Data analysis is not a new concept but, as we all know, with the increasing number of electronic transactions due to advances in technology, analyzing large volumes of data has become ever more complex and costly to implement and manage.

Companies of all sizes are accountable not only to shareholders but to lenders and government regulators. Although small businesses are not as highly regulated by the government, since they are typically not publicly financed, small business leaders share the same fiduciary duty as large businesses: to protect company assets. Since, according to the ACFE, the average company loses 5% of revenue to fraud, it stands to reason that preventing losses due to fraud could increase profitability by 5%. When viewed in this light, many small businesses would benefit from taking a second look at implementing stronger fraud prevention controls. The ACFE also reports that small businesses tend to be victims of fraud more frequently than large businesses because small businesses have limited financial and human resources. In terms of fraud prevention and detection, having fewer resources overall translates into having fewer resources dedicated to strong internal controls. The Report also states that small businesses (fewer than 100 employees) experience significantly larger losses, percentage-wise, than larger businesses (more than 100 employees). Since small businesses do not have the resources to dedicate to fraud prevention and detection, they are not able to detect fraud schemes as quickly, prolonging the scheme and increasing the losses to the company.

The ACFE goes on to tell us that certain controls are anti-fraud by nature and can prevent and detect fraud, including conducting an external audit of a set of financial statements, maintaining an internal audit department, having an independent audit committee, management review of all financial statements, providing a hotline to company employees, implementing a company code of conduct and anti-fraud policy, and practicing proactive data monitoring. While most of these controls are common for large companies, small businesses have difficulty implementing some of them, again because of their limited financial and human resources.

What jumped out at me from the ACFE’s Report was that only 15% of businesses with fewer than 100 employees currently perform proactive data analysis, while 41.9% of businesses with more than 100 employees do. This is a sign that many small businesses could be doing a basic level of data analysis, but aren’t. The largest costs associated with data analysis are software costs and the employee time needed to perform the analysis. With respect to employee resources, data analysis is a control that can be performed by a variety of employees, such as a financial analyst, an accountant, an external consultant, a controller, or even the CFO. The level of data analysis should always be structured to fit within the cost structure of the company. While larger companies may be able to assign a full-time analyst to these responsibilities, smaller companies may only be able to allocate a portion of someone’s time to the task. Given these realities, smaller businesses need to look for basic data analysis techniques that can be easily implemented.

The most basic data analysis techniques are taught in introductory accounting courses and aren’t particularly complex: vertical analysis, horizontal analysis, liquidity ratios, and profitability ratios. Large public companies are required to prepare these types of calculations for their filings with the Securities and Exchange Commission. For small businesses, these ratios and analyses can be calculated using two of the basic financial statements produced by any accounting software: the income statement and the balance sheet. By comparing the results of these calculations to prior periods or to industry peers, significant variances can point to areas where fraudulent transactions may have occurred. This type of data analysis can be performed in a tabular format and the results used to create visual aids. Charts and graphs are a great way for a small business analyst to visualize variances and trends for management.
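The vertical and horizontal calculations above are simple enough to sketch directly. The income-statement figures below are invented for illustration; the same arithmetic is what a spreadsheet formula would perform.

```python
# Illustrative income-statement figures for two periods (invented).
current = {"revenue": 500_000, "cogs": 320_000, "operating_expenses": 120_000}
prior   = {"revenue": 450_000, "cogs": 270_000, "operating_expenses": 110_000}

def vertical(stmt):
    """Vertical analysis: each line item as a share of revenue."""
    return {k: v / stmt["revenue"] for k, v in stmt.items()}

def horizontal(cur, pri):
    """Horizontal analysis: period-over-period change for each line item."""
    return {k: (cur[k] - pri[k]) / pri[k] for k in cur}

# COGS at 64% of revenue and growing 18.5% against 11.1% revenue growth
# is exactly the kind of variance that merits a closer look.
print(vertical(current)["cogs"], round(horizontal(current, prior)["cogs"], 3))
```

Run against each monthly close and charted over time, these few lines give a small business a rudimentary but real proactive monitoring control.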

I like to point out to small business clients that all of the above calculations can be performed with Microsoft Excel and Microsoft Access. These are off-the-shelf tools that any analyst can use to perform even highly complex analytical calculations. The availability of computing power in Excel and Access, and the relatively easy access to audit tools … known as Computer Assisted Audit Techniques (CAATs), have accelerated the analytical review process generally. Combined with access to the accounting server and its related applications and to the general ledger, CAATs are very powerful tools indeed.

The next step would be to consider more advanced data analysis programs. Microsoft Excel has many features for data analysis, and it is probably already installed on many computers within small enterprises. CFEs might suggest that their clients add the Audit Command Language (ACL) Add-In to client Excel installations for yet another layer of advanced analysis that will help make data analytics more effective and efficient. When a small business reaches a level of profitability where it can incorporate a more advanced data analysis program, it can add a more robust tool such as IDEA or ACL Analytics. Improving controls by adding a specialized software program will require financial resources to acquire it and to train employees. It will also require the dedication of time from employees serving in the role of internal examiners for fraud, like internal auditors and financial personnel. Professional organizations such as the ACFE and AICPA have dedicated their time and efforts to ensuring that companies of all sizes are aware of the threats of fraud in the workplace. One suggestion I might make to these professional organizations would be to work with accounting software developers and the current developers of proactive data analysis tools to incorporate data analysis reports into their standard products. If a small business could run an anti-fraud report as part of its monthly management review of financial statements without having to program the report, it would save a significant amount of company resources and improve the fraud prevention program overall.

To sum up, according to Joseph T. Wells, founder of the ACFE, “data analytics have never been more important or useful to a fraud examiner. There are more places for fraud to hide, and more opportunities for fraudsters to conceal it.” Clearly there are many resources available today for small businesses of almost any size to implement proactive data analysis tools. With the significant advances in technology, exciting new anti-fraud solutions appear on the horizon almost daily; the only thing standing between them and our clients is the decision to pick them up and use them.

The Auditor and the Fraud Examiner

Our Chapter averages about three new members a month, a majority of whom are drawn from the pool of relatively recent college graduates in accounting or finance, most of whom possess an interest in fraud examination and have a number of courses in auditing under their belts. From the comments I get, it seems that our new members are often struck by the apparent similarities between fraud examination and auditing imparted by their formal training, and yet hazy about the differences between the two in actual practice.

But, unlike the financial statement focus in financial auditing, fraud examination involves resolving fraud allegations from inception to disposition. Fraud examination methodology requires that all fraud allegations be handled in a uniform, legal fashion and be resolved on a timely basis. Assuming there is sufficient reason (predication) to conduct a fraud examination, specific examination steps usually are employed. At each step of the fraud examination process, the evidence obtained and the effectiveness of the fraud theory approach are continually assessed and re-assessed. Further, the fraud examination methodology gathers evidence from the general to the specific. As such, the suspect (subject) of the inquiry typically would be interviewed last, only after the fraud examiner has obtained enough general and specific information to address the allegations adequately.  However, just like a financial statement audit, a fraud investigation consists of a multitude of steps necessary to resolve allegations of fraud: interviewing witnesses, assembling evidence, writing reports, and dealing with prosecutors and the courts. Because of the legal ramifications of the fraud examiners’ actions, the rights of all individuals must be observed throughout. Additionally, fraud examinations must be conducted only with adequate cause or predication.

Predication is the totality of circumstances that would lead a reasonable, professionally trained, and prudent individual to believe a fraud has occurred, is occurring, or will occur. Predication is the basis upon which an examination is commenced. Unlike a financial audit, fraud examinations should never be conducted without proper predication. Each fraud examination begins with the prospect that the case will end in litigation. To solve a fraud without complete and perfect evidence, the examiner must make certain assumptions. This is not unlike the scientist who postulates a theory based on observation and then tests it. In the case of a complex fraud, fraud theory is almost indispensable. Fraud theory begins with a hypothesis, based on the known facts, of what might have occurred. Then that hypothesis or key assumption is tested to determine whether it’s provable.

The fraud theory approach involves the following steps, in the order of their occurrence:

  • Analyze available data.
  • Create a hypothesis.
  • Test the hypothesis.
  • Refine and amend the hypothesis.
  • Accept or reject the hypothesis based on the evidence.

With that said, fraud examinations incorporate many auditing techniques; however, the primary differences between an audit and a fraud investigation are the scope, methodology, and reporting. It’s also true that many of the fraud examiners in our Chapter (as in every ACFE Chapter) have an accounting background. Indeed, some of our members are employed primarily in the audit function of their organizations. Although fraud examination and auditing are related, they are not the same discipline. So how do they differ?  First, there’s the question of timing.  Financial audits are conducted on a regular recurring basis while fraud examinations are non-recurring; they’re conducted only with sufficient predication.

The scope of the examination in a financial audit is general (the scope of the audit is a general examination of financial data) while the fraud examination is conducted to resolve specific allegations.

An audit is generally conducted for the purpose of expressing an opinion on the financial statements or related information.  The fraud examination’s goal is to determine whether fraud has occurred, is occurring, or will occur, and to determine who is responsible.

The external audit process is non-adversarial in nature. Fraud examinations, because they involve efforts to affix blame, are adversarial in nature.

Audits are conducted primarily by examining financial data. Fraud examinations are conducted by (1) document examination; (2) review of outside data, such as public records; and (3) interviews.

Auditors are required to approach audits with professional skepticism. Fraud examiners approach the resolution of a fraud by attempting to establish sufficient proof to support or refute an allegation of fraud.

As a general rule during a financial fraud investigation, documents and data should be examined before interviews are conducted. Documents typically provide circumstantial evidence rather than direct evidence. Circumstantial evidence is all proof, other than direct admission, of wrongdoing by the suspect or a co-conspirator.  In collecting evidence, it’s important to remember that every fraud examination may result in litigation or prosecution. Although documents can either help or harm a case, they generally do not make the case; witnesses do. However, physical evidence can make or break the witnesses. Examiners should ensure that the evidence is credible, relevant, and material when used to support allegations of fraud.

From the moment evidence is received, its chain of custody must be maintained for it to be accepted by the court. This means that a record must be made when the item is received or when it leaves the care, custody, or control of the fraud examiner. This is best handled by a memorandum of interview with the custodian of the records when the evidence is received.

Fraud examiners are not expected to be forensic document experts; however, they should possess knowledge of documents superior to that of a lay person.

In fraud investigations, examiners discover facts and assemble evidence. Confirmation is typically accomplished by interviews. Interviewing witnesses and conspirators is an information-gathering tool critical in the detection of fraud. Interviews in financial statement fraud cases are different than those in most other cases because the suspect being interviewed might also be the boss.

In conclusion, auditing procedures are indeed often used in a financial statement fraud examination. Auditing procedures are the acts or steps performed by an auditor in conducting the review. According to the third standard of fieldwork of generally accepted auditing standards, “The auditor must obtain sufficient appropriate audit evidence by performing audit procedures to afford a reasonable basis for an opinion regarding the financial statements under audit.”  Common auditing procedures routinely used during fraud examination, as during financial statement examination, are confirmations, physical examination, observation, inquiry, scanning, inspection, vouching, tracing, re-performance, re-computation, analytical procedures, and data mining; these are all vital tools in the arsenal of fraud examiners and financial assurance professionals alike.

Ancient Analytics

Our Chapter, along with our partners the Virginia State Police and national ACFE, will be hosting a two-day seminar starting April 8th entitled ‘Hands on Analytics – Using Data Analytics to Identify Fraud’ at the VASP Training Academy here in Richmond, Virginia.  Our presenter will be one of the ACFE’s best, the renowned fraud examiner Bethmara Kessler, Chief Audit Officer of the Campbell Soup Company.  The science of analytics has come a long way in evolving into the effective tool we all know and make such good use of today.  I can remember being hired fresh out of graduate school at the University of Chicago by a Virginia bank (long since vanished into the mists of time) to do market and operations research in the early 1970’s.

The bank had just started accumulating operational and branch-related data for use with a fairly primitive IBM mainframe relational database; simple as that first application was, it was like a new day had dawned!  The bank’s holding company was expanding rapidly, buying up correspondent banks all over the state so, as you can imagine, we were hungry for all sorts of actionable market and financial information.  In those early days, in the almost total absence of any real information, when data about the holding company was first being accumulated and some initial reports run, it felt like lighting a candle in a dark room!  At first blush, the information seemed very useful, and numbers-savvy people pored over the reports identifying how some of the quantities (variables) in the reports varied in relation to others.  As we all know now, based on a wider and more informed experience, there’s sometimes a direct correlation between fields and sometimes there’s an implied correlation.  When our marketing and financial analysts began to see these correlations, relating the numbers to their own experiences in branch bank location and in lending risk management, for example, it was natural for them to write up some rules to manage vulnerable areas like branch operations and fraud risk.  With regard to fraud control, the data-based rules worked great for a while but, since they were only rules, fraudsters quickly proved surprisingly effective at figuring out exactly what sort of fraud the rules were designed to stop.  If the rule cutoff was $300 for a cash withdrawal, we found that fraudsters soon experimented with various amounts and determined that withdrawing $280 was a safe option.  The bank’s experts saw this and started designing rules to prevent a whole range of specific scenarios, but it quickly became a losing game for the bank since fraudsters only got craftier and craftier.
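A minimal sketch of the kind of static threshold rule described above (the cutoff and function name are hypothetical illustrations, not the bank’s actual system):

```python
CUTOFF = 300.0  # hypothetical rule: flag cash withdrawals of $300 or more

def flag_withdrawal(amount):
    """A static, hand-written rule: flag any withdrawal at or above the cutoff."""
    return amount >= CUTOFF

print(flag_withdrawal(300.0))  # True  -- the rule catches the obvious case
print(flag_withdrawal(280.0))  # False -- a $280 withdrawal sails through
```

Once the cutoff is probed or leaks, every transaction just under it passes untouched, which is exactly the cat-and-mouse dynamic we experienced.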

Linear regression models were first put forward to address this incessant back-and-forth of rule definition and fraudster response as database software became more adept at handling larger amounts of data, so that enough data could be analyzed to begin to identify persistent patterns.  The linear regression model assumes that the relationships between the predictors used in the model and the fraud target are linear, so the algorithm fits a linear model and detects fraud by identifying outliers from the basic fit of the regression line.  The regression models proved better than the rule-based approach since they could systematically look at all the bank’s credit card data, for instance, and so could draw more effective conclusions about what was actually going on than the rules ever could.
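In modern terms, the outlier-from-the-fit idea can be sketched roughly like this (the data, field meanings, and the 1.5-standard-deviation threshold are purely illustrative):

```python
import statistics

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def regression_outliers(xs, ys, k=1.5):
    """Flag indices whose residual from the fitted line exceeds
    k standard deviations -- the 'outliers from the basic fit'."""
    a, b = fit_line(xs, ys)
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    sd = statistics.pstdev(residuals)
    return [i for i, r in enumerate(residuals) if abs(r) > k * sd]

# Illustrative data: monthly transaction count vs. amount billed per account.
# The last account bills far more than its activity level predicts.
counts  = [10, 20, 30, 40, 50, 60]
amounts = [100, 200, 300, 400, 500, 3000]
print(regression_outliers(counts, amounts))  # flags index 5, the last account
```

Unlike a hand-written rule, the fit covers every transaction in the data, so the anomalous account surfaces without anyone having anticipated that specific scenario.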

As we at the bank found in the early days of attempted analytics-based fraud detection, when operating managers get together and devise fraud identification rules, they generally do only slightly better than random chance in identifying cases of actual fraud; this is because, no matter how good and well formulated the rules are, they can’t cover the entire universe of possible transactions.  We can only give anti-fraud coverage to the portion of transactions addressed by the rules.  When the bank built a linear model employing algorithms that compared past experience with present experience, the analysis gained the advantage of covering the entire set of transactions and classifying each as either fraudulent or good.  Fraud identification improved considerably above chance.

It’s emerged over the years that a big drawback of using linear regression models to identify fraud is that, although there are many cases in which the underlying risk is truly linear, there are more where it’s non-linear; where both the target (fraud) and the independent variables are non-continuous.  While there are many problems where a 90% solution is good enough, fraud is not one of them.  This is where non-linear techniques, such as the neural networks Bethmara Kessler will be discussing, come in.  Neural networks were originally developed to model the functioning of the brain; their statistical properties also make them an excellent fit for addressing many risk-related problems.
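A toy illustration of the linear model’s blind spot (real systems use trained neural networks; the two red flags and hand-picked weights here are assumptions made purely to show the geometry):

```python
# Toy pattern: fraud occurs when EXACTLY ONE of two red flags is set
# (an XOR-style interaction).  No weighted sum w1*a + w2*b + bias can
# separate all four cases, but a single non-linear term (a*b) can.
cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def linear_score(a, b, w1=1.0, w2=1.0, bias=-0.5):
    """A purely linear scorer: classify as fraud when the score is positive."""
    return w1 * a + w2 * b + bias

def nonlinear_score(a, b):
    """The a*b interaction term supplies the non-linearity the problem needs."""
    return a + b - 2 * (a * b) - 0.5

linear_hits = sum((linear_score(a, b) > 0) == bool(y) for (a, b), y in cases)
nonlinear_hits = sum((nonlinear_score(a, b) > 0) == bool(y) for (a, b), y in cases)
print(linear_hits, nonlinear_hits)  # 3 4 -- the linear scorer can never get all four
```

Neural networks earn their keep by learning interaction terms like this one automatically rather than requiring an analyst to guess them.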

As our April seminar will demonstrate, there are generally two lines of thought regarding the building of models to perform fraud analytics.  One is that techniques don’t matter that much; what matters is the data itself: how much of it, and in what variety, the fraud analyst can get; the more data, the better the analysis.  The other holds that, while more data is always good, techniques do matter.  There are many well-documented fraud investigation situations in which improving the sophistication of the techniques has yielded truly amazing results.

All of these issues and more will be covered in our Chapter’s April seminar.  I hope all of you can join us!

The Fraud Examiner & the Financial Analyst

On June 18, 2014, our ACFE Chapter and partners, the Virginia State Police and Health Management Systems (HMS), will be jointly conducting a free afternoon training session on the topic of ‘Advanced Trends in Data Analytics’ at the VSP Training Academy here in Richmond.  Several of us were struck in going over the speaker’s presentation by how useful something as basic as fundamental financial analysis can be to fraud examiners and other assurance professionals in setting up and implementing the sorts of advanced analytical testing that companies like HMS are pioneering to such good effect.

I can remember graduate finance classes at the University of Richmond so many years ago where we were told what an important tool financial analysis could be at every level of management, but especially for anyone tasked with performing operational and compliance reviews.  After reviewing the foundational steps needed to effectively set up and use today’s advanced analytical and data mining techniques, I think fraud examiners in general would be well advised to develop or update even their basic financial analysis skills so as to do a more efficient job as the ‘fraud expert’ on the data mining and analytics team, directing what to test and how much to test in order to identify and investigate the various types of financial frauds.

Fraud examiners should plan to actively reach out to financial analysts (FA’s), initially when performing fraud risk assessments, and then when setting up analytics and data mining supported testing for the suspected presence of fraud or to investigate actual fraud. The FA is an expert in the construction and analysis of ratios and comparisons derived from all the entity’s financial data but especially from its financial statements.  If the entity is a public company using some taxonomy of XBRL markup language to prepare and file automated quarterly and annual statements with the SEC, the company’s FA will be an invaluable resource to the fraud examiner in structuring the analytic, financial data tests essential to building her case.

Comparison analysis of financial data can be performed simply or, depending upon the data-moving and crunching tools available, with great sophistication.  What are the established financial expectations for the entity under review from one performance period to the next?  This is a question that can be answered in almost any degree of detail, and the answer, as any auditor knows, can be quite important when seen in the context of the investigation of any on-going fraud or financial irregularity.  The fraud examiner working with the FA can piggyback on the FA’s existing work to examine actual account balances relevant to an actual fraud or postulated fraud scenario, from current and prior periods, as well as against budgeted or forecasted plans that anticipate actual results for a past or current period.

And don’t look down on humble ratio analysis in the hands of an experienced FA as a tool to guide the setup of advanced analytic procedures.  Even though ratio analysis is by far the most commonly used type of basic financial analysis, it can still provide information on the effectiveness and efficiency of operations and highlight relationships between target accounts in light of industry and economic trends … as such it can be very effective for the fraud examiner who wants to demonstrate a client’s due diligence in the application of systems of internal controls associated with best practice.  Because ratio analysis is fundamentally about actual and expected relationships between items of financial data as manifested over time, significant changes in the ratios from period to period are usually caused by the type of apparent or recurring “errors” or “poor performance” that often mask on-going frauds.
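As a rough illustration of the kind of period-over-period ratio screen an FA might help set up (the ratios chosen, the figures, and the 20% tolerance are all hypothetical):

```python
def ratios(fin):
    """A few common ratios computed from a flat dict of statement figures."""
    return {
        "current_ratio": fin["current_assets"] / fin["current_liabilities"],
        "gross_margin": (fin["revenue"] - fin["cogs"]) / fin["revenue"],
        "receivables_turnover": fin["revenue"] / fin["receivables"],
    }

def flag_ratio_changes(prior, current, tolerance=0.20):
    """Flag any ratio that moved more than `tolerance` (20% here) between periods,
    returning (prior value, current value) for each flagged ratio."""
    r0, r1 = ratios(prior), ratios(current)
    return {name: (r0[name], r1[name]) for name in r0
            if abs(r1[name] - r0[name]) / abs(r0[name]) > tolerance}

prior   = {"current_assets": 500, "current_liabilities": 250,
           "revenue": 1000, "cogs": 600, "receivables": 125}
current = {"current_assets": 520, "current_liabilities": 260,
           "revenue": 1050, "cogs": 620, "receivables": 300}

# Receivables ballooned while revenue barely moved -- a classic red flag
# for fictitious revenue; only receivables turnover trips the tolerance.
print(flag_ratio_changes(prior, current))
```

The value an experienced FA adds is in choosing ratios and tolerances that fit the client and its industry, so that a tripped screen points at something worth investigating rather than ordinary seasonal noise.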

By working with an FA who has developed a powerful set of ratios specific to the client, the fraud examiner is forced to understand quickly not only the operations of the examination target organization but also those of its industry; we all know how useful it is to be able to demonstrate to a jury that financial shenanigans we’ve uncovered in connection with investigating a suspected fraud are just not typical of enterprises in the target’s industry … ratio analysis can be of great help in forcefully demonstrating that point to a judge and jury.

The Achilles heel of analytics, data mining, comparison and ratio analysis as tools for fraud examination is that these techniques primarily look at what was previously done or experienced at the company and, by straight-forward extension, at what was anticipated.  This is where the FA can come in to assist the fraud examiner using financial information to build her case to avoid the many pitfalls and minefields inherent in these types of data.  The FA might say, for example, that anticipated performance does not necessarily indicate what constitutes good performance or even what should necessarily be expected from effective and efficient business operations; for those reasons she will warn that our fraud examiner might want to expand her comparison and ratio analysis based analytic tests to include a general look at other players in the same industry … in other words, applying ‘best of class’ comparisons across comparable companies before drawing conclusions based only on the performance, or lack thereof, of a single company.  This is only a single example of how useful FA’s can be in sharpening the investigation of financial data, especially if they are experienced with the client.

In every case, whether employing simple ratios or comparison analysis or building complex analytical tests, fraud examiners must use their own professional judgment but that doesn’t mean, as in the case of the FA, we can’t all use some help from time to time from our professional friends!

An Update on Data Analytics for Fraud Detection

Several of our sister professional organizations have recently conducted research projects related to the degree of implementation of data analytics by enterprises of various sorts in general and as a tool for fraud detection and examination specifically.  As members of our Chapter and readers of this blog are well aware, data analytics is a broad term encompassing a wide variety of processes and techniques all focused on improving the value of information for the decision makers of any organization and having the potential of making a marked difference in enterprise accountability and performance.  The good news from the recent research is that the use of data analytic tools for fraud examination is growing at a rapid rate and that executives are becoming more and more aware of the power of this software to achieve better financial outcomes, improve customer service, and detect fraud.  The bad news is that a substantial subset of organizations report not employing analytics, basically for three reasons.

The main reason is a lack of budget resources, closely followed by a lack of appropriate staff; the third is technical uncertainty as to how to develop an automated system to perform the required analysis.  As this blog has long preached, the best way to address all three of these issues is to leverage the experience gained by other fraud examiners and organizations in implementing these types of detection and analytic systems.  Our members who have worked with organizations that have successfully implemented such systems all emphasize the paramount importance of strong advocacy leadership within the implementing organization that will champion the project and articulate its vision and goals.  This means that before an analytics project can take place, key management have to be educated as to exactly what the project can accomplish and decide to own and work toward the accomplishment of those outcomes; without a strong owner, or group of owners, a project of this complexity cannot succeed.

It’s been my experience that a key component in project success is that the developing organization has the ability to rely on its own internal operational data to draw effective conclusions; the ideal situation is an organization with a full range of existing data covering the entire spectrum of its own operations (like a health insurance enterprise or state government) which possesses plenty of its own data for modeling and analysis, supplemented by limited external data (local market demographics and perhaps Dun & Bradstreet data for peer comparisons, for example).  It follows from this that successful projects are seamlessly integrated into an organization’s operations on a concurrent basis.  Concurrent analytic tests are conducted as close as possible to the actual events reflected in the organizational data so that the information revealed by the analytic system and its tests prompts timely investigative action and is itself used to update the system’s models and algorithms.

The recent research also seems to reveal that the most successful implementations of data analytics are developed with a combination of internal and contracted staff; internal staff provide business expertise and program knowledge whereas external contractors provide knowledge of statistics, modeling and software design.  The interpretation of test results is usually performed by a combination of internal and external resources working together; this is especially true in the case of the interpretation of revealed fraud scenarios.

What makes all this come together is the sharing of information about successful tests and analytic techniques among all the involved professionals and their client organizations to the greatest extent feasible.  Successful organizations, through their trade associations and membership in organizations like the NACFE and AICPA, should share their experiences as widely as possible to help sister organizations overcome the deficiencies in budget, staffing, and experience that are preventing the wider dissemination and implementation of these vital, analytically based fraud detection and investigation techniques.

What is a Fraud, Waste and Abuse Detection System (FADS)?

Fraud and abuse detection system (FADS) technology processes (or data mines) large amounts of information stored in data warehouses to identify patterns, associations, clusters, outliers and other red-flag phenomena that indicate the presence of fraud and abuse.  A key characteristic of this technology is the use of “learning experiences,” whereby findings from previous analyses are integrated into the next round of tests to search for potentially fraudulent activities.

FADS technology detects these activities using three principal methodologies: 1) a vendor-centric methodology to identify vendors consistently submitting suspicious invoices (as in a system targeting vendor billing), 2) an invoice-centric methodology to identify patterns within invoices indicative of fraud and abuse (without linking the invoices to specific vendors), and 3) a predictive modeling algorithm to identify previously undetected fraudulent activities.

The predictive modeling algorithm scores newly received  invoices based on their deviance from vendor peer group norms. Additional functions performed by FADS systems include the ability to identify emerging criminal schemes using generalizations from previous analyses and tests, the generation of ad hoc reports to increase accounts receivable program oversight, and the ability to add software programming updates as needed to improve detection capabilities.
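One simple way to picture the peer-group scoring idea is a plain z-score against the vendor peer group (the figures are illustrative and production systems are far more elaborate):

```python
import statistics

def peer_deviance_score(invoice_amount, peer_amounts):
    """Score a newly received invoice by how many standard deviations
    it sits from the mean of its vendor peer group (a plain z-score)."""
    mean = statistics.mean(peer_amounts)
    sd = statistics.stdev(peer_amounts)
    return (invoice_amount - mean) / sd

# Peer vendors in the same category bill roughly $1,000 per invoice.
peers = [950, 1000, 1025, 980, 1045, 1000]

print(peer_deviance_score(1020, peers))  # ordinary invoice: well under 1 deviation
print(peer_deviance_score(4000, peers))  # far outside peer norms: a red flag
```

High-scoring invoices would be routed to examiners for review, and confirmed findings fed back into the peer-group norms, which is the “learning experience” loop described above.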

FAD systems have been successfully applied to virtually every business process that has a large volume of activity and is supported by large-scale storage of digitized historical data.

One type of FAD is a Medicaid or Medicare Fraud and Abuse Detection System (MFAD). Some examples of the types of medical provider insurance claims testing such a system might continuously perform are:

–create a statistical model of each claim type (the “normal” claim) and compare it with all processed medical provider claims to identify “abnormal” claims;

–run each claim through a comprehensive series of tests and statistical analyses configured for each claim type;

–identify improper payments;

–incorporate findings and experience on an on-going basis to continually improve results;

–identify medical providers and patients engaged in fraud, waste and abuse.
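The testing steps listed above might be sketched, in greatly simplified form, as follows (the claim fields, amounts, and the single statistical test are assumptions made for illustration; a real MFAD runs a battery of tests per claim type):

```python
import statistics

def build_normal_model(historical_claims):
    """Model the 'normal' claim for each claim type as the (mean, stdev)
    of billed amounts observed in historical data."""
    by_type = {}
    for claim in historical_claims:
        by_type.setdefault(claim["type"], []).append(claim["amount"])
    return {t: (statistics.mean(a), statistics.stdev(a)) for t, a in by_type.items()}

def screen_claims(claims, model, z_cutoff=3.0):
    """Run each claim against its claim-type model; flag 'abnormal' claims
    whose billed amount sits more than z_cutoff deviations from normal."""
    flagged = []
    for claim in claims:
        mean, sd = model[claim["type"]]
        if abs(claim["amount"] - mean) > z_cutoff * sd:
            flagged.append(claim)
    return flagged

# Historical office-visit claims cluster around $90.
history = [{"type": "office_visit", "amount": a} for a in [80, 85, 90, 95, 100, 88, 92]]
model = build_normal_model(history)

incoming = [{"type": "office_visit", "amount": 91, "provider": "A"},
            {"type": "office_visit", "amount": 400, "provider": "B"}]
for claim in screen_claims(incoming, model):
    print(claim["provider"], claim["amount"])  # provider B's claim stands out
```

Flagged claims would feed the improper-payment and provider-identification steps, and confirmed results would be folded back into the model, mirroring the system’s on-going learning loop.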