Setting Up the Client Data Mine to Screen Out Fraud, Waste and Abuse

Developing a data warehouse of client information is a critical first step in any data mapping and data mining effort, and it has proved a challenge for fraud examiners and auditors setting out to use these tools for the first time. Consider what we'd need if we were planning a vacation involving a long road trip. First, we'd need some kind of vehicle to drive, and we can't really determine what kind of vehicle we need until we know how many people will be going with us (the entities about which we'll be storing information). Then we'll need a roadmap (the data) to guide our trip. We also need to be prepared for unforeseen events (data anomalies) along the way that don't appear on the map. Finally, once we arrive at each milestone along the way, we take in information from that stage of the journey and re-evaluate our route; it's an ongoing process.

So we can think of the implementation of a data mapping and data mining effort for fraud examination as an ongoing process built on a foundation of operational or managerial auditing procedures. The process involves defining the data elements to be gathered, collecting the data, designing the tables and decision trees in which the data will be stored and processed by queries, and conducting ongoing surveillance of the data. The precondition is that the data flows continuously, as in health care, billing, or quarterly updated financial applications.
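As a concrete illustration, here is a minimal sketch in Python of what that foundation might look like: a single billing table in an in-memory SQLite warehouse, a few hand-made rows standing in for the continuous data flow, and one summary query. The table name, columns, and values are hypothetical and are not drawn from any particular system.

```python
import sqlite3

# Hypothetical warehouse: one table of billing records, the kind of
# continuously flowing data the paragraph above assumes.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE billing (
        record_id     INTEGER PRIMARY KEY,
        provider_id   TEXT NOT NULL,     -- entity under surveillance
        client_id     TEXT NOT NULL,
        service_code  TEXT NOT NULL,
        amount        REAL NOT NULL,     -- billed dollar amount
        service_date  TEXT NOT NULL      -- ISO date, e.g. '2024-03-01'
    )
""")

# A few illustrative rows; in practice these would arrive continuously.
rows = [
    (1, "P-101", "C-9001", "99213", 125.00, "2024-03-01"),
    (2, "P-101", "C-9002", "99213", 130.00, "2024-03-02"),
    (3, "P-102", "C-9003", "99215", 410.00, "2024-03-02"),
]
conn.executemany("INSERT INTO billing VALUES (?, ?, ?, ?, ?, ?)", rows)

# A simple query over the warehouse: claim count and total billed
# per provider per service code.
for row in conn.execute("""
        SELECT provider_id, service_code, COUNT(*) AS n, SUM(amount) AS total
        FROM billing
        GROUP BY provider_id, service_code
        """):
    print(row)
```

Even a toy table like this forces the key design questions the paragraph raises: which entities are being tracked, which data elements describe each transaction, and which queries will run over them.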

Once a warehouse has been appropriately mapped and data mining activated, the ongoing activity is surveillance. This is where auditor judgment proves critical. Finding patterns in the ongoing flow of data that indicate the presence of scenarios linked to fraud, waste and abuse is a skill that can be developed only over time and through experience with what "normal" data for the entity under surveillance should look like: how, in the company environment, should normal data look, and what makes data look "abnormal"?
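One simple way to operationalize that "normal versus abnormal" judgment is to compare each entity's activity against its own history. The sketch below uses hypothetical providers and daily claim counts and flags any day more than two standard deviations from the provider's historical mean; the threshold, the data, and the provider names are illustrative only, and real surveillance would rest on far richer baselines.

```python
from statistics import mean, stdev

# Illustrative daily claim counts per provider (hypothetical numbers);
# "normal" is learned from each entity's own history.
daily_claims = {
    "P-101": [12, 14, 11, 13, 12, 15, 13],
    "P-102": [18, 17, 19, 18, 20, 19, 18],
    "P-103": [10, 11, 9, 12, 10, 11, 58],   # one day far outside its history
}

def flag_abnormal(history, threshold=2.0):
    """Return the observations lying more than `threshold` standard
    deviations from the provider's own historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return []
    return [x for x in history if abs(x - mu) / sigma > threshold]

for provider, history in daily_claims.items():
    outliers = flag_abnormal(history)
    if outliers:
        print(f"{provider}: review days with {outliers} claims")
```

Run on these made-up numbers, only P-103's 58-claim day is flagged for review; the point is not the statistic itself but the discipline of defining "normal" per entity before calling anything "abnormal."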

This analysis is not a one-time event but an ongoing, constantly evolving tool for efficiently obtaining the intelligence to identify fraud and then altering controls to prevent such transactions from being processed in the future. We're not looking to recoup the losses from identified past fraud scenarios (the "pay and chase" approach) so much as we're looking to adjust our systems and controls through edits to prevent the data associated with such scenarios from ever being processed again.
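To make the prevention idea concrete, here is a hedged sketch of a processing "edit": each incoming claim is screened against a rule derived from a previously identified scenario (here, exact duplicate billing) and blocked before it can be paid. The rule, field names, and sample claims are hypothetical.

```python
# A minimal preventive "edit": instead of chasing payments after the fact,
# each incoming record is screened against rules derived from previously
# identified fraud scenarios before it is processed.
seen_claims = set()

def passes_edits(claim):
    """Return True if the claim may be processed, False if it trips an edit."""
    key = (claim["provider_id"], claim["client_id"],
           claim["service_code"], claim["service_date"])
    if key in seen_claims:
        return False          # duplicate-billing pattern: block before payment
    seen_claims.add(key)
    return True

incoming = [
    {"provider_id": "P-101", "client_id": "C-9001",
     "service_code": "99213", "service_date": "2024-03-01", "amount": 125.00},
    {"provider_id": "P-101", "client_id": "C-9001",
     "service_code": "99213", "service_date": "2024-03-01", "amount": 125.00},
]

for claim in incoming:
    status = "processed" if passes_edits(claim) else "rejected by edit"
    print(status, claim["provider_id"], claim["service_code"])
```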

Simply put, we need to identify the anomalous output and study the hidden patterns associated with each anomaly; document the sequence of events leading to the offense; identify potential perpetrators; document the loss; and finally, adjust system edits so that the processing pattern associated with the fraud does not recur.
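A small case-file structure like the hypothetical one below can tie those steps together, recording the anomaly, the hidden pattern behind it, the sequence of events, the potential perpetrators, the documented loss, and the system edit added so the pattern cannot recur. The fields and values are illustrative, not a prescribed ACFE template.

```python
from dataclasses import dataclass, field

# Hypothetical case file mirroring the steps listed above.
@dataclass
class FraudCaseFile:
    anomaly: str
    pattern: str
    sequence_of_events: list[str] = field(default_factory=list)
    potential_perpetrators: list[str] = field(default_factory=list)
    documented_loss: float = 0.0
    system_edit_added: str = ""

case = FraudCaseFile(
    anomaly="Provider P-103 billed 58 claims in one day vs. a history of ~10",
    pattern="Burst of identical service codes billed for inactive clients",
    sequence_of_events=["2024-03-07: batch of 58 claims submitted after hours"],
    potential_perpetrators=["Billing clerk with after-hours system access"],
    documented_loss=7250.00,
    system_edit_added="Reject daily claim volume > 3x provider's trailing mean",
)
print(case)
```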

