
Fraud Detection-Fraud Prevention

One of our CFE chapter members left us a contact comment asking whether concurrent fraud auditing might not be a good fraud prevention tool for use by a retailer client of hers that receives hundreds of credit card payments for services each day. The foundational concepts behind concurrent fraud auditing owe much to the idea of continuous assurance auditing (CAA) that internal auditors have applied for years; I personally applied the approach as an essential tool throughout my career as a chief audit executive (CAE). Basically, the heart of a system of concurrent fraud auditing (CFA), like that of CAA, is the process of embedding control-based software monitors in real-time, automated financial or payment systems to alert reviewers of transactional anomalies as close to the time of their occurrence as possible. Today’s networked, cloud-based processing environments have made the implementation and support of such real-time review approaches operationally feasible in ways that the older, batch-processing environments could not.

Our member’s client uses several online, cloud-based services to process its customer payments; these services provide the client with a large database of payment history, tantamount to a data warehouse, available on SQL Server for use by in-house IT applications like Oracle and SAP. In such a data-rich environment, CFEs and other assurance professionals can readily test for the presence of transactional patterns characteristic of defined, common payment fraud scenarios such as those associated with identity theft and money laundering. The objective of the CFA program is not necessarily to recover the dollars associated with on-line frauds but to continuously (in as close to real time as possible) adjust the edits in the payment collection and processing system so that certain fraudulent transactions (those associated with known fraud scenarios) stand a greater chance of not being processed in the first place. Over time, the CFA process should get better and better at editing out or flagging the anomalies associated with the defined scenarios.

The central concept of any CFA system is that of an independent application monitoring for suspected fraud-related activity through, for example (as with our Chapter member), periodic or even real-time reviews of the cloud-based files of an automated payment system. Depending upon the criticality of its observations, activity summaries of unusual items can be generated with any specified frequency and/or highlighted to an exception report folder and communicated to auditors via “red flag” e-mail notices. At the heart of the system lies a set of measurable, operational metrics or tags associated with defined fraud scenarios. The fraud prevention team would establish the metrics it wishes to monitor as well as supporting standards for those metrics. As a simple example, the U.S. has established anti-money-laundering banking rules specifying that all transactions over $10,000 must be reported to regulators. Experience has shown the $10,000 threshold to be a fraud-related metric that is generic to the identification of many money-laundering scenarios. Anti-fraud metric tags could be built into the cloud-based financial system of our Chapter member’s client to monitor in real time all accounts payable and other cash transfer transactions, with a rule that any transaction over $10,000 would be flagged and reviewed by a member of the audit staff. This same process could have multiple levels of metrics and standards, with exceptions fed up to a first-level assurance process that could monitor the outliers and, in some instances, send back a correcting feedback transaction to the financial system itself (an adjusting or corrective edit or transaction flag). The warning notes that our e-mail systems send us when our mailboxes are full are another example of this type of real-time flagging and editing.
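
As a minimal illustration of such a metric tag, the Python sketch below flags transactions at or above the $10,000 threshold for first-level review. The pandas DataFrame and its column names ('txn_id', 'account', 'amount', 'timestamp') are hypothetical stand-ins for whatever the client’s payment system actually exposes, and a production CFA monitor would of course run continuously against the live payment stream rather than a static extract.

```python
# Minimal sketch of a threshold-based metric tag. Column names are assumptions,
# not a real payment-system schema.
import pandas as pd

AML_THRESHOLD = 10_000  # the generic U.S. reporting threshold discussed above

def flag_large_transactions(txns: pd.DataFrame, threshold: float = AML_THRESHOLD) -> pd.DataFrame:
    """Return transactions at or above the threshold for first-level review."""
    flagged = txns[txns["amount"] >= threshold].copy()
    flagged["red_flag"] = "amount_over_threshold"
    return flagged

if __name__ == "__main__":
    sample = pd.DataFrame({
        "txn_id": [1, 2, 3],
        "account": ["A-100", "A-200", "A-100"],
        "amount": [9_500.00, 12_250.00, 10_000.00],
        "timestamp": pd.to_datetime(["2018-03-01", "2018-03-01", "2018-03-02"]),
    })
    print(flag_large_transactions(sample))
```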

Yet other types of discrepancies would flow up to a second-level fraud monitoring or audit process. This level would produce pre-formatted reports to management or generate emergency exception notices. Beyond just reports, this level could produce more significant anti-fraud or assurance actions, such as the referral of a transaction or group of transactions to an enterprise fraud management committee for consideration as documentation of the need for an actual future financial system fraud prevention edit. To continue the e-mail example, this is where the system would initiate a transaction to prevent future mailbox access by an offending e-mail user.

There is yet a third level for our system: using the CFA to monitor the concurrent fraud auditing process itself. Control procedures can be built to report monitoring results to external auditors, governmental regulators, the audit committee and corporate counsel as documented evidence of management’s performance of due diligence in its fight against fraud.

So I would encourage our member CFE to discuss the CFA approach with the management of her client. It isn’t the right tool for everyone, since such systems can vary greatly in cost depending upon the existing processing environment and level of IT sophistication of the implementing organization. CFAs are particularly useful for monitoring purchase and payment cycle applications, with an emphasis on controls over customer- and vendor-related fraud. CFA is an especially useful tool for any financial application where large amounts of cash are either coming in or going out the door (think banking applications) and for controlling all aspects of insurance claims processing.

Analytic Reinforcements

Rumbi’s post of last week on ransomware got me thinking on a long drive back from Washington about what an excellent tool the AICPA’s new Cybersecurity Risk Management Reporting Framework is, not only for CPAs but for CFEs and for all our client organizations. As the seemingly relentless wave of cyberattacks continues with no sign of letting up, organizations are under intense pressure from key stakeholders and regulators to implement and enhance their cyber security and fraud prevention programs to protect customers, employees and all the types of valuable information in their possession.

According to research from the ACFE, the average total cost per company, per event of a data breach is $3.62 million. Initial damage estimates of a single breach, while often staggering, may not take into account less obvious and often undetectable threats such as the theft of intellectual property, espionage, destruction of data, attacks on core operations or attempts to disable critical infrastructure. The knock-on effects can persist for years and have devastating financial, operational and brand ramifications.

Given the present broad regulatory pressures to tighten cyber security controls and the visibility surrounding cyberrisk, a number of proposed regulations focused on improving cyber security risk management programs have been introduced in the United States over the past few years by our various governing bodies. One of the more prominent is a regulation by the New York Department of Financial Services (NYDFS) that prescribes certain minimum cyber security standards for entities regulated by the NYDFS. Based on an entity’s risk assessment, the NYDFS law has specific requirements around data encryption, data protection and retention, third-party information security, application security, incident response and breach notification, board reporting, and required annual re-certifications.

However, organizations continue to report to the ACFE that they struggle to systematically report to stakeholders on the overall effectiveness of their cyber security risk management programs. In response, the AICPA in April of last year released a new cyber security risk management reporting framework intended to help organizations expand cyberrisk reporting to a broad range of internal and external users, including management and the board of directors. The AICPA’s new reporting framework is designed to address the need for greater stakeholder transparency by providing in-depth, easily consumable information about the state of an organization’s cyberrisk management program. The cyber security risk management examination uses an independent, objective reporting approach and employs broader and more flexible criteria. For example, it allows for the selection and utilization of any control framework considered suitable and available in establishing the entity’s basic cyber security objectives and in developing and maintaining controls within the entity’s cyber security risk management program, regardless of whether the standard is the US National Institute of Standards and Technology (NIST)’s Cybersecurity Framework, the International Organization for Standardization (ISO)’s ISO 27001/2 and related frameworks, or even an internally developed framework based on a combination of sources. The examination is voluntary and applies to all types of entities, but it should be considered by CFEs as a leading practice that provides management, boards and other key stakeholders with clear insight into the current state of an organization’s cyber security program while identifying gaps or pitfalls that leave organizations vulnerable to cyber fraud and other intrusions.

What stakeholders might benefit from a client organization’s cyber security risk management examination report? Clearly, we CFEs benefit as we go about our routine fraud risk assessments; but such a report, most importantly, can be vital in helping an organization’s board of directors establish appropriate oversight of a company’s cyber security risk program and credibly communicate its effectiveness to stakeholders, including investors, analysts, customers, business partners and regulators. By leveraging this information, boards can challenge management’s assertions around the effectiveness of their cyberrisk management and fraud prevention programs and drive more effective decision making. Active involvement and oversight from the board can help ensure that an organization is paying adequate attention to cyberrisk management and displaying due diligence. The board can help shape expectations for reporting on cyberthreats while also advocating for greater transparency and assurance around the effectiveness of the program.

The cyber security risk management report in its initial and follow-up iterations can be invaluable in providing overview guidance to CFEs and forensic accountants in targeting both fraud prevention and fraud detection/investigative analytics. We know from our ACFE training that data analytics need to be fully integrated into the investigative process. Ensuring that data analytics are embedded in the detection/investigative process requires support from all levels, starting with the managing CFE. Winning that support will be easier and more coherent if management is already backing cyber security risk management reporting. Management will also have an easier time reinforcing the use of analytics generally, although the data analytics function supporting fraud examination will still have to market its services, team leaders will still be challenged by management, and team members will still have to be trained to effectively employ the newer analytical tools.

The presence of a robust cyber security risk management reporting process should also prove of assistance to the lead CFE in establishing goals for the implementation and use of data analytics in every investigation, and these goals should be communicated to the entire investigative team. It should be made clear to every level of the client organization that data analytics will support the investigative planning process for every detected fraud. The identification of business processes, IT systems, data sources, and potential analytic routines should be discussed and considered not only during planning, but also throughout every stage of the investigative engagement. Key to obtaining the buy-in of all is to include investigative team members in identifying areas or tests that the analytics group will target in support of the field work. Initially, it will be important to highlight success stories and educate managers and team leaders about what is possible. Improving on the traditional investigative approach of document review, interviewing, and transaction review, investigators can benefit from the implementation of data analytics to allow for more precise identification of the control deficiencies, instances of noncompliance with policies and procedures, and mis-assessments of areas of high risk that contributed to the development of the fraud in the first place. These same analytics can then be used to ensure that appropriate post-fraud management follow-up has occurred by elevating the identified deficiencies to the cyber security risk management reporting process and by implementing enhanced fraud prevention procedures in areas of higher fraud risk. This process would be especially useful in responding to and following up on data breaches.

Once patterns are gathered and centralized, analytics can be employed to measure the frequency of occurrence, the bit sizes, the quantity of files executed and the average time of use. The math involved allows an examiner to grasp the big picture. Individuals, including examiners, are normally overwhelmed by the sheer volume of information, but automation of pattern-recognition techniques makes big data a tractable investigative resource. The larger the sample size, the easier it is to determine patterns of normal and abnormal behavior. Algorithms combing through network haystacks can, for example, notify the CFE, acting as information archaeologist, of the probes of an insider threat.
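
For readers who want to see what such frequency baselining might look like in practice, the Python sketch below (column names 'user', 'event', and 'bytes' are assumptions, not any product’s schema) profiles event counts and average payload sizes per user and scores each against the population norm so that outliers surface for review.

```python
# Illustrative sketch only: profiling event frequency per user from an
# activity log so unusually active accounts stand out for review.
import pandas as pd

def activity_profile(log: pd.DataFrame) -> pd.DataFrame:
    """Per-user event counts and average payload size, with a z-score on counts."""
    profile = log.groupby("user").agg(
        events=("event", "count"),
        avg_bytes=("bytes", "mean"),
    )
    profile["events_z"] = (
        (profile["events"] - profile["events"].mean()) / profile["events"].std(ddof=0)
    )
    return profile.sort_values("events_z", ascending=False)
```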

Without analytics, enterprise-level fraud examination and risk assessment is a diminished discipline, limited in scope and effectiveness. Without an educated investigative workforce, armed with a programming language for automation and an accompanying data-mining philosophy and skill set, the control needs of management leaders at the enterprise level will go unmet; leaders will not have the data needed for fraud prevention on a large scale, nor a workforce capable of getting them that data in the emergency following a breach or penetration.

The beauty of analytics, from a security and fraud prevention perspective, is that it allows the investigative efforts of the CFE to align with the critical functions of corporate business. It can be used to discover recurring risks, incidents and common trends that might otherwise have been missed. Establishing numerical baselines on quantified data can supplement a normal investigator’s tasks and enhance the auditor’s ability to see beneath the surface of what is presented in an examination. Good communication of analyzed data gives decision makers a better view of their systems through a holistic approach, which can aid in the creation of enterprise-level goals. Analytics and data mining always add dimension and depth to the CFE’s examination process at the enterprise level and dovetail with and are supported beautifully by the AICPA’s cyber security risk management reporting initiative.

CFEs should encourage the staffs of client analytics support functions to possess …

–understanding of the employing enterprise’s data concepts (data elements, record types, database types, and data file formats).
–understanding of logical and physical database structures.
–the ability to communicate effectively with IT and related functions to achieve efficient data acquisition and analysis.
–the ability to perform ad hoc data analysis as required to meet specific fraud examination and fraud prevention objectives.
–the ability to design, build, and maintain well-documented, ongoing automated data analysis routines.
–the ability to provide consultative assistance to others who are involved in the application of analytics.

Forensic Data Analysis

As a long-term advocate of big data based solutions to investigative challenges, I have been interested to see the recent application of such approaches to the ever-growing problem of data breaches. More data is stored electronically than ever before: financial data, marketing data, customer data, vendor listings, sales transactions, email correspondence, and more, and evidence of fraud can be located anywhere within those mountains of data. Unfortunately, fraudulent data often looks like legitimate data when viewed in the raw. Taking a sample and testing it might not uncover fraudulent activity. Fortunately, today’s fraud examiners have the ability to sort through piles of information by using special software and data analysis techniques. These methods can identify future trends within a certain industry, and they can be configured to identify breaks in audit control programs and anomalies in accounting records.

In general, fraud examiners perform two primary functions to explore and analyze large amounts of data: data mining and data analysis. Data mining is the science of searching large volumes of data for patterns. Data analysis refers to any statistical process used to analyze data and draw conclusions from the findings. These terms are often used interchangeably. If properly used, data analysis processes and techniques are powerful resources. They can systematically identify red flags and perform predictive modeling, detecting a fraudulent situation long before many traditional fraud investigation techniques would be able to do so.

Big data are high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery, and process optimization. Simply put, big data is information of extreme size, diversity, and complexity. In addition to thinking of big data as a single set of data, fraud investigators and forensic accountants are thinking about the way data grow when different data sets that might not normally be connected are joined together. Big data represents the continuous expansion of data sets, the size, variety, and speed of generation of which make it difficult for investigators and client managements to manage and analyze.

Big data can be instrumental to the evidence gathering phase of an investigation. Distilled down to its core, how do fraud examiners gather data in an investigation? They look at documents and financial or operational data, and they interview people. The challenge is that people often gravitate to the areas with which they are most comfortable. Attorneys will look at documents and email messages and then interview individuals. Forensic accounting professionals will look at the accounting and financial data (structured data). Some people are strong interviewers. The key is to consider all three data sources in unison.

Big data helps to make it all work together to bring the complete picture into focus. With the ever-increasing size of data sets, data analytics has never been more important or useful. Big data requires the use of creative and well-planned analytics due to its size and complexity. One of the main advantages of using data analytics in a big data environment is that it allows the investigator to analyze an entire population of data rather than having to choose a sample and risk drawing erroneous conclusions in the event of a sampling error.
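
To make the full-population point concrete, here is a small Python sketch of one such whole-population test: flagging potential duplicate payments by grouping on vendor, amount, and date rather than examining a sample. The column names are illustrative assumptions, not a client’s actual schema.

```python
# A small sketch of a whole-population test (no sampling): flag potential
# duplicate payments by grouping on vendor, amount, and date.
import pandas as pd

def possible_duplicate_payments(payments: pd.DataFrame) -> pd.DataFrame:
    """Return every payment sharing vendor, amount, and date with another payment."""
    keys = ["vendor_id", "amount", "payment_date"]
    dupes = payments[payments.duplicated(subset=keys, keep=False)]
    return dupes.sort_values(keys)
```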

To conduct an effective data analysis, a fraud examiner must take a comprehensive approach. Any direction can (and should) be taken when applying analytical tests to available data. The more creative fraudsters get in hiding their breach-related schemes, the more creative the fraud examiner must become in analyzing data to detect these schemes. For this reason, it is essential that fraud investigators consider both structured and unstructured data when planning their engagements.

Data are either structured or unstructured. Structured data is the type of data found in a database, consisting of recognizable and predictable structures. Examples of structured data include sales records, payment or expense details, and financial reports. Unstructured data, by contrast, is data not found in a traditional spreadsheet or database. Examples of unstructured data include vendor invoices, email and user documents, human resources files, social media activity, corporate document repositories, and news feeds. When using data analysis to conduct a fraud examination, the fraud examiner might use structured data, unstructured data, or a combination of the two. For example, conducting an analysis on email correspondence (unstructured data) among employees might turn up suspicious activity in the purchasing department. Upon closer inspection of the inventory records (structured data), the fraud examiner might uncover that an employee has been stealing inventory and covering her tracks in the record.
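
A hypothetical sketch of that combined structured/unstructured analysis appears below: suspect-term hits in purchasing e-mails (unstructured) are cross-referenced against employees entering unusually many negative inventory adjustments (structured). All column names and suspect terms are invented for illustration only.

```python
# Hypothetical sketch combining an unstructured source (e-mail text) with a
# structured one (inventory adjustments).
import pandas as pd

SUSPECT_TERMS = ["write off", "shred", "adjust the count"]

def email_keyword_hits(emails: pd.DataFrame) -> pd.DataFrame:
    """emails: assumed columns 'sender' and 'body'."""
    mask = emails["body"].str.lower().str.contains("|".join(SUSPECT_TERMS))
    return emails.loc[mask].groupby("sender").size().rename("keyword_hits").reset_index()

def negative_adjustments(inventory: pd.DataFrame) -> pd.DataFrame:
    """inventory: assumed columns 'entered_by' and 'qty_change'."""
    neg = inventory[inventory["qty_change"] < 0]
    return neg.groupby("entered_by").size().rename("negative_adjustments").reset_index()

def cross_reference(emails: pd.DataFrame, inventory: pd.DataFrame) -> pd.DataFrame:
    """Employees who both use suspect language and post negative adjustments."""
    return email_keyword_hits(emails).merge(
        negative_adjustments(inventory), left_on="sender", right_on="entered_by"
    )
```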

Recent reports of breach responses detailed in social media and the trade press indicate that investigators deploying advanced forensic data analysis tools across larger data sets gained better insights into the penetration, which led to more focused investigations, better root cause analysis and more effective fraud risk management. Advanced technologies that incorporate data visualization, statistical analysis and text-mining concepts, as compared to spreadsheets or relational database tools, can now be applied to massive data sets from disparate sources, enhancing breach response at all organizational levels.

These technologies enable our client companies to ask new compliance questions of their data that they might not have been able to ask previously. Fraud examiners can establish important trends in business conduct or identify suspect transactions among millions of records rather than being forced to rely on smaller samplings that could miss important transactions.

Data breaches bring enhanced regulatory attention. It’s clear that data breaches have raised the bar on regulators’ expectations of the components of an effective compliance and anti-fraud program. Adopting big data/forensic data analysis procedures into the monitoring and testing of compliance can create a cycle of improved adherence to company policies and improved fraud prevention and detection, while providing additional comfort to key stakeholders.

CFEs and forensic accountants are increasingly being called upon to be members of teams implementing or expanding big data/forensic data analysis programs so as to more effectively manage data breaches and a host of other instances of internal and external fraud, waste and abuse. To build a successful big data/forensic data analysis program, your client companies would be well advised to:

— begin by focusing on the low-hanging fruit: the priority of the initial project(s) matters. The first and immediately subsequent projects, the low-hanging investigative fruit, normally incur the largest cost associated with setting up the analytics infrastructure, so it’s important that the first few investigative projects yield tangible results/recoveries.

— go beyond the usual rule-based, descriptive analytics. One of the key goals of forensic data analysis is to increase the detection rate of internal control noncompliance while reducing the risk of false positives. From a technology perspective, clients’ internal audit and other investigative groups need to move beyond rule-based spreadsheets and database applications and embrace both structured and unstructured data sources that include the use of data visualization, text-mining and statistical analysis tools.

— see that successes are communicated. Share information on early successes across divisional and departmental lines to gain broad business process support. Once validated, success stories will generate internal demand for the outputs of the forensic data analysis program. Try to construct a multi-disciplinary team, including information technology, business users (i.e., end-users of the analytics) and functional specialists (i.e., those involved in the design of the analytics and day-to-day operations of the forensic data analysis program). Communicate across multiple departments to keep key stakeholders assigned to the fraud prevention program updated on forensic data analysis progress under a defined governance program. Don’t just seek to report instances of noncompliance; seek to use the data to improve fraud prevention and response. Obtain investment incrementally based on success, and not by attempting to involve the entire client enterprise all at once.

— leadership support will get the big data/forensic data analysis program funded, but regular interpretation of the results by experienced or trained professionals is what will make the program successful. Keep the analytics simple and intuitive; don’t try to cram too much information into any one report. Invest in new, updated versions of tools to make analytics sustainable. Develop and acquire staff professionals with the required skill sets to sustain and leverage the forensic data analysis effort over the long term.
Finally, enterprise-wide deployment of forensic data analysis takes time; clients shouldn’t be led to expect overnight adoption; an analytics integration is a journey, not a destination. Quick-hit projects might take four to six weeks, but the program and integration can take one to two years or more.

Our client companies need to look at a broader set of risks, incorporate more data sources, move away from lightweight, end-user, desktop tools and head toward real-time or near-real time analysis of increased data volumes. Organizations that embrace these potential areas for improvement can deliver more effective and efficient compliance programs that are highly focused on identifying and containing damage associated with hacker and other exploitation of key high fraud-risk business processes.

Fraud Prevention Oriented Data Mining

One of the most useful components of our Chapter’s recently completed two-day seminar on Cyber Fraud & Data Breaches was our speaker, Cary Moore’s, observations on the fraud fighting potential of management’s creative use of data mining. For CFEs and forensic accountants, the benefits of data mining go much deeper than as just a tool to help our clients combat traditional fraud, waste and abuse. In its simplest form, data mining provides automated, continuous feedback to ensure that systems and anti-fraud related internal controls operate as intended and that transactions are processed in accordance with policies, laws and regulations. It can also provide our client managements with timely information that can permit a shift from traditional retrospective/detective activities to the proactive/preventive activities so important to today’s concept of what effective fraud prevention should be. Data mining can put the organization out front of potential fraud vulnerability problems, giving it an opportunity to act to avoid or mitigate the impact of negative events or financial irregularities.

Data mining tests can produce “red flags” that help identify the root cause of problems and allow actionable enhancements to systems, processes and internal controls that address systemic weaknesses. Applied appropriately, data mining tools enable organizations to realize important benefits, such as cost optimization, adoption of less costly business models, improved program, contract and payment management, and process hardening for fraud prevention.

In its most complex, modern form, data mining can be used to:

–Inform decision-making
–Provide predictive intelligence and trend analysis
–Support mission performance
–Improve governance capabilities, especially dynamic risk assessment
–Enhance oversight and transparency by targeting areas of highest value or fraud risk for increased scrutiny
–Reduce costs especially for areas that represent lower risk of irregularities
–Improve operating performance

Cary emphasized that leading, successful organizational implementers have tended to take a measured approach initially when embarking on a fraud prevention-oriented data mining initiative, starting small and focusing on particular “pain points” or areas of opportunity to tackle first, such as whether only eligible recipients are receiving program funds or targeting business processes that have previously experienced actual frauds. Through this approach, organizations can deliver quick wins to demonstrate an early return on investment and then build upon that success as they move to more sophisticated data mining applications.

So, according to ACFE guidance, what are the ingredients of a successful data mining program oriented toward fraud prevention? There are several steps, which should be helpful to any organization in setting up such an effort with fraud, waste, abuse identification/prevention in mind:

–Avoid problems by adopting commonly used data mining approaches and related tools.

This is essentially a cultural transformation for any organization that has either not understood the value these tools can bring or has viewed their implementation as someone else’s responsibility. Given the cyber fraud and breach related challenges faced by all types of organizations today, it should be easier for fraud examiners and forensic accountants to convince management of the need to use these tools to prevent problems and to improve the ability to focus on cost-effective means of better controlling fraud-related vulnerabilities.

–Understand the potential that data mining provides to the organization to support day to day management of fraud risk and strategic fraud prevention.

Understanding, both the value of data mining and how to use the results, is at the heart of effectively leveraging these tools. The CEO and corporate counsel can play an important educational and support role for a program that must ultimately be owned by line managers who have responsibility for their own programs and operations.

–Adopt a version of an enterprise risk management program (ERM) that includes a consideration of fraud risk.

An organization must thoroughly understand its risks and establish a risk appetite across the enterprise. In this way, it can focus on those areas of highest value to the organization. An organization should take stock of its risks and ask itself fundamental questions, such as:

-What do we lose sleep over?
-What do we not want to hear about us on the evening news or read about in the print media or on a blog?
-What do we want to make sure happens and happens well?

Data mining can be an integral part of an overall program for enterprise risk management. Both are premised on establishing a risk appetite and incorporating a governance and reporting framework. This framework in turn helps ensure that day-to-day decisions are made in line with the risk appetite, and are supported by data needed to monitor, manage and alleviate risk to an acceptable level. The monitoring capabilities of data mining are fundamental to managing risk and focusing on issues of importance to the organization. The application of ERM concepts can provide a framework within which to anchor a fraud prevention program supported by effective data mining.

–Determine how your client is going to use the data mined information in managing the enterprise and safeguarding enterprise assets from fraud, waste and abuse.

Once an organization is on top of the data, using it effectively becomes paramount and should be considered as the information requirements are being developed. As Cary pointed out, getting the right data has been cited as being the top challenge by 20 percent of ACFE surveyed respondents, whereas 40 percent said the top challenge was the “lack of understanding of how to use analytics”. Developing a shared understanding so that everyone is on the same page is critical to success.

–Keep building and enhancing the application of data mining tools.

As indicated above, a tried and true approach is to begin with the lower-hanging fruit, something that will get your client started and will provide an opportunity to learn on a smaller scale. The experience gained will help enable the expansion and the enhancement of data mining tools. While this may be done gradually, it should be a priority and not viewed as the “management reform initiative of the day.” There should be a clear game plan for building data mining capabilities into the fiber of management’s fraud and breach prevention effort.

–Use data mining as a tool for accountability and compliance with the fraud prevention program.

It is important to hold managers accountable not only for helping institute robust data mining programs, but for the results of these programs. Has the client developed performance measures that clearly demonstrate the results of using these tools? Do they reward those managers who are in the forefront in implementing these tools? Do they make it clear to those who don’t that resistance or hesitation is not acceptable?

–View this as a continuous process and not a “one and done” exercise.

Risks change over time. Fraudsters are always adjusting their targets and moving to exploit new and emerging weaknesses. They follow the money. Technology will continue to evolve, and it will introduce both new risks and new opportunities and tools for management. This client management effort to protect against dangers and rectify errors is one that never ends, but it is also one that, if effectively and efficiently implemented, can pay benefits in preventing or managing cyber-attacks and breaches that far outweigh the costs.

In conclusion, the stark realities of today’s cyber related challenges at all levels of business, private and public, and the need to address ever rising service delivery expectations have raised the stakes for managing the cost of doing business and conducting the on-going war against fraud, waste and abuse. Today’s client-managers should want to be on top of problems before they become significant, and the strategic use of data mining tools can help them manage and protect their enterprises whilst saving money…a win/win opportunity for the client and for the CFE.

Analytics Confronts the Normal

The Information Systems Audit and Control Association (ISACA) tells us that we produce and store more data in a day now than mankind did altogether in the last 2,000 years. The data produced daily is estimated to be one exabyte, which is the computer storage equivalent of one quintillion bytes, or one million terabytes. Not too long ago, about 15 years back, a terabyte of data was considered a huge amount; today the latest Swiss Army knife comes with a 1 terabyte flash drive.

When an interaction with a business is complete, the information from the interaction is only as good as the pieces of data that get captured during that interaction. A customer walks into a bank and withdraws cash. The transaction that just happened gets stored as a monetary withdrawal transaction with certain characteristics in the form of associated data. There might be information on the date and time when the withdrawal happened; there may be information on which customer made the withdrawal (if there are multiple customers who operate the same account). The amount of cash that was withdrawn, the account from which the money was extracted, the teller/ATM who facilitated the withdrawal, the balance on the account after the withdrawal, and so forth, are all typically recorded. But these are just a few of the data elements that can get captured in any withdrawal transaction. Just imagine all the different interactions possible on all the assorted products that a bank has to offer: checking accounts, savings accounts, credit cards, debit cards, mortgage loans, home equity lines of credit, brokerage, and so on. The data that gets captured during all these interactions goes through data-checking processes and gets stored somewhere internally or in the cloud.  The data that gets stored this way has been steadily growing over the past few decades, and, most importantly for fraud examiners, most of this data carries tons of information about the nuances of the individual customers’ normal behavior.

In addition to what the customer does, from the same data, by looking at a different dimension of the data, examiners can also understand what is normal for certain other related entities. For example, by looking at all the customer withdrawals at a single ATM, CFEs can gain a good understanding of what is normal for that particular ATM terminal. Understanding the normal behavior of customers is very useful in detecting fraud, since deviation from normal behavior is a primary indicator of fraud. Understanding non-fraud or normal behavior is not only important at the main account holder level but also at all the entity levels associated with that individual account. The same data presents completely different information when observed in the context of one entity versus another. In this sense, having all the data saved and then analyzed and understood is a key element in tackling the fraud threat to any organization.
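
As a hedged illustration of this kind of per-entity baseline, the Python sketch below scores a new withdrawal against a single customer’s own history. The DataFrame columns ('customer_id', 'amount') are assumptions, and a score several standard deviations from the customer’s norm is a candidate red flag for review, not proof of fraud.

```python
# Hedged sketch of a per-entity baseline built from withdrawal history.
import pandas as pd

def withdrawal_deviation(history: pd.DataFrame, customer_id: str, amount: float) -> float:
    """How many standard deviations the new amount sits from the customer's norm."""
    past = history.loc[history["customer_id"] == customer_id, "amount"]
    if len(past) < 10 or past.std(ddof=0) == 0:
        return 0.0  # too little history to say what "normal" looks like
    return abs(amount - past.mean()) / past.std(ddof=0)

# Applying the same function with an 'atm_id' column instead of 'customer_id'
# would profile what is normal for a particular ATM terminal.
```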

Any systematic, numbers-based understanding of the phenomenon of fraud as a past-occurring event depends on an accurate description of exactly what happened through the data stream that accumulated before, during, and after the fraud scenario occurred. Allowing the data to speak is the key to the success of any model-based system. This data needs to be saved and interpreted very precisely for the examiner’s models to make sense. The first crucial step in building a model is to define, understand, and interpret fraud scenarios correctly. At first glance, this seems like a very easy problem to solve. In practice, it is a lot more complicated than it seems.

The level of understanding of the fraud episode or scenario itself varies greatly among the different business processes involved with handling the various products and functions within an organization. Typically, fraud can have a significant impact on the bottom line of any organization. Looking at the level of specific information that is systematically stored and analyzed about fraud in financial institutions for example, one would arrive at the conclusion that such storage needs to be a lot more systematic and rigorous than it typically is today. There are several factors influencing this. Unlike some of the other types of risk involved in client organizations, fraud risk is a censored problem. For example, if we are looking at serious delinquency, bankruptcy, or charge-off risk in credit card portfolios, the actual dollars-at-risk quantity is very well understood. Based on past data, it is relatively straightforward to quantify precise credit dollars at risk by looking at how many customers defaulted on a loan or didn’t pay their monthly bill for three or more cycles or declared bankruptcy. Based on this, it is easy to quantify the amount at risk as far as credit risk goes. However, in fraud, it is virtually impossible to quantify the actual amount that would have gone out the door as the fraud is stopped immediately after detection. The problem is censored as soon as some intervention takes place, making it difficult to precisely quantify the potential risk.

Another challenge in the process of quantifying fraud is how well the fraud episode itself gets recorded. Consider the case of a credit card number getting stolen without the physical card getting stolen. During a certain period, both the legitimate cardholder and the fraudster are charging using the card. If the fraud detection system in the issuing institution doesn’t identify the fraudulent transactions as they happen in real time, fraud is typically identified when the cardholder gets the monthly statement, figures out that some of the charges were not his or hers, and calls the issuer to report the fraud. In the not too distant past, all that used to get recorded by the bank was the cardholder’s estimate of when the fraud episode began, even though additional details about the fraudulent transactions were likely shared by the cardholder. If all that gets recorded is the cardholder’s estimate of when the fraud episode began, ambiguity is introduced regarding the granularity of the actual fraud episode, and the initial estimate of the fraud amount becomes a rough estimate at best. In the case in which the bank’s fraud detection system was able to catch the fraud during the actual fraud episode, the fraudulent transactions tended to be recorded by a fraud analyst, and sometimes not too accurately. If a transaction was marked as fraud or non-fraud incorrectly, the error was typically not corrected even after the correct information flowed in. When the transactions that were actually fraudulent were eventually identified using the actual postings, relating them back to the authorization transactions was often not a straightforward process; sometimes the amounts varied slightly. For example, the authorization transaction for a restaurant charge often does not include the tip that the customer added to the bill, so the posted amount looks slightly different from the authorized amount when the transaction gets reconciled. All of this poses an interesting challenge when designing a data-driven analytical system to combat fraud.

The level of accuracy associated with recording fraud data also tends to be dependent on whether the fraud loss is a liability for the customer or to the financial institution. To a significant extent, the answer to the question, “Whose loss is it?” really drives how well past fraud data is recorded. In the case of unsecured lending such as credit cards, most of the liability lies with the banks, and the banks tend to care a lot more about this type of loss. Hence systems are put in place to capture this data on a historical basis reasonably accurately.

In the case of secured lending, ID theft, and so on, a significant portion of the liability is really on the customer, and it is up to the customer to prove to the bank that he or she has been defrauded. Interestingly, this shift of liability also tends to have an impact on the quality of the fraud data captured. In the case of fraud associated with automated clearing house (ACH) batches and domestic and international wires, the problem is twofold: The fraud instances are very infrequent, making it impossible for the banks to have a uniform method of recording frauds; and the liability shifts are dependent on the geography.  Most international locations put the onus on the customer, while in the United States there is legislation requiring banks to have fraud detection systems in place.  The extent to which our client organizations take responsibility also tends to depend on how much they care about the customer who has been defrauded. When a very valuable customer complains about fraud on her account, a bank is likely to pay attention.  Given that most such frauds are not large scale, there is less need to establish elaborate systems to focus on and collect the data and keep track of past irregularities. The past fraud information is also influenced heavily by whether the fraud is third-party or first-party fraud. Third-party fraud is where the fraud is committed clearly by a third party, not the two parties involved in a transaction. In first-party fraud, the perpetrator of the fraud is the one who has the relationship with the bank. The fraudster in this case goes to great lengths to prevent the banks from knowing that fraud is happening. In this case, there is no reporting of the fraud by the customer. Until the bank figures out that fraud is going on, there is no data that can be collected. Also, such fraud could go on for quite a while and some of it might never be identified. This poses some interesting problems. Internal fraud where the employee of the institution is committing fraud could also take significantly longer to find. Hence the data on this tends to be scarce as well.

In summary, one of the most significant challenges in fraud analytics is to build a sufficient database of normal client transactions. The normal transactions of any organization constitute the baseline from which abnormal (fraudulent or irregular) transactions can be identified and analyzed. Pinpointing the irregular is thus foundational to developing the transaction processing edits that prevent the irregular transactions embodying fraud from even being processed and paid on the front end, furnishing the key to modern, analytically based fraud prevention.

Bye-Bye Money

Miranda had responsibility for preparing personnel files for new hires, approval of wages, verification of time cards, and distribution of payroll checks. She “hired” fictitious employees, faked their records, and ordered checks through the payroll system. She deposited some checks in several personal bank accounts and cashed others, endorsing all of them with the names of the fictitious employees and her own. Her company’s payroll function created a large paper trail of transactions among which were individual earnings records, W-2 tax forms, payroll deductions for taxes and insurance, and Form 941 payroll tax reports. She mailed all the W-2 forms to the same post office box.

Miranda stole $160,000 by creating some “ghosts,” usually 3 to 5 out of 112 people on the payroll and paying them an average of $650 per week for three years. Sometimes the ghosts quit and were later replaced by others. But she stole “only” about 2 percent of the payroll funds during the period.

A tip from a fellow employee received by the company hotline resulted in the engagement of Tom Hudson, CFE. Tom’s objective was to obtain evidence of the existence and validity of payroll transactions on the control premise that different people should be responsible for hiring (preparing personnel files), approving wages, and distributing payroll checks. “Thinking like a crook” led Tom to readily see that Miranda could put people on the payroll and obtain their checks just as the hotline caller alleged. In his test of controls Tom audited for transaction authorization and validity. In this case random sampling was less likely to work because of the small number of alleged ghosts. So, Tom looked for the obvious. He selected several weeks’ check blocks, accounted for numerical sequence (to see whether any checks had been removed), and examined canceled checks for two endorsements.
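
The sequence-accounting step lends itself to trivial automation. The Python sketch below, offered only as an illustration, finds gaps in a block of check numbers that could indicate removed or destroyed checks; the input is assumed to be a plain list of check numbers pulled from the register.

```python
# Hedged sketch of the "account for numerical sequence" step: find gaps in a
# block of check numbers that could indicate removed or destroyed checks.
def missing_check_numbers(check_numbers):
    observed = sorted(set(check_numbers))
    expected = range(observed[0], observed[-1] + 1)
    return sorted(set(expected) - set(observed))

# Example block: checks 1003 and 1007 are absent
print(missing_check_numbers([1001, 1002, 1004, 1005, 1006, 1008]))  # -> [1003, 1007]
```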

Tom reasoned that there might be no “balance” to audit for existence/occurrence, other than the accumulated total of payroll transactions, and that the total might not appear out of line with history because the tipster had indicated that the fraud was small in relation to total payroll and had been going on for years. He decided to conduct a surprise payroll distribution, then follow up by examining prior canceled checks for the missing employees and scanning personnel files for common addresses.

Both the surprise distribution and the scan for common addresses quickly provided the names of 2 or 3 exceptions. Both led to prior canceled checks (which Miranda had not removed and the bank reconciler had not noticed), which carried Miranda’s own name as endorser. Confronted, she confessed.

The major risks in any payroll business cycle are:

• Paying fictitious “employees” (invalid transactions, employees do not exist);

• Overpaying for time or production (inaccurate transactions, improper valuation);

• Incorrect accounting for costs and expenses (incorrect classification, improper or inconsistent presentation and disclosure).

The assessment of payroll system control risk normally takes on added importance because most companies have fairly elaborate and well-controlled personnel and payroll functions. The transactions in this cycle are numerous during the year yet result in lesser amounts in balance sheet accounts at year-end. Therefore, in most routine outside auditor engagements, the review of controls, test of controls and audit of transaction details constitute the major portion of the evidence gathered for these accounts. On most annual audits, the substantive audit procedures devoted to auditing the payroll-related account balances are very limited which enhances fraud risk.

Control procedures for proper segregation of responsibilities should be in place and operating. Proper segregation involves authorization (personnel department hiring and firing, pay rate and deduction authorizations) by persons who do not have payroll preparation, paycheck distribution, or reconciliation duties. Payroll distribution (custody) is in the hands of persons who do not authorize employees’ pay rates or time, nor prepare the payroll checks. Recordkeeping is performed by payroll and cost accounting personnel who do not make authorizations or distribute pay. Combinations of two or more of the duties of authorization, payroll preparation and recordkeeping, and payroll distribution in one person, one office, or one computerized system may open the door for errors and frauds. In addition, the control system should provide for detail control checking activities.  For example: (1) periodic comparison of the payroll register to the personnel department files to check hiring authorizations and for terminated employees not deleted, (2) periodic rechecking of wage rate and deduction authorizations, (3) reconciliation of time and production paid to cost accounting calculations, (4) quarterly reconciliation of YTD earnings records with tax returns, and (5) payroll bank account reconciliation.
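
A hedged sketch of detail check (1), the payroll register-to-personnel file comparison, follows; the DataFrame column names ('employee_id', 'status') are illustrative assumptions. The same join surfaces both paychecks with no matching personnel record (possible ghosts) and terminated employees not yet deleted from the payroll.

```python
# Hedged sketch: compare the payroll register to the personnel master.
import pandas as pd

def reconcile_payroll(register: pd.DataFrame, personnel: pd.DataFrame) -> pd.DataFrame:
    merged = register.merge(
        personnel[["employee_id", "status"]],
        on="employee_id", how="left", indicator=True,
    )
    ghosts = merged[merged["_merge"] == "left_only"]        # paid, but not in the personnel files
    terminated = merged[merged["status"].eq("terminated")]  # paid after termination, not yet deleted
    return pd.concat([ghosts, terminated]).drop(columns="_merge")
```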

Payroll can amount to 40 percent or more of an organization’s total annual expenditures. Payroll taxes, Social Security, Medicare, pensions, and health insurance can add several percentage points in variable costs on top of wages. So, for every payroll dollar saved through forensic identification, bonus savings arise automatically from the on-top costs calculated on base wages. Different industries will exhibit different payroll risk profiles. For example, firms whose culture involves salaried employees who work longer hours may have a lower risk of payroll fraud and may not warrant a full forensic approach. Organizations may present greater opportunity for payroll fraud if their workforce patterns entail night shift work, variable shifts or hours, 24/7 on-call coverage, and employees who are mobile, unsupervised, or work across multiple locations. Payroll-related risks include over-claimed allowances, overused extra pay for weekend or public holiday work, fictitious overtime, vacation and sick leave taken but not deducted from leave balances, continued payment of employees who have left the organization, ghost employees arising from poor segregation of duties, vulnerability of the data output sent to the bank for electronic payment, and roster dysfunction. Yet the personnel assigned to administer the complexities of payroll are often qualified more by experience than by formal finance, legal, or systems training, thereby creating a competency bias over how payroll is managed. On top of that, payroll is normally shrouded in secrecy because of the inherently private nature of employee and executive pay. Underpayment errors are more likely to be corrected because the affected employees complain; overpayment errors are less likely to be discovered because overpaid employees rarely do. These systemic biases further increase the risk of unnoticed payroll error and fraud.

Payroll data analysis can reveal individuals or entire teams who are unusually well-remunerated because team supervisors turn a blind eye to payroll malpractice, as well as low-remunerated personnel who represent excellent value to the organization. For example, it can identify the night shift worker who is paid extra for weekend or holiday work plus overtime while actually working only half the contracted hours, or workers who claim higher duty or tool allowances to which they are not entitled. In addition to providing management with new insights into payroll behaviors, which may in turn become part of ongoing management reporting, the total payroll cost distribution analysis can point forensic accountants toward urgent payroll control improvements.

The detail inside payroll and personnel databases can reveal hidden information to the forensic examiner. Who are the highest earners of overtime pay and why? Which employees gained the most from weekend and public holiday pay? Who consistently starts late? Finishes early? Who has the most sick leave? Although most employees may perform a fair day’s work, the forensic analysis may point to those who work less, sometimes considerably less, than the time for which they are paid. Joined-up query combinations to search payroll and human resources data can generate powerful insights into the organization’s worst and best outliers, which may be overlooked by the data custodians. An example of a query combination would be: employees with high sick leave + high overtime + low performance appraisal scores + negative disciplinary records. Or, reviewers could invert those factors to find the unrecognized exemplary performers.
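
A hedged sketch of such a joined-up query is given below, assuming a combined HR/payroll DataFrame with hypothetical columns for sick leave hours, overtime hours, appraisal score, and disciplinary record count; the thresholds are arbitrary illustrations.

```python
# Hedged sketch of the joined-up query described above. Column names and
# thresholds are illustrative assumptions only.
import pandas as pd

def worst_outliers(staff: pd.DataFrame) -> pd.DataFrame:
    high_sick = staff["sick_leave_hours"] > staff["sick_leave_hours"].quantile(0.90)
    high_ot = staff["overtime_hours"] > staff["overtime_hours"].quantile(0.90)
    low_score = staff["appraisal_score"] <= 2
    disciplined = staff["disciplinary_records"] > 0
    return staff[high_sick & high_ot & low_score & disciplined]

# Inverting the factors (low sick leave, low overtime, high appraisal scores,
# clean disciplinary record) surfaces the unrecognized exemplary performers.
```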

Where predication suggests fraud concerns about identified employees, CFEs can add value by triangulating time sheet claims against external data sources such as site access biometric data, company cell phone logs, phone number caller identification, GPS data, company email, Internet usage, company motor fleet vehicle tolls, and vehicle refueling data, most of which contain useful date and time-of-day parameters.  The data buried within these databases can reveal employee behavior, including what they were doing, where they were, and who they were interacting with throughout the work day.

Common findings include:

–Employees who leave work wrongfully during their shift;
–Employees who work fewer hours and take sick time during the week to shift the workload to weekends and public holidays to maximize pay;
–Employees who use company property excessively for personal purposes during working hours;
–Employees who visit vacation destinations while on sick leave;
–Employees who take leave but whose managers do not log the paperwork, thereby not deducting leave taken and overstating leave balances;
–Employees who moonlight in businesses on the side during normal working hours, sometimes using the organization’s equipment to do so.

Well-researched and documented forensic accounting fieldwork can support management action against those who may have defrauded the organization or work teams that may be taking inappropriate advantage of the payroll system. Simultaneously, CFEs and forensic accountants, working proactively, can partner with management to recover historic costs, quantify future savings, reduce reputational and political risk, improve the organization’s anti-fraud policies, and boost the productivity and morale of employees who knew of wrongdoing but felt powerless to stop it.

The Who, the What, the When

CFEs and forensic accountants are seekers. We spend our days searching for the most relevant information about our client-requested investigations from an ever-growing and increasingly tangled data sphere and trying to make sense of it. Somewhere hidden in our client’s computers, networks, databases, and spreadsheets are signs of the alleged fraud, accompanying control weaknesses and unforeseen risks, as well as possible opportunities for improvement. And the more data the client organization has, the harder all this is to find. Although most computer-assisted forensic audit tests focus on the numeric data contained within structured sources, such as financial and transactional databases, unstructured or text-based data, such as e-mail, documents, and Web-based content, represents an estimated 80 percent of enterprise data within the typical medium to large-sized organization. When assessing written communications or correspondence about fraud related events, CFEs often find themselves limited to reading large volumes of data, with few automated tools to help synthesize, summarize, and cluster key information points to aid the investigation.

Text analytics is a relatively new investigative tool for CFEs in actual practice although some report having used it extensively for at least the last five or more years. According to the ACFE, the software itself stems from a combination of developments in our sister fields of litigation support and electronic discovery, and from counterterrorism and surveillance technology, as well as from customer relationship management, and research into the life sciences, specifically artificial intelligence. So, the application of text analytics in data review and criminal investigations dates to the mid-1990s.

Generally, CFEs increasingly use text analytics to examine three main elements of investigative data: the who, the what, and the when.

The Who: According to many recent studies, substantially more than half of business people prefer e-mail to the telephone. Most fraud-related business transactions or events, then, will likely have at least some e-mail communication associated with them. Unlike telephone messages, e-mail contains rich metadata, information stored about the data, such as its author, origin, version, and date accessed, and can be documented easily. For example, to monitor who is communicating with whom in a targeted sales department, and conceivably to identify whether any alleged relationships therein might signal anomalous activity, a forensic accountant might wish to analyze metadata in the “to,” “from,” “cc,” or “bcc” fields in departmental e-mails. Many technologies for parsing e-mail with text analytics capabilities are available on the market today, some stemming from civil investigations and related electronic discovery software. These technologies are similar to the social network diagrams used in law enforcement and in counterterrorism efforts.
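
As a minimal sketch of this kind of metadata review, assuming a hypothetical folder of exported .eml files (the path and the comma-splitting of address fields are simplifications for illustration), the heaviest sender-to-recipient pairs can be tallied with the Python standard library:

```python
import glob
from collections import Counter
from email import policy
from email.parser import BytesParser

# Hypothetical export location; the path is an assumption for illustration.
pair_counts = Counter()

for path in glob.glob("email_export/*.eml"):
    with open(path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    sender = (msg["from"] or "").strip().lower()
    recipients = []
    for field in ("to", "cc", "bcc"):
        if msg[field]:
            recipients.extend(a.strip().lower() for a in msg[field].split(","))
    for rcpt in recipients:
        pair_counts[(sender, rcpt)] += 1

# The heaviest edges in this simple "who talks to whom" network.
for (sender, rcpt), n in pair_counts.most_common(15):
    print(f"{n:5d}  {sender} -> {rcpt}")
```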

The What: The ever-present ambiguity inherent in human language presents significant challenges to the forensic investigator trying to understand the circumstances and actions surrounding the text based aspects of a fraud allegation. This difficulty is compounded by the tendency of people within organizations to invent their own words or to communicate in code. Language ambiguity can be illustrated by examining the word “shred”. A simple keyword search on the word might return not only documents that contain text about shredding a document, but also those where two sports fans are having a conversation about “shredding the defense,” or even e-mails between spouses about eating Chinese “shredded pork” for dinner. Hence, e-mail research analytics seeks to group similar documents according to their semantic context so that documents about shredding as concealment or related to covering up an action would be grouped separately from casual e-mails about sports or dinner, thus markedly reducing the volume of e-mail requiring more thorough ocular review. Concept-based analysis goes beyond traditional search technology by enabling users to group documents according to a statistical inference about the co-occurrence of similar words. In effect, text analytics software allows documents to describe themselves and group themselves by context, as in the shred example. Because text analytics examines document sets and identifies relationships between documents according to their context, it can produce far more relevant results than traditional simple keyword searches.
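
A rough way to approximate this concept-based grouping, assuming scikit-learn is available and using a few fabricated message bodies in place of a real e-mail corpus, is to cluster documents on their weighted word co-occurrence rather than on a single keyword; commercial concept-clustering engines are far more sophisticated, but the underlying idea is similar:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Fabricated message bodies; in practice these would come from an e-mail export.
documents = [
    "Please shred the vendor invoices before the auditors arrive",
    "Their defense got shredded in the second half last night",
    "Pick up shredded pork on the way home for dinner",
    "Shred everything in the payables file cabinet tonight",
]

# Represent each document by weighted word co-occurrence, then group by similarity.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(documents)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(matrix)
for label, text in zip(kmeans.labels_, documents):
    print(label, "|", text[:60])
```

In this toy example the two concealment-related messages share vocabulary and will typically group together, apart from the sports and dinner chatter, so a reviewer can set the casual material aside before reading anything in depth.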

Using text analytics before filtering with keywords can be a powerful strategy for quickly understanding the content of a large corpus of unstructured, text-based data, and for determining what is relevant to the search. After viewing concepts at an elevated level, subsequent keyword selection becomes more effective by enabling users to better understand the possible code words or company-specific jargon. They can develop the keywords based on actual content, instead of guessing relevant terms, words, or phrases up front.

The When: In striving to understand the time frames in which key events took place, CFEs often need not only to identify the chronological order of documents (e.g., sorted by or limited to dates), but also to link related communication threads, such as e-mails, so that similar threads and communications can be identified and plotted over time. A thread comprises a set of messages connected by various relationships; each message is either an initial message or a reply to, or forwarding of, some other message in the set. Messages within a thread are connected by relationships that identify notable events, such as a reply versus a forward, or changes in correspondents. Quite often, e-mails accumulate long threads with similar subject headings, authors, and message content over time. These threads ultimately may lead to a decision, such as approval to proceed with a project or to take some other action. The approval may be critical to understanding the business events that led up to a particular journal entry. Seeing those threads mapped over time can be a powerful tool when trying to understand the business logic of a complex financial transaction.
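
A minimal sketch of this kind of thread reconstruction, again assuming a hypothetical folder of exported .eml files and a very simple subject-line normalization, might group messages into threads and list each thread in date order:

```python
import glob
import re
from collections import defaultdict
from email import policy
from email.parser import BytesParser
from email.utils import parsedate_to_datetime

# Hypothetical export location; path and approach are illustrative assumptions.
threads = defaultdict(list)

for path in glob.glob("email_export/*.eml"):
    with open(path, "rb") as fh:
        msg = BytesParser(policy=policy.default).parse(fh)
    if not msg["date"]:
        continue  # a sketch only; undated messages are simply skipped here
    # Strip reply/forward prefixes so "RE: FW: Project X" joins the "Project X" thread.
    subject = re.sub(r"^\s*((re|fw|fwd)\s*:\s*)+", "",
                     (msg["subject"] or ""), flags=re.IGNORECASE).strip().lower()
    threads[subject].append((parsedate_to_datetime(str(msg["date"])), str(msg["from"])))

# List each thread chronologically to see how a decision developed over time.
for subject, messages in sorted(threads.items(), key=lambda kv: -len(kv[1])):
    print(f"\nThread: {subject or '(no subject)'} ({len(messages)} messages)")
    for when, sender in sorted(messages, key=lambda m: m[0].timestamp()):
        print("  ", when, sender)
```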

In the context of fraud risk, text analytics can be particularly effective when threads and keyword hits are examined with a view to considering the familiar fraud triangle; the premise that all three components (incentive/pressure, opportunity, and rationalization) are present when fraud exists. This fraud triangle based analysis can be applied in a variety of business contexts where increases in the frequency of certain keywords related to incentive/pressure, opportunity, and rationalization, can indicate an increased level of fraud risk.
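
A minimal sketch of such a fraud triangle scan, using purely illustrative keyword lists and assuming message bodies have already been extracted and tagged with a month, might count hits per month for each leg of the triangle so that spikes in frequency become visible:

```python
import re
from collections import defaultdict

# Illustrative keyword lists only; a real review would tailor these to the client.
TRIANGLE_KEYWORDS = {
    "pressure":        ["quota", "deadline", "make the target", "margin call"],
    "opportunity":     ["override", "nobody checks", "write off", "no one will notice"],
    "rationalization": ["deserve", "borrow", "temporary", "everyone does it"],
}

def triangle_hits(messages):
    """Count keyword hits per month for each leg of the fraud triangle.

    `messages` is assumed to be an iterable of (month_label, body_text) tuples.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for month, body in messages:
        text = body.lower()
        for leg, words in TRIANGLE_KEYWORDS.items():
            counts[month][leg] += sum(len(re.findall(re.escape(w), text)) for w in words)
    return counts

# Tiny fabricated example to show the output shape.
sample = [
    ("2017-03", "We will never make the target this quarter, the deadline is brutal"),
    ("2017-04", "Just borrow it from the reserve account, nobody checks that one"),
]
for month, legs in sorted(triangle_hits(sample).items()):
    print(month, dict(legs))
```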

Some caveats are in order.  Considering the overwhelming amount of text-based data within any modern enterprise, assurance professionals could never hope to analyze all of it; nor should they. The exercise would prove expensive and provide little value. Just as an external auditor would not reprocess or validate every sales transaction in a sales journal, he or she would not need to look at every related e-mail from every employee. Instead, any professional auditor would take a risk-based approach, identifying areas to test based on a sample of data or on an enterprise risk assessment. For text analytics work, the reviewer may choose data from five or ten individuals to sample from a high-risk department or from a newly acquired business unit. And no matter how sophisticated the search and information retrieval tools used, there is no guarantee that all relevant or high-risk documents will be identified in large data collections. Moreover, different search methods may produce differing results, subject to a measure of statistical variation inherent in probability searches of any type. Just as a statistical sample of accounts receivable or accounts payable in the general ledger may not identify fraud, analytics reviews are similarly limited.

Text analytics can be a powerful fraud examination tool when integrated with traditional forensic data-gathering and analysis techniques such as interviews, independent research, and existing investigative tests involving structured, transactional data. For example, an anomaly identified in the general ledger related to the purchase of certain capital assets may prompt the examiner to review e-mail communication traffic among the key individuals involved, providing context around the circumstances and timing of events before the entry date. Furthermore, the forensic accountant may conduct interviews or perform additional independent research that may support or conflict with his or her investigative hypothesis. Integrating all three of these components to gain a complete picture of the fraud event can yield valuable information. While text analytics should never replace the traditional rules-based analysis techniques that focus on the client’s financial accounting systems, it is equally important to consider the communications surrounding key events, which typically reside in unstructured data rather than in the financial systems.

RVACFES May 2017 Event Sold-Out!

On May 17th and 18th the Central Virginia ACFE Chapter and our partners, the Virginia State Police and the Association of Certified Fraud Examiners (ACFE), were joined by an overflow crowd of audit and assurance professionals for the ACFE’s training course ‘Conducting Internal Investigations’. The sold-out May 2017 seminar was the ninth that our Chapter has hosted over the years with the Virginia State Police, utilizing a distinguished list of certified ACFE instructor-practitioners.

Our internationally acclaimed instructor for the May seminar was Gerard Zack, CFE, CPA, CIA, CCEP. Gerry has provided fraud prevention and investigation, forensic accounting, and internal and external audit services for more than 30 years. He has worked with commercial businesses, not-for-profit organizations, and government agencies throughout North America and Europe. Prior to starting his own practice in 1990, Gerry was an audit manager with a large international public accounting firm. As founder and president of Zack, P.C., he has led numerous fraud investigations and designed customized fraud risk management programs for a diverse client base. Through Zack, P.C., he also provides outsourced internal audit services, compliance and ethics programs, enterprise risk management, fraud risk assessments, and internal control consulting services.

Gerry is a Certified Fraud Examiner (CFE) and Certified Public Accountant (CPA) and has focused most of his career on audit and fraud-related services. Gerry serves on the faculty of the Association of Certified Fraud Examiners (ACFE) and is the 2009 recipient of the ACFE’s James Baker Speaker of the Year Award. He is also a Certified Internal Auditor (CIA) and a Certified Compliance and Ethics Professional (CCEP).

Gerry is the author of Financial Statement Fraud: Strategies for Detection and Investigation (published 2013 by John Wiley & Sons), Fair Value Accounting Fraud: New Global Risks and Detection Techniques (2009 by John Wiley & Sons), and Fraud and Abuse in Nonprofit Organizations: A Guide to Prevention and Detection (2003 by John Wiley & Sons). He is also the author of numerous articles on fraud and teaches seminars on fraud prevention and detection for businesses, government agencies, and nonprofit organizations. He has provided customized internal staff training on specialized auditing issues, including fraud detection in audits, for more than 50 CPA firms.

Gerry is also the founder of the Nonprofit Resource Center, through which he provides antifraud training and consulting and online financial management tools specifically geared toward the unique internal control and financial management needs of nonprofit organizations. Gerry earned his M.B.A. at Loyola University in Maryland and his B.S.B.A. at Shippensburg University of Pennsylvania.

To some degree, organizations of every size, in every industry, and in every city experience internal fraud. No entity is immune. Furthermore, any member of an organization can carry out fraud, whether it is committed by the newest customer service employee or by an experienced and highly respected member of upper management. The fundamental reason for this is that fraud is a human problem, not an accounting problem. As long as organizations are employing individuals to perform business functions, the risk of fraud exists.

While some organizations aggressively adopt strong zero tolerance anti-fraud policies, others simply view fraud as a cost of doing business. Despite varying views on the prevalence of, or susceptibility to, fraud within a given organization, all must be prepared to conduct a thorough internal investigation once fraud is suspected. Our ‘Conducting Internal Investigations’ event was structured around the process of investigating any suspected fraud from inception to final disposition and beyond.

What constitutes an act that warrants an examination can vary from one organization to another and from jurisdiction to jurisdiction. It is often resolved based on a definition of fraud adopted by an employer or by a government agency. There are numerous definitions of fraud, but a popular example comes from the joint ACFE-COSO publication, Fraud Risk Management Guide:

Fraud is any intentional act or omission designed to deceive others, resulting in the victim suffering a loss and/or the perpetrator achieving a gain.

However, many law enforcement agencies have developed their own definitions, which might be more appropriate for organizations operating in their jurisdictions. Consequently, fraud examiners should determine the appropriate legal definition in the jurisdiction in which the suspected offense was committed.

Fraud examination is a methodology for resolving fraud allegations from inception to disposition. More specifically, fraud examination involves:

–Assisting in the detection and prevention of fraud;
–Initiating the internal investigation;
–Obtaining evidence and taking statements;
–Writing reports;
–Testifying to findings.

A well-run internal investigation can enhance a company’s overall well-being and can help detect the source of lost funds, identify responsible parties, and recover losses. It can also provide a defense to legal charges by terminated or disgruntled employees. But perhaps most importantly, an internal investigation can signal to every company employee that the company will not tolerate fraud.

Our two-day seminar agenda included Gerry’s in depth look at the following topics:

–Assessment of the risk of fraud within an organization and responding when it is identified;
–Detection and investigation of internal frauds with the use of data analytics;
–The collection of documents and electronic evidence needed during an investigation;
–The performance of effective information gathering and admission seeking interviews;
–The wide variety of legal and regulatory concerns related to internal investigations.

Gerry did his usual tremendous job in preparing the professionals in attendance to deal with every step in an internal fraud investigation, from receiving the initial allegation to testifying as a witness. The participants learned to lead an internal investigation with accuracy and confidence by gaining knowledge about topics such as the relevant legal aspects of internal investigations, the use of computers and analytics during the investigation, the collection and analysis of internal and external information, the interviewing of witnesses, and the writing of effective reports.

Analytics & Fraud Prevention

During our Chapter’s live training event last year, ‘Investigating on the Internet’, our speaker Liseli Pennings, pointed out that, according to the ACFE’s 2014 Report to the Nations on Occupational Fraud and Abuse, organizations that have proactive, internet oriented, data analytics in place have a 60 percent lower median loss because of fraud, roughly $100,000 lower per incident, than organizations that don’t use such technology. Further, the report went on, use of proactive data analytics cuts the median duration of a fraud in half, from 24 months to 12 months.

This is important news for CFE’s who are daily confronting more sophisticated frauds and criminals who are increasingly cyber based.  It means that integrating more mature forensic data analytics capabilities into a fraud prevention and compliance monitoring program can improve risk assessment, detect potential misconduct earlier, and enhance investigative field work. Moreover, forensic data analytics is a key component of effective fraud risk management as described in The Committee of Sponsoring Organizations of the Treadway Commission’s most recent Fraud Risk Management Guide, issued in 2016, particularly around the areas of fraud risk assessment, prevention, and detection.  It also means that, according to Pennings, fraud prevention and detection is an ideal big data-related organizational initiative. With the growing speed at which they generate data, specifically around their financial reporting and sales business processes, our larger CFE client organizations need ways to prioritize risks and better synthesize information using big data technologies, enhanced visualizations, and statistical approaches to supplement traditional rules-based investigative techniques supported by spreadsheet or database applications.

But with this analytics and fraud prevention integration opportunity comes a caution. As always, before jumping into any specific technology or advanced analytics technique, it’s crucial to first ask the right risk or control-related questions to ensure the analytics will produce meaningful output for the business objective or risk being addressed. What business processes pose a high fraud risk? High-risk business processes include the sales (order-to-cash) cycle and payment (procure-to-pay) cycle, as well as payroll, accounting reserves, travel and entertainment, and inventory processes. Which high-risk accounts within those processes should be tested for unusual account pairings, such as a debit to depreciation with an offsetting credit to a payable, or for vague, open-ended “catch-all” descriptions such as “miscellaneous,” “administrative,” or blank account names? Who recorded or authorized the transaction? Posting analysis or approver reports could help detect unauthorized postings or inappropriate segregation of duties by looking at the number of payments by name, minimum or maximum amounts, sum totals, or statistical outliers. When did transactions take place? Analyzing transaction activity over time could identify spikes or dips, such as those before and after period ends or during weekend, holiday, or off-hours periods. Where does the CFE see geographic risks, based on previous events, the economic climate, cyber threats, recent growth, or perceived corruption? Further segmentation can be achieved by business unit within regions and by the accounting systems on which the data reside.
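
A minimal sketch of a few of these tests, assuming a hypothetical general ledger extract named journal_entries.csv whose column names are invented for illustration, might look like this with pandas:

```python
import pandas as pd

# Hypothetical general ledger extract; column names are assumptions for illustration.
gl = pd.read_csv("journal_entries.csv", parse_dates=["posted_at"])

# "When": weekend or off-hours postings.
gl["weekday"] = gl["posted_at"].dt.dayofweek          # 5 = Saturday, 6 = Sunday
gl["hour"] = gl["posted_at"].dt.hour
off_hours = gl[(gl["weekday"] >= 5) | (gl["hour"] < 6) | (gl["hour"] >= 20)]

# "What": vague, catch-all, or blank account descriptions.
catch_all = gl[
    gl["account_name"].fillna("").str.strip().str.lower()
      .isin(["", "miscellaneous", "misc", "suspense", "administrative"])
]

# "Who": posting volume and value by preparer, to spot outliers or SoD issues.
by_preparer = (
    gl.groupby("posted_by")["amount"]
      .agg(entries="count", total="sum", largest="max")
      .sort_values("total", ascending=False)
)

print(off_hours.head(), catch_all.head(), by_preparer.head(), sep="\n\n")
```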

The benefits of implementing a forensic data analytics program must be weighed against challenges such as obtaining the right tools or professional expertise, combining data (both internal and external) across multiple systems, and the overall quality of the analytics output. To mitigate these challenges and build a successful program, the CFE should consider that the priority of the initial project matters. Because the first project often serves as a pilot for success, it’s important that it address meaningful business or audit risks that are tangible and visible to client management. Further, this initial project should be reasonably attainable, with minimal dollar investment and actionable results. It’s best to select a first project that has strong demand, draws on data residing in easily accessible sources, and offers a compelling, measurable return on investment. Areas such as insider threat, anti-fraud, anti-corruption, or third-party relationships make for good initial projects.

In the health care insurance industry where I worked for many years, one of the key goals of forensic data analytics is to increase the detection rate of health care provider billing non-compliance while reducing the risk of false positives. From a capabilities perspective, organizations need to embrace both structured and unstructured data sources and to consider the use of data visualization, text mining, and statistical analysis tools. Since the CFE will usually be working as a member of a team, the team should demonstrate the first success story, then leverage and communicate that success model widely throughout the organization. Results should be validated before successes are communicated to the broader organization. For best results and sustainability of the program, the fraud prevention team should be a multidisciplinary one that includes IT, business users, and functional specialists, such as management scientists, who design the analytics around the day-to-day operations of the organization and hence around the objectives of the fraud prevention program. It helps to communicate across multiple departments to update key stakeholders on the program’s progress under a defined governance regime. The team shouldn’t just report noncompliance; it should seek to improve the business by providing actionable results.

The forensic data analytics functional specialists should not operate in a vacuum; every project needs one or more business champions who coordinate with IT and the business process owners. Keep the analytics simple and intuitive; don’t pack so much information into one report that it becomes hard to understand. Finally, invest time in automation rather than manual refreshes, to make the analytics process sustainable and repeatable. The best trends, patterns, or anomalies often emerge when multiple months of vendor, customer, or employee data are analyzed over time, not just in the aggregate. Also, keep in mind that enterprise-wide deployment takes time. While a quick project may take four to six weeks, integrating the entire program can easily take more than one or two years. Programs need to be refreshed as new risks emerge and business activities change, and staff need ongoing training, collaboration, and up-to-date technologies.
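
A minimal sketch of that month-over-month view, assuming a hypothetical payments extract named vendor_payments.csv with invented column names, might pivot payments by vendor and month and flag vendors whose latest month departs sharply from their own history:

```python
import pandas as pd

# Hypothetical payments extract; column names are assumptions for illustration.
payments = pd.read_csv("vendor_payments.csv", parse_dates=["payment_date"])
payments["month"] = payments["payment_date"].dt.to_period("M")

# Total paid to each vendor in each month, rather than one aggregate figure.
monthly = payments.pivot_table(index="vendor_id", columns="month",
                               values="amount", aggfunc="sum", fill_value=0)

# Flag vendors whose most recent month is far outside their own history.
history, latest = monthly.iloc[:, :-1], monthly.iloc[:, -1]
zscore = (latest - history.mean(axis=1)) / history.std(axis=1).replace(0, 1)
print(zscore[zscore > 3].sort_values(ascending=False))
```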

Research findings by the ACFE and others are providing more and more evidence of the benefits of integrating advanced forensic data analytics techniques into fraud prevention and detection programs. By helping increase their client organization’s maturity in this area, CFE’s can assist in delivering a robust fraud prevention program that is highly focused on preventing and detecting fraud risks.

Cybersecurity – Is There a Role for Fraud Examiners?

At a cybersecurity fraud prevention conference I attended recently in California, one of the featured speakers addressed the difference between information security and cybersecurity and the complexity of assessing fraud preparedness controls specifically directed against cyber fraud. It seems the main difficulty is the lack of a standard to serve as the basis of a fraud examiner’s or auditor’s risk review. The National Institute of Standards and Technology’s (NIST) framework has become a de facto standard despite being more than a little light on specific details. Though it’s not a standard, there really is nothing else at present against which to measure cybersecurity. Moreover, the technology that must be the subject of a cybersecurity risk assessment is poorly understood and is mutating rapidly. CFE’s, and everyone else in the assurance community, are hard pressed to keep up.

To my way of thinking, a good place to start in all this confusion is for the practicing fraud examiner to consider the fundamental difference between information security and cybersecurity: the differing nature of the threat itself. There is simply a distinction between protecting information against misuse of all sorts (information security) and defending against an attack by a government, a terrorist group, or a criminal enterprise that has immense resources of expertise, personnel, and time, all directed at subverting one individual organization (cybersecurity). You can protect your car with a lock and insurance, but those are not the tools of choice if you see a gang of thieves armed with bricks approaching your car at a stoplight. This distinction is at the very core of assessing an organization’s preparations for addressing the risk of cyberattacks and for defending itself against them.

As is true in so many investigations, the cybersecurity element of the fraud risk assessment process begins with the objectives of the review, which leads immediately to the questions one chooses to ask. If an auditor only wants to know “Are we secure against cyberattacks?”, then the answer should be up on a billboard in letters fifty feet high: no organization should ever consider itself safe against cyber attackers. They are too powerful and pervasive for any complacency. If major television networks can be stricken, if the largest banks can be hit, if governments are not immune, then the CFE’s client organization is not secure either. Still, all anti-fraud reviewers can ask subtle and meaningful questions of client management, specifically focused on the data and software at risk of an attack. A fraud risk assessment process specific to cybersecurity might delve into the internals of database management systems and system software, requiring the considerable skills of a CFE supported by one or more tech-savvy consultants s/he has engaged to form the assessment team. Or it might call for just asking simple questions and applying basic arithmetic.

If the fraud examiner’s concern is the theft of valuable information, the simple corrective is to make the data valueless, which is usually achieved through encryption. The CFE’s question might be, “Of all your data, what percentage is encrypted?” If the answer is 100 percent, the follow-up question is whether the data are always encrypted—at rest, in transit, and in use. If it cannot be shown that all data are secured all of the time, the next step is to determine what is not protected and under what circumstances. The assessment finding would consist of a flat statement of the amount of unencrypted data susceptible to theft and a recitation of the potential value to an attacker in stealing each category of unprotected data. The readers of this blog know that data must be decrypted in order to be used and so would be quick to point out that “universal” encryption in use is, ultimately, a futile dream. There are vendors who think otherwise, but let’s accept the fact that data will, at some time, be exposed within a computer’s memory. Is that a fault attributable to the data or to the memory and to the programs running in it? Experts say it’s the latter. In-memory attacks are fairly devious, but the solutions are not. Rebooting gets rid of them, and antimalware programs that scan memory can find them. So a CFE can ask, “How often is each system rebooted?” and “Does your anti-malware software scan memory?”

To the extent that software used for attacks is embedded in the programs themselves, the problem lies in a failure of malware protection or of change management. A CFE need not worry over this point; according to my California presenter, many auditors (and security professionals) have wrestled with this problem and not solved it either. All a CFE needs to ask is whether anyone would be able to know whether a program had been subverted. An audit of the change management process would often provide a bounty of findings, but it would not answer the reviewer’s question. The solution lies in having a version of a program known to be free from flaws (such as newly released code) and an audit trail of known changes. It’s probably beyond the talents of a typical CFE to generate a hash total using a program as data and then to apply the known changes in order to see if the version running in production matches a recalculated hash total. But it is not beyond the skills of the IT experts the CFE can add to her team, or of the in-house IM staff responsible for keeping their employer’s programs safe. A CFE fraud risk reviewer need only find out whether anyone is performing such a check. If not, the CFE can simply conclude and report to the client that no one knows for sure whether the client’s programs have been penetrated.
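
A minimal sketch of such a check, assuming a hypothetical manifest file (release_manifest.json) that maps each production program to its known-good SHA-256 hash, would simply recompute and compare the hashes:

```python
import hashlib
import json

def sha256_of(path, chunk_size=1 << 20):
    """Hash a program file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical manifest of known-good release hashes, e.g.
# {"/prod/bin/payments": "ab12...", "/prod/bin/payroll": "cd34..."}
with open("release_manifest.json") as fh:
    known_good = json.load(fh)

for path, expected in known_good.items():
    actual = sha256_of(path)
    status = "OK" if actual == expected else "MISMATCH - investigate"
    print(f"{status:25s} {path}")
```

The point of the reviewer's question is simply whether anyone runs a comparison like this routinely, not whether the CFE runs it personally.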

Finally, a CFE might want to find out if the environment in which data are processed is even capable of being secured. Ancient software running on hardware or operating systems that have passed their end of life is probably not reliable in that regard. Here again, the CFE need only obtain lists and count. How many programs have not been maintained for, say, five years or more? Which operating systems that are no longer supported are still in use? How much equipment in the data center is more than 10 years old? All this is only a little arithmetic and common sense, not rocket science.
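
That arithmetic is easy to automate. As a minimal sketch, assuming a hypothetical asset inventory export named it_asset_inventory.csv with invented column names, the three counts could be produced as follows:

```python
import pandas as pd

# Hypothetical IT asset inventory; column names are assumptions for illustration.
assets = pd.read_csv("it_asset_inventory.csv",
                     parse_dates=["last_code_change", "purchase_date", "os_end_of_life"])
today = pd.Timestamp.now()

stale_programs = assets[(today - assets["last_code_change"]).dt.days > 5 * 365]
unsupported_os = assets[assets["os_end_of_life"] < today]
old_hardware = assets[(today - assets["purchase_date"]).dt.days > 10 * 365]

print("Programs unmaintained for 5+ years:", len(stale_programs))
print("Hosts on unsupported operating systems:", len(unsupported_os))
print("Equipment older than 10 years:", len(old_hardware))
```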

In conclusion, frauds associated with weakened or absent cybersecurity systems are not likely to become a less important feature of the corporate landscape over time. Instead, they are poised to become an increasingly important aspect of doing business for those who create automated applications and solutions, for those who attempt to safeguard them on the front end, and for those who investigate and prosecute crimes against them on the back end. While the ramifications of every cyber fraud prevention decision are broad and diverse, a few basic good practices can be defined which the CFE, the fraud expert, can help any client management implement:

  • Know your fraud risk and what it should be;
  • Be educated in management science and computer technology. Ensure that your education includes basic fraud prevention techniques and associated prevention controls;
  • Know your existing cyber fraud prevention decision model, including the shortcomings of those aspects of the model in current use and develop a schedule to address them;
  • Know your frauds. Understand the common fraud scenarios targeting your industry so that you can act swiftly when confronted with one of them.

We can conclude that the issues involving cybersecurity are many and complex, but that CFE’s are equipped to bring much-needed, fraud-related experience to any management’s table as part of the team confronting them.