Category Archives: Analytical Techniques

Private Company Employee Health

Our last post presented a short list of the chief fraud threats targeting government-run health programs. We thought it might be useful to practitioners to balance it with one on frauds directed at private company health insurance plans. From one perspective, many of the schemes, as you’d expect, are similar; but there are significant differences. Losses due to fraud in both public and private health-care spending are notoriously difficult to estimate but amount to more than US $60 billion annually, according to a statement made by the then U.S. Attorney General several years ago at a National Institutes of Health summit. Like all fraud, by definition, health-care fraud involves deception or misrepresentation that results in an unauthorized benefit. In the private sector, it increases the cost of providing benefits to employees company-wide, which in turn increases the overall cost of doing business, regardless of industry. And while only a small percentage of health-care providers and consumers deliberately engage in fraud, that small percentage can raise the cost of doing business significantly. The increased costs appear in the form of higher premiums and out-of-pocket expenses or reduced benefits or coverage for employees, and they affect small businesses disproportionately.

But the news isn’t all bad. The good news is that, especially with the rise of fraud prevention approaches based on data analytics, companies have more and more tools at their disposal to help combat this problem. Most important, perhaps, are the contributions our fraud examiner profession is making and the unique expertise we bring to fraud-fighting efforts. With the right approach and technology tools, fraud risk assessors can help identify control weaknesses that leave the organization susceptible to health-care fraud and track down potential indicators that such fraud may have occurred or is in progress. Working with management as well as with other assurance professionals and external parties, fraud examiners can help meet this challenge and even prevent such fraud outright by applying well-designed system edits that identify fraudulent insurance claims on the front end, preventing them from ever being paid (pre-payment prevention as opposed to post-payment ‘pay and chase’).

Every fraud examiner and forensic accountant knows that access to the right information is critical to combating the ever-mutating array of health-care frauds targeting both the private and governmental sectors. Asking the appropriate questions and carefully sifting through relevant data can reveal potentially fraudulent activity and shed light on abuses that otherwise may not be identified.

Much of the needed health-care information often resides with an organization’s health insurance provider or third-party claims administrators (TPAs); fraud examiners and company management should work cooperatively with these parties to obtain an understanding of the details. Specifically, employers should hold regular discussions with their providers or TPAs to collaborate on anti-fraud activities and to understand their provider’s approach to the problem. Providers, on their side, should share the details of their anti-fraud efforts with organizational management. They should also explain the (often proprietary) techniques they use to detect fraud and abuse and provide specific examples of potential frauds recently identified. Companies also have access to employee historical health claims databases through their insurance provider or their TPA. Analysis performed by these parties should generally focus on identifying unusual patterns or trends in the claims data, since such findings could signal fraudulent activity; the objective is to develop payment system edits targeting specific fraud schemes so that the related claims are stopped before they are ever paid. Even if the data does not contain indicators of potential fraud schemes, fraud examiners should still recommend that it be mined continually to ferret out potential mistakes.
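A pre-payment system edit of the kind described above can be sketched as a simple rule that holds suspect claims before they pay. The sketch below flags exact-duplicate claims; the record layout and field names (member, provider, procedure code, service date) are illustrative assumptions, since real claim files vary by provider and TPA.

```python
# Hypothetical pre-payment claim edit: hold exact-duplicate claims for review
# instead of paying them. All field names are illustrative assumptions.

def duplicate_claim_edit(claims):
    """Split claims into payable and flagged; a claim is flagged when the same
    member, provider, procedure code, and service date were already seen."""
    seen = set()
    payable, flagged = [], []
    for c in claims:
        key = (c["member_id"], c["provider_id"], c["cpt_code"], c["service_date"])
        if key in seen:
            flagged.append(c)   # held pre-payment rather than paid and chased
        else:
            seen.add(key)
            payable.append(c)
    return payable, flagged

claims = [
    {"claim_id": 1, "member_id": "M1", "provider_id": "P9",
     "cpt_code": "99213", "service_date": "2019-01-07", "amount": 110.0},
    {"claim_id": 2, "member_id": "M1", "provider_id": "P9",
     "cpt_code": "99213", "service_date": "2019-01-07", "amount": 110.0},
]
payable, flagged = duplicate_claim_edit(claims)
```

Real edits are far richer (unbundling, upcoding, impossible service combinations), but they share this shape: a deterministic rule applied to every claim before payment rather than after.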

If it’s not already part of your client company’s regular human resource (HR) administration process, simply matching employee data with the TPA’s files could also shed light on potential problems. Some employees, for instance, may be in the wrong plan or have the wrong coverage. Moreover, former employees may still be listed as covered. Which brings us to the big problem of dependent eligibility. I say ‘big’ because providing costly health insurance coverage to the ineligible dependents of company employees can quickly prove a budget buster for enterprises of all sizes.
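The roster-matching step amounts to a set comparison between the HR employee file and the TPA’s covered-member file. A minimal sketch, with illustrative identifiers:

```python
# Sketch: match the HR employee roster against the TPA's covered-member file.
# Identifiers are illustrative; in practice the match key (SSN, employee ID)
# depends on what the TPA file contains.

def eligibility_mismatches(hr_employee_ids, tpa_covered_ids):
    """Return covered members who are not current employees (e.g., former
    employees still enrolled) and current employees missing from coverage."""
    hr, tpa = set(hr_employee_ids), set(tpa_covered_ids)
    covered_not_employed = tpa - hr   # possible ghost coverage
    employed_not_covered = hr - tpa   # possible enrollment errors
    return covered_not_employed, employed_not_covered

ghosts, missing = eligibility_mismatches(
    hr_employee_ids={"E1", "E2", "E3"},
    tpa_covered_ids={"E2", "E3", "E9"},
)
```

Each mismatch still needs follow-up: a name on the TPA file but not the roster could be a recent termination in transit, not a fraud.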

To determine a client’s risk of exposure to ineligible dependents, fraud risk assessors should start with an assessment of the controls built into the benefits enrollment process. If the organization doesn’t require proof of eligibility during the initial enrollment process, the risk of exposure increases. Risk also increases if proof is required upon initial enrollment but not thereafter, such as when covered children reach a certain age. Based on the level of risk identified, examiners, in conjunction with HR, can select one of several approaches to the next phase of their review.

–Low Risk: Offer employees an amnesty period. The organization should remind employees of the benefit plan requirements and let them know that a review of eligibility will be performed. They should be given a reasonable amount of time to adjust their coverage as necessary without any repercussions; sometimes this alone can result in a significant level of compliance.

–Medium Risk: Require eligibility certification. In addition to the steps associated with low risk, the organization should require employees to complete an affidavit that certifies all of their covered dependents are eligible under the benefit plan requirements.

–High Risk: Audit employee eligibility. The company’s internal audit function should perform a full eligibility audit after the organization completes the steps associated with low and medium risk situations.

As this blog and the ACFE have repeated over and over again, employee awareness can be the best fraud prevention tool available. Fraud Examiners working in every industry should learn more about health-care fraud scenarios and their effect on their client’s businesses and pursue opportunities to educate management on the cost drivers and the impact of fraud on their companies. If the organization’s compliance program includes employee training and distribution of periodic educational updates, this would be a logical medium into which to integrate employee awareness messaging. At a minimum, Fraud Examiners should be sure that any new employee orientation sessions cover basic healthcare benefits guidance:

–Don’t provide personal health coverage information to strangers. If the employee is uncertain why a third party is requesting certain personal information, they should be instructed to contact their company’s benefits administrator.

–Don’t loan an insurance card to anyone not listed on the card as a covered individual.

–Employees need to familiarize themselves with the conditions under which health coverage is being extended to them and to their dependents.

Given the complexities of health benefits administration, an organization almost cannot provide too much information to its employees about their coverage. Taking the guesswork out of the administration process can result in lower costs and happier employees in the long run. Although the U.S. health-care reforms contained in the Affordable Care Act promised many long-term benefits, in the short term most employers were required to expand coverage offerings for employees and their dependents, thereby increasing costs. All of these factors point to an opportunity for health-care fraud to continue growing and, consequently, for Fraud Examiners and fraud risk assessors to continue to play an important role in keeping this relentless source of monetary loss at bay.

Detect and Prevent

I got a call last week from a long-term colleague, one of whose smaller client firms recently discovered a long-running fraud initiated by a key employee. My friend has been asked to assist her client in developing approaches to strengthen controls to, hopefully, prevent such disasters in the future.

ACFE training has consistently told us over the years, and daily experience repeatedly confirmed, that it is simply not possible or economical to stop all fraud before it happens. The only way for a retail concern to absolutely stop shoplifting might be to close and accept orders only over the Internet. Similarly, the only way for a bank to absolutely stop all loan fraud might be for it to stop lending money.

In general, my friend and I agreed during our conversation that increasing preventive security can reduce fraud losses, but beyond some point, the cost of additional preventive security will exceed the related savings from reduced fraud losses. This is where detection comes in; it may be economical when prevention is not. One way to prevent a salesclerk from stealing from the register would be for the security department to carefully monitor, review, and approve every one of the clerk’s sales. However, it would likely be much more cost effective instead to implement a simple detective control: an end-of-shift reconciliation between the cash in the register and the transactions logged by the cash register during the clerk’s shift. If refunds are not given at the point of sale, the end-of-shift balance of cash in the register should equal the balance of cash at the beginning of the shift plus the shift’s sales per the transaction logs. Any significant failure of these numbers to reconcile would amount to a red flag. Of course, further investigation could show that the clerk simply made an error and so did not commit fraud.
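The reconciliation described above reduces to a few lines of arithmetic. A minimal sketch, assuming cash-only sales, no point-of-sale refunds, and an illustrative $1.00 red-flag tolerance:

```python
def reconcile_shift(beginning_cash, ending_cash, logged_cash_sales,
                    tolerance=1.00):
    """End-of-shift detective control: ending cash should equal beginning cash
    plus cash sales per the register's transaction log. Returns the variance
    (negative = shortage) and whether it exceeds the red-flag tolerance."""
    expected = beginning_cash + logged_cash_sales
    variance = ending_cash - expected
    return variance, abs(variance) > tolerance

variance, red_flag = reconcile_shift(
    beginning_cash=200.00, ending_cash=1130.00, logged_cash_sales=950.00
)
# A $20 shortage here raises the red flag; investigation could still reveal
# an honest error rather than fraud.
```

The tolerance matters: set it to zero and every penny of change-making error becomes an investigation, which is exactly the false-positive cost discussed next.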

But the cost effectiveness of detective controls, like that of preventive controls, has limits. First, such controls are not cost free to implement, and improving detective controls may cost more than the benefits they provide. Second, detective controls produce both false positives and false negatives. A false positive occurs when a detective control signals a possible fraud for which investigation turns up a reasonable explanation. A false negative occurs when a detective control fails to signal a possible fraud when one exists. Reducing false negatives means increasing the fraud detection rate.

Similarly, the cost effectiveness of increasing preventive security has a limit, as does the benefit of increasing the fraud detection rate. To increase the detection rate, it’s necessary to increase the frequency at which the detective control signals possible fraud. The result is more investigations, and the cost of these additional investigations can exceed the resulting reduction in fraud losses.

As we all learned in undergraduate auditing, controls are essentially policies and procedures designed to minimize losses due to fraud or to other events such as errors or acts of nature. Corrective controls are special control types invoked once a loss is known to exist. With respect to fraud, an important corrective control is the investigation of potential frauds and the recovery process that follows discovered frauds.

More generally speaking, fraud investigations themselves serve not only a corrective function but also detective and preventive functions. Such investigations are detective of fraud to the extent that they follow up on fraud signals or red flags in order to confirm or disconfirm the presence of fraud. But once fraud is confirmed to exist, fraud examinations shift toward gathering evidence and become corrective by assisting in recovery from the perpetrator and other sources such as insurance. Fraud investigations are also preventive in that they can lead to the revelation and repair of heretofore unknown control weaknesses.

The end result is that the fraud investigation functions to correct the original loss, and the related discovery of the fraud scenario leads to prevention of similar losses in the future. In summary, the fraud examination has served to detect, correct, and prevent fraud. However, fraud investigations are not normally thought of as detective controls. This is because fraud investigations tend to be much more costly than standard detective controls and therefore are normally used only when there is already some predication in the form of a fraud indicator triggered by a typical detective control. Therefore, the primary functions of fraud investigations are to address existing frauds and help to prevent future ones.

In some cases, the primary benefit of a fraud investigation might be to prevent future frauds. Even when recovery is impossible or impractical (e.g., because the thief has no assets), unwinding the fraud scheme may still have the benefit of leading to the prevention of the same scheme in the future. Furthermore, a company might benefit from spending a very large sum of money to investigate and prosecute a very small theft in order to deter other individuals from defrauding the company in the same way. Many State governments have statutes specifying that every fraud affecting governmental assets, whether large or small, must be fully investigated because taxpayer funds are involved (the assets affected are public property).

There is never a guarantee that investigating a fraud indicator will lead to the discovery of fraud. Depending on the situation, an investigation might lead to nothing at all (i.e., produce a reasonable explanation for the original red flag) or to the discovery of losses due to simple errors, waste, inefficiencies, or even uncontrollable events like acts of nature. If a lender is considering a loan application, a fraud indicator might indicate fraud, an error, or nothing at all. Similarly, in regard to the possible theft of raw materials in a production process, a fraud indicator might simply indicate undocumented waste or scrap.

Two important factors to consider concerning the general design of a fraud detection process are not only the costs and benefits of detecting, correcting, and preventing a given fraud scenario but also the costs and benefits of detecting, correcting, and preventing errors, waste, uncontrollable events, and inefficiencies in general. Of course, the particular costs that are relevant will vary from one type of business process to another.

As a general rule, we can say that both preventive controls and detective controls cost less than corrective controls. Corrective controls tend to involve hands-on, resource-intensive investigations, and in many cases, such investigations do not result in recovering the loss. On the other hand, preventive controls can also be quite costly. Banks pay armed guards and incur costs to maintain expensive vaults and alarm systems. Companies surround their headquarters with high fences and armed guards, and use security checkpoints and biometric key card systems inside. On the information technology side, firms use sophisticated firewalls and multi-layer access controls. The costs of all these preventive measures can add up to staggering sums in large companies. Of course, losses that are not prevented or corrected in a timely fashion can lead to the ultimate corrective measure: bankruptcy. In fact, some ACFE estimates show that about one-third of all business failures relate to some form of fraudulent activity.

One positive aspect of preventive controls is that, unlike detective controls, they do not generate fraud indicators that lead to costly investigations. In fact, they tend to do their job in complete silence, so that management never even knows when they prevent a fraud. The thick door of a bank vault with a time lock prevents bank employees from entering at night to steal its contents. Similarly, passwords, PINs, and biometric data silently provide access to authorized individuals and deny it to everyone else.

The problem with preventive controls is that they are always subject to circumvention by determined and cunning fraudsters. There is no perfect solution to preventing acts of fraud, so detection is necessary as a secondary line of defense, and in some cases, as the primary line of defense. Consider a lending company that accepts online loan applications. It may be difficult or impossible to prevent fraudulent applications, but the company can certainly put a sophisticated (and expensive) system in place to analyze applications and provide indicators that suggest when an application may be fraudulent.

In general, the optimal allocation of resources to prevention versus detection depends on the particular business process under consideration; no general rule dictates the split. But there are some general steps that can assist in making the allocation:

1. Analyze the target business process and identify threats and vulnerabilities.
2. Select reasonable preventive controls according to the business process and customs within the client’s industry.
3. Estimate fraud losses given the assumed preventive controls.
4. Identify and add a basic set of detective controls to the system.
5. For a given set of detective controls, identify the optimal mix of false negatives versus false positives. The optimal mix depends on the costs of investigations versus the costs of losses. Large losses and small investigation costs favor relatively low false negatives and high false positives for red flags.
6. Given the assumed mix of false negative and false positive errors, estimate the incremental cost associated with adding the detective (and related corrective) controls, and estimate the resulting reduction in fraud losses.
7. Compare the reduction in fraud losses with the increase in costs associated with adding the optimal mix of detection and correction controls.
8. If the increase in costs is significantly lower than the related reduction in fraud losses, consider adding more detective controls. Otherwise, accept the set of detective controls under consideration.
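Steps 6 through 8 amount to a simple comparison between incremental control cost and estimated loss reduction. A minimal sketch, with invented figures and an assumed safety margin to reflect the imprecision of loss estimates:

```python
# Sketch of steps 6-8: add detective/corrective controls only when the
# estimated loss reduction clearly exceeds their incremental cost.
# The 1.25 margin is an arbitrary assumption, not an ACFE figure.

def should_add_controls(incremental_control_cost, estimated_loss_reduction,
                        margin=1.25):
    """Step 8 decision: require the loss reduction to beat the added cost by
    a comfortable margin, since both numbers are only estimates."""
    return estimated_loss_reduction >= margin * incremental_control_cost

decision = should_add_controls(
    incremental_control_cost=40_000,   # step 6: cost of added controls
    estimated_loss_reduction=65_000,   # step 6: estimated reduction in losses
)
```

In practice the loop repeats: after adding controls, re-estimate losses (step 3) and test whether the next increment still pays for itself.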

MAC Documents

As our upcoming Ethics 2019 lecture for January-February 2019 makes clear, many of the most spectacular cases of fraud during the last two decades that were, at least initially, successfully concealed from auditors involved the long-running falsification of documents. Bernie Madoff and Enron come especially to mind. In hindsight, the auditors involved in these cases failed to detect the fraud for multiple reasons, one of which was a demonstrated lack of professional skepticism coupled with a general lack of awareness.

Fraud audit and red flag testing procedures are designed to validate the authenticity of documents and the performance of internal controls. Red flag testing procedures are based on observing indicators in internal documents and internal controls. In contrast, fraud audit testing procedures verify the authenticity of the representations in those documents and controls. While internal controls are an element of each, neither is the same as the testing procedures performed in a traditional audit. Since fraud audit testing procedures are the basis of the fraud audit program, the analysis of documents will differ between a fraud audit and a traditional verification audit. Business systems are driven by documents, whether paper, imaged, or electronic. Approvals are handwritten, created mechanically, or created electronically through a computerized business application. Therefore, the ability to examine a document for the red flags indicative of a fraud scenario is a critical component of the fraud detection process.

The ACFE points out that within fraud auditing there are two distinct levels of document examination: the forensic document examination performed by a certified document examiner and the document examination performed by an independent external auditor conducting a fraud audit. Clearly, the auditor is not required to have the skills of a certified document examiner; however, the auditor should understand the difference between questioned document examination and the examination of documents for red flags.

Questioned, or forensic, document examination is the application of science to the law. The forensic document examiner, using specialized techniques, examines documents and any handwriting on the documents to establish their authenticity and to detect alterations. The American Academy of Forensic Sciences (AAFS) Questioned Document Section and the American Society of Questioned Document Examiners (ASQDE) provide guidance and standards to assurance professionals in the field of document examination. For example, the American Society for Testing and Materials, International (ASTM) Standard E444-09 (Standard Guide for Scope of Work of Forensic Document Examiners) indicates there are four components to the work of a forensic document examiner. These components are the following:

1. Establish document genuineness or non-genuineness, expose forgery, or reveal alterations, additions, or deletions.
2. Identify or eliminate persons as the source of handwriting.
3. Identify or eliminate the source of typewriting or other impression, marks, or relative evidence.
4. Write reports or give testimony, when needed, to aid the users of the examiner’s services in understanding the examiner’s findings.

CFEs will find that some forensic document examiners (FDEs) limit their work to the examination and comparison of handwriting; however, most inspect and examine the whole document in accordance with the ASTM standard.

The fraud examiner or auditor also focuses on the authenticity of the document, with two fundamental differences:

1. The degree of certainty. With forensic document examination, the forensic certainty is based on scientific principles. Fraud audit document examination is based on visual observations and informed audit experience.
2. Central focus. Fraud audit document examination focuses on the red flags associated with a hypothetical fraud scenario. Forensic document examination focuses on the genuineness of the document or handwriting under examination.

Awareness of the basic principles and objectives of forensic document examination assists any auditor or examiner in determining if, when, and how to use the services of a certified document examiner in the course of conducting a fraud audit.

ACFE training indicates that documentary red flags are among the most important of all red flags. Examiners and auditors need to be aware not only of how a fraud scenario occurs, but also of how to employ the correct methodology in identifying and describing the documents related to a given scenario. These capabilities are also critical to successfully identifying document-related red flags. Specifically, a document must link to the fraud scenario and to the key controls of the involved business process(es).

The target document should be examined for the following: document condition, document format, document information, and industry standards. To these characteristics the concepts of missing, altered, and created content should be applied. The second aspect of the document examination is linking the document to the internal controls, which is a critical step in developing the decision-tree portion of the fraud audit program. Using a document examination methodology aids the fraud auditor in building his or her fraud audit program.

The ACFE’s acronym MAC is a useful aid to assist the auditor in identifying red flags and the corresponding audit response. The ‘M’ stands for missing, either missing the entire document or missing information on a document; the ‘A’ for altered information on a document; and the ‘C’ for created documents or information on a document. Specifically:

A missing document is a red flag. Missing documents occur because the document was never created, was destroyed, or has been misfiled. Documents either form the basis for initiating a transaction or support the transaction.

The frequency of missing documents must be linked to the fraud scenario. In some instances, missing one document may be a red flag, although typically repetition is necessary to warrant fraud audit testing procedures. The audit response should focus on the following attributes assuming the document links to a key control:

— Is the document externally or internally created? The existence of externally created documents can be confirmed with the source, assuming the source is not identified as involved in the fraud scenario.
— Is the document necessary to initiate the transaction or is the document a supporting one? Documents used to initiate a transaction had to have existed at some point; therefore, logic dictates that the document was destroyed or misfiled.
— One, two, or all three of the following questions could apply to internal documents:

• Is there a pattern of missing documents associated with the same entity?
• Is there a pattern of missing documents associated with an internal employee?
• Does the document support a key anti-fraud control, therefore being a trigger red flag, or is the missing document related to a non-key control?
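The pattern questions above lend themselves to a quick tabulation once document-completeness testing is done: count missing-document transactions by entity and by the processing employee, and flag concentrations. A sketch, with illustrative field names and an arbitrary threshold:

```python
# Sketch: look for concentrations of missing supporting documents by vendor
# (entity) and by processing employee. Field names and the threshold of 3
# are illustrative assumptions.

from collections import Counter

def missing_document_patterns(transactions, threshold=3):
    """Return vendors and employees associated with `threshold` or more
    transactions whose supporting document is missing."""
    by_vendor = Counter(t["vendor"] for t in transactions if t["doc_missing"])
    by_employee = Counter(t["employee"] for t in transactions if t["doc_missing"])
    return (
        {v: n for v, n in by_vendor.items() if n >= threshold},
        {e: n for e, n in by_employee.items() if n >= threshold},
    )

txns = [
    {"vendor": "V1", "employee": "E7", "doc_missing": True},
    {"vendor": "V1", "employee": "E7", "doc_missing": True},
    {"vendor": "V1", "employee": "E7", "doc_missing": True},
    {"vendor": "V2", "employee": "E3", "doc_missing": False},
]
flagged_vendors, flagged_employees = missing_document_patterns(txns)
```

A concentration is only a red flag, not proof: a single misfiled batch can produce the same pattern as a fraud scenario, which is why the audit response above distinguishes key from non-key controls.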

With regard to missing information on a document, several questions arise, one of which is: are there tears, torn pieces, soiled areas, or charred areas that cause information to be missing? To address any of these situations, a similar document type must be found to determine whether the intent of the document has changed because of the missing information. Another question is: is information obliterated (e.g., covered, blotted, or wiped out)? Overwriting is commonly used to obscure existing writing. Correction fluid is also a common method, but the underlying writing can be read and photographed using transmitted light from underneath the document.

Scratching out writing with a pen will obliterate writing successfully if it results in the page being torn. Spilled liquids can also obliterate writing.

‘A’, altered, pertains to changing or adding information on the original document. The information may be altered manually or through desktop publishing capabilities. Manual changes tend to be visible through a difference in handwriting, while electronic documents are generally altered via the software used to create them.

Any altering of information would be detected through the same red flags as adding information. In the context of fraud, forgery is the first thing that comes to mind in any discussion of the altering of documents. Forgery is a legal term applied to fraudulent imitation: the making or altering of a writing so as to convey the false impression that the document itself, not merely its contents, is authentic, thereby imposing a legal liability. In short, it is the alteration of a document with the intent to defraud. It should be noted that it is possible for a document examiner to identify a document or signature as a forgery, but it is much less common for the examiner to identify the forger. This is due to the nature of handwriting: a forger attempting to imitate the writing habit of another person suppresses his or her own writing characteristics and style, in essence disguising his or her own writing.

A ‘C’, or created, document is any document prepared by the perpetrator of the fraud scenario. This can include an entirely fabricated document or created text added to a genuine one. The document can be prepared by an external source (e.g., a vendor in an over-billing scheme) or an internal source (e.g., a purchasing agent who creates false bids).

Some signs of document creation include the age of the document being inconsistent with its purported creation date, or the document lacking the sophistication typically associated with normal business standards. Added or created text can be inserted with ink or whatever type of writing instrument was used on the original. It can also be added by cutting and pasting sections of text and then photocopying the document to eliminate any outline. When pages are suspected of being added in this manner, the type of paper used for the original should be compared with that of the photocopy. For computer-generated and machine-produced documents, differences in the software used may result in textual differences.

As the MAC acronym seeks to demonstrate, fraudulent document information can be categorized as missing information, incorrect information, or information inconsistent with normal business standards. Therefore, the investigating CFE or auditor needs to have the requisite business and industry knowledge to correctly associate the appropriate red flags with the relevant documentary information consistent with the fraud scenario under investigation.

Needles & Haystacks

A long-time acquaintance of mine told me recently that, fresh out of the University of Virginia and new to forensic accounting, his first assignment consisted in searching, at the height of summer, through two unairconditioned trailers full of thousands of savings and loan records for what turned out to be just two documents critical to proving a loan fraud. He told me that he thought then that his job would always consist of finding needles in haystacks. Our profession and our tools have, thankfully, come a long way since then!

Today, digital analysis techniques afford the forensic investigator the ability to perform cost-effective financial forensic investigations. This is achieved through the following:

–The ability to test or analyze 100 percent of a data set, rather than merely sampling it.
–Massive amounts of data can be imported into working files, which allows for the processing of complex transactions and the profiling of certain case-specific characteristics.
–Anomalies within databases can be quickly identified, thereby reducing the number of transactions that require review and analysis.
–Digital analysis can be easily customized to address the scope of the engagement.

Overall, digital analysis can streamline investigations that involve a large number of transactions, often turning a needle-in-the-haystack search into a refined and efficient investigation. Digital analysis is not designed to replace the pick-and-shovel aspect of an investigation. However, the proper application of digital analysis will permit the forensic operator to efficiently identify those specific transactions that require further investigation or follow up.
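As an illustration of full-population testing, the classic same-vendor, same-amount, same-date duplicate-payment test can be expressed directly over an entire payment file. Field names are illustrative assumptions; packages such as IDEA and ACL provide equivalent built-in tests:

```python
# Full-population digital analysis: group every payment by (vendor, amount,
# date) and surface the groups with more than one payment for follow-up.
# Field names are illustrative.

from collections import defaultdict

def possible_duplicate_payments(payments):
    """Return {(vendor, amount, date): [check numbers]} for every group
    containing more than one payment."""
    groups = defaultdict(list)
    for p in payments:
        groups[(p["vendor"], p["amount"], p["date"])].append(p["check_no"])
    return {k: v for k, v in groups.items() if len(v) > 1}

payments = [
    {"vendor": "Acme", "amount": 1200.00, "date": "2019-03-01", "check_no": 101},
    {"vendor": "Acme", "amount": 1200.00, "date": "2019-03-01", "check_no": 118},
    {"vendor": "Best", "amount": 500.00,  "date": "2019-03-02", "check_no": 102},
]
dupes = possible_duplicate_payments(payments)
```

Because the test runs against every payment rather than a sample, a hit list like this is exhaustive for the scheme it targets; each hit still requires the pick-and-shovel follow-up the paragraph above describes.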

As every CFE knows, there are an ever-growing number of software applications that can assist the forensic investigator with digital analysis. A few such examples are CaseWare International Inc.’s IDEA, ACL Services Ltd.’s ACL Desktop Edition, and the ActiveData plug-in, which can be added to Excel.

So, whether using the Internet in an investigation or using software to analyze data, fraud examiners can today rely heavily on technology to aid them in almost any investigation. More data is stored electronically than ever before: financial data, marketing data, customer data, vendor listings, sales transactions, email correspondence, and more; and evidence of fraud can be located within that data. Unfortunately, fraudulent data often looks like legitimate data when viewed in raw form. Taking a sample and testing it might or might not uncover evidence of fraudulent activity. Fortunately, fraud examiners now have the ability to sort through piles of information by using special software and data analysis techniques. These methods can identify future trends within a certain industry, and they can be configured to identify breaks in audit control programs and anomalies in accounting records.

In general, fraud examiners perform two primary functions to explore and analyze large amounts of data: data mining and data analysis. Data mining is the science of searching large volumes of data for patterns. Data analysis refers to any statistical process used to analyze data and draw conclusions from the findings. These terms are often used interchangeably.

If properly used, data analysis processes and techniques are powerful resources. They can systematically identify red flags and perform predictive modeling, detecting a fraudulent situation long before many traditional fraud investigation techniques would be able to do so.
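As a concrete illustration of the kind of systematic red-flag test such software automates, a first-digit (Benford's law) screen over payment amounts can be sketched in a few lines of Python. The tolerance cutoff and sample amounts here are invented for illustration; a real engagement would use a statistical test sized to the data set:

```python
import math
from collections import Counter

def first_digit(x):
    """Leading significant digit of a positive number."""
    while x < 1:
        x *= 10
    while x >= 10:
        x //= 10
    return int(x)

def benford_screen(amounts, tolerance=0.05):
    """Flag digits whose observed first-digit frequency deviates from
    Benford's expectation by more than `tolerance` (an arbitrary cutoff)."""
    digits = [first_digit(a) for a in amounts if a > 0]
    if not digits:
        return []
    counts = Counter(digits)
    flags = []
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)          # Benford's law
        observed = counts.get(d, 0) / len(digits)
        if abs(observed - expected) > tolerance:
            flags.append(d)
    return flags
```

Amounts whose leading digits cluster unnaturally (for example, many invoices just under an approval threshold) would surface here as flagged digits warranting closer review.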

Big data is now a buzzword in the worlds of business, audit, and fraud investigation. Big data are high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery, and process optimization. Simply put, big data is information of extreme size, diversity, and complexity.

Rather than thinking of big data as a single large data set, fraud investigators should think about the way data grow when data sets that might not normally be connected are linked together. Big data represents the continuous expansion of data sets, the size, variety, and speed of generation of which make it difficult to manage and analyze.

Big data can be instrumental to fact gathering during an investigation. Distilled down to its core, how do fraud examiners gather data in an investigation? We look at documents and financial or operational data, and we interview people. The challenge is that people often gravitate to the areas with which they are most comfortable. Attorneys will look at documents and email messages and then interview individuals. Forensic accounting professionals will look at the accounting and financial data (structured data). Some people are strong interviewers. The key is to consider all three data sources in unison. Big data helps to make it all work together to tell the complete picture. With the ever-increasing size of data sets, data analytics has never been more important or useful. Big data requires the use of creative and well-planned analytics due to its size and complexity. One of the main advantages of using data analytics in a big data environment is, as indicated above, that it allows the investigator to analyze an entire population of data rather than having to choose a sample and risk drawing incorrect conclusions because of sampling error.

To conduct an effective data analysis, a fraud examiner must take a comprehensive approach. Any direction can (and should) be taken when applying analytical tests to available data. The more creative fraudsters get in hiding their schemes, the more creative the fraud examiner must become in analyzing data to detect these schemes. For this reason, it is essential that fraud investigators consider both structured and unstructured data when planning their engagements.

Data are either structured or unstructured. Structured data is the type of data found in a database, consisting of recognizable and predictable structures. Examples of structured data include sales records, payment or expense details, and financial reports.

Unstructured data, by contrast, is data not found in a traditional spreadsheet or database. Examples of unstructured data include vendor invoices, email and user documents, human resources files, social media activity, corporate document repositories, and news feeds.

When using data analysis to conduct a fraud examination, the fraud examiner might use structured data, unstructured data, or a combination of the two. For example, conducting an analysis on email correspondence (unstructured data) among employees might turn up suspicious activity in the purchasing department. Upon closer inspection of the inventory records (structured data), the fraud examiner might uncover that an employee has been stealing inventory and covering her tracks in the records.

Data mining has roots in statistics, machine learning, data management and databases, pattern recognition, and artificial intelligence. All of these are concerned with certain aspects of data analysis, so they have much in common; yet they each have a distinct and individual flavor, emphasizing particular problems and types of solutions.

Although data mining technologies provide key advantages to marketing and business activities, they can also surface financial data previously hidden within a company’s databases, enabling fraud examiners to detect potential fraud.

Data mining software provides an easy-to-use process that gives the fraud examiner the ability to get to data at the required level of detail. Data mining combines several different techniques essential to detecting fraud, including the streamlining of raw data into understandable patterns.

Data mining can also help prevent fraud before it happens. For example, computer manufacturers report that some of their customers use data mining tools and applications to develop anti-fraud models that score transactions in real-time. The scoring is customized for each business, involving factors such as locale and frequency of the order, and payment history, among others. Once a transaction is assigned a high-risk score, the merchant can decide whether to accept the transaction, deny it, or investigate further.
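A toy version of such a real-time scoring model might look like the following Python sketch. The factors echo those named above (locale, order frequency, payment history), but every weight, field name, and threshold is invented for illustration rather than drawn from any real scoring product:

```python
def score_transaction(txn, history):
    """Assign a crude risk score to a transaction.
    All weights and cutoffs are illustrative, not derived from real data."""
    score = 0
    if txn["country"] != history["usual_country"]:
        score += 40                       # unfamiliar locale
    if txn["amount"] > 3 * history["avg_amount"]:
        score += 30                       # unusually large order
    if history["chargebacks"] > 0:
        score += 20                       # poor payment history
    if txn["orders_last_hour"] > 5:
        score += 25                       # burst of orders
    return score

def decide(score, deny_at=80, review_at=50):
    """Translate a score into accept / review / deny (thresholds invented)."""
    if score >= deny_at:
        return "deny"
    if score >= review_at:
        return "review"
    return "accept"
```

A real model would derive its weights statistically from each business's historical transaction data, which is exactly the customization the paragraph above describes.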

Often, companies use data warehouses to manage data for analysis. Data warehouses are repositories of a company’s electronic data designed to facilitate reporting and analysis. By storing data in a data warehouse, data users can query and analyze relevant data stored in a single location. Thus, a company with a data warehouse can perform various types of analytic operations (e.g., identifying red flags, transaction trends, patterns, or anomalies) to assist management with its decision making responsibilities.

Finally, after the fraud examiner has identified the data sources, s/he should identify how the information is stored by reviewing the database schema and technical documentation. Fraud examiners must be ready to face a number of pitfalls when attempting to identify how information is stored, from weak or nonexistent documentation to limited cooperation from the IT department.

Moreover, once collected, it’s critical to ensure that the data is complete and appropriate for the analysis to be performed. Depending on how the data was collected and processed, it could require some manual work to make it usable for analysis purposes; it might be necessary to modify certain field formats (e.g., date, time, or currency) to make the information usable.
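The field-format cleanup described here can be sketched with the standard library alone. The date formats and currency conventions handled below are illustrative assumptions about how exported data might look, not a complete normalization routine:

```python
from datetime import datetime
from decimal import Decimal

def normalize_date(raw):
    """Try several date formats commonly seen in exported data
    (this format list is illustrative) and return ISO 8601."""
    for fmt in ("%m/%d/%Y", "%d-%b-%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw!r}")

def normalize_currency(raw):
    """Strip currency symbols and thousands separators so amounts
    can be compared numerically; handles accounting-style negatives."""
    cleaned = raw.strip().replace("$", "").replace(",", "")
    if cleaned.startswith("(") and cleaned.endswith(")"):
        cleaned = "-" + cleaned[1:-1]     # (250.00) means -250.00
    return Decimal(cleaned)
```

Running every imported field through normalizers like these before analysis helps ensure that joins, sorts, and totals across data sources actually line up.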

Fraud Prevention Oriented Data Mining

One of the most useful components of our Chapter’s recently completed two-day seminar on Cyber Fraud & Data Breaches was our speaker Cary Moore’s observations on the fraud-fighting potential of management’s creative use of data mining. For CFEs and forensic accountants, the benefits of data mining go much deeper than its use as a tool to help our clients combat traditional fraud, waste and abuse. In its simplest form, data mining provides automated, continuous feedback to ensure that systems and anti-fraud related internal controls operate as intended and that transactions are processed in accordance with policies, laws and regulations. It can also provide our client managements with timely information that can permit a shift from traditional retrospective/detective activities to the proactive/preventive activities so important to today’s concept of what effective fraud prevention should be. Data mining can put the organization out front of potential fraud vulnerability problems, giving it an opportunity to act to avoid or mitigate the impact of negative events or financial irregularities.

Data mining tests can produce “red flags” that help identify the root cause of problems and allow actionable enhancements to systems, processes and internal controls that address systemic weaknesses. Applied appropriately, data mining tools enable organizations to realize important benefits, such as cost optimization, adoption of less costly business models, improved program, contract and payment management, and process hardening for fraud prevention.

In its most complex, modern form, data mining can be used to:

–Inform decision-making
–Provide predictive intelligence and trend analysis
–Support mission performance
–Improve governance capabilities, especially dynamic risk assessment
–Enhance oversight and transparency by targeting areas of highest value or fraud risk for increased scrutiny
–Reduce costs especially for areas that represent lower risk of irregularities
–Improve operating performance

Cary emphasized that leading, successful organizational implementers have tended to take a measured approach initially when embarking on a fraud prevention-oriented data mining initiative, starting small and focusing on particular “pain points” or areas of opportunity to tackle first, such as whether only eligible recipients are receiving program funds or targeting business processes that have previously experienced actual frauds. Through this approach, organizations can deliver quick wins to demonstrate an early return on investment and then build upon that success as they move to more sophisticated data mining applications.

So, according to ACFE guidance, what are the ingredients of a successful data mining program oriented toward fraud prevention? There are several steps, which should be helpful to any organization in setting up such an effort with fraud, waste, abuse identification/prevention in mind:

–Avoid problems by adopting commonly used data mining approaches and related tools.

This is essentially a cultural transformation for any organization that has either not understood the value these tools can bring or has viewed their implementation as someone else’s responsibility. Given the cyber fraud and breach related challenges faced by all types of organizations today, it should be easier for fraud examiners and forensic accountants to convince management of the need to use these tools to prevent problems and to improve the ability to focus on cost-effective means of better controlling fraud-related vulnerabilities.

–Understand the potential that data mining provides to the organization to support day to day management of fraud risk and strategic fraud prevention.

Understanding both the value of data mining and how to use the results is at the heart of effectively leveraging these tools. The CEO and corporate counsel can play an important educational and support role for a program that must ultimately be owned by line managers who have responsibility for their own programs and operations.

–Adopt a version of an enterprise risk management program (ERM) that includes a consideration of fraud risk.

An organization must thoroughly understand its risks and establish a risk appetite across the enterprise. In this way, it can focus on those areas of highest value to the organization. An organization should take stock of its risks and ask itself fundamental questions, such as:

-What do we lose sleep over?
-What do we not want to hear about us on the evening news or read about in the print media or on a blog?
-What do we want to make sure happens and happens well?

Data mining can be an integral part of an overall program for enterprise risk management. Both are premised on establishing a risk appetite and incorporating a governance and reporting framework. This framework in turn helps ensure that day-to-day decisions are made in line with the risk appetite, and are supported by data needed to monitor, manage and alleviate risk to an acceptable level. The monitoring capabilities of data mining are fundamental to managing risk and focusing on issues of importance to the organization. The application of ERM concepts can provide a framework within which to anchor a fraud prevention program supported by effective data mining.

–Determine how your client is going to use the data mined information in managing the enterprise and safeguarding enterprise assets from fraud, waste and abuse.

Once an organization is on top of the data, using it effectively becomes paramount and should be considered as the information requirements are being developed. As Cary pointed out, getting the right data has been cited as being the top challenge by 20 percent of ACFE surveyed respondents, whereas 40 percent said the top challenge was the “lack of understanding of how to use analytics”. Developing a shared understanding so that everyone is on the same page is critical to success.

–Keep building and enhancing the application of data mining tools.

As indicated above, a tried-and-true approach is to begin with the lower-hanging fruit, something that will get your client started and will provide an opportunity to learn on a smaller scale. The experience gained will help enable the expansion and the enhancement of data mining tools. While this may be done gradually, it should be a priority and not viewed as the “management reform initiative of the day.” There should be a clear game plan for building data mining capabilities into the fiber of management’s fraud and breach prevention effort.

–Use data mining as a tool for accountability and compliance with the fraud prevention program.

It is important to hold managers accountable not only for helping institute robust data mining programs, but for the results of these programs. Has the client developed performance measures that clearly demonstrate the results of using these tools? Do they reward those managers who are in the forefront in implementing these tools? Do they make it clear to those who don’t that their resistance or hesitation is not acceptable?

–View this as a continuous process and not a “one and done” exercise.

Risks change over time. Fraudsters are always adjusting their targets and moving to exploit new and emerging weaknesses. They follow the money. Technology will continue to evolve, introducing both new risks and new opportunities and tools for management. This client management effort to protect against dangers and rectify errors is one that never ends, but it is also one that can pay benefits in preventing or managing cyber-attacks and breaches that far outweigh the costs if effectively and efficiently implemented.

In conclusion, the stark realities of today’s cyber related challenges at all levels of business, private and public, and the need to address ever rising service delivery expectations have raised the stakes for managing the cost of doing business and conducting the on-going war against fraud, waste and abuse. Today’s client-managers should want to be on top of problems before they become significant, and the strategic use of data mining tools can help them manage and protect their enterprises whilst saving money…a win/win opportunity for the client and for the CFE.

Finding the Words

I had lunch with a long-time colleague the other day and the topic of conversation having turned to our May training event next week, he commented that when conducting a fraud examination, he had always found it helpful to come up with a list of words specifically associated with the type of fraud scenario on which he was working.  He found the exercise useful when scanning through the piles of textual material he frequently had to plow through during complex examinations.

Data analysis in the traditional sense involves running rule-based queries on structured data, such as that contained in transactional databases or financial accounting systems. This type of analysis can yield valuable insight into potential frauds. But, a more complete analysis requires that fraud examiners (like my friend) also consider unstructured textual data. Data are either structured or unstructured. Structured data is the type of data found in a database, consisting of recognizable and predictable structures. Examples of structured data include sales records, payment or expense details, and financial reports. Unstructured data, by contrast, is data that would not be found in a traditional spreadsheet or database. It is typically text-based.

Our client’s employees are sending and receiving more email messages each year, retaining ever more electronic source documents, and using more social media tools. Today, we can anticipate unstructured data to come from numerous sources, including:

• Social media posts
• Instant messages
• Videos
• Voice files
• User documents
• Mobile phone software applications
• News feeds
• Sales and marketing material
• Presentations

Textual analytics is a method of using software to extract usable information from unstructured text data. Through the application of linguistic technologies and statistical techniques, including weighted fraud indicators (e.g., my friend’s fraud keywords) and scoring algorithms, textual analytics software can categorize data to reveal patterns, sentiments, and relationships indicative of fraud. For example, an analysis of email communications might help a fraud examiner gauge the pressures/incentives, opportunities, and rationalizations to commit fraud that exist in a client organization.

According to my colleague, as a prelude to textual analytics (depending on the type of fraud risk present in a fraud examiner’s investigation), the examiner will frequently profit by coming up with a list of fraud keywords that are likely to point to suspicious activity. This list will depend on the industry of the client, suspected fraud schemes, and the data set the fraud examiner has available. In other words, if s/he is running a search through journal entry detail, s/he will likely search for different fraud keywords than if s/he were running a search of emails. The factors identified in the ACFE’s fraud triangle are a helpful starting point for such a list: consider how someone in the entity under investigation might have the opportunity to commit fraud, be under pressure to commit fraud, or be able to rationalize the commission of fraud.

Many people commit fraud because of something that has happened in their life that motivates them to steal. Maybe they find themselves in debt, or perhaps they must meet a certain goal to qualify for a performance-based bonus. Keywords that might indicate pressure include deadline, quota, trouble, short, problem, and concern. Think of words that would indicate that someone has the opportunity or ability to commit fraud. Examples include override, write-off, recognize revenue, adjust, discount, and reserve/provision.

Since most fraudsters do not have a criminal background, justifying their actions is a key part of committing fraud. Some keywords that might indicate a fraudster is rationalizing his actions include reasonable, deserve, and temporary.

So, even though the concepts embodied in the fraud triangle are a good place to start when developing a keyword list, it’s also important to consider the nature of the client entity’s industry and the types of payments it makes or is suspected of making. Think about the fraud scenarios that are likely to have occurred. Does the entity do a significant amount of work overseas or have many contractors? If so, there might be an elevated risk of bribery. Focus on the payment text descriptions in journal entries or in work-related documentation, since no one calls it “bribe expense.” Some examples of word combinations in payment descriptions that might merit special attention include:

• Goodwill payment
• Consulting fee
• Processing fee
• Incentive payment
• Donation
• Special commission
• One-time payment
• Special payment
• Friend fee
• Volume contract incentive

Any payment descriptions bearing these or similar terms warrant extra scrutiny to check for reasonableness. Also, examiners should always be wary of large cash disbursements that have a blank journal payment description.
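A simple scan along these lines over journal-entry descriptions might be sketched as follows. The term list mirrors the examples above, while the blank-description amount threshold and sample entries are invented for illustration:

```python
SUSPECT_TERMS = [
    "goodwill payment", "consulting fee", "processing fee",
    "incentive payment", "donation", "special commission",
    "one-time payment", "special payment", "friend fee",
    "volume contract incentive",
]

def flag_entries(entries, terms=SUSPECT_TERMS):
    """Return (id, reason) pairs for journal entries whose description
    contains a suspect term, or whose description is blank on a large
    cash disbursement (the 10,000 cutoff is arbitrary)."""
    flagged = []
    for entry in entries:
        desc = entry["description"].strip().lower()
        if any(term in desc for term in terms):
            flagged.append((entry["id"], "suspect term"))
        elif not desc and entry["amount"] >= 10_000:
            flagged.append((entry["id"], "blank description, large amount"))
    return flagged
```

The flagged entries are leads, not findings; each still needs the reasonableness review described above.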

Beyond keyword lists, the ACFE tells us that another way to discover fraud clues hidden in text is to consider the emotional tone of employee correspondence. In emails and instant messages, for instance, a fraud examiner should identify derogatory, surprised, secretive, or worried communications. In one example, former Enron CEO Ken Lay’s emails were analyzed, revealing that as the company came closer to filing bankruptcy, his email correspondence grew increasingly derogatory, confused, and angry. This type of analysis provided powerful evidence that he knew something was wrong at the company.

While advanced textual analytics can be extremely revealing and can provide clues for potential frauds that might otherwise go unnoticed, the successful application of such analytics requires the use of sophisticated software, as well as a thorough understanding of the legal environment of employee rights and workplace searches. Consequently, fraud examiners who are considering adding textual analytics to their fraud detection arsenal should consult with technological and legal experts before undertaking such techniques.

Even with sophisticated data analysis techniques, some data are so vast or complex that they remain difficult to analyze using traditional means. Visually representing data via graphs, link diagrams, time-series charts, and other illustrative representations can bring clarity to a fraud examination. The utility of visual representations is enhanced as data grow in volume and complexity. Visual analytics build on humans’ natural ability to absorb a greater volume of information in visual rather than numeric form and to perceive certain patterns, shapes, and shades more easily than others.

Link analysis software is used by fraud examiners to create visual representations (e.g., charts with lines showing connections) of data from multiple data sources to track the movement of money; demonstrate complex networks; and discover communications, patterns, trends, and relationships. Link analysis is very effective for identifying indirect relationships and relationships with several degrees of separation. For this reason, link analysis is particularly useful when conducting a money laundering investigation because it can track the placement, layering, and integration of money as it moves through a web of accounts and entities. It could also be used to detect a fictitious vendor (shell company) scheme. For instance, the investigator could map visual connections between a variety of entities that share an address and bank account number to reveal a fictitious vendor created to embezzle funds from a company.  The following are some other examples of the analyses and actions fraud examiners can perform using link analysis software:

• Associate communications, such as email, instant messages, and internal phone records, with events and individuals to reveal connections.
• Uncover indirect relationships, including those that are connected through several intermediaries.
• Show connections between entities that share an address, bank account number, government identification number (e.g., Social Security number), or other characteristics.
• Demonstrate complex networks (including social networks).
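The shared-attribute matching that underlies such link charts can be approximated even without specialized software. This sketch, with invented vendor and employee records, groups entities by a common field (a bank account number here) to surface a possible shell vendor:

```python
from collections import defaultdict

def entities_sharing(records, field):
    """Group entity names by a shared field value (e.g., bank account
    or street address) and return only values shared by two or more
    entities, which are the links worth charting."""
    groups = defaultdict(set)
    for rec in records:
        groups[rec[field]].add(rec["name"])
    return {value: names for value, names in groups.items() if len(names) > 1}
```

An employee and a vendor landing in the same group, as in the test data below, is exactly the indirect relationship a fictitious-vendor scheme tends to leave behind.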

Imagine a listing of vendors, customers, employees, or financial transactions of a global company. Most of the time, these records will contain a reference to a location, including country, state, city, and possibly specific street address. By visually analyzing the site or frequency of events in different geographical areas, a fraud investigator has yet another variable with which s/he can make inferences.

Finally, timeline analysis software aids fraud examiners in transforming their data into visual timelines. These visual timelines enable fraud examiners to:

• Highlight key times, dates, and facts.
• More readily determine a sequence of events.
• Analyze multiple or concurrent sequences of events.
• Track unaccounted for time.
• Identify inconsistencies or impossibilities in data.
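Rudimentary versions of the unaccounted-for-time and impossibility checks can be scripted directly. The events and the 24-hour gap threshold below are invented for illustration:

```python
from datetime import datetime, timedelta

def timeline_issues(events, max_gap=timedelta(hours=24)):
    """Walk (ISO timestamp, label) events in the order supplied and
    report large gaps and out-of-order anomalies between neighbors."""
    stamps = [(datetime.fromisoformat(ts), label) for ts, label in events]
    issues = []
    for (t1, a), (t2, b) in zip(stamps, stamps[1:]):
        if t2 < t1:
            issues.append(f"out of order: {b!r} precedes {a!r}")
        elif t2 - t1 > max_gap:
            issues.append(f"gap of {t2 - t1} between {a!r} and {b!r}")
    return issues
```

A check that clears before it is cut, as in the test data below, is the kind of impossibility a visual timeline makes obvious at a glance.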

The Anti-Fraud Blockchain

Blockchain technology, the series of interlocking algorithms powering digital currencies like Bitcoin, is emerging as a potent fraud prevention tool.  As every CFE knows, technology is enabling new forms of money and contracting, and the growing digital economy holds great promise to provide a full range of new financial tools, especially to the world’s poor and unbanked. These emerging virtual currencies and financial techniques are often anonymous, and none have received quite as much press as Bitcoin, the decentralized peer-to-peer digital form of money.

Bitcoin was invented in 2009 by a mysterious person (or group of people) using the alias Satoshi Nakamoto, and the coins are created or “mined” by solving increasingly difficult mathematical equations, requiring extensive computing power. The system is designed to ensure no more than twenty-one million Bitcoins are ever generated, thereby preventing a central authority from flooding the market with new Bitcoins. Most people purchase Bitcoins on third-party exchanges with traditional currencies, such as dollars or euros, or with credit cards. The exchange rate against the dollar fluctuates wildly and has ranged from fifty cents per coin around the time of its introduction to over $16,000 in December 2017. People can send Bitcoins, or fractions of a Bitcoin, to each other using computers or mobile apps, where coins are stored in digital wallets. Bitcoins can be directly exchanged between users anywhere in the world using unique alphanumeric identifiers, akin to e-mail addresses, and there are no transaction fees in the basic system, absent intermediaries.

Any time a purchase takes place, it is recorded in a public ledger known as the blockchain, which ensures no duplicate transactions are permitted. Cryptocurrencies are so called because they use cryptography to regulate the creation and transfer of money, rather than relying on central authorities. Bitcoin acceptance continues to grow rapidly, and it is possible to use Bitcoins to buy cupcakes in San Francisco, cocktails in Manhattan, and a Subway sandwich in Allentown.

Because Bitcoin can be spent online without the need for a bank account and no ID is required to buy and sell the crypto currency, it provides a convenient system for anonymous, or more precisely pseudonymous, transactions, where a user’s true name is hidden. Though Bitcoin, like all forms of money, can be used for both legal and illegal purposes, its encryption techniques and relative anonymity make it strongly attractive to fraudsters and criminals of all kinds. Because funds are not stored in a central location, accounts cannot readily be seized or frozen by police, and tracing the transactions recorded in the blockchain is significantly more complex than serving a subpoena on a local bank operating within traditionally regulated financial networks. As a result, nearly all the so-called Dark Web’s illicit commerce is facilitated through alternative currency systems. People do not send paper checks or use credit cards in their own names to buy meth and pornography. Rather, they turn to anonymous digital and virtual forms of money such as Bitcoin.

A blockchain is, essentially, a way of moving information between parties over the Internet and storing that information and its transaction history on a disparate network of computers. Bitcoin, and all the other digital currencies, operates on a blockchain: as transactions are aggregated into blocks, each block is assigned a unique cryptographic signature called a “hash.” Once the validating cryptographic puzzle for the latest block has been solved by a coin mining computer, three things happen: the result is time-stamped, the new block is linked irrevocably to the blocks before and after it by its unique hash, and the block and its hash are posted to all the other computers that were attempting to solve the puzzle involved in the mining process for new coins. This decentralized network of computers is the repository of the immutable ledger of bitcoin transactions.  If you wanted to steal a bitcoin, you’d have to rewrite the coin’s entire history on the blockchain in broad daylight.
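The hash-linking described above can be illustrated in a few lines of Python. This is a bare sketch of the ledger structure only; it omits mining, proof-of-work difficulty, and the peer-to-peer network that make the real system tamper-resistant:

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Create a block whose hash covers its contents and the previous
    block's hash, so altering any earlier block breaks the chain."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute each block's hash and check the links between blocks."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to prior block broken
    return True
```

Tampering with any earlier block changes its recomputed hash and breaks every link after it, which is exactly the property that makes rewriting a coin's history "in broad daylight" impractical.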

While Bitcoin and other digital currencies operate on a blockchain, they are not the blockchain itself. Many computer scientists have observed that, in addition to exchanging digital money, the blockchain can be used to facilitate transactions of other kinds of digitized data, such as property registrations, birth certificates, medical records, and bills of lading. Because the blockchain is decentralized and its ledger immutable, all these types of transactions would be protected from hacking; and because the blockchain is a peer-to-peer system that lets people and businesses interact directly with each other, it is inherently more efficient and cheaper than current systems that are burdened with middlemen such as lawyers and regulators.

A CFE’s client company that aims to reduce drug counterfeiting could use the blockchain to follow pharmaceuticals from provenance to purchase. Another could use it to do something similar with high-end sneakers. Yet another, a medical marijuana producer, could create a blockchain that registers everything that has happened to a cannabis product, from seed to sale, letting consumers, retailers and government regulators know where everything came from and where it went. The same thing can be done with any normal crop; so, in the same way that a consumer would want to know where the corn on her table came from, or where the apple that she had at lunch originated, all stakeholders involved in the medical marijuana enterprise would know where any batch of product originated and who touched it all along the way.

While a blockchain is not a full-on solution to fraud or hacking, its decentralized infrastructure ensures that there are no “honeypots” of data available, like financial or medical records on isolated company servers, for criminals to exploit. Still, touting a bitcoin-derived technology as an answer to cybercrime may seem a stretch considering the high-profile, and lucrative, thefts of cryptocurrency over the past few years. It’s estimated that, as of March 2015, a full third of all Bitcoin exchanges (where people store their bitcoin) had been hacked, and nearly half had closed. There was, most famously, the 2014 pilferage of Mt. Gox, a Japanese-based digital coin exchange, in which 850,000 bitcoins worth $460 million disappeared. Two years later another exchange, Bitfinex, was hacked and around $60 million in bitcoin was taken; the company’s solution was to spread the loss to all its customers, including those whose accounts had not been drained.

Unlike money kept in a bank, cryptocurrencies are uninsured and unregulated. That is one of the consequences of a monetary system that exists, intentionally, beyond government control or oversight. It may be small consolation to those who were affected by these thefts that the Bitcoin network itself and the blockchain have never been breached, a record that speaks to the blockchain’s resilience against hacking.

This security is what makes it possible for smart contracts to be written and stored on the blockchain. These are covenants, written in code, that specify the terms of an agreement. They are smart because, as soon as a contract’s terms are met, it executes automatically, without human intervention. Once triggered, it can’t be amended, tampered with, or impeded. Such smart contracts are a tool with the potential to change how business is done. The concept, as with digital currencies, is based on computers synced together. Now imagine that rather than syncing a transaction, software is synced. Every machine in the network runs the same small program. It could be something simple, like a loan: A sends B some money, and B’s account automatically pays it back, with interest, a few days later. All parties agree to these terms, and it’s locked in using the smart contract. The parties have achieved programmable money.
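In ordinary code, the loan example reduces to a conditional that fires automatically once its terms are met. The following toy Python sketch captures the idea only; the accounts, amounts, and interest rate are invented, and a genuine smart contract executes on the blockchain itself, not in a local script:

```python
def make_loan_contract(lender, borrower, principal, interest_rate, due_day):
    """Return a toy 'smart contract': a function that moves the repayment
    automatically once the due date arrives. Everything here is invented
    for illustration; a real smart contract runs on-chain and cannot be
    amended once triggered."""
    def execute(balances, today):
        if today >= due_day:
            owed = round(principal * (1 + interest_rate), 2)
            balances[borrower] -= owed        # repayment happens automatically
            balances[lender] += owed
            return owed
        return 0.0
    return execute

# A sends B some money; both parties agree to the terms up front.
balances = {"A": 1000.0, "B": 500.0}
balances["A"] -= 100.0                        # A lends B 100
balances["B"] += 100.0
contract = make_loan_contract("A", "B", principal=100.0,
                              interest_rate=0.05, due_day=3)
contract(balances, today=1)                   # not yet due; nothing happens
repaid = contract(balances, today=3)          # due date: principal plus 5% moves back
```

The point of the sketch is the automaticity: no human approves the repayment step once the condition is met, which is what "programmable money" means in the paragraph above.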

There is no doubt that smart contracts and the blockchain itself will augment the trend toward automation, though it is automation through lines of code, not robotics. For businesses looking to cut costs and reduce fraud, this is one of the main attractions of blockchain technology. But it raises a challenge: if contracts are automated, what happens to traditional firm control structures, processes, and intermediaries like lawyers and accountants? And what about managers? Their roles would all change radically. Most blockchain advocates imagine them changing so radically as to disappear altogether, taking with them many of the costs currently associated with doing business. According to a recent report in the trade press, the blockchain could reduce banks’ infrastructure costs attributable to cross-border payments, securities trading, and regulatory compliance by $15-20 billion per annum by 2022. Whereas most technologies tend to automate workers on the periphery, blockchain automates away the center: instead of putting the taxi driver out of a job, it puts Uber out of a job and lets the taxi drivers work with the customer directly.

Whether blockchain technology will be a revolution for good or one that continues what has come to seem technology’s inexorable, crushing ascendance will be determined not only by where it is deployed, but how. The blockchain could be used by NGOs to eliminate corruption in the distribution of foreign aid by enabling funds to move directly from giver to receiver; it could also be a way for banks to operate without external oversight, encouraging other kinds of corruption. Either way, we as CFEs would be wise to remember that technology is never neutral; it is always endowed with the values of its creators. In the case of the blockchain and cryptocurrency, those values are libertarian and mechanistic: trust resides in algorithmic rules, while the rules of the state and other regulatory bodies are often viewed with suspicion and hostility.

New Rules for New Tools

I’ve been struck these last months by several articles in the trade press about CFEs increasingly applying advanced analytical techniques in support of their work as full-time employees of private and public-sector enterprises.  This is gratifying to learn, because CFEs have been bombarded for some time now with warnings about the risks presented by cloud computing, social media, big data analytics, and mobile devices, and told they need to address those risks in their investigative practice.  Now there is mounting evidence of CFEs doing just that: using these new technologies to change the actual practice of fraud investigation and forensic accounting, shaping how they understand and monitor fraud risk, plan and manage their work, test transactions against fraud scenarios, and report the results of their assessments and investigations to management. All of this demonstrates what we’ve always known: that CFEs, especially those dually certified as CPAs, CIAs, or CISAs, can bring a unique mix of leveraged skills to any employer’s fraud prevention or detection program.

Some examples …

Social Media — Following a fraud involving several of the financial consultants who work in its branches and help customers select accounts and other investments, a large multi-state bank asked a staff CFE to identify ways of spotting disgruntled employees who might be prone to fraud. The effort was important to management not only for fraud prevention but because when the bank lost an experienced financial consultant for any reason, it also lost the relationships that individual had established with the bank’s customers, adversely affecting revenue. The staff CFE suggested that the bank use social media analytics software to mine employees’ email and posts to its internal social media groups. That enabled the bank to identify accurately (reportedly about 33 percent of them) the financial consultants who were not satisfied with their jobs and were considering leaving. Management was able to talk individually with these employees and address their concerns, with the positive outcome of retaining many of them and rendering them less likely to express their frustration through ethically challenged behavior.  Our CFE’s awareness that many organizations use social media analytics to monitor what their customers say about them, their products, and their services (a technique often referred to as sentiment analysis or text analytics) allowed her to suggest an approach that delivered value. This text analytics effort also helped the employer gain the experience to develop routines for identifying email and other employee and customer chatter that might be red flags for future fraud or intrusion attempts.
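For readers unfamiliar with how text analytics of this kind works at its simplest, here is a hypothetical lexicon-based sketch in Python. Commercial sentiment-analysis tools use far richer language models; the word lists, threshold, and data layout below are illustrative assumptions only.

```python
# Toy lexicon-based sentiment scan of internal posts/email, flagging
# employees whose recent messages skew negative. The word lists and the
# flagging threshold are illustrative assumptions, not a real product.
NEGATIVE = {"frustrated", "unfair", "quit", "ignored", "underpaid", "leaving"}
POSITIVE = {"happy", "proud", "excited", "supported", "valued"}

def sentiment_score(text: str) -> int:
    """+1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_disgruntled(posts_by_employee: dict, threshold: int = -2) -> list:
    """Return employees whose cumulative score is at or below threshold."""
    flagged = []
    for employee, posts in posts_by_employee.items():
        total = sum(sentiment_score(p) for p in posts)
        if total <= threshold:
            flagged.append(employee)
    return flagged

posts = {
    "consultant_1": ["frustrated with management", "thinking of leaving soon"],
    "consultant_2": ["proud of our team", "excited about the new product"],
}
# flag_disgruntled(posts) returns ["consultant_1"]
```

A production tool would handle negation, context, and phrasing far better, but the workflow is the same: score the chatter, aggregate per employee, and surface the outliers for a human conversation.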

Analytics — A large international bank was concerned about potential money laundering, especially because regulators were not satisfied with the quality of its related internal controls. At a CFE employee’s recommendation, it invested in state-of-the-art business intelligence solutions that run “in-memory”, a technique that enables analytics and other software to run up to 300,000 times faster, to monitor 100 percent of its transactions, looking for patterns and fraud scenarios indicating potential problems.
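As one concrete illustration of rule-based transaction monitoring of this sort, the sketch below flags a classic “structuring” pattern: several cash deposits just under the reporting threshold within a few days. This is a hypothetical Python toy, not the bank’s actual solution; the threshold, window, and record layout are all assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Toy anti-money-laundering rule: flag accounts making several cash
# deposits just under the $10,000 reporting threshold within a short
# window (a classic "structuring" scenario). Parameters are illustrative.
REPORT_THRESHOLD = 10_000
NEAR_FACTOR = 0.9              # deposits >= 90% of threshold count as "near"
WINDOW = timedelta(days=3)
MIN_HITS = 3

def flag_structuring(transactions) -> set:
    """transactions: iterable of (account, amount, datetime) tuples."""
    near = defaultdict(list)
    for account, amount, when in transactions:
        if NEAR_FACTOR * REPORT_THRESHOLD <= amount < REPORT_THRESHOLD:
            near[account].append(when)
    flagged = set()
    for account, times in near.items():
        times.sort()
        # any MIN_HITS consecutive near-threshold deposits within WINDOW?
        for i in range(len(times) - MIN_HITS + 1):
            if times[i + MIN_HITS - 1] - times[i] <= WINDOW:
                flagged.add(account)
                break
    return flagged

txns = [
    ("acct_1", 9_500, datetime(2024, 1, 1)),
    ("acct_1", 9_800, datetime(2024, 1, 2)),
    ("acct_1", 9_200, datetime(2024, 1, 3)),
    ("acct_2", 12_000, datetime(2024, 1, 1)),  # over threshold: reported normally
]
# flag_structuring(txns) returns {"acct_1"}
```

The “in-memory” platforms mentioned above run thousands of such rules over every transaction concurrently; the logic of any single rule, though, is usually no more exotic than this.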

Mobile — In the wake of an identified fraud on which he worked, an employed CFE recommended that a global software company upgrade its enterprise fraud risk management system so senior managers could view real-time strategy and risk dashboards on their mobile devices (tablets and smartphones). The executives can now monitor risks to both corporate and personal objectives and strategies and take corrective action as necessary. In addition, when a risk level rises above a defined target, the managers and the risk officer receive an alert.

Collaboration — The fraud prevention and information security team at a U.S. company wanted to increase the level of employee acceptance of, and compliance with, its fraud prevention and information security policy. The CFE-certified security officer decided to post a new policy draft to a collaboration area available to every employee and encouraged everyone to post comments and suggestions for improving it. Through this crowd-sourcing technique, the company received multiple comments and ideas, many of which were incorporated into the draft. When the completed policy was published, the company found that its level of acceptance increased significantly because its employees felt they had part ownership of it.

As these examples demonstrate, there is a wonderful opportunity for private and public-sector employed CFEs to join in the use of enterprise applications to enhance both their own and their employer’s investigative efficiency and effectiveness.  Since their organizations are already investing heavily in a wide variety of innovative technologies to transform the way in which they deliver products to and communicate with customers, as well as how they operate, manage, and direct the business, there is no reason CFEs can’t use these same tools to transform each stage of their examination and fraud prevention work.

A risk-based fraud prevention approach requires staff CFEs to build and maintain the fraud prevention plan so it addresses the risks that matter to the organization, and then to update that plan as risks change. In these turbulent times, dominated by cyber, risks change frequently, and it’s essential that fraud prevention teams understand the changes and ensure their approach for addressing them is updated continuously. This requires monitoring to identify and assess both new risks and changes in previously identified risks.

Some of the recent technologies used by organizations’ financial and operational analysts, marketing and communications professionals, and others to understand changes both within and outside the business can also be used to great advantage by loss prevention staff for risk monitoring. The benefits of leveraging this same software are that the organization has existing experts in place to teach CFEs how to use it, the IT department is already providing technical support, and the software is already being used against the very data enterprise fraud prevention professionals like staff CFEs want to analyze.

A range of enhanced analytics software, such as business intelligence, analytics (including predictive and mobile analytics), visual intelligence, sentiment analysis, and text analytics, enables fraud prevention to monitor and assess risk levels. In some cases, the software monitors transactions against predefined rules to identify potential concerns, such as heightened fraud risks in any given business process or set of business processes (e.g., the inventory or financial cycles). For example, a loss prevention team headed by a staff CFE can monitor credit memos in the first month of each quarter to detect potential revenue accounting fraud. Another use is to identify trends associated with known fraud scenarios, such as changes in profit margins or the level of employee turnover, that might indicate changes in risk levels. Likewise, the level of emergency changes to enterprise applications can be analyzed to identify a heightened risk of poor testing and implementation protocols associated with a higher vulnerability to cyber penetration.
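A rule of the credit-memo kind described above can be sketched in a few lines. This hypothetical Python example flags quarters whose first-month credit-memo count spikes against a simple average baseline; the record layout and the spike multiplier are illustrative assumptions, not a prescribed audit test.

```python
# Toy version of the credit-memo test described above: count credit memos
# issued in the first month of each quarter and flag quarters with an
# unusual spike versus the average. Layout and multiplier are illustrative.
FIRST_MONTHS = {1, 4, 7, 10}

def first_month_counts(memos) -> dict:
    """memos: iterable of (year, month) tuples for issued credit memos.
    Returns {(year, quarter): count of first-month memos}."""
    counts = {}
    for year, month in memos:
        if month in FIRST_MONTHS:
            quarter = (year, (month - 1) // 3 + 1)
            counts[quarter] = counts.get(quarter, 0) + 1
    return counts

def flag_spikes(counts: dict, multiplier: float = 2.0) -> list:
    """Flag quarters whose first-month count exceeds multiplier x average."""
    if not counts:
        return []
    avg = sum(counts.values()) / len(counts)
    return [q for q, n in counts.items() if n > multiplier * avg]

# Q1-Q4 of 2024 with 3, 2, 20, and 3 first-month credit memos respectively.
memos = [(2024, 1)] * 3 + [(2024, 4)] * 2 + [(2024, 7)] * 20 + [(2024, 10)] * 3
counts = first_month_counts(memos)
# flag_spikes(counts) returns [(2024, 3)] — the Q3 spike warrants review
```

A real engagement would baseline against prior-year seasonality rather than a flat average, but the shape of the test, aggregate, compare, flag for human follow-up, is the same.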

Finally, innovative staff CFEs have used some interesting techniques to report fraud risk assessments and examination results to management and boards. Some have adopted a more visually appealing representation in a one-page assessment report; others have moved from the traditional text presentation of Microsoft Word to the more visual capabilities of PowerPoint.  New visualization technology, sometimes called visual analytics when allied with analytics solutions, provides more options for fraud prevention managers seeking to enhance or replace formal reports with pictures, charts, and dashboards.  The executives and boards of their employing organizations are already managing the enterprise with dashboards and trend charts; effective loss prevention communications can make use of the same techniques. One CFE used charts and trend lines to illustrate how the time her employing company was taking to process small vendor contracts far exceeded acceptable levels, had contributed to fraud risk, and was continuing to increase. The graphic, generated by a business intelligence analysis combined with a visual analytics tool to build the chart, was inserted into a standard monthly loss prevention report.

CFE-headed loss prevention departments and their allied internal audit and IT departments have a rich selection of technologies that can be used individually or in combination to make all three functions more effective and efficient. It is questionable whether these functions can remain relevant in an age of cyber, addressing and providing assurance on the risks that matter to the organization, without an ever wider use of modern technology. Technology can enable an internal CFE to understand the changing business environment and the risks that can affect the organization’s ability to achieve its fraud prevention related objectives.

The world and its risks are evolving and changing all the time, and assurance professionals need to address the issues that matter now. CFEs need to review where the risk is going to be, not where it was when the anti-fraud plan was built. They increasingly need to have the ability to assess cyber fraud risk quickly and to share the results with the board and management in ways that communicate assurance and stimulate necessary change.

Technology must be part of the solution to that need. The technological tools currently utilized by CFEs will continue to improve and will be joined by others over time. For example, solutions for augmented or virtual reality, in which a picture or view of the physical world is overlaid with data about that picture or view, will enable loss prevention professionals to point their phones at a warehouse and immediately access operational, personnel, safety, and other useful information; a reminder that the future is a compound of both challenge and opportunity.

Threat Assessment & Cyber Security

One rainy Richmond evening last week I attended the monthly dinner meeting of one of the professional organizations of which I’m a member.  Our guest speaker’s presentation was outstanding and, in my opinion, well worth sharing with fellow CFEs, especially as we find more and more of our clients grappling with the reality of ever-evolving cyber threats.

Our speaker started by indicating that, according to a wide spectrum of current thinking, technology issues in isolation should be but one facet of the overall cyber defense strategy of any enterprise. A holistic view on people, process and technology is required in any organization that wants to make its chosen defense strategy successful and, to be most successful, that strategy needs to be supplemented with a good dose of common sense creative thinking. That creative thinking proved to be the main subject of her talk.

Ironically, the sheer size, complexity and geopolitical diversity of the modern-day enterprise can constitute an inherent obstacle to its goal of achieving business objectives in a secured environment.  The source of the problem is not simply the cyber threats themselves, but threat agents. The term “threat agent,” from the Open Web Application Security Project (OWASP), denotes an individual or group that can manifest a threat. Threat agents include:

–Hacktivism;
–Corporate Espionage;
–Government Actors;
–Terrorists;
–Common Criminals (individual and organized).

Irrespective of the type of threat, the threat agent takes advantage of an identified vulnerability and exploits it in the attempt to negatively impact the value the individual business has at risk. The attempt to execute the threat in combination with the vulnerability is called hacking. When this attempt is successful, and the threat agent can negatively impact the value at risk, it can be concluded that the vulnerability was successfully exploited. So, essentially, enterprises are trying to defend against hacking and, more importantly, against the threat agent that is the hacker in his or her many guises. The ACFE identifies hacking as the single activity that has resulted in the greatest number of cyber breaches in the past decade.

While there is no one-size-fits-all standard to build and run a sustainable security defense in a generic enterprise context, most companies currently deploy something resembling the individual components of the following general framework:

–Business Drivers and Objectives;
–A Risk Strategy;
–Policies and Standards;
–Risk Identification and Asset Profiling;
–People, Process, Technology;
–Security Operations and Capabilities;
–Compliance Monitoring and Reporting.

Most IT risk and security professionals would be able to identify this framework and agree with the assertion that it’s a sustainable approach to managing an enterprise’s security landscape. Our speaker pointed out, however, that in her opinion, if the current framework were indeed working as intended, the number of security incidents would be expected to show a downward trend as most threats would fail to manifest into full-blown incidents. They could then be routinely identified by enterprises as known security problems and dealt with by the procedures operative in day-to-day security operations. Unfortunately for the existing framework, however, recent security surveys conducted by numerous organizations and trade groups clearly show an upward trend of rising security incidents and breaches (as every reader of daily press reports well knows).

The rising tide of security incidents and breaches is not surprising, since the trade press also reports an average of 35 new, major security failures every day of the year.  Couple this fact with the ease of execution and ready availability of exploit kits on the Dark Web, and the threat grows in both probability of exploitation and magnitude of impact. With speed and intensity, each threat strikes the security structure of an enterprise and whittles away at management’s credibility to deal with the threat under the routine, daily operational regimen presently defined. Hence, most affected enterprises endure a growing trend of negative security incidents experienced and reported.

During the last several years, in response to all this, many firms have experimented with a new approach to the existing paradigm. These organizations have implemented emergency response teams to respond to cyber-threats and incidents. These teams are a novel addition to the existing control structure and have two main functions: real-time response to security incidents, and the collection of concurrent internal and external security intelligence to feed predictive analysis. Being able to respond to security incidents via a dedicated response team boosts the capacity of the operational organization to contain and recover from attacks. Responding to incidents, however efficiently, is in any case a reactive way to deal with cyber-threats, and it isn’t the whole story. This is where cyber-threat intelligence comes into play. Threat intelligence is a more proactive means of enabling an organization to predict incidents. However, this approach also has a downside: the influx of a great deal of intelligence information may limit the ability of the company to render it actionable on a timely basis.

Cyber threat assessments are an effective means to tame this potentially overwhelming influx of intelligence information. Cyber threat assessment is currently recognized in the industry as red teaming, which is the practice of viewing a problem from an adversary’s or competitor’s perspective. As part of an IT security strategy, enterprises can use red teams to test the effectiveness of the security structure as a whole and to provide a relevance factor to the intelligence feeds on cyber threats. This can help CEOs decide which threats are relevant and carry higher exposure levels than others. The evolution of cyber threat response, cyber threat intelligence and cyber threat assessment (red teams), in conjunction with the existing IT risk framework, can be used as an effective strategy to counter the agility of evolving cyber threats. The cyber threat assessment process assesses and challenges the structure of existing enterprise security systems, including designs, operational-level controls and the overall cyber threat response and intelligence process, to ensure they remain capable of defending against current relevant exploits.

Cyber threat assessment exercises can also be extremely helpful in highlighting the most relevant attacks and in quantifying their potential impacts. The word “adversary” in the definition of the term ‘red team’ is key in that it emphasizes the need to independently challenge the security structure from the viewpoint of an attacker.  Red team exercises should be designed to be independent of the scope, asset profiling, security, IT operations and coverage of existing security policies. Only then can enterprises realistically apply the attacker’s perspective, measure the success of the risk strategy and see how it performs when challenged. It’s essential that red team exercises have the freedom to test the complete security structure and to point to flaws in all components of the IT risk framework.

It’s a common notion that a red team exercise is a penetration test. This is not the case. Use of penetration test techniques by red teams is a means to identify the information required to replicate cyber threats and to create a controlled security incident. The technical shortfalls identified during standard penetration testing are mere symptoms of gaps that may exist in the governance of people, processes and technology. Hence, to make the organization more resilient against cyber threats, red team focus should be kept on addressing root causes and not merely on fixing the security flaws discovered during the exercise. Another key point is to include cyber threat response and threat monitoring in the scope of such assessments. This demands that red team exercises be executed, and only partially announced, with CEO-level approval, so that they challenge the end-to-end capabilities of the enterprise to cope with a real-time security incident. Lessons learned from red teaming can be documented to improve the overall security posture of the organization and as an aid in dealing with future threats.

Our speaker concluded by saying that, as cyber threats evolve, one-hundred percent security for an active business is impossible to achieve. Business is about making optimum use of existing resources to derive the desired value for stakeholders, and cyber-defense cannot be an exception to this rule. To achieve optimized use of their security investments, CEOs should ensure that security spending for their organization is mapped to the real, emerging cyber threat landscape. Red teaming is an effective tool to challenge the status quo of an enterprise’s security framework and to make informed judgements about the actual condition of its security posture today. Not only can the judgements resulting from red team exercises be used to improve cyber threat defense, they can also prove an effective mechanism to guide a higher return on cyber-defense investment.

Sock Puppets

The issue of falsely claimed identity in all its myriad forms has shadowed the Internet since the beginning of the medium.  Anyone who has used an on-line dating or auction site is all too familiar with the problem; anyone can claim to be anyone.  Likewise, confidence games, on or off-line, involve a range of fraudulent conduct committed by professional con artists against unsuspecting victims. The victims can be organizations, but more commonly are individuals. Con artists have classically acted alone, but now, especially on the Internet, they usually group together in criminal organizations for increasingly complex criminal endeavors. Con artists are skilled marketers who can develop effective marketing strategies, which include a target audience and an appropriate marketing plan: crafting promotions, product, price, and place to lure their victims. Victimization is achieved when this marketing strategy is successful. And falsely claimed identities are always an integral component of such schemes, especially those carried out on-line.

Such marketing strategies generally involve a specific target market, which is usually made up of affinity groups consisting of individuals grouped around an objective, bond, or association like Facebook or LinkedIn Group users. Affinity groups may, therefore, include those associated through age, gender, religion, social status, geographic location, business or industry, hobbies or activities, or professional status. Perpetrators gain their victims’ trust by affiliating themselves with these groups.  Historically, various mediums of communication have been initially used to lure the victim. In most cases, today’s fraudulent schemes begin with an offer or invitation to connect through the Internet or social network, but the invitation can come by mail, telephone, newspapers and magazines, television, radio, or door-to-door channels.

Once the mark receives and accepts the offer to connect, some sort of response or acceptance is requested. The response will typically include (in the case of Facebook or LinkedIn) clicking on a link included in a fraudulent follow-up post to visit a specified web site or to call a toll-free number.

According to one of Facebook’s own annual reports, up to 11.2 percent of its accounts are fake. Considering the world’s largest social media company has 1.3 billion users, that means up to 140 million Facebook accounts are fraudulent; these users simply don’t exist. With 140 million inhabitants, the fake population of Facebook would be the tenth-largest country in the world. Just as Nielsen ratings on television sets determine different advertising rates for one television program versus another, on-line ad sales are determined by how many eyeballs a Web site or social media service can command.

Let’s say a shyster want 3,000 followers on Twitter to boost the credibility of her scheme? They can be hers for $5. Let’s say she wants 10,000 satisfied customers on Facebook for the same reason? No problem, she can buy them on several websites for around $1,500. A million new friends on Instagram can be had for only $3,700. Whether the con man wants favorites, likes, retweets, up votes, or page views, all are for sale on Web sites like Swenzy, Fiverr, and Craigslist. These fraudulent social media accounts can then be freely used to falsely endorse a product, service, or company, all for just a small fee. Most of the work of fake account set up is carried out in the developing world, in places such as India and Bangladesh, where actual humans may control the accounts. In other locales, such as Russia, Ukraine, and Romania, the entire process has been scripted by computer bots, programs that will carry out pre-encoded automated instructions, such as “click the Like button,” repeatedly, each time using a different fake persona.

Just as horror movie shape-shifters can physically transform themselves from one being into another, these modern screen shifters have their own magical powers, and organizations of men are eager to employ them, studying their techniques and deploying them against easy marks for massive profit. In fact, many of these clicks are done for the purposes of “click fraud.” Businesses pay companies such as Facebook and Google every time a potential customer clicks on one of the ubiquitous banner ads or links online, but organized crime groups have figured out how to game the system to drive profits their way via so-called ad networks, which capitalize on all those extra clicks.

Painfully aware of this, social media companies have attempted to cut back on the number of fake profiles. As a result, thousands and thousands of identities have disappeared overnight from among the followers of many well-known celebrities and popular websites. But if Facebook has 140 million fake profiles, there is no way they could have been created manually, one by one. The process of creation is called sock puppetry, a reference to the children’s toy created when a hand is inserted into a sock to bring the sock to life. In the online world, organized crime groups create sock puppets by combining computer scripting, web automation, and social networks to create legions of online personas. This can be done easily and cheaply enough to allow those with deceptive intentions to create hundreds of thousands of fake online citizens. One need only consult a readily available on-line directory of the most common names in any country or region, have a scripted bot pick a first name and a last name, then choose a date of birth and let the bot sign up for a free e-mail account. Next, scrape on-line photo sites such as Picasa, Instagram, Facebook, Google, and Flickr to choose an age-appropriate image to represent your new sock puppet.

Armed with an e-mail address, name, date of birth, and photograph, you sign up your fake persona for an account on Facebook, LinkedIn, Twitter, or Instagram. As a last step, you teach your puppets how to talk by scripting them to reach out and send friend requests, repost other people’s tweets, and randomly like things they see online. Your bots can even communicate and cross-post with one another. Before long, the fraudster has thousands of sock puppets at his or her disposal, for use as he or she sees fit. It is these armies of sock puppets that criminals use as key constituents in their phishing attacks, to fake on-line reviews, to trick users into downloading spyware, and to commit a wide variety of financial frauds, all based on misplaced and falsely claimed identity.
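For the fraud examiner, the same red flags that define a sock puppet (brand-new accounts, nothing but reposts, scraped profile photos, lopsided follower ratios) can be turned into detection logic. Below is a purely illustrative Python sketch of a heuristic “puppet score”; the signals, field names, and weights are assumptions for demonstration, not any platform’s actual bot-detection method.

```python
# Toy heuristic score for spotting likely sock-puppet accounts, turning
# the red flags above into weighted signals. Signals, field names, and
# weights are illustrative assumptions only.
def puppet_score(account: dict) -> int:
    score = 0
    if account.get("age_days", 9999) < 30:
        score += 2   # brand-new account
    if account.get("posts", 0) > 0 and account.get("original_posts", 0) == 0:
        score += 2   # only reposts, nothing original
    if account.get("followers", 0) < 5 and account.get("following", 0) > 500:
        score += 2   # follows hundreds, followed by almost no one
    if not account.get("profile_photo_unique", True):
        score += 1   # profile photo reused elsewhere on the web
    return score

accounts = [
    {"name": "bot_like", "age_days": 4, "posts": 200, "original_posts": 0,
     "followers": 1, "following": 2000, "profile_photo_unique": False},
    {"name": "human_like", "age_days": 1200, "posts": 80, "original_posts": 60,
     "followers": 150, "following": 180, "profile_photo_unique": True},
]
likely_puppets = [a["name"] for a in accounts if puppet_score(a) >= 4]
# likely_puppets is ["bot_like"]
```

Real platforms layer machine learning, network analysis, and behavioral timing on top of such heuristics, but the investigative instinct is identical: no single signal condemns an account, while a cluster of them warrants scrutiny.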

The fraudster’s environment has changed and continues to change, from a face-to-face physical encounter to an anonymous on-line encounter in the comfort of the victim’s own home. While some consumers are unaware that a weapon is virtually right in front of them, others are victims who struggle to balance the many wonderful benefits offered by advanced technology against the painful effects of its consequences. The goal of law enforcement has not changed over the years: to block the roads and close the loopholes of perpetrators, even as perpetrators continue to strive to find yet another avenue to commit fraud in an environment in which they can thrive. Today, the challenge for CFEs, law enforcement, and government officials is to stay on the cutting edge of technology, which requires access to constantly updated resources and communication between organizations; the ability to gather information; and the capacity to identify and analyze trends, institute effective policies, and detect and deter fraud through restitution and prevention measures.

Now is the time for CFEs and other assurance professionals to continuously reevaluate all we take for granted in the modern technical world and to increasingly question our ever-growing dependence on the whole range of ubiquitous machines whose potential to facilitate fraud so few of our clients and the general public understand.