Category Archives: Forensic Accounting

Cloud Shapes

Just as clouds take different shapes and are perceived differently, so too is cloud computing perceived differently by our various types of client companies. To some, the cloud looks like web-based applications, a revival of the old thin client. To others, the cloud looks like utility computing, a grid that charges metered rates for processing time. To still others, the cloud is parallel computing, designed to scale complex processes for improved efficiency. In practice, cloud services are wildly different. Amazon’s Elastic Compute Cloud offers full Linux machines with root access and the freedom to run whatever applications the user chooses. Google’s App Engine will also let users run the programs they want, so long as they are written in a restricted version of Python and use Google’s database.

The National Institute of Standards and Technology (NIST) defines cloud computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. It is also important to remember what our ACFE tells us, that the Internet itself is in fact a primitive transport cloud. Users place something on the path with an expectation that it will get to the proper destination, in a reasonable time, with all parties respecting the privacy and security of the artifact.

Cloud computing, as everyone now knows, brings many advantages to users and vendors. One of its biggest advantages is that a user may no longer have to be tethered to a traditional computer to use an application, or have to buy a version of an application that is specifically configured for a phone, a tablet or other device. Today, any device that can access the Internet can run a cloud-based application. Application services are available independent of the user’s home or office devices and network interfaces. Regardless of the device being used, users also face fewer maintenance issues. End users don’t have to worry about storage capacity, compatibility or other similar concerns.

From a fraud prevention perspective, these benefits are the result of the distributed nature of the web, which necessitates a clear separation between application and interaction logic. Application logic and user data reside mostly in the cloud and manifest themselves as tangible user interfaces at the point of interaction, e.g., within a web browser or mobile web client. Cloud computing is also beneficial for our clients’ vendors. Businesses frequently find themselves using the vast majority of their computing capacity in a small percentage of time, leaving expensive equipment often idle. Cloud computing can act as a utility grid for vendors and optimize the use of their resources. Consider, for example, a web-based application running in Amazon’s cloud that experiences a sudden surge in visitors as a result of media coverage. Formerly, many web applications would fail under the load of big traffic spikes. But in the cloud, assuming the web application has been designed intelligently, additional machine instances can be launched on demand.
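
For readers who want to see what launching additional machine instances on demand can look like in practice, below is a minimal sketch using the AWS SDK for Python (boto3). The image ID, instance type, and load threshold are hypothetical placeholders, and a production deployment would more likely rely on a managed auto scaling service; the point is simply that capacity can be added programmatically in minutes rather than through hardware procurement.

```python
# Minimal sketch of on-demand scaling, assuming AWS credentials are configured
# and boto3 is installed. The AMI ID, instance type, and load threshold are
# hypothetical placeholders, not values taken from this article.
import boto3

LOAD_THRESHOLD = 0.80                    # hypothetical CPU utilization trigger
AMI_ID = "ami-0123456789abcdef0"         # placeholder machine image ID
INSTANCE_TYPE = "t3.micro"               # placeholder instance type

def scale_out_if_needed(current_load: float, count: int = 2):
    """Launch additional machine instances when observed load exceeds the threshold."""
    if current_load < LOAD_THRESHOLD:
        return []
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType=INSTANCE_TYPE,
        MinCount=count,
        MaxCount=count,
    )
    return [instance["InstanceId"] for instance in response["Instances"]]

# Example: a monitoring job might call scale_out_if_needed(current_load=0.92)
```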

With all the benefits come related constraints. Distrust is one of the main constraints on online environments generally, particularly in terms of consumer fraud, waste and abuse protection. Although the elements that contribute to building trust can be identified in broad terms, there are still many uncertainties in defining and establishing trust in online environments. Why should users trust cloud environments to store their personal information, and to safeguard their privacy, in such a large and distributed environment? This question can be answered only by investigating these uncertainties in the context of risk assessment and by exploring the relationship between trust and the way risk is perceived by stakeholders. Users are assumed to be willing to disclose personal information, and to allow that information to be stored and used to create consumer profiles for business purposes, when they perceive that fair procedures are in place to protect their individual privacy.

The changing trust paradigm represented by cloud computing means that less information is stored locally on our clients’ machines and more of it is hosted elsewhere on earth. For the most part, no one buys software anymore; users rent it or receive it for free under the Software as a Service (SaaS) business model. On the personal front, cloud computing means Google is storing users’ mail, Instagram their photographs, and Dropbox their documents, not to mention what mobile phones are automatically uploading to the cloud for them. In the corporate world, enterprise customers not only use Dropbox but have also outsourced primary business functions that would previously have been handled inside the company to SaaS providers such as Salesforce.com, Zoho.com, and Box.com.

From a crime and security perspective, the aggregation of all these data, exabytes upon exabytes of it, means that users’ most personal information is no longer likely to be stored solely on their local hard drives but is now aggregated on computer servers around the world. By concentrating important user data, financial and otherwise, on cloud-based servers, the cloud has obviated the need for criminals to target everybody’s hard drive individually and has instead put all the jewels in a single place for criminals and hackers to target (think Willie Sutton).

The cloud is here to stay, and at this point there is no going back. But with the move to store all available data in the cloud come additional risks. Consider some of the largest hacks to date: Target, Heartland Payment Systems, TJX, and the Sony PlayStation Network. All of these thefts of hundreds of millions of accounts were made possible because the data were stored in the same virtual location. The cloud is equally convenient for individuals, businesses, and criminals.

The virtualization and storage of all of these data is a highly complex process and raises a wide array of security, public policy, and legal issues for all CFEs and for our clients. First, during an investigation, where exactly is this magical cloud storing my defrauded client’s data? Most users have no idea when they check their status on Facebook or upload a photograph to Pinterest where in the real world this information is actually being stored. That they do not even stop to pose the question is a testament to the great convenience, and opacity, of the system. Yet from a corporate governance and fraud prevention risk perspective, whether your client’s data are stored on a computer server in America, Russia, China, or Iceland makes a difference.

ACFE guidance emphasizes that the corporate and individual perimeters that used to protect information internally are disappearing, and the beginning and end of corporate computer networks are becoming far less well defined. This makes it much harder for examiners and auditors to see what data are coming into and going out of a company, and the task is nearly impossible on the personal front. The transition to the cloud is a game changer for anti-fraud security because it completely redefines where data are stored, moved, and accessed, creating sweeping new opportunities for criminal hackers. Moreover, the non-local storage of data raises important questions about deep dependence on cloud-based information systems. When these services go down or become unavailable (e.g., during a denial-of-service attack), or the Internet connection is lost, the data become unavailable, and our CFE client is out of business.

All the major cloud service providers, including Dropbox, Google, and Microsoft, are routinely targeted remotely by criminal attacks, and more such attacks occur daily. Although it may be your client’s cloud service provider that is targeted in such an attack, the client is the victim, and the data taken are theirs. Of course, the rights reserved to the providers in their terms of service agreements (and agreed to by users) usually mean that provider companies bear little or no liability when data breaches occur. These attacks threaten intellectual property, customer data, and even sensitive government information.

To establish trust with end users in the cloud environment, all organizations should address these fraud-related risks. They also need to align their users’ perceptions with their policies. Efforts should be made to develop a standardized approach to trust and risk assessment across different domains to reduce the burden on users who seek to better understand and compare policies and practices across cloud provider organizations. This standardized approach will also aid organizations that engage in contractual sharing of consumer information, making it easier to assess risks across organizations and to monitor practices for compliance with contracts, policies, and law.

During the fraud risk assessment process, CFEs need to advise their individual corporate clients to mandate that any cloud-based activity in which they participate be conducted fairly and address their privacy concerns. By ensuring this fairness and respecting privacy, organizations give their customers the confidence to disclose personal information in the cloud and to allow that information subsequently to be used to create consumer profiles for business use. Thus, organizations that understand the roles of trust and risk should be advised to continuously monitor user perceptions to understand their relation to risk aversion and risk management. Managers should not rely solely on technical control measures. Security researchers have tended to focus on the hard issues of cryptography and system design. By contrast, issues revolving around the use of computers by lay users and the creation of active incentives to avoid fraud have been relatively neglected. Many ACFE-led studies have shown that human error is the main cause of information security incidents.

Piecemeal approaches to controlling security issues in cloud environments fail simply because they are usually driven by haphazard occurrences: reaction to the most recent incident or the most recently publicized threat. In other words, managing information security in cloud environments requires collaboration among experts from different disciplines, including computer scientists, engineers, economists, lawyers, and anti-fraud assurance professionals like CFEs, to forge common approaches.

MAC Documents

As our upcoming Ethics 2019 lecture for January-February 2019 makes clear, many of the most spectacular cases of fraud during the last two decades that were, at least initially, successfully concealed from auditors involved the long-running falsification of documents. Bernie Madoff and Enron come especially to mind. In hindsight, the auditors involved in these cases failed to detect the fraud for multiple reasons, one of which was a demonstrated lack of professional skepticism coupled with a general lack of awareness.

Fraud audit and red flag testing procedures are designed to validate the authenticity of documents and the performance of internal controls. Red flag testing procedures are based on observing indicators in the internal documents and in the internal controls. In contrast, fraud audit testing procedures verify the authenticity of the representations in the documents and internal controls. While internal controls are an element of each, they are not the same as the testing procedures performed in a traditional audit. Because fraud audit testing procedures are the basis of the fraud audit program, the analysis of documents will differ between the fraud audit and the traditional verification audit. Business systems are driven by documents, both imaged paper documents and electronic documents. Approvals are handwritten, created mechanically, or created electronically through a computerized business application. Therefore, the ability to examine a document for the red flags indicative of a fraud scenario is a critical component of fraud detection.

The ACFE points out that within fraud auditing there are two levels of document examination: the forensic document examination performed by a certified document examiner and the document examination performed by an independent external auditor conducting a fraud audit. The two are distinct. Clearly, the auditor is not required to have the skills of a certified document examiner; however, the auditor should understand the difference between questioned document examination and the examination of documents for red flags.

Questioned, or forensic, document examination is the application of science to the law. The forensic document examiner, using specialized techniques, examines documents and any handwriting on the documents to establish their authenticity and to detect alterations. The American Academy of Forensic Sciences (AAFS) Questioned Document Section and the American Society of Questioned Document Examiners (ASQDE) provide guidance and standards to assurance professionals in the field of document examination. For example, the American Society for Testing and Materials, International (ASTM) Standard E444-09 (Standard Guide for Scope of Work of Forensic Document Examiners) indicates there are four components to the work of a forensic document examiner. These components are the following:

1. Establish document genuineness or non-genuineness, expose forgery, or reveal alterations, additions, or deletions.
2. Identify or eliminate persons as the source of handwriting.
3. Identify or eliminate the source of typewriting or other impression, marks, or relative evidence.
4. Write reports or give testimony, when needed, to aid the users of the examiner’s services in understanding the examiner’s findings.

CFEs will find that some forensic document examiners (FDEs) limit their work to the examination and comparison of handwriting; however, most inspect and examine the whole document in accordance with the ASTM standard.

The fraud examiner or auditor also focuses on the authenticity of the document, with two fundamental differences:

1. The degree of certainty. With forensic document examination, the forensic certainty is based on scientific principles. Fraud audit document examination is based on visual observations and informed audit experience.
2. Central focus. Fraud audit document examination focuses on the red flags associated with a hypothetical fraud scenario. Forensic document examination focuses on the genuineness of the document or handwriting under examination.

Awareness of the basic principles and objectives of forensic document examination assists any auditor or examiner in determining if, when, and how to use the services of a certified document examiner in the course of conducting a fraud audit.

ACFE training indicates that documentary red flags are among the most important of all red flags. Examiners and auditors need to be aware not only of how a fraud scenario occurs, but also of how to employ the correct methodology for identifying and describing the documents related to a given scenario. These capabilities are also critical to successfully identifying document-related red flags. Specifically, a document must link to the fraud scenario and to the key controls of the involved business process(es).

The target document should be examined for the following: document condition, document format, document information, and industry standards. To these characteristics the concepts of missing, altered, and created content should be applied. The second aspect of the document examination is linking the document to the internal controls. Linking the document examination to the internal controls is a critical aspect of developing the decision tree aspect of the fraud audit program. Using a document examination methodology aids the fraud auditor in building his or her fraud audit program.

The ACFE’s acronym MAC is a useful aid to assist the auditor in identifying red flags and the corresponding audit response. The ‘M’ stands for missing, either missing the entire document or missing information on a document; the ‘A’ for altered information on a document; and the ‘C’ for created documents or information on a document. Specifically:

A missing document is a red flag. Missing documents occur because the document was never created, was destroyed, or has been misfiled. Documents are either the basis of initiating the transaction or support the transaction.

The frequency of missing documents must be linked to the fraud scenario. In some instances, missing one document may be a red flag, although typically repetition is necessary to warrant fraud audit testing procedures. Assuming the document links to a key control, the audit response should focus on the following attributes:

— Is the document externally or internally created? The existence of externally created documents can be confirmed with the source, assuming the source is not identified as involved in the fraud scenario.
— Is the document necessary to initiate the transaction or is the document a supporting one? Documents used to initiate a transaction had to have existed at some point; therefore, logic dictates that the document was destroyed or misfiled.
— One, two, or all three of the following questions could apply to internal documents (a minimal automation sketch follows this list):

• Is there a pattern of missing documents associated with the same entity?
• Is there a pattern of missing documents associated with an internal employee?
• Does the document support a key anti-fraud control, therefore being a trigger red flag, or is the missing document related to a non-key control?
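
The sketch below is a minimal illustration of how those pattern questions might be automated, assuming hypothetical pandas DataFrames of transactions and of the supporting-document index; all column names are illustrative rather than drawn from any particular system.

```python
# Minimal sketch of automating the missing-document pattern checks above.
# Assumes two hypothetical pandas DataFrames: 'transactions' (one row per
# transaction) and 'documents' (one row per supporting document on file).
import pandas as pd

def missing_document_patterns(transactions: pd.DataFrame, documents: pd.DataFrame):
    # Flag transactions with no supporting document on file
    merged = transactions.merge(
        documents[["transaction_id"]].drop_duplicates(),
        on="transaction_id", how="left", indicator=True
    )
    missing = merged[merged["_merge"] == "left_only"]

    # Pattern of missing documents associated with the same external entity (vendor)
    by_vendor = missing.groupby("vendor_id").size().sort_values(ascending=False)

    # Pattern of missing documents associated with the internal employee who processed them
    by_employee = missing.groupby("employee_id").size().sort_values(ascending=False)

    return missing, by_vendor, by_employee
```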

With regard to missing information on a document, several questions arise, one of which is: are there tears, torn pieces, soiled areas, or charred areas that cause information to be missing? To address any of these situations, a similar document of the same type is needed to determine whether the intent of the document has changed because of the missing information. Another question is: has information been obliterated (e.g., covered, blotted, or wiped out)? Overwriting is commonly used to obscure existing writing. Correction fluid is also a common method, but the underlying writing can often be read and photographed using light transmitted from underneath the document.

Scratching out writing with a pen will obliterate writing successfully if it results in the page being torn. Spilled liquids can also obliterate writing.

‘A’, altered, pertains to changing or adding information on the original document. The information may be altered manually or through the use of desktop publishing capabilities. Manual changes tend to be visible through a difference in handwriting, whereas electronic documents would generally be altered via the software used to create them.

Any altering of information would be detected through the same red flags as adding information. In the context of fraud, forgery is the first thing that comes to mind in any discussion of altered documents. Forgery is a legal term applied to fraudulent imitation: an alteration of a writing made so as to convey the false impression that the document itself, not merely its contents, is authentic, thereby creating legal liability. It is an alteration of a document with the intent to defraud. It should be noted that it is possible for a document examiner to identify a document or signature as a forgery, but it is much less common for the examiner to identify the forger. This is due to the nature of handwriting: a forger attempting to imitate the writing habits of another person suppresses his or her own writing characteristics and style, in essence disguising his or her writing.

‘C’, or created, refers to any document prepared by the perpetrator of the fraud scenario. This category can include documents created in their entirety as well as text added to or created on an existing document. The document can be prepared by an external source (e.g., a vendor in an over-billing scheme) or an internal source (e.g., a purchasing agent who creates false bids).

Some signs of document creation include the age of the document being inconsistent with its purported creation date, or the document lacking the sophistication typically associated with normal business standards. Added or created text can be inserted with ink or whatever type of writing instrument was used on the original. It can also be added by cutting and pasting sections of text and then photocopying the document to eliminate any outline. When pages are suspected of having been added in this manner, the type of paper used for the original and for the photocopy should be compared. In computer-generated and machine-produced documents, differences in the software used may result in textual differences.

As the MAC acronym seeks to demonstrate, fraudulent document information can be categorized as missing information, incorrect information, or information inconsistent with normal business standards. Therefore, the investigating CFE or auditor needs to have the requisite business and industry knowledge to correctly associate the appropriate red flags with the relevant documentary information consistent with the fraud scenario under investigation.

The Human Financial Statement

A finance professor of mine in graduate school at the University of Richmond was fond of saying, in relation to financial statement fraud, that as staff competence goes down, the risk of fraud goes up. What she meant was that the best operated, most flawless control ever put in place can be tested and tested and tested again and score perfectly every time. But it’s still no match for the employee who doesn’t know, or perhaps doesn’t even care, how to operate that control; or for the manager who doesn’t read the output correctly; or for the executive who hides part of a report and changes the numbers in the rest. That’s why CFEs and the members of any fraud risk assessment team (especially the client managers who actually own the process and its results) should always take a careful look at the human component of risk: the real-world actions, and lack thereof, taken by real-life employees in addressing the day-to-day duties of their jobs.

ACFE training emphasizes that client management must evaluate whether it has implemented anti-fraud controls that adequately address the risk that a material misstatement in the financial statements will not be prevented or detected in a timely manner, and then focus on fixing or developing controls to fill any gaps. The guidance offers several specific suggestions for conducting top-down, risk-based, anti-fraud-focused evaluations, and many of them require the active participation of staff drawn from all over the assessed enterprise. The ACFE documentation also recommends that management consider whether a control is manual or automated, its complexity, the risk of management override, and the judgment required to operate it. Moreover, it suggests that management consider the competence of the personnel who perform the control or monitor its performance.

That’s because the real risk of financial statement misstatements lies not in a company’s processes or the controls around them, but in the people behind the processes and controls who make the organization’s control environment such a dynamic, challenging piece of the corporate puzzle. Reports and papers that analyze fraud and misstatement risk use words like “mistakes” and “improprieties.” Automated controls don’t do anything “improper.” Properly programmed record-keeping and data management processes don’t make “mistakes.” People make mistakes, and people commit improprieties. Of course, human error has always been and will always be part of the fraud examiner’s universe, and an SEC-encouraged, top-down, risk-based assessment of a company’s control environment, with a view toward targeting the control processes that pose the greatest misstatement risk, falls nicely within most CFEs’ existing operational ambit. The elevated role for CFEs, whether on staff or in independent private practice, in conducting fraud risk evaluations offers our profession yet another chance to show its value.

Focusing on the human element of misstatement fraud risk is one important way our client companies can make significant progress in identifying their true financial statement and other fraud exposures. It also represents an opportunity for management to identify the weak links that could ultimately result in a misstatement, as well as for CFEs to make management’s evaluation process a much simpler task. I can remember reading many articles in the trade press in recent years in which commentators have opined that dramatic corporate meltdowns like Wells Fargo are still happening today, under today’s increased regulatory strictures, because the controls involved in those frauds weren’t the problem; the people were. That is certainly true. Hence, smart risk assessors are integrating the performance information they come across in their risk assessments of soft controls into management’s more quantitative, control-related evaluation data to paint a far more vivid picture of what the risks look like. Often the risks will wear actual human faces. The biggest single factor in calculating restatement risk as a result of a fraud relates to the complexity of the control(s) in question and the amount of human judgment involved. The more complex a control, the more likely it is to require complicated input data and to involve highly technical calculations that make it difficult to determine from system output alone whether something is wrong with the process itself. Having more human judgment in the mix gives rise to greater apparent risk.

A computer will do exactly what you tell it to over and over; a human may not, but that’s what makes humans special, special and risky. In the case of controls, especially fraud prevention related controls, our human uniqueness can manifest as simple afternoon sleepiness or family financial troubles that prove too distracting to put aside during the workday. So many things can result in a mistaken judgment, and simple mistakes in judgment can be extremely material to the final financial statements.

CFEs, of course, aren’t in the business of grading client employees or even of commenting to them about their performance. But whether the fraud risk assessment in question relates to financial report integrity or to any other issue, CFEs making such assessments at management’s request need to consider the experience, training, quality, and capabilities of the people performing the most critical controls.

You can have a well-designed control, but if the person in charge doesn’t know, or care, what to do, that control won’t operate. And whether such a lack of ability, or of concern, is at play is a judgment call that assessing CFEs shouldn’t be afraid to make. A negative characterization of an employee’s capability doesn’t mean that employee is a bad worker, of course. It may simply mean he or she is new to the job, or it may reveal training problems in that employee’s department. CFEs proactively involved in fraud risk assessment need to keep in mind that, in some instances, competence may be so low that it results in greater risk. Both the complexity of a control and the judgment required to operate it are important. The ability to interweave notions of good and bad judgment into the fabric of a company’s overall fraud risk comes from CFEs’ experience doing exactly that on fraud examinations. A critical employee’s intangibles, like conscientiousness, commitment, ethics and morals, and honesty, all come into play and either contribute to a stronger fraud control environment or cause it to deteriorate. CFEs need to be able, while acting as professional risk assessors, to raise with management questions about the quality, integrity, and motivation of employees at all levels of the organization.

Many companies conduct fraud-specific tests as a component of the fraud prevention program, and many of the most common forms of fraud can be detected by basic controls already in place. Indeed, fraud is a common concern throughout all routine audits, as opposed to the conduct of separate fraud-only audits. It can be argued that every internal control is a fraud deterrent control. But fraud still exists.

What CFEs have to offer to the risk assessment of financial statement and other frauds is their overall proficiency in fraud detection and the reality that they are well-versed in, and cognizant of, the risk of fraud in every given business process of the company; they are, therefore, well positioned to apply their best professional judgment to the assessment of the degree of risk of financial statement misstatement that fraud represents in any given client enterprise.

Forensic Data Analysis

As a long-term advocate of big-data-based solutions to investigative challenges, I have been interested to see the recent application of such approaches to the ever-growing problem of data breaches. More data is stored electronically than ever before: financial data, marketing data, customer data, vendor listings, sales transactions, email correspondence, and more, and evidence of fraud can be located anywhere within those mountains of data. Unfortunately, fraudulent data often looks like legitimate data when viewed in the raw. Taking a sample and testing it might not uncover fraudulent activity. Fortunately, today’s fraud examiners have the ability to sort through piles of information by using special software and data analysis techniques. These methods can identify future trends within a certain industry, and they can be configured to identify breaks in audit control programs and anomalies in accounting records.

In general, fraud examiners perform two primary functions to explore and analyze large amounts of data: data mining and data analysis. Data mining is the science of searching large volumes of data for patterns. Data analysis refers to any statistical process used to analyze data and draw conclusions from the findings. These terms are often used interchangeably. If properly used, data analysis processes and techniques are powerful resources. They can systematically identify red flags and perform predictive modeling, detecting a fraudulent situation long before many traditional fraud investigation techniques would be able to do so.
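
As one hedged illustration of this kind of red-flag testing, the sketch below applies a first-digit (Benford’s Law) analysis, a common forensic analytics technique, to a hypothetical pandas Series of payment amounts; the column and variable names are illustrative only, and in a real engagement this would be one test among many.

```python
# Minimal sketch of a first-digit (Benford's Law) red-flag test on a
# hypothetical column of payment amounts. Illustrative only.
import numpy as np
import pandas as pd

def benford_first_digit_test(amounts: pd.Series) -> pd.DataFrame:
    amounts = amounts[amounts > 0]
    # Extract the first significant digit of each amount
    first_digits = amounts.astype(str).str.lstrip("0.").str[0].astype(int)
    observed = first_digits.value_counts(normalize=True).sort_index()
    # Benford's expected frequency for digits 1 through 9
    expected = pd.Series({d: np.log10(1 + 1 / d) for d in range(1, 10)}, name="expected")
    result = pd.concat([observed.rename("observed"), expected], axis=1).fillna(0)
    result["deviation"] = result["observed"] - result["expected"]
    return result

# Example: benford_first_digit_test(payments["amount"]) highlights digits whose
# observed frequency deviates materially from the Benford expectation,
# pointing the examiner toward transactions worth a closer look.
```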

Big data are high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery, and process optimization. Simply put, big data is information of extreme size, diversity, and complexity. In addition to thinking of big data as a single set of data, fraud investigators and forensic accountants are thinking about the way data grow when data sets that might not normally be connected are connected together. Big data represents the continuous expansion of data sets, the size, variety, and speed of generation of which make it difficult for investigators and client managements to manage and analyze.

Big data can be instrumental to the evidence gathering phase of an investigation. Distilled down to its core, how do fraud examiners gather data in an investigation? They look at documents and financial or operational data, and they interview people. The challenge is that people often gravitate to the areas with which they are most comfortable. Attorneys will look at documents and email messages and then interview individuals. Forensic accounting professionals will look at the accounting and financial data (structured data). Some people are strong interviewers. The key is to consider all three data sources in unison.

Big data helps to make it all work together to bring the complete picture into focus. With the ever-increasing size of data sets, data analytics has never been more important or useful. Big data requires the use of creative and well-planned analytics due to its size and complexity. One of the main advantages of using data analytics in a big data environment is that it allows the investigator to analyze an entire population of data rather than having to choose a sample and risk drawing erroneous conclusions in the event of a sampling error.

To conduct an effective data analysis, a fraud examiner must take a comprehensive approach. Any direction can (and should) be taken when applying analytical tests to available data. The more creative fraudsters get in hiding their breach-related schemes, the more creative the fraud examiner must become in analyzing data to detect these schemes. For this reason, it is essential that fraud investigators consider both structured and unstructured data when planning their engagements.

Data are either structured or unstructured. Structured data is the type of data found in a database, consisting of recognizable and predictable structures. Examples of structured data include sales records, payment or expense details, and financial reports. Unstructured data, by contrast, is data not found in a traditional spreadsheet or database. Examples of unstructured data include vendor invoices, email and user documents, human resources files, social media activity, corporate document repositories, and news feeds. When using data analysis to conduct a fraud examination, the fraud examiner might use structured data, unstructured data, or a combination of the two. For example, conducting an analysis on email correspondence (unstructured data) among employees might turn up suspicious activity in the purchasing department. Upon closer inspection of the inventory records (structured data), the fraud examiner might uncover that an employee has been stealing inventory and covering her tracks in the record.
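
A minimal sketch of that kind of combined analysis, assuming hypothetical pandas DataFrames of email messages and inventory adjustments with illustrative column names, might look like the following; the keyword list is likewise a placeholder an examiner would tailor to the fraud scenario.

```python
# Minimal sketch of combining unstructured data (email text) with structured
# data (inventory adjustment records). DataFrames, column names, and keywords
# are hypothetical and for illustration only.
import pandas as pd

KEYWORDS = ["write off", "shrinkage", "adjust the count"]  # illustrative search terms

def suspicious_inventory_activity(emails: pd.DataFrame, adjustments: pd.DataFrame) -> pd.DataFrame:
    # Unstructured side: flag emails containing any keyword of interest
    pattern = "|".join(KEYWORDS)
    flagged_emails = emails[emails["body"].str.contains(pattern, case=False, na=False)]
    flagged_senders = flagged_emails["sender_id"].unique()

    # Structured side: pull inventory adjustments entered by the flagged senders
    suspect_adjustments = adjustments[adjustments["entered_by"].isin(flagged_senders)]

    # Rank by count and total adjustment value per employee for follow-up
    return (suspect_adjustments.groupby("entered_by")["adjustment_value"]
            .agg(["count", "sum"])
            .sort_values("sum", ascending=False))
```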

Recent reports of breach responses detailed in social media and the trade press indicate that investigators deploying advanced forensic data analysis tools across larger data sets gained better insights into the penetration, which led to more focused investigations, better root-cause analysis, and more effective fraud risk management. Advanced technologies that incorporate data visualization, statistical analysis, and text-mining concepts, as compared to spreadsheets or relational database tools, can now be applied to massive data sets from disparate sources, enhancing breach response at all organizational levels.

These technologies enable our client companies to ask new compliance questions of their data that they might not have been able to ask previously. Fraud examiners can establish important trends in business conduct or identify suspect transactions among millions of records rather than being forced to rely on smaller samplings that could miss important transactions.

Data breaches bring enhanced regulatory attention. It’s clear that data breaches have raised the bar on regulators’ expectations of the components of an effective compliance and anti-fraud program. Adopting big data/forensic data analysis procedures into the monitoring and testing of compliance can create a cycle of improved adherence to company policies and improved fraud prevention and detection, while providing additional comfort to key stakeholders.

CFEs and forensic accountants are increasingly being called upon to be members of teams implementing or expanding big data/forensic data analysis programs so as to more effectively manage data breaches and a host of other instances of internal and external fraud, waste and abuse. To build a successful big data/forensic data analysis program, your client companies would be well advised to:

— begin by focusing on the low-hanging fruit: the priority of the initial project(s) matters. The first and immediately subsequent projects, the low-hanging investigative fruit, normally incur the largest cost associated with setting up the analytics infrastructure, so it’s important that the first few investigative projects yield tangible results/recoveries.

— go beyond the usual rule-based, descriptive analytics. One of the key goals of forensic data analysis is to increase the detection rate of internal control noncompliance while reducing the risk of false positives. From a technology perspective, clients’ internal audit and other investigative groups need to move beyond rule-based spreadsheets and database applications and embrace both structured and unstructured data sources, including the use of data visualization, text-mining, and statistical analysis tools.

— see that successes are communicated. Share information on early successes across divisional and departmental lines to gain broad business process support. Once validated, success stories will generate internal demand for the outputs of the forensic data analysis program. Try to construct a multi-disciplinary team, including information technology, business users (i.e., end-users of the analytics) and functional specialists (i.e., those involved in the design of the analytics and day-to-day operations of the forensic data analysis program). Communicate across multiple departments to keep key stakeholders assigned to the fraud prevention program updated on forensic data analysis progress under a defined governance program. Don’t just seek to report instances of noncompliance; seek to use the data to improve fraud prevention and response. Obtain investment incrementally based on success, and not by attempting to involve the entire client enterprise all at once.

— leadership support will get the big data/forensic data analysis program funded, but regular interpretation of the results by experienced or trained professionals is what will make the program successful. Keep the analytics simple and intuitive; don’t try to cram too much information into any one report. Invest in new, updated versions of tools to make analytics sustainable. Develop and acquire staff professionals with the required skill sets to sustain and leverage the forensic data analysis effort over the long term.

Finally, enterprise-wide deployment of forensic data analysis takes time; clients shouldn’t be led to expect overnight adoption. An analytics integration is a journey, not a destination. Quick-hit projects might take four to six weeks, but the program and integration can take one to two years or more.

Our client companies need to look at a broader set of risks, incorporate more data sources, move away from lightweight, end-user, desktop tools and head toward real-time or near-real time analysis of increased data volumes. Organizations that embrace these potential areas for improvement can deliver more effective and efficient compliance programs that are highly focused on identifying and containing damage associated with hacker and other exploitation of key high fraud-risk business processes.

Regulating the Financial Data Breach

During several years of my early career, I was employed as a Manager of Operations Research by a mid-sized bank holding company. My small staff and I would endlessly discuss issues related to fraud prevention and develop techniques to keep our customers’ checking and savings accounts safe, secure, and private. A never-ending battle!

It was a simpler time back then, technically, but since a large proportion of fraud committed against banks and financial institutions today still involves the illegal use of stolen customer or bank data, some of the newest and most important laws and regulations that management assurance professionals like CFEs must be aware of in our practice, and with which our client banks must comply, relate to the safeguarding of confidential data both from internal theft and from breaches of the bank’s information security defenses by outside criminals.

As the ACFE tells us, there is no silver bullet for fully protecting any organization from the ever-growing threat of information theft. Yet full implementation of the measures now required by federal banking regulators can at least lower the risk of a costly breach occurring. This is particularly true since the size of recent data breaches across all industries has forced federal enforcement agencies to become increasingly active in monitoring compliance with the critical rules governing the safeguarding of customer credit card data, bank account information, Social Security numbers, and other personal identifying information. Among these key rules are the Federal Reserve Board’s Inter-agency Guidelines Establishing Information Security Standards, which define customer information as any record containing nonpublic personal information about an individual who has obtained a financial product or service from an institution that is to be used primarily for personal, family, or household purposes and who has an ongoing relationship with the institution.

It’s important to realize that, under the Inter-agency Guidelines, customer information refers not only to information pertaining to people who do business with the bank (i.e., consumers); it also encompasses, for example, information about (1) an individual who applies for but does not obtain a loan; (2) an individual who guarantees a loan; (3) an employee; or (4) a prospective employee. A financial institution must also require, by contract, its own service providers that have access to consumer information to develop appropriate measures for the proper disposal of that information.

The FRB’s Guidelines are to a large extent drawn from the information protection provisions of the Gramm-Leach-Bliley Act (GLBA) of 1999, which repealed the Depression-era Glass-Steagall Act that substantially restricted banking activities. However, GLBA is best known for its formalization of legal standards for the protection of private customer information and for rules and requirements for organizations to safeguard such information. Since its enactment, numerous additional rules and standards have been put into place to fine-tune the measures that banks and other organizations must take to protect consumers from the identity-related crimes to which information theft inevitably leads.

Among GLBA’s most important information security provisions affecting financial institutions is the so-called Financial Privacy Rule. It requires banks to provide consumers with a privacy notice at the time the consumer relationship is established and every year thereafter.

The notice must detail the information collected about the consumer, where that information is shared, how it is used, and how it is protected. Each time the privacy notice is renewed, the consumer must be given the choice to opt out of the organization’s right to share the information with third-party entities. That means that if bank customers do not want their information sold to another company, which will in all likelihood use it for marketing purposes, they must indicate that preference to the financial institution.

CFEs should note that most pro-privacy advocacy groups strongly object to this and other privacy-related elements of GLBA because, in their view, these provisions do not provide substantive protection of consumer privacy. One major advocacy group has stated that GLBA does not protect consumers because it unfairly places the burden on the individual to protect privacy with an opt-out standard. By placing the burden on the customer to protect his or her data, GLBA weakens customers’ power to control their financial information. The Act’s opt-out provisions do not require institutions to provide a standard of protection for their customers regardless of whether they opt out. This approach is based on the assumption that financial companies will share information unless expressly told not to do so by their customers and, if customers neglect to respond, it gives institutions the freedom to disclose customer nonpublic personal information.

CFEs need to be aware, however, that for bank clients, regardless of how effective, or not, GLBA may be in protecting customer information, noncompliance with the Act itself is not an option. Because of the current explosion in breaches of bank information security systems, the privacy issue has to some degree been overshadowed by the urgency to physically protect customer data; for that reason, compliance with the Interagency Guidelines concerning information security is more critical than ever. The basic elements partially overlap with the preventive measures against internal bank employee abuse of the bank’s computer systems. However, they go quite a bit further by requiring banks to:

—Design an information security program to control the risks identified through a security risk assessment, commensurate with the sensitivity of the information and the complexity and scope of its activities.
—Evaluate a variety of policies, procedures, and technical controls and adopt those measures that are found to most effectively minimize the identified risks.
—Apply and enforce access controls on customer information systems, including controls to authenticate and permit access only to authorized individuals and to prevent employees from providing customer information to unauthorized individuals who may seek to obtain this information through fraudulent means.
—Restrict access at physical locations containing customer information, such as buildings, computer facilities, and records storage facilities, to permit access only to authorized individuals.
—Encrypt electronic customer information, including while in transit or in storage on networks or systems to which unauthorized individuals may gain access.
—Implement procedures designed to ensure that customer information system modifications are consistent with the institution’s information security program.
—Use dual control procedures, segregation of duties, and employee background checks for employees with responsibilities for or access to customer information.
—Implement monitoring systems and procedures to detect actual and attempted attacks on or intrusions into customer information systems.
—Develop response programs that specify actions to be taken when the institution suspects or detects that unauthorized individuals have gained access to customer information systems, including appropriate reports to regulatory and law enforcement agencies.
—Implement measures to protect against destruction, loss, or damage of customer information due to potential environmental hazards, such as fire and water damage, or technological failures.

The Inter-agency Guidelines require a financial institution to determine whether to adopt controls to authenticate and permit only authorized individuals access to certain forms of customer information. Under this control, a financial institution also should consider the need for a firewall to safeguard confidential electronic records. If the institution maintains Internet or other external connectivity, its systems may require multiple firewalls with adequate capacity, proper placement, and appropriate configurations.

Similarly, the institution must consider whether its risk assessment warrants encryption of electronic customer information. If it does, the institution must adopt necessary encryption measures that protect information in transit, in storage, or both. The Inter-agency Guidelines do not impose specific authentication or encryption standards, so it is advisable for CFEs to consult outside experts on the technical details applicable to their client institution’s security requirements, especially when conducting after-the-fact fraud examinations.
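
Because the Guidelines leave the choice of technique to the institution, the following is only a minimal sketch of one possible approach: symmetric encryption of a customer record at rest using the widely used Python cryptography package. Key management, which in practice is the hard part, is deliberately omitted, and the sample record is purely illustrative.

```python
# Minimal sketch of encrypting customer information at rest with symmetric
# (Fernet) encryption from the 'cryptography' package. Illustrative only; the
# Inter-agency Guidelines do not prescribe a specific standard, and a real
# deployment hinges on proper key management, which is omitted here.
from cryptography.fernet import Fernet

def encrypt_record(plaintext: bytes, key: bytes) -> bytes:
    return Fernet(key).encrypt(plaintext)

def decrypt_record(token: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(token)

# Example usage: the key would be generated once and held in a secure
# key-management system, never stored alongside the data it protects.
key = Fernet.generate_key()
ciphertext = encrypt_record(b"sample customer record (hypothetical)", key)
assert decrypt_record(ciphertext, key) == b"sample customer record (hypothetical)"
```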

The financial institution also must consider the use of an intrusion detection system to alert it to attacks on computer systems that store customer information. In assessing the need for such a system, the institution should evaluate the ability, or lack thereof, of its staff to rapidly and accurately identify an intrusion. It also should assess the damage that could occur between the time an intrusion occurs and the time the intrusion is recognized and action is taken.

The regulatory agencies have also provided our clients with requirements for responding to information breaches. These are contained in a related document entitled Interagency Guidance on Response Programs for Unauthorized Access to Customer Information and Customer Notice (Incident Response Guidance). According to the Incident Response Guidance, a financial institution should develop and implement a response program as part of its information security program. The response program should address unauthorized access to or use of customer information that could result in substantial harm or inconvenience to a customer.

Finally, the Inter-agency Guidelines require financial institutions to train staff to prepare and implement their information security programs. The institution should consider providing specialized training to ensure that personnel sufficiently protect customer information in accordance with its information security program.

For example, an institution should:

—Train staff to recognize and respond to schemes to commit fraud or identity theft, such as guarding against pretext calling.
—Provide staff members responsible for building or maintaining computer systems and local and wide area networks with adequate training, including instruction about computer security.
—Train staff to properly dispose of customer information.

An Ancient Skill

I remember Professor Jerome Taylor in his graduate class at the University of Chicago introducing us to the complexities of what the ancients called the trivium.  Because the setting for the process of fraud examination is so often fraught with emotion and confusion, even a beginning fraud examiner quickly realizes that presenting evidence collected during examination fieldwork merely as a succession of facts often isn’t enough to fully convince clients and to adequately address their many concerns (many of which always seem to emerge all at once). To capture stakeholders’ attention, and to elicit a satisfactory response, CFEs need to possess some degree of rhetorical skill.

Rhetoric refers to the use of language to persuade and instruct. Throughout the Middle Ages, European universities taught rhetoric to beginning students as one of three foundational topics composing what was known as the trivium. Logic and grammar, the other two foundational topics, refer to the mechanics of thought and analysis and to the mechanics of language, respectively. We CFEs and forensic accountants essentially follow the trivium in our work, whether we realize it or not. After gathering evidence through fieldwork, we apply logic to analyze that evidence and to present our vision of the facts to our client organizations in our final reports. We also use grammatical rules to structure the text within our reports and memoranda.

Applying the trivium requires a balanced approach; too much focus on any one of the three components to the exclusion of the others can lead to ineffective communication. Fraud examiners need to consider all three trivium components evenly and avoid the common trap of collecting too much evidence or performing too much analysis in the belief that such concentrations will help strengthen our final reports.

The ancient Greeks defined three key components of rhetoric: the speech itself (text), the speaker delivering the speech (author), and those who listen to the speech (audience). Collectively, these components form what’s called the rhetorical triangle. For CFEs, the triangle’s three points equate to the final report or memorandum, the CFE him- or herself, and our clients or stakeholders. All three components of the rhetorical triangle are interrelated, and each is essential to the success of all investigative and/or assurance work. Each should be considered before any engagement and kept in mind throughout the engagement life cycle, but especially during the report writing and presentation process.

Although the investigative team lead would be considered the primary author, each of the engagement team members plays a supporting role by authoring observations and preliminary findings that are then compiled into an integrated report. The person performing the important task of draft reviewer also has a role to play, ensuring that the final report meets ACFE and other applicable standards and fulfills the overall purpose defined in the planning document.

The character of the intended audience should be considered with each engagement. Audience members are not homogeneous; each may have different perspectives and expectations. For this reason, CFEs need to consult with them and consider their perspectives even before the engagement begins to the extent feasible.

Once engagement fieldwork has been completed, the authors compose a written report containing the results of the investigative field work. The report represents perhaps the most important outcome communication from the examination process, and the best chance to focus the client’s attention.

When crafting the final report, three separate but interrelated components, designated ‘appeals’, need to be considered and applied: ethos, logos, and pathos.

Ethos is an appeal to the audience’s perception of the honesty, authority, and expertise of the report’s author. Closely related to reputation, ethos is established when the audience determines that the author is qualified, trustworthy, and believable. Because the term ethics derives from ethos, adhering to the ACFE’s standards and Code of Ethics supports this appeal.

Some helpful formulations, in the form of questions, to keep in mind regarding the ethos component when formulating your report are:

–What assumptions does your audience likely make about you and the investigative process, what you produce, and the level of service you and your team provide?
–Is there a way to take advantage of their positive assumptions to improve the fraud investigation process for the future?
–What can you do to overcome their negative assumptions, if any?
–Do you create the expectation that what you produce and the level of service you provide will be above average or even exceptional?
–Are you using all the available channels to create an impression of excellence?

For CFEs with an on-going or long-term employment or other relationship with the client, the need to consider ethos begins long before the start of any particular engagement. Ethos is supported by the structure and governance of the fraud examination or forensic accounting function as well as by the selection of team members, including alignment between the type of engagements to be performed and the team’s qualifications, education, and training. The ethos appeal is also established by choosing to comply with examination and audit standards and with other professional requirements to demonstrate a high level of credibility, build trust, and gain a favorable reputation over time.

Logos appeals to the audience’s sense of logic, encompassing factors such as the reason and analysis used, the underlying meaning communicated, and the supporting facts and figures presented. The written document’s visual appeal, diagrams, charts, and other elements, as well as how the information is organized, presented, and structured, also factor into logos. Story conveys meaning. From the time we’re born we learn about the world around us through narratives. This aspect of logos continues to be important throughout our lives. We experience the world through our senses, particularly our eyes. Design and visual attractiveness are key to engaging an audience made up of the visual animals we are.

–Is what you are presenting easy to understand?
–Is your presentation design simple and pleasing to the eye?

The investigator’s need for logos is addressed by the written report’s executive summary, detailed observations, and findings, as well as by appendices with secondary information that can be used to further instruct the audience. The report describes the origin, drivers, and overall purpose of the engagement, its findings, and its conclusions. Ultimately, from a rhetorical standpoint, examiners try to tell a convincing, self-contained short story that conveys key messages to the audience. The structure and format of the report, together with its textual content and visual elements, also support the logos appeal.

Like ethos, the logos appeal is fulfilled long before an individual engagement begins. It starts with the rational, periodic assessment and identification of business processes at high risk for fraud and of areas requiring management’s attention, resulting in the development and implementation of effective anti-fraud controls. CFEs are then prepared to undertake engagements, executing steps to collect valid and relevant evidence to justify conclusions and to guide and support the client’s initiation of successful prosecutions.

Pathos is an appeal to the audience’s emotions, either positive (joy, excitement, hopefulness) or negative (anger, sadness). It is used to establish compassion or empathy. Unlike logos, pathos focuses on the audience’s irrational modes of response. The Greeks maintained that pathos was the strongest and most reliable form of persuasion. Pathos can be especially powerful when it is used well and connects with the audience’s underlying values and perspective. Used incorrectly, however, pathos can distort or detract from the impact of the factual evidence.

Examiners should strive to walk a mile in someone else’s shoes and look for ways to better understand the client/audience’s perspective. Attention to pathos can help support not only examination objectives, but the overarching goal of creating a satisfactory investigative outcome. CFEs should also be mindful of their overall tone and word selection, and ensure they balance negative and positive comments, giving credit to individuals and circumstances where credit is due.

To some extent, pathos is interdependent with ethos and logos: The sting of negative results can be reduced somewhat by the positive effect of the other two appeals. For example, clients/audience members are more likely to accept bad news from someone they trust and respect, and who they know has followed a rational, structured approach to the engagement. But at the same time, ethos and logos can be offset by negative pathos. Preferred practice generally consists of holding regular meetings with corporate counsel and/or other critical stakeholders over the course of the investigation, maintaining transparency, and providing stakeholders with an opportunity to address investigative findings or provide evidence that counters or clarifies the CFE’s observations.

In summary, while all three elements of rhetorical appeal play an important role in communication and while none should be neglected, CFEs and forensic accountants should pay particular attention to pathos. The dominance of feelings over reason is part of human nature, and examiners should consider this powerful element when planning and executing engagements and reporting the results. By doing so, certified investigators can help ensure audiences accept our message and make informed judgments related to fraud recovery, prosecution and possible restitution.

The Versatile Microcap

A microcap is a publicly traded company whose stock might be worth only pennies, which causes its price to be volatile and thus easier for fraudsters to manipulate. Although CFEs like our Central Virginia Chapter members might not regularly come across microcap stock manipulation, it’s important for all of us to be aware of the methods and motivations behind this significant criminal activity. In this scheme, promoters and insiders, after cheaply purchasing a stock, typically pump up its value through embellished or entirely false news. However, as reported recently in the trade press, other fraudsters have successfully employed much more creative strategies in exploiting microcaps. Several articles and books have told of the involvement of organized crime, especially throughout the ’00s and ’10s, in this highly profitable illegal business.

Basic pump and dump schemes, also known as hype and dump manipulation, involve the touting of a company’s stock (typically micro-cap companies) through false or misleading statements to the marketplace. After pumping up the stock, scam artists make huge profits by selling or dumping their cheap stock onto the market. Today, pump and dump schemes have been updated and most frequently occur over the Internet, where it is common to see e-mail and other messages posted that urge consumers to buy a stock quickly or to sell their stocks before the price goes down. In some cases, a spam-call telemarketer contacts potential investors using the same sort of pitch. Often the promoters claim to have inside information about an impending development, or to have employed an infallible combination of economic and stock market data to pick stocks. In reality, they may be company insiders or paid promoters who stand to gain by selling their shares after the stock price is pumped up by the buying frenzy they create. Once these fraudulent promoters dump their shares and stop hyping the stock, the price typically falls and investors lose their money.

In another recent but simple form of the micro-cap scheme, a caller leaves a message on a potential victim’s voice mail under the guise of someone who dialed the wrong number. Sounding as if they didn’t realize they had misdialed, the message contains a hot investment tip for a friend. However, the caller is actually a spammer, someone being paid to tout this stock on hundreds of cell phones. Those behind the scheme generally own some of the stock and hope to profit by pumping up the share price and selling off their investments.

Pump-and-dump schemes can be as simple as the one above, or can involve an individual or small group releasing false information in a chat room, or insiders publishing inflated company information. Sometimes the business owners themselves are complicit, especially with shell corporations that have little actual operations or value. Occasionally, scammers dupe business owners into participating in schemes through promises of investment support and/or related marketing help. Or fraudsters, unbeknownst to the victim company, hijack their target company’s stock and falsely hype it, which often causes irreparable damage to the owners’ and the business’s reputations. CFEs whose clients include small or new venture businesses should be especially cautious of unsolicited offers made to their clients to receive loans or to raise capital through microcap stock offerings. Criminals commonly target businesses in the pharmaceutical, energy or technology sectors, attempting to use their names and initial offerings to manipulate stock for profit.

More complex microcap stock manipulation schemes involving organized crime typically employ a number of persons who are instructed to buy in at various points that coincide with a series of false press releases and concurrent controlled chat on investor forums and spam emails. This orchestrated activity provides the illusion of stock movement resulting from large investor interest, thereby drawing in funds from outside victims. The actual manipulation often resembles a series of smaller pumps and dumps instead of one large event, so the fraudsters can use the same stock over and over with less chance of detection by regulatory authorities. More refined players also employ foreign or offshore brokerage accounts as a further veil over their illegal activities.

When the organized manipulation plan succeeds, the ringleaders will permit the accomplices to sell and obtain their related profit, depending on their place in the organization’s hierarchy. However, the end process is often far from perfect. Occasionally, accomplices don’t follow instructions, at significant personal risk, and sell too early or too late. Even if the manipulation isn’t always successful, organized crime members who have invested in the process expect and demand a certain profit, which places additional pressure on participants, who might find they have a debt on their hands because of their failures.

Occasionally, outsiders also take large positions either profiting from or destroying the momentum of the criminal group. In the 1990s, when trades were completed through actual brokers, criminals could use threats or actual violence to control such unwanted participants. However, technological trading platforms have made this more difficult.

A less common, yet also profitable, technique is to put downward pressure on a stock (or cause the price to decrease) after selling borrowed shares, or entering an option contract, with the hope of buying the stock back or settling the contract once the price has dropped. Fraudsters can initiate this manipulation technique, commonly known as ‘short and distort,’ by promoting rumors such as a bad quarter or a failed new drug test.

The ability to manipulate microcap stocks with relative ease also makes the activity an ideal tool to hide payments between parties and launder money. Instead of paying cash or wiring funds to settle a drug debt, one can simply provide a tip relating to a microcap stock that’s about to be manipulated. The party who’s owed the debt then only has to buy the stock cheaply and wait for the pump to make the sale and generate the profit.

Perpetrators also have used the same process to offer bribes to public servants. Troublesome envelopes or bags of cash aren’t required. The profit appears as a simple lucky or astute stock pick, and culprits can even report the proceeds as capital gains, thus reducing the risk of the highly feared and powerful tax investigators becoming involved in a possible money-laundering investigation. Police and securities regulatory authorities have observed and reported such suspicious activity. However, it’s often difficult to link those who profit from the manipulation with the culpable manipulators. Also, considering that organized crime elements employ microcap manipulation both for debt payments and as a profitable crime in itself, it’s again challenging for authorities to identify the exact goals of their participation without some inside knowledge. Proving all the elements of the crime is nearly impossible without wire taps or a co-conspirator witness.

With all this said, it’s ironic, yet not surprising, that more than one organized-crime figure has said they don’t invest their own criminal earnings in microcap stocks because they deem such markets to be too risky and plagued by manipulators.

So, in summary, if you, as a CFE, come across information relating to a microcap investment involving a case you’re working, you might want to take a closer look.

With regard to preventing investment fraud schemes in general … caution your clients:

• to not invest in anything based upon appearances. Just because an individual or company has a flashy website doesn’t mean it is legitimate. Websites can be created in a matter of hours and taken down even faster. After a short period of taking money, a site can vanish without a trace.
• to not invest in anything about which they are not absolutely sure. Do homework on an investment to ensure it is legitimate.
• to thoroughly investigate the offering individual or company to ensure legitimacy.
• to check out other websites regarding this person or company.
• to be cautious when responding to special investment offers, especially those received through unsolicited e-mail or from fast-talking telemarketers. Know with whom you are dealing!
• to inquire about all the terms and conditions involved with the investors and the investment.
• Rule of thumb: If it sounds too good to be true, it probably is.

Then & Now

I was chatting over lunch last week at the John Marshall Hotel here in Richmond with a former officer of our Chapter when the subject of interviewing came up; interviewing generally, but also viewed in the context of the challenges and obstacles that fraud examiners of the next generation will face as they increasingly confront their peers, the present and future fraudsters of the Millennial and Z generations.

Joseph Wells says somewhere, in one of his excellent writings, that skill as an interviewer is one of the most important attributes a CFE or forensic accountant can possess and probably, of all our skills, the one most worthy of ongoing cultivation. But, as with any other professional craft, there are common pitfalls of which newer professionals especially need to be aware to increase their chances of successfully achieving their interviewing objectives.

Failure to plan sufficiently is, without a doubt, the primary error interviewers make. It seems that the more experience an interviewer has, the less he or she prepares. Whether because of busyness or overconfidence, this pitfall spells disaster. Not only does efficiency suffer because the interviewer might have to schedule another interview, but effectiveness suffers because the interviewer might never discover needed information. Fraudsters often take time before interviews to prepare answers to anticipated questions. The ACFE reports having debriefed career criminals on their tactics, thoughts and behaviors about interviews, and they typically respond, “I had my routines that I was going to run down on them” and “I always had my story made up”.

During his or her planning for an interview, the CFE must carefully consider the interviewee’s role in the fraud and his or her relationship to the fraudster (if the interviewee isn’t the fraudster), available information, desired outcomes from the interview and a primary interview strategy plus alternate, viable strategies. The success or failure of the interview is determined prior to the time the interviewer walks into the room. Either the interviewer is part of his or her own plan, or he or she is part of someone else’s. The CFE, not the interviewee, has to control the interview.

An interviewer whose mind is made up before an interview even begins is courting danger. Confirmation bias (also known as confirmatory bias or myside bias) greatly increases the likelihood that an interviewer dismisses, ignores or filters out contradictory information during an interview, whether the interviewee expresses it verbally or non-verbally. Thus, interviewers might not even be aware that they’re missing important information that could increase the examination’s effectiveness.

How many times have experienced practitioners been told by colleagues that they believed that particular interviewees were guilty only to later discover they were actually innocent? If such practitioners hadn’t been aware that their colleagues could have caused them to have confirmation bias, they might have dismissed contradictory interviewee behaviors during subsequent interviews as minor aberrations. It’s imperative that the interviewer maintain an open mind, which isn’t so much a skill set as an attitude. The effective interviewer gives the interviewee a chance by looking at all the data, listening to others and theorizing a hypothesis without precluding anything. Also, the ACFE tells us, if the interviewer maintains an open mind, the interviewee will perceive it and be more cooperative.

A guiding principle should be: the interview is not about the CFE; the CFE is conducting the interview. The interview is a professional encounter. If you don’t conduct the interview, someone else can conduct it, but the interviewee remains the same. Interviewers are replaceable; interviewees aren’t. Never lose sight of this foundational truth. If the interviewer personalizes the interview process, s/he will focus on his or her inward emotions rather than on the interviewee’s verbal and non-verbal behavior. An interviewer’s unfettered emotions will have a debilitating impact on a number of levels.

If the interviewer becomes personally involved in an interview, the interviewer becomes the interviewee and the interviewee becomes the interviewer. Most of us want to search for connections to others. But if we connect too strongly, we will become so similar (at least in our own minds) to interviewees that we might have difficulty believing the interviewee is guilty or is providing inaccurate information. Once that occurs, the interviewer probably won’t obtain necessary evidence or could discount incriminating evidence.

Before each interview, remind yourself that your objective is to collect evidence in a dispassionate manner; you won’t become emotionally involved. Focus on the overall objective of the interview so that you won’t be caught up in details that could connect you too closely with the interviewee. If, for example, you discover that the interviewee is from the same part of the country you’re from, remind yourself of the many persons you know who also are from that area so you’ll dilute the influence that this information could have on your interview.

With regard to interviewing members of the present and up-and-coming generation, a majority of our youngest future citizens spend an inordinate amount of time looking at plastic screens as a significant mode for learning, communicating, being entertained and experiencing the world instead of interacting directly with others in the same space and time. This places novice CFE interviewers at a disadvantage because, although they have been formally trained that much of the communication between an interviewer and an interviewee takes place non-verbally, they have had comparatively little practice reading it. Concurrently, the verbal aspects of communication are replete with meta-messages. For example, what kind of impression does an individual make whose voice inflection rises or falls at the end of a sentence? Can this inflection be as adequately and consistently communicated via a text message compared to in-person communication? This example (and there are many more) contains the essence of the interviewing process. Unfortunately, nuances, interpersonal communication subtleties and appropriate responses that were previously thought to be integral parts of the social modeling process aren’t as readily available to the current generation of interviewers and interviewees as they were to previous generations. Research has shown that electronic devices, such as tablets, cellphones and laptops, shorten attention spans. Web surfers usually spend no more than 10 to 20 seconds on a page before ads or links distract them and they move on to burrow down into succeeding rabbit holes.

A great deal of communication now takes place via character-limited snippets on Twitter. The average person checks his or her phone once every six minutes. Psychologists have recently coined the term ‘nomophobia’, the fear of being out of cellphone contact, shortened from ‘no-mobile-phone phobia’. A 2015 global study reported that students’ ‘addiction’ to media is similar to drug cravings.

The attention span of the average adult is believed to have fallen from 12 minutes in 1998 to five minutes in 2014. If interviewees’ attentive capacities are just five minutes, or less, then after that point interviews provide diminishing returns. Our attention deficits probably result from a lack of self-discipline and the delusional belief that we can cognitively multi-task. We can’t do anything about our natural limitations, but we can discipline ourselves to pay attention. We can also plan and conduct our interviews with few distractions. Interviewers new and experienced should require that all participants turn off their cellphones and, when possible, interviewers should try to ask questions in an unpredictable order.

So, we can expect that a new generation of fraud examiners will soon be interviewing individuals for extended periods of time who have as much of a dearth of direct, face-to-face interpersonal communication as they do. At the extreme, we can envision two or more uncomfortable people in an interview room, all of whom can remain in the moment for only five minutes or less and are fidgety because they need plastic-screen fixes.

An additional challenge will be that CFEs of the Millennial and Z generations will soon be spending hours interviewing older interviewees who are more familiar, explicitly and implicitly, with the subtleties of interpersonal communication. These are people who have spent significantly more time in direct, face-to-face communication. The interpersonal communication-challenged interviewer will be at a significant disadvantage when interviewing guilty, guilty-knowledge, deceptive and/or antagonistic interviewees. As my lunch companion pointed out, many experienced fraudsters are master manipulators of inexperienced interviewers.

It is urgent that younger fraud examiners and forensic accountants be instructed in the strongest terms to put down their plastic screens and practice engaging others in direct communication, with friends, family and those who cross their paths in the normal flow of life. As a lead CFE examiner or supervisor, encourage your younger employee-colleagues to write down their communication goals for each day. Suggest they read all they can on face-to-face interviewing and questioning plus verbal and non-verbal behaviors. They can take interviewing and public-speaking classes or join a Toastmasters group. Anything to get them to converse and observe body language and expressions.

Interviewing techniques are the vehicles that ride up and down the road of interpersonal communication. If that road isn’t adequate, then drivers can’t maneuver their vehicles. Your younger employees are the only persons who can bring themselves up to the necessary interpersonal speed limit to make their one-on-one interviews successful.

Whistle & Fish

Every CFE and forensic accountant in practice encounters companies that operate outside accounting rules and tax laws. Blowing the whistle on such companies can be risky for the employee whistleblower; we all know that doing so often results in tipsters losing their jobs and reputations and facing limited future career prospects. Yet, on every side, such employees are exhorted to come forward with the information they hold to uncover fraud.

The whistleblower programs set up by U.S. government agencies are of particular interest to our Chapter members, practicing as they do in such close proximity to Washington D.C., and to those practicing in and around Richmond, the seat of government of the Commonwealth of Virginia. State and Federal entities encourage these tips by offering hot-lines and whistleblower awards programs that pay monetary awards to tipsters if their information leads to successful enforcement and to collection of money from a violator.

The two most important of these programs likely to be encountered by our Central Virginia Chapter members are the whistleblower rewards programs of the Internal Revenue Service (IRS) and the Securities and Exchange Commission (SEC). The IRS program, which traces its roots back more than 140 years, authorizes the Department of the Treasury to pay awards to individuals who provide information that allows the IRS to detect, bring to trial and punish those guilty of violating internal revenue laws. A 2006 amendment created the current IRS whistleblower program, which mandates that the government pay whistleblowers awards based on the size of the taxes collected as a result of their tips.

The seminal federal False Claims Act, enacted in 1863, allows whistleblowers a portion of reclaimed money when defendants are found guilty of defrauding the federal government. The Commodity Futures Trading Commission has also recently established a whistleblower program. As I’m sure most of you remember, in 2010, the Dodd-Frank Wall Street Reform and Consumer Protection Act established the SEC’s whistleblower awards program. The program seeks to encourage high-quality tips about securities violations with its monetary awards supplemented by protections from retaliation.

The IRS created the whistleblower awards program, codified in IRC 7623(a), to close the tax gap and fight tax fraud more aggressively. In this original program, the maximum award was 15 percent of collected taxes, penalties and other amounts not to exceed $10 million, but the decision whether to make an award at all was wholly within the IRS’ discretion. When the courts considered attempts to challenge award decisions under this law, they uniformly found that the discretion to make or not make an award is essentially not reviewable. In other words, the courts decided the IRS has the right to make an award or not, and the whistleblower can’t appeal that decision.

The Tax Relief and Health Care Act of 2006, which made major changes to the IRS awards program, mandated that the IRS pay out a substantial award whenever a whistleblower’s information leads to the collection of tax, interest and penalties based on disputes in excess of $2 million. The new section, IRC 7623(b), was intended to create strong incentives to bolster insider reporting of tax violations for claims submitted after Dec. 20, 2006. The awards are now mandatory rather than discretionary, and they range from 15 percent to 30 percent of monies collected, with no cap on the dollar amount of the award. With some exceptions, a whistleblower may collect an award even if convicted of a felony.

Whistleblowers are eligible for awards based on additions to tax, penalties, interest, and other amounts collected as a result of any administrative or judicial action resulting from the information provided. The 2006 amendment added whistleblower appeal rights to the U.S. Tax Court. To implement the law, the IRS was also required to create a Whistleblower Office that reports to the IRS commissioner. Submissions that don’t qualify under the new section IRC 7623(b) (usually because the disputes are for less than $2 million) are processed under the original IRC 7623(a). The IRS will continue to consider these cases, but the award is at the discretion of the agency, and there’s no requirement that an award be issued. These whistleblowers have no minimum statutory award percentage and no appeal provision.

The Dodd-Frank bill was partly a response to financial debacles such as the Madoff fraud and widespread mortgage frauds. Many criticized the SEC for its inaction in the face of the circumstances that led to the Great Recession, although it definitely wasn’t alone in its failure to uncover and stop massive frauds. The SEC had an awards program before Dodd-Frank, but it wasn’t particularly effective, and it focused solely on insider trading. The new whistleblower awards program, which is much broader, encourages tips related to all kinds of securities violations, from financial statement fraud to alleged Ponzi schemes.

The Dodd-Frank whistleblower program stipulates that as long as collected monetary sanctions exceed $1 million, awards are 10 percent to 30 percent of that amount. Awards are paid to individuals who voluntarily provide original information that leads to successful SEC enforcement. The award percentage is increased or decreased based on several factors including the extent of the whistleblower’s assistance.
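To make the arithmetic of these thresholds concrete, here is a minimal sketch in Python that assumes only the percentages and minimums summarized above; the actual award within each range depends on statutory factors and, for the older IRS program, agency discretion.

def award_range(collected, minimum_collection, low_pct, high_pct):
    """Return the (low, high) award range, or None if the program threshold isn't met."""
    if collected < minimum_collection:
        return None
    return (collected * low_pct, collected * high_pct)

# IRS mandatory program (IRC 7623(b)): disputes over $2 million, awards of 15%-30% of collections.
print(award_range(5_000_000, 2_000_000, 0.15, 0.30))   # (750000.0, 1500000.0)

# SEC program under Dodd-Frank: sanctions over $1 million, awards of 10%-30% of collections.
print(award_range(5_000_000, 1_000_000, 0.10, 0.30))   # (500000.0, 1500000.0)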

Section 924(d) of the Dodd-Frank Act required the SEC to create a separate office within the agency to enforce the new regulation. In May 2011, the SEC adopted the Final Rules, Regulation 21F, which included prohibitions against retaliation, defined terms and established policies for submitting tips, applying for awards and filing appeals on award decisions.

In the IRS program, a whistleblower must be a “natural person”, in other words, not a corporation or other business organization. Because the claim form must be signed under penalty of perjury, the whistleblower can’t be anonymous, nor can the claim come from a representative of the whistleblower. Multiple whistleblowers can submit a joint claim, but each must sign under penalty of perjury. Similarly, in the SEC program the whistleblower must be a natural person or persons. However, the SEC whistleblower can be anonymous up to the point that the award is paid out, and he or she can be represented by an attorney or other person. IRS whistleblowers can’t be taxpayers’ representatives, employees of the Treasury Department, or employees of federal, state or local governments if they learned of the information as part of their job duties. The SEC whistleblower can’t be an auditor who learned of the issue as part of his or her duties during an audit or other engagement. The SEC whistleblower also must provide the information “voluntarily”, which means that the whistleblower can’t provide it in response to a request from regulators or law enforcement.

IRS claims must include the tax violator’s name and address, date of birth, Social Security number and the specific nature of the violation. If possible, it should also include the tax year(s), the dollar amounts of unreported income or erroneous deductions and supporting documentation. SEC claims must be original information about possible securities laws violations not already known to the SEC and not derived from publicly available sources. Even though the whistleblower employee might have first reported the information to his or her company’s internal hotline process, the SEC will still consider the information to be original. The content of this required information isn’t as clearly specified as in the IRS program, but it must cause the SEC to open (or expand) an investigation and bring a successful enforcement action.

The IRS protects the whistleblower’s identity as far as possible. If the whistleblower is needed as a witness in a court case, the IRS will notify the whistleblower, who can then decide whether or not to proceed. The legislation that established the IRS program failed to include any protection for the whistleblower from possible retaliation. However, the alleged tax violator’s information is strictly protected, so the whistleblower can only be told whether the case is open or closed. If the case is closed, the IRS can reveal to the whistleblower whether his or her claim is payable, the amount of any payment or whether a payment has been denied.

The SEC can’t disclose information that could reasonably be expected to reveal the identity of a whistleblower except if it needs to comply with law enforcement proceedings or protect investors by notifying another authority. For example, the SEC might need to notify the U.S. Department of Justice or a state attorney general or even foreign law enforcement if a criminal investigation should be opened as a result of the whistleblower’s allegations. The SEC informant must file through an attorney to remain anonymous during the process. After the SEC presents the award to the whistleblower, it will release the whistleblower’s name. Federal laws state that the whistleblower’s company can’t retaliate against the employee.

The IRS pays its awards when the proceeds are collected, and the appeals period for the taxpayer has expired. Many have said that the IRS program process is lengthy and slow. Claimants can generally expect to wait five to seven years to receive an award. While a whistleblower can’t appeal the award amount for IRC 7623(a) through the Tax Court, awards filed under the newer IRC 7623(b) are subject to appeal in the Tax Court.

The SEC evaluates all claims after the time has expired for the violator to file an appeal or after any appeals have been concluded. The SEC must also collect all sanctions from the violator before it pays the award. A whistleblower can’t appeal an award amount but can appeal a denial.

In summary, we CFEs should inform our clients, individual and corporate, that whistleblowers can expect a long and bumpy ride to the chance, but not the promise, of monetary reward.

Needles & Haystacks

A long-time acquaintance of mine told me recently that, fresh out of the University of Virginia and new to forensic accounting, his first assignment consisted of searching, at the height of summer, through two un-air-conditioned trailers full of thousands of savings and loan records for what turned out to be just two documents critical to proving a loan fraud. He told me that he thought then that his job would always consist of finding needles in haystacks. Our profession and our tools have, thankfully, come a long way since then!

Today, digital analysis techniques afford the forensic investigator the ability to perform cost-effective financial forensic investigations. This is achieved through the following:

–The ability to test or analyze 100 percent of a data set, rather than merely sampling it.
–The ability to import massive amounts of data into working files, allowing the processing of complex transactions and the profiling of certain case-specific characteristics.
–The ability to quickly identify anomalies within databases, thereby reducing the number of transactions that require review and analysis.
–The ability to easily customize the analysis to address the scope of the engagement.
Overall, digital analysis can streamline investigations that involve a large number of transactions, often turning a needle-in-the-haystack search into a refined and efficient investigation. Digital analysis is not designed to replace the pick-and-shovel aspect of an investigation. However, the proper application of digital analysis will permit the forensic operator to efficiently identify those specific transactions that require further investigation or follow up.
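As a concrete illustration of scripting such tests against 100 percent of a data set, here is a minimal sketch using Python and the pandas library; the file name, the column names, and the particular red-flag tests (duplicate invoice payments, round-dollar amounts, weekend postings) are illustrative assumptions, not a prescribed methodology.

import pandas as pd

df = pd.read_csv("payments_full_population.csv")   # hypothetical export of the entire population
df["posting_date"] = pd.to_datetime(df["posting_date"], errors="coerce")

# Flag duplicate invoice payments, round-dollar amounts, and weekend postings.
duplicates = df[df.duplicated(subset=["vendor_id", "invoice_no", "amount"], keep=False)]
round_dollar = df[df["amount"] % 1000 == 0]
weekend = df[df["posting_date"].dt.dayofweek >= 5]

# Only the flagged records move on to manual review and follow-up.
flagged = pd.concat([duplicates, round_dollar, weekend]).drop_duplicates()
print(f"{len(flagged)} of {len(df)} transactions flagged for review")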

As every CFE knows, there are an ever-growing number of software applications that can assist the forensic investigator with digital analysis. A few such examples are CaseWare International Inc.’s IDEA, ACL Services Ltd.’s ACL Desktop Edition, and the ActiveData plug-in, which can be added to Excel.

So, whether using the Internet in an investigation or using software to analyze data, fraud examiners can today rely heavily on technology to aid them in almost any investigation. More data is stored electronically than ever before: financial data, marketing data, customer data, vendor listings, sales transactions, email correspondence, and more, and evidence of fraud can be located within that data. Unfortunately, fraudulent data often looks like legitimate data when viewed in the raw. Taking a sample and testing it might or might not uncover evidence of fraudulent activity. Fortunately, fraud examiners now have the ability to sort through piles of information by using special software and data analysis techniques. These methods can identify future trends within a certain industry, and they can be configured to identify breaks in audit control programs and anomalies in accounting records.

In general, fraud examiners perform two primary functions to explore and analyze large amounts of data: data mining and data analysis. Data mining is the science of searching large volumes of data for patterns. Data analysis refers to any statistical process used to analyze data and draw conclusions from the findings. These terms are often used interchangeably.

If properly used, data analysis processes and techniques are powerful resources. They can systematically identify red flags and perform predictive modeling, detecting a fraudulent situation long before many traditional fraud investigation techniques would be able to do so.
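One way to sketch the predictive-modeling idea is with an off-the-shelf outlier detector. The example below assumes Python with pandas and scikit-learn; the input file, feature columns, and contamination rate are purely illustrative assumptions.

import pandas as pd
from sklearn.ensemble import IsolationForest

df = pd.read_csv("payments_full_population.csv")              # hypothetical export
features = df[["amount", "days_to_payment", "line_items"]]    # hypothetical numeric fields

# Fit an isolation forest and mark the most unusual transactions (-1) for review.
model = IsolationForest(contamination=0.01, random_state=42)
df["outlier"] = model.fit_predict(features)

red_flags = df[df["outlier"] == -1]
print(f"{len(red_flags)} transactions scored as outliers for examiner review")

A score of this kind does not prove fraud; it simply ranks transactions so the examiner’s limited time goes first to the least ordinary ones.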

Big data is now a buzzword in the worlds of business, audit, and fraud investigation. Big data are high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery, and process optimization. Simply put, big data is information of extreme size, diversity, and complexity.

In addition to thinking of big data as a single set of data, fraud investigators should think about the way data grow when different data sets that might not normally be connected are joined together. Big data represents the continuous expansion of data sets, the size, variety, and speed of generation of which make them difficult to manage and analyze.

Big data can be instrumental to fact gathering during an investigation. Distilled down to its core, how do fraud examiners gather data in an investigation? We look at documents and financial or operational data, and we interview people. The challenge is that people often gravitate to the areas with which they are most comfortable. Attorneys will look at documents and email messages and then interview individuals. Forensic accounting professionals will look at the accounting and financial data (structured data). Some people are strong interviewers. The key is to consider all three data sources in unison. Big data helps to make it all work together to tell the complete story. With the ever-increasing size of data sets, data analytics has never been more important or useful. Big data requires the use of creative and well-planned analytics due to its size and complexity. One of the main advantages of using data analytics in a big data environment is, as indicated above, that it allows the investigator to analyze an entire population of data rather than having to choose a sample and risk drawing conclusions in the event of a sampling error.

To conduct an effective data analysis, a fraud examiner must take a comprehensive approach. Any direction can (and should) be taken when applying analytical tests to available data. The more creative fraudsters get in hiding their schemes, the more creative the fraud examiner must become in analyzing data to detect these schemes. For this reason, it is essential that fraud investigators consider both structured and unstructured data when planning their engagements.

Data are either structured or unstructured. Structured data is the type of data found in a database, consisting of recognizable and predictable structures. Examples of structured data include sales records, payment or expense details, and financial reports.

Unstructured data, by contrast, is data not found in a traditional spreadsheet or database. Examples of unstructured data include vendor invoices, email and user documents, human resources files, social media activity, corporate document repositories, and news feeds.

When using data analysis to conduct a fraud examination, the fraud examiner might use structured data, unstructured data, or a combination of the two. For example, conducting an analysis on email correspondence (unstructured data) among employees might turn up suspicious activity in the purchasing department. Upon closer inspection of the inventory records (structured data), the fraud examiner might uncover that an employee has been stealing inventory and covering her tracks in the records.
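A minimal sketch of that combined approach, assuming Python with pandas, is shown below; the mailbox export, keyword list, and inventory file are hypothetical stand-ins for whatever the engagement actually produces.

import pandas as pd

# Unstructured: flag purchasing-department emails containing suspicious phrases.
emails = pd.read_json("purchasing_mailbox_export.json")       # columns: sender, body, sent_at
keywords = ["write off", "adjust the count", "between us"]
hits = emails[emails["body"].str.lower().str.contains("|".join(keywords), na=False)]

# Structured: compare book inventory to physical counts for the same period.
inventory = pd.read_csv("inventory_records.csv")              # columns: sku, book_qty, counted_qty
shortages = inventory[inventory["book_qty"] > inventory["counted_qty"]]

print(hits[["sender", "sent_at"]].head())
print(shortages.head())
# Next step: cross-reference the flagged senders against adjustments to the short SKUs.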

Data mining has roots in statistics, machine learning, data management and databases, pattern recognition, and artificial intelligence. All of these are concerned with certain aspects of data analysis, so they have much in common; yet they each have a distinct and individual flavor, emphasizing particular problems and types of solutions.

Although data mining technologies provide key advantages to marketing and business activities, they can also be used to analyze financial data previously hidden within a company’s database, enabling fraud examiners to detect potential fraud.

Data mining software provides an easy-to-use process that gives the fraud examiner the ability to get to data at a required level of detail. Data mining combines several different techniques essential to detecting fraud, including the streamlining of raw data into understandable patterns.

Data mining can also help prevent fraud before it happens. For example, computer manufacturers report that some of their customers use data mining tools and applications to develop anti-fraud models that score transactions in real-time. The scoring is customized for each business, involving factors such as locale and frequency of the order, and payment history, among others. Once a transaction is assigned a high-risk score, the merchant can decide whether to accept the transaction, deny it, or investigate further.
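The scoring logic itself can be quite simple. The Python sketch below is an illustrative assumption of how such a model might weigh the factors mentioned (locale, order frequency, payment history); it is not any vendor’s actual model, and the weights and thresholds are invented for the example.

def score_transaction(order):
    score = 0
    if order["ship_country"] != order["billing_country"]:
        score += 40                         # locale mismatch
    if order["orders_last_24h"] > 3:
        score += 30                         # unusual order frequency
    if order["prior_chargebacks"] > 0:
        score += 30                         # poor payment history
    return score

def decide(order, deny_at=70, review_at=40):
    s = score_transaction(order)
    if s >= deny_at:
        return "deny"
    if s >= review_at:
        return "investigate"
    return "accept"

print(decide({"ship_country": "US", "billing_country": "CA",
              "orders_last_24h": 5, "prior_chargebacks": 0}))  # -> "deny" (score 70)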

Often, companies use data warehouses to manage data for analysis. Data warehouses are repositories of a company’s electronic data designed to facilitate reporting and analysis. By storing data in a data warehouse, data users can query and analyze relevant data stored in a single location. Thus, a company with a data warehouse can perform various types of analytic operations (e.g., identifying red flags, transaction trends, patterns, or anomalies) to assist management with its decision making responsibilities.
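To illustrate the single-location idea, the sketch below runs one analytic query against a repository; Python’s built-in sqlite3 module stands in for a real warehouse here, and the table and column names are hypothetical.

import sqlite3

conn = sqlite3.connect("company_warehouse.db")   # hypothetical warehouse extract
cur = conn.cursor()

# One pass over centralized payment data: invoices paid more than once to the
# same vendor, a common red flag for duplicate or fictitious billing.
cur.execute("""
    SELECT vendor_id, invoice_no, COUNT(*) AS times_paid, SUM(amount) AS total_paid
    FROM payments
    GROUP BY vendor_id, invoice_no
    HAVING COUNT(*) > 1
    ORDER BY total_paid DESC
""")
for row in cur.fetchall():
    print(row)

conn.close()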

In conclusion, after the fraud examiner has identified the data sources, s/he should identify how the information is stored by reviewing the database schema and technical documentation. Fraud examiners must be ready to face a number of pitfalls when attempting to identify how information is stored, from weak or nonexistent documentation to limited collaboration from the IT department.

Moreover, once collected, it’s critical to ensure that the data is complete and appropriate for the analysis to be performed. Depending on how the data was collected and processed, it could require some manual work to make it usable for analysis purposes; it might be necessary to modify certain field formats (e.g., date, time, or currency) to make the information usable.
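As a final illustration of that clean-up step, here is a minimal sketch assuming Python with pandas; the file name, column names, and the “$1,234.56” currency format are assumptions about the extracted data, not a general recipe.

import pandas as pd

df = pd.read_csv("extracted_ledger.csv")   # hypothetical extract from the client system

# Normalize date fields so period filters and aging calculations behave correctly.
df["posting_date"] = pd.to_datetime(df["posting_date"], errors="coerce")

# Strip currency symbols and thousands separators so amounts become numeric.
df["amount"] = (df["amount"].astype(str)
                            .str.replace(r"[$,]", "", regex=True)
                            .astype(float))

# Completeness check before analysis: row counts and values that failed to parse.
print(len(df), "rows;", df["posting_date"].isna().sum(), "dates failed to parse")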