Category Archives: Anti-Fraud Policy

The Know It All

As fraud examiners intimately concerned with the ongoing health of fraud management and response systems, we find ourselves constantly looking at the integrity of the data that's truly the lifeblood of today's client organizations. We're constantly evaluating the network of anti-fraud controls we hope will keep those pesky, uncontrolled, random data vulnerabilities to a minimum. Every piece of critical information that gets mishandled or falls through the cracks, every transaction that doesn't get recorded, every anti-fraud policy or procedure that's misapplied has some effect on the client's overall fraud management picture.

When it comes to managing its client, financial and payment data, almost every organization has a Pauline.  Pauline's the person everyone goes to for the answers about data, and about the state of the system(s) that process it, that no one else in her unit ever seems to have.  That's because Pauline is an exceptional employee with years of detailed hands-on experience in daily financial system operations and maintenance.  Pauline is also an example of the extraordinary level of dependence that many organizations have today on a small handful of their key employees.  The Great Recession, during which enterprises relied on retaining the experienced employees they had rather than on traditional hiring and cross-training practices, only exacerbated a still-existing, ever-growing trend.  The very real threat that the Paulines of the corporate data world pose to the fraud management system is not so much that they will commit fraud themselves (although that's an ever-present possibility) but that they will retire or take another job out of state, carrying their vital knowledge of the company's systems and data with them.

The day after Pauline's retirement party, and to an increasing degree thereafter, it will dawn on Pauline's unit management that it has lost a large amount of valuable information about the true state of its data and financial processing system(s), and that a large amount of system-critical data documentation has been carried around nowhere but in Pauline's head.  The point is that, for some organizations, reliance on a few key employees for day-to-day, operationally related information about their data goes well beyond what's appropriate and constitutes an unacceptable level of risk to their fraud prevention system.  Today's newspapers and the internet are full of stories about data breaches, only reinforcing the importance of vulnerable data and of its documentation to the ongoing operational viability of our client organizations.

Anyone who's investigated frauds involving large-scale financial systems (insurance claims, bank records, client payment information) is painfully aware that when the composition of data changes (field definitions or content), surprisingly little of that change-related information is ever formally documented.  Most of the information is stored in the heads of a few key employees, and those key employees aren't necessarily the ones involved in everyday, routine data management projects.  There's always a significant level of detail that goes undocumented, left out or left to chance, and it falls to the analyst of the data (be s/he an auditor, a management scientist, a fraud examiner or other assurance professional) to find the anomalies and question them.  The anomalies might take the form of missing data, changes in data field definitions, or changes in the content of the fields; the possibilities are endless.  Without proper, formal documentation, the immediate or future significance of these types of anomalies for the fraud management systems, and for the overall fraud risk assessment process itself, becomes almost impossible to determine.

If our auditor or fraud examiner, operating under today's typical budget or time constraints, is not very thorough and misses some of these anomalies, they can end up never being addressed.  How many times as an analyst have you tried to explain something about the financial system that just doesn't look right (like apparently duplicate transactions) only to be told, "Oh, yeah.  Pauline made that change back in February before she retired; we don't have too many details on it."  In other words, undocumented changes to transactions and data, the details of which now exist only in Pauline's head.  When a data-driven system is built on incomplete information, the system can be said to have failed in its role as a component of overall fraud management.  The cycle of incomplete information gets propagated to future decisions, and the cost of the missing or inadequately explained data can be high.  What can't be seen can't be managed or even explained.
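When faced with those "apparently duplicate transactions," a simple analytic can at least surface the candidates for questioning. Below is a minimal sketch, assuming transaction records have been exported as a list of dictionaries; the field names (txn_id, payee, amount, txn_date) are hypothetical and would need to be mapped to the client's actual extract.

```python
# Minimal sketch: flag apparently duplicate transactions for follow-up.
# Field names are hypothetical; adapt them to the client's actual export format.
from collections import defaultdict

def flag_apparent_duplicates(transactions):
    """Group transactions by payee, amount, and date; return groups of size > 1."""
    groups = defaultdict(list)
    for txn in transactions:
        key = (txn["payee"], round(txn["amount"], 2), txn["txn_date"])
        groups[key].append(txn["txn_id"])
    return {key: ids for key, ids in groups.items() if len(ids) > 1}

sample = [
    {"txn_id": 101, "payee": "Acme Supply", "amount": 1250.00, "txn_date": "2024-02-11"},
    {"txn_id": 102, "payee": "Acme Supply", "amount": 1250.00, "txn_date": "2024-02-11"},
    {"txn_id": 103, "payee": "Beta Services", "amount": 310.75, "txn_date": "2024-02-12"},
]

for key, ids in flag_apparent_duplicates(sample).items():
    print(f"Possible duplicates {ids}: payee={key[0]}, amount={key[1]}, date={key[2]}")
```

Every hit still has to be explained; the point is simply that the explanation gets asked for while someone who remembers the change is still around to give it.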

It's truly humbling for any practitioner to experience how much critical financial information resides in the fading (or absent) memories of past or present key employees.  As fraud examiners we should attempt to foster a culture among our clients supportive of the development of concurrent, transaction-related documentation and the sharing of knowledge on a consistent basis for all systems, but especially in matters involving changes to critical financial systems.  One nice benefit of this approach, which I brought to the attention of one of my clients not too long ago, would be to free up the time of one of these key employees to work on more productive fraud control projects rather than constantly serving as the encyclopedia for the rest of the operational staff.

The Critical Twenty Percent

According to the Pareto Principle, for many phenomena, 80 percent of the consequences stem from 20 percent of the causes. Application of the principle to fraud prevention efforts, particularly those related to automated systems, seems increasingly apropos given the deluge of intrusions, data thefts, worms and other attacks which continue unabated, with organizations of all kinds losing productivity, revenue and customers every month. ACFE members report having asked the IT managers of numerous victimized organizations over the years what measures their organization took, prior to the fraud they experienced, to secure their networks, systems, applications and data, and the answer has typically involved a combination of traditional perimeter protection solutions (such as firewalls, intrusion detection, antivirus and antispyware) together with patch management, business continuance strategies, and access control methods and policies. As much sense as these traditional steps make at first glance, they clearly aren't proving sufficiently effective in preventing or even containing many of today's most sophisticated attacks.

The ACFE has determined not only that some organizations are vastly better than the rest of their industries at preventing and responding to cyber-attacks, but also that the difference between these and other organizations' effectiveness boils down to just a few foundational controls. And the most significant of these foundational controls are not rooted in standard forms of access control but, surprisingly, in monitoring and managing change. It turns out that for the best performing organizations there are six important control categories: access, change, resolution, configuration, version release and service levels. Each category has corresponding audit, operations and security performance measures, including security effectiveness, audit compliance, disruption levels, IT user satisfaction and unplanned work. By analyzing relationships between control objectives and corresponding performance indicators, numerous researchers have been able to differentiate which controls are actually most effective for consistently predictable service delivery, as well as for preventing and responding to security incidents and fraud-related exploits.

Of the twenty-one most important foundational controls used by the organizations most effective at controlling intrusions, two were used by virtually all of them. Both revolve around change management:

• Are systems monitored for unauthorized changes in real time?
• Are there defined consequences for intentional unauthorized changes?

These controls are supplemented by 1) a formal process for IT configuration management; 2) an automated process for configuration management; 3) a process to track change success rates (the percentage of changes that succeed without causing an incident, service outage or impairment); and 4) a process that provides relevant personnel with correct and accurate information on all current IT infrastructure configurations. Researchers found that these top six controls help organizations manage risks and respond to security incidents by giving them the means to look forward, averting the riskiest changes before they happen, and to look backward, identifying definitively the source of outages, fraud-associated abnormalities or service issues. Because they have a process that tracks and records all changes to their infrastructure and the associated success rates, the most effective organizations have a more informed understanding of their production environments and can rule out change as a cause very early in the incident response process. This means they can easily find the changes that caused the abnormal incident and remediate them quickly.
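As an illustration of what "tracking change success rates" can mean in practice, here is a minimal sketch, assuming the organization keeps a change log recording, for each change, whether it was authorized and whether it caused an incident; the field names and sample entries are hypothetical.

```python
# Minimal sketch of tracking change success rates from a change log in which each
# entry records whether the change was authorized and whether it caused an
# incident, outage, or impairment. Field names and entries are hypothetical.

def change_success_rate(change_log):
    """Return the percentage of changes that succeeded without causing an incident."""
    if not change_log:
        return None
    successful = sum(1 for c in change_log if not c["caused_incident"])
    return 100.0 * successful / len(change_log)

def unauthorized_changes(change_log):
    """Return changes made outside the formal change management process."""
    return [c for c in change_log if not c["authorized"]]

change_log = [
    {"change_id": "CHG-001", "authorized": True,  "caused_incident": False},
    {"change_id": "CHG-002", "authorized": True,  "caused_incident": True},
    {"change_id": "CHG-003", "authorized": False, "caused_incident": False},
]

print(f"Change success rate: {change_success_rate(change_log):.1f}%")
print("Unauthorized changes:", [c["change_id"] for c in unauthorized_changes(change_log)])
```

Trending these two numbers period over period is one simple way an examiner can see whether the change management culture described below actually holds.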

The organizations that are most successful in preventing and responding to fraud-related security incidents are those that have mastered change management, thereby documenting and knowing the 'normal' state of their systems in the greatest possible detail. The organization must cultivate a culture of change management and causality throughout, with zero tolerance for unauthorized changes. As with any organizational culture, the culture of change management should start at the top, with leaders establishing a tone that all change, from the highest to the lowest levels of the organization, must follow an explicit change management policy and process. These same executives should establish concrete, well-publicized consequences for violating change management procedures, backed by a clear, written change management policy. One component of an effective change management policy is the establishment of a governing body, such as a change advisory board, that reviews and evaluates all changes for risk before approving them. This board reinforces the written policy, requiring mandatory testing for each and every change and an explicit rollback plan for each in the case of an unexpected result.

ACFE studies stress that post incident reviews are also crucial, so that the organization protects itself from repeating past mistakes. During these reviews, change owners should document their findings and work to integrate lessons learned into future anti-fraud operational practices.
Perhaps most important for responding to changes is having clear visibility into all change activities, not just those that are authorized. Automated controls that can maintain a change history reduce the risk of human error in managing and controlling the overall process.

So organizations that focus solely on access and reactive resolution controls, at the expense of real-time change management process controls, are almost guaranteed in today's environment to experience more security incidents, more damage from security incidents, and dramatically longer and less effective resolution times. On the other hand, organizations that foster a culture of disciplined change management and causality, with full support from senior management and zero tolerance for unauthorized change and abnormalities, will have a superior security posture with fewer incidents, dramatically less damage to the business from security breaches and much faster identification and resolution of incidents when they happen.

In conducting a cyber-fraud post-mortem, CFEs and other assurance professionals should not fail to focus on strengthening controls related to reducing 1) the amount of overall time the IT department devotes to unplanned work; 2) the volume of emergency system changes; and 3) the number and nature of failed system changes. All of these are red flags for cyber fraud risk and indicative of a low level of real-time system knowledge on the part of the client organization.
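Those three red flags lend themselves to simple ratios that can be recomputed each reporting period. Below is a minimal sketch, assuming the IT department can supply total hours worked, unplanned-work hours, and counts of total, emergency and failed changes for the period; every figure shown is hypothetical.

```python
# Minimal sketch of the three post-mortem red-flag ratios named above.
# All inputs and sample figures are hypothetical illustrations.

def cyber_fraud_red_flags(total_hours, unplanned_hours, total_changes,
                          emergency_changes, failed_changes):
    return {
        "unplanned_work_pct": 100.0 * unplanned_hours / total_hours,
        "emergency_change_pct": 100.0 * emergency_changes / total_changes,
        "failed_change_pct": 100.0 * failed_changes / total_changes,
    }

flags = cyber_fraud_red_flags(total_hours=8000, unplanned_hours=2400,
                              total_changes=150, emergency_changes=40,
                              failed_changes=22)
for name, pct in flags.items():
    print(f"{name}: {pct:.1f}%")
```

None of these percentages proves anything by itself, but a client whose numbers climb quarter after quarter is telling the examiner something about how well it really knows its own systems.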

Cash In – Cash Out

One of our associate Chapter members has become involved in her first fraud investigation just months after graduating from university and joining her first employer. She’s working for a restaurant management consulting practice and the investigation involves cash theft targeting the cash registers of one of the firm’s smaller clients. Needless to say, we had a lively discussion!

There are basically two ways a fraudster can steal cash from his or her employer. One is to trick the organization into making a payment for a fraudulent purpose. For instance, a fraudster might produce an invoice from a nonexistent company or submit a timecard claiming hours that s/he didn’t really work. Based on the false information that the fraudster provides, the organization issues a payment, e.g., by sending a check to the bogus company or by issuing an inflated paycheck to the employee. These schemes are known as fraudulent disbursements of cash. In a fraudulent disbursement scheme, the organization willingly issues a payment because it thinks that the payment is for a legitimate purpose. The key to the success of these types of schemes is to convince the organization that money is owed.

The second way (as in our member's restaurant case) to misappropriate cash is to physically remove it from the organization through a method other than the normal disbursement process. An employee takes cash out of his cash register, puts it in his pocket, and walks out the door. Or s/he might simply remove a portion of the cash from the bank deposit on the way to the bank. This type of misappropriation is what is referred to as a cash theft scheme. These schemes reflect what most people think of when they hear the term "theft": a person simply grabs the money and sneaks away with it.

What are commonly termed cash theft schemes divide into two categories, skimming and larceny. Whether a theft is skimming or larceny depends entirely on when the cash is stolen, a distinction confusing to our associate member. Cash larceny is the theft of money that has already appeared on a victim organization's books, while skimming is the theft of cash that has not yet been recorded in the accounting system. The way an employee extracts the cash may be exactly the same for a cash larceny or a skimming scheme. Because the money is stolen before it appears on the books, skimming is known as an "off-book" fraud. The absence of any recorded entry for the missing money also means there is no direct audit trail left by a skimming scheme. Because the funds are stolen before they are recorded, the organization may not be "aware" that the cash was ever received. Consequently, it may be very difficult to detect that the money has been stolen.

The basic structure of a skimming scheme is simple: Employee receives payment from a customer, employee pockets payment, employee does not record the payment. There are a number of variations on the basic plot, however, depending on the position of the perpetrator, the type of company that is victimized, and the type of payment that is skimmed. In addition, variations can occur depending on whether the employee skims sales or receivables (this post is only about sales).

Most skimming, particularly in the retail sector, occurs at the cash register, the spot where revenue enters the organization. When the customer purchases merchandise, he or she pays a cashier and leaves the store with whatever s/he purchased, e.g., a shirt, a meal, etc. Instead of placing the money in the cash register, the employee simply puts it in his or her pocket without ever recording the sale. The process is made much easier when employees at cash collection points are left unsupervised, as is the case in many small restaurants. A common technique is to ring a "no sale" or some other non-cash transaction on the employee's register. The false transaction is entered on the register so that it appears that the employee is recording the sale. If a manager is nearby, it will look like the employee is following correct cash receipting procedures, when in fact the employee is stealing the customer's payment. Another way employees sometimes skim unrecorded sales is by conducting sales during nonbusiness hours. For instance, many employees have been caught selling company merchandise on weekends or after hours without the knowledge of the owners. In one case, a manager opened his store two hours early every day and ran it business-as-usual, pocketing all sales made during the "unofficial" store hours. As the real opening time approached, he would destroy all records of the off-hours transactions and start the day from scratch.
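Register logs themselves can help surface the "no sale" technique. The sketch below is a hypothetical illustration rather than a prescribed procedure: it flags cashiers whose rate of no-sale rings is well above the store-wide rate, with the log field names and the 2x threshold both assumptions to be tuned to the client's point-of-sale system.

```python
# Minimal sketch: flag cashiers with an unusually high rate of "no sale" rings
# relative to their peers. Field names, transaction labels, and the 2x threshold
# are hypothetical.
from collections import Counter

def no_sale_outliers(register_log, threshold=2.0):
    """Return cashiers whose no-sale rate exceeds threshold times the store-wide rate."""
    rings, no_sales = Counter(), Counter()
    for entry in register_log:
        rings[entry["cashier"]] += 1
        if entry["txn_type"] == "NO SALE":
            no_sales[entry["cashier"]] += 1
    total = sum(rings.values())
    overall = sum(no_sales.values()) / total if total else 0.0
    return {c: round(no_sales[c] / rings[c], 3) for c in rings
            if overall > 0 and no_sales[c] / rings[c] > threshold * overall}

log = (
    [{"cashier": "A", "txn_type": "NO SALE"}] * 2 +
    [{"cashier": "A", "txn_type": "SALE"}] * 2 +
    [{"cashier": "B", "txn_type": "SALE"}] * 6
)
print(no_sale_outliers(log))  # {'A': 0.5} -- cashier A rings "no sale" far more than the store norm
```

A high no-sale rate is not proof of skimming, of course; it is simply a question the manager should be asking of a specific employee rather than of the till in general.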

Although sales skimming does not directly affect the books, it can show up on a company’s records in indirect ways, usually as inventory shrinkage; this is how the skimming thefts were detected at our member’s client. The bottom line is that unless skimming is being conducted on a very large scale, it is usually easier for the fraudster to ignore the shrinkage problem. From a practical standpoint, a few missing pieces of inventory are not usually going to trigger a fraud investigation. However, if a skimming scheme is large enough, it can have a marked effect on a small business’ inventory, especially in a restaurant where profit margins are always tight and a few bad sales months can put the concern out of business. Small business owners should conduct regular inventory counts and make sure that all shortages are promptly investigated and accounted for.
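A periodic book-to-physical comparison is the simplest way to make that shrinkage visible. Here is a minimal sketch, assuming per-item records of beginning inventory, purchases, recorded unit sales and the physical count; the item names and the 2 percent tolerance are hypothetical.

```python
# Minimal sketch of a periodic shrinkage check: book (expected) inventory versus
# the physical count. Item names, quantities, and the 2% tolerance are hypothetical.

def shrinkage_report(items, tolerance=0.02):
    """Flag items whose physical count falls short of the book (expected) count."""
    flagged = {}
    for name, rec in items.items():
        expected = rec["beginning"] + rec["purchased"] - rec["sold"]
        shortage = expected - rec["counted"]
        if expected > 0 and shortage / expected > tolerance:
            flagged[name] = {"expected": expected, "counted": rec["counted"],
                             "shortage": shortage}
    return flagged

items = {
    "steak dinner portions": {"beginning": 120, "purchased": 300, "sold": 350, "counted": 55},
    "house wine (bottles)":  {"beginning": 40,  "purchased": 60,  "sold": 70,  "counted": 30},
}
print(shrinkage_report(items))  # only the steak portions show a shortage worth investigating
```

In a restaurant, "sold" would typically be derived from the recipe quantities behind each menu item rung on the register, which is exactly why unrecorded sales show up as food that went out the door with no sale to explain it.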

Any serious attempt to deter and detect cash theft must begin with observation of employees.
Skimming and cash larceny almost always involve some form of physical misappropriation of cash or checks; the perpetrator actually handles, conceals, and removes money from the company. Because the perpetrator will have to get a hold of funds and actually carry them away from the company’s premises, it is crucial for management to be able to observe employees who handle incoming cash.

Matching SOCS

I was chatting with the soon-to-be-retired information systems director of a major Richmond insurance company several nights ago at the gym. Our friendship goes back many years to when we were both audit directors for the Virginia State Auditor of Public Accounts. My friend was commenting, among other things, on the confusing flood of regulatory changes that’s swept over his industry in recent years relating to Service Organization Controls (SOC) reports. Since SOC reports can be important tools for fraud examiners, I thought they might be an interesting topic for a post.

Briefly, SOC reports are a group of internal control assurance reports, performed by independent reviewers, of IT organizations providing a range of computer based operational services, usually to multiple client corporations. The core idea of a SOC report is to have one or a series of reviews conducted of the internal controls related to financial reporting of the service organization and to then make versions of these reports available to the independent auditors of all the service organization’s user clients; in this way the service organization doesn’t have to be separately and repeatedly audited by the auditors of each of its separate clients, thereby avoiding much duplication of effort and expense on all sides.

In 2009 the International Auditing and Assurance Standards Board (IAASB) issued a new International Standard on Assurance Engagements, ISAE 3402, 'Assurance Reports on Controls at a Service Organization'. The AICPA followed shortly thereafter with a revision of its own Statement on Auditing Standards (SAS) No. 70 guidance on the performance of third-party service organization reports, releasing Statement on Standards for Attestation Engagements (SSAE) 16, 'Reporting on Controls at a Service Organization'. So how does the SOC process work?

My friend's insurance company (let's call it Richmond Mutual) outsources (along with a number of companion companies) its claims processing functions to Fiscal Agent, Ltd. Richmond Mutual is the user organization and Fiscal Agent, Ltd. is the service organization. To ensure that all claims are processed and that adequate internal controls are in place and functioning at the service organization, Richmond Mutual could appoint an independent CPA or service auditor to examine and report on the service organization's controls. In the case of Richmond Mutual, however, the service organization itself, Fiscal Agent, Ltd., obtains the SOC report by appointing an independent service auditor to perform the audit and provide it with a SOC 1 report. A SOC 1 report provides assurance on the business processes that support internal controls over financial reporting and is, consequently, of interest to fraud examiners as, for example, an element to consider in structuring the fraud risk assessment. This report can then be shared with user organizations like Richmond Mutual and with their auditors as deemed necessary. The AICPA also provides for two other SOC reports: SOC 2 and SOC 3. The SOC 2 and SOC 3 reports are used for reporting on controls other than the internal controls over financial reporting. One of the key differences between SOC 2 and SOC 3 reports is that a SOC 3 is a general use report that can be provided to anyone, while SOC 2 reports are only for those users specifically identified in the report; in other words, their distribution is limited.

SOC reports are valuable to their many users for a whole host of obvious reasons, but fraud examiners and other assurance professionals need to keep in mind some common misconceptions about them (some shared, I found, by my IT friend). SOC reports are not general-purpose assurances of service organization quality. IAASB and AICPA guidelines specify that SOC reports are to be of limited distribution, to be used by the service organization, user organizations and user auditors only, and thus should never be used for any other service organization purpose; never, for example, as marketing or advertising tools to assure potential clients of service organization quality.

SOC 1 reports are used only for reporting on service organization internal controls over financial reporting; in cases where a user or a service organization wants to assess such areas as data privacy or confidentiality, they need to arrange for the performance of a SOC 2 and/or SOC 3 report.

It's also a common mistake to assume that the SOC report is sufficient verification of internal controls and that no controls on the user organization side need to be assessed by the auditors; the guidelines are clear that while verifying controls at the service organization, controls at the user organization should also be verified. Since the service organization provides considerable information as background for the service auditor's review, service organizations are often under the mistaken impression that the accuracy of this background information will not be evaluated by the SOC reviewer. The guidelines specify that SOC auditors should carefully verify the quality and accuracy of the information provided by the service organization under the "information provided by the service organization" section of their audit program.

In summary, the purpose of SOC 1 reports is to provide assurance on the processes that support internal controls over financial reporting. Fraud examiners and other users should take the time to understand the varied purpose(s) of the three types of SOC reports so they can use them intelligently. These reports can be extremely useful to fraud examiners assessing the enterprise fraud risk prevention programs of user organizations, helping them understand the controls that impact financial operations and related IT controls, especially in multiple-service-provider scenarios.

Detect and Prevent

I got a call last week from a long-term colleague, one of whose smaller client firms recently discovered a long-running, key-employee-initiated fraud. My friend has been asked to assist her client in developing approaches to strengthen controls to, hopefully, prevent such disasters in the future.

ACFE training has consistently told us over the years, and daily experience repeatedly confirmed, that it is simply not possible or economical to stop all fraud before it happens. The only way for a retail concern to absolutely stop shoplifting might be to close and accept orders only over the Internet. Similarly, the only way for a bank to absolutely stop all loan fraud might be for it to stop lending money.

In general, my friend and I agreed during our conversation that increasing preventive security can reduce fraud losses, but beyond some point the cost of additional preventive security will exceed the related savings from reduced fraud losses. This is where detection comes in; it may be economical when prevention is not. One way to prevent a salesclerk from stealing from the register would be for the security department to carefully monitor, review, and approve every one of the clerk's sales. However, it would likely be much more cost effective instead to implement a simple detective control: an end-of-shift reconciliation between the cash in the register and the transactions logged by the cash register during the clerk's shift. If refunds are not given at the point of sale, the end-of-shift balance of cash in the register should equal the balance of cash in the register at the beginning of the shift plus the shift's cash sales per the transaction logs. Any significant failure of these numbers to reconcile would amount to a red flag. Of course, further investigation could show that the clerk simply made an error and so did not commit fraud.
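That reconciliation reduces to one line of arithmetic. A minimal sketch follows, assuming no point-of-sale refunds; the $5 tolerance for ordinary till errors is a hypothetical parameter, not a standard.

```python
# Minimal sketch of the end-of-shift reconciliation described above, assuming no
# point-of-sale refunds: ending cash should equal beginning cash plus logged cash
# sales. The $5 tolerance for ordinary till errors is a hypothetical parameter.

def reconcile_register(beginning_cash, logged_cash_sales, ending_cash, tolerance=5.00):
    """Return the cash shortage (positive) or overage (negative) and a red-flag indicator."""
    expected_ending = beginning_cash + logged_cash_sales
    discrepancy = expected_ending - ending_cash   # positive = cash missing
    return {"expected_ending": round(expected_ending, 2),
            "discrepancy": round(discrepancy, 2),
            "red_flag": abs(discrepancy) > tolerance}

print(reconcile_register(beginning_cash=200.00,
                         logged_cash_sales=1432.50,
                         ending_cash=1584.25))
# expected 1632.50, discrepancy 48.25 -> a red flag worth a closer look
```

The detective control costs almost nothing to run each shift; what costs money is the follow-up, which is exactly the trade-off discussed next.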

But the cost effectiveness of detective controls, like that of preventive controls, has its limits. First, such controls are not cost free to implement, and improving detective controls may cost more than the benefit they provide. Second, detective controls produce both false positives and false negatives. A false positive occurs when a detective control signals a possible fraud for which investigation turns up a reasonable explanation. A false negative occurs when a detective control fails to signal a possible fraud when one exists. Reducing false negatives means increasing the fraud detection rate.

Similarly, the cost effectiveness of increasing preventive security has a limit as does the benefit of increasing the fraud detection rate. To increase the detection rate, it’s necessary to increase the frequency at which the detective control signals possible fraud. The result is more expensive investigations, and the cost of such additional investigations can exceed the resulting reduction in fraud losses.

As we all learned in undergraduate auditing, controls are essentially policies and procedures designed to minimize losses due to fraud or to other events such as errors or acts of nature. Corrective controls are simply a special type of control brought into play once a loss is known to exist. With respect to fraud, an important corrective control is the investigation of potential frauds and the recovery process following discovered frauds.

More generally speaking, fraud investigations themselves serve not only a corrective function but also detective and preventive functions. Such investigations are detective of fraud to the extent that they follow up on fraud signals or red flags in order to confirm or disconfirm the presence of fraud. But once fraud is confirmed to exist, fraud examinations shift toward gathering evidence and become corrective by assisting in recovery from the perpetrator and other sources such as from insurance. Fraud investigations are also corrective in that they can lead to the revelation and repair of heretofore unknown weaknesses.

The end result is that the fraud investigation functions to correct the original loss, and the related discovery of the fraud scenario leads to prevention of similar losses in the future. In summary, the fraud examination has served to detect, correct, and prevent fraud. However, fraud investigations are not normally thought of as detective controls. This is so because fraud investigations tend to be much more costly than standard detective controls and therefore are normally used only when there is already some predication in the form of a fraud indicator triggered by a typical detective control. Therefore, the primary functions of fraud investigations are to address existing frauds and help to prevent future ones.

In some cases, the primary benefit of a fraud investigation might be to prevent future frauds. Even when recovery is impossible or impractical (e.g., because the thief has no assets), unwinding the fraud scheme may still have the benefit of leading to the prevention of the same scheme in the future. Furthermore, a company might benefit from spending a very large sum of money to investigate and prosecute a very small theft in order to deter other individuals from defrauding the company in the same way. Many State governments have statutes specifying that every fraud affecting governmental assets, whether large or small, must be fully investigated because taxpayer funds are involved (the assets affected are public property).

There is never a guarantee that investigating a fraud indicator will lead to the discovery of fraud. Depending on the situation, an investigation might lead to nothing at all (i.e., produce a reasonable explanation for the original red flag) or to the discovery of losses due to simple errors, waste, inefficiencies, or even uncontrollable events like acts of nature. If a lender is considering a loan application, a fraud indicator might indicate nothing, fraud, or an error. On the other hand, in regard to the possible theft of raw materials in a production process, a fraud indicator just might indicate undocumented waste or scrap.

Two important factors to consider concerning the general design of a fraud detection process are not only the costs and benefits of detecting, correcting, and preventing a given fraud scenario but also the costs and benefits of detecting, correcting, and preventing errors, waste, uncontrollable events, and inefficiencies in general. Of course, the particular costs that are relevant will vary from one type of business process to another.

As a general rule, we can say that both preventive controls and detective controls cost less than corrective controls. Corrective controls tend to involve hands-on, resource-intensive investigations, and in many cases, such investigations do not result in recovering the loss. On the other hand, preventive controls can also be quite costly. Banks pay armed guards and incur costs to maintain expensive vaults and alarm systems. Companies surround their headquarters with high fences and armed guards, and use security checkpoints and biometric key card systems inside. On the information technology side, firms use sophisticated firewalls and multi-layer access controls. The costs of all these preventive measures can add up to staggering sums in large companies. Of course, losses that are not prevented or corrected in a timely fashion can lead to the ultimate corrective measure: bankruptcy. In fact, some ACFE estimates show that about one-third of all business failures relate to some form of fraudulent activity.

One positive aspect of the cost profile of preventive controls is that, unlike detective controls, they do not generate fraud indicators that lead to costly investigations. In fact, they tend to do their job in complete silence, so that management never even knows when they prevent a fraud. The thick door of a bank vault with a time lock prevents bank employees from entering the building at night to steal its contents. Similarly, passwords, PINs, and biometric data silently provide access to authorized individuals and prevent access by others.

The problem with preventive controls is that they are always subject to circumvention by determined and cunning fraudsters. There is no perfect solution to preventing acts of fraud, so detection is necessary as a secondary line of defense, and in some cases, as the primary line of defense. Consider a lending company that accepts online loan applications. It may be difficult or impossible to prevent fraudulent applications, but the company can certainly put a sophisticated (and expensive) system in place to analyze applications and provide indicators that suggest when an application may be fraudulent.

In general, the optimal allocation of resources to prevention versus detection depends on the particular business process under consideration, so there is no general rule that dictates the optimal split between the two. But there are some general steps that can assist in making the allocation (a rough sketch of the cost comparison described in steps 5 through 8 follows the list):

1. Analyze the target business process and identify threats and vulnerabilities.
2. Select reasonable preventive controls according to the business process and customs within the client’s industry.
3. Estimate fraud losses given the assumed preventive controls.
4. Identify and add a basic set of detective controls to the system.
5. For a given set of detective controls, identify the optimal mix of false negatives versus false positives. The optimal mix depends on the costs of investigations versus the costs of losses. Large losses and small investigation costs favor relatively low false negatives and high false positives for red flags.
6. Given the assumed mix of false negative and false positive errors, estimate the incremental cost associated with adding the detective (and related corrective) controls, and estimate the resulting reduction in fraud losses.
7. Compare the reduction in fraud losses with the increase in costs associated with adding the optimal mix of detection and correction controls.
8. If the increase in costs is significantly lower than the related reduction in fraud losses, consider adding more detective controls. Otherwise, accept the set of detective controls under consideration.
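To make steps 5 through 8 concrete, here is a minimal sketch of the comparison, assuming the examiner can estimate expected fraud attempts, average loss per undetected fraud, the detection and false-positive behavior of the proposed controls, investigation cost per alert, and the running cost of the controls; every figure shown is hypothetical.

```python
# Minimal sketch of the step 5-8 comparison: reduction in fraud losses versus the
# incremental cost of detective (and related corrective) controls. All inputs and
# sample figures are hypothetical estimates the examiner would supply.

def detective_control_tradeoff(fraud_attempts, avg_loss,
                               detection_rate, false_positives_per_period,
                               cost_per_investigation, control_running_cost):
    frauds_detected = fraud_attempts * detection_rate
    loss_reduction = frauds_detected * avg_loss
    investigations = frauds_detected + false_positives_per_period
    added_cost = control_running_cost + investigations * cost_per_investigation
    return {"loss_reduction": loss_reduction,
            "added_cost": added_cost,
            "worth_adding_more": added_cost < loss_reduction}

print(detective_control_tradeoff(fraud_attempts=25, avg_loss=12000,
                                 detection_rate=0.6, false_positives_per_period=90,
                                 cost_per_investigation=400, control_running_cost=30000))
# loss_reduction 180000 vs added_cost 72000 -> more detection is still worth considering
```

Raising the detection rate in this model also raises the number of alerts and therefore the investigation bill, which is exactly why the optimal mix of false positives and false negatives in step 5 depends on the relative size of losses and investigation costs.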

Cloud Shapes

Just as clouds can take different shapes and be perceived differently, so too is cloud computing perceived differently by our various types of client companies. To some, the cloud looks like web-based applications, a revival of the old thin client. To others, the cloud looks like utility computing, a grid that charges metered rates for processing time. To some, the cloud could be parallel computing, designed to scale complex processes for improved efficiency. Interestingly, cloud services are wildly different. Amazon’s Elastic Compute Cloud offers full Linux machines with root access and the opportunity to run whatever apps the user chooses. Google’s App Engine will also let users run any program they want, as long as the user specifies it in a limited version of Python and uses Google’s database.

The National Institute of Standards and Technology (NIST) defines cloud computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. It is also important to remember what our ACFE tells us, that the Internet itself is in fact a primitive transport cloud. Users place something on the path with an expectation that it will get to the proper destination, in a reasonable time, with all parties respecting the privacy and security of the artifact.

Cloud computing, as everyone now knows, brings many advantages to users and vendors. One of its biggest advantages is that a user may no longer have to be tethered to a traditional computer to use an application, or have to buy a version of an application that is specifically configured for a phone, a tablet or other device. Today, any device that can access the Internet can run a cloud-based application. Application services are available independent of the user’s home or office devices and network interfaces. Regardless of the device being used, users also face fewer maintenance issues. End users don’t have to worry about storage capacity, compatibility or other similar concerns.

From a fraud prevention perspective, these benefits are the result of the distributed nature of the web, which necessitates a clear separation between application and interaction logic. This is because application logic and user data reside mostly on the web cloud and manifest themselves in the form of tangible user interfaces at the point of interaction, e.g., within a web browser or mobile web client. Cloud computing is also beneficial for our clients' vendors. Businesses frequently find themselves using the vast majority of their computing capacity in a small percentage of time, leaving expensive equipment often idle. Cloud computing can act as a utility grid for vendors and optimize the use of their resources. Consider, for example, a web-based application running in Amazon's cloud. Suppose there is a sudden surge in visitors as a result of media coverage. Formerly, many web applications would fail under the load of big traffic spikes. But in the cloud, assuming that the web application has been designed intelligently, additional machine instances can be launched on demand.

With all the benefits come related constraints. Distrust is one of the main constraints on online environments generally, particularly in terms of consumer fraud, waste and abuse protection. Although the elements that contribute to building trust can be identified in broad terms, there are still many uncertainties in defining and establishing trust in online environments. Why should users trust cloud environments to store their personal information and to share their privacy in such a large and segregated environment? This question can be answered only by investigating these uncertainties in the context of risk assessment and by exploring the relationship between trust and the way in which risk is perceived by stakeholders. Users are assumed to be willing to disclose personal information, and to have that information subsequently used to store their personal data or to create consumer profiles for business use, when they perceive that fair procedures are in place to protect their individual privacy.

The changing trust paradigm represented by cloud computing means that less information is stored locally on our client’s machines and is instead being hosted elsewhere on earth. No one for the most part buys software anymore; users just rent it or receive it for free using the Software as a Service (SaaS) business model. On the personal front, cloud computing means Google is storing user’s mail, Instagram their photographs, and Dropbox their documents, not to mention what mobile phones are automatically uploading to the cloud for them. In the corporate world, enterprise customers not only are using Dropbox but also have outsourced primary business functions that would have previously been handled inside the company to SaaS providers such as Salesforce.com, Zoho.com, and Box.com.

From a crime and security perspective, the aggregation of all these data, exabytes and exabytes of it, means that users' most personal information is likely no longer stored solely on their local hard drives but is now aggregated on computer servers around the world. By aggregating important user data, financial and otherwise, on cloud-based computer servers, the cloud has obviated the need for criminals to target everybody's hard drive individually and has instead put all the jewels in a single place for criminals and hackers to target (think Willie Sutton).

The cloud is here to stay, and at this point there is no going back. But with this move to store all available data in the cloud come additional risks. Think of some of the largest hacks to date: Target, Heartland Payment Systems, TJX, and the Sony PlayStation Network; all of these thefts of hundreds of millions of accounts were made possible because the data were stored in the same virtual location. The cloud is equally convenient for individuals, businesses, and criminals.

The virtualization and storage of all of these data is a highly complex process and raises a wide array of security, public policy, and legal issues for all CFEs and for our clients. First, during an investigation, where exactly is this magical cloud storing my defrauded client’s data? Most users have no idea when they check their status on Facebook or upload a photograph to Pinterest where in the real world this information is actually being stored. That they do not even stop to pose the question is a testament to the great convenience, and opacity, of the system. Yet from a corporate governance and fraud prevention risk perspective, whether your client’s data are stored on a computer server in America, Russia, China, or Iceland makes a difference.

ACFE guidance emphasizes that the corporate and individual perimeters that used to protect information internally are disappearing, and the beginning and end of corporate computer networks are becoming far less well defined. That makes it much harder for examiners and auditors to see what data are coming and going from a company, and the task is nearly impossible on the personal front. The transition to the cloud is a game changer for anti-fraud security because it completely redefines where data are stored, moved, and accessed, creating sweeping new opportunities for criminal hackers. Moreover, the non-local storage of data raises important questions about deep dependence on cloud-based information systems. When these services go down or become unavailable, e.g., through a denial of service attack or a lost Internet connection, the data become unavailable, and our client is out of business.

All the major cloud service providers, including Dropbox, Google, and Microsoft, are routinely targeted by remote criminal attacks, and more such attacks occur daily. Although it may be your client's cloud service provider that is targeted in such an attack, the client is the victim, and the data taken are theirs. Of course, the rights reserved to the providers in their terms of service agreements (and accepted by users) usually mean that provider companies bear little or no liability when data breaches occur. These attacks threaten intellectual property, customer data, and even sensitive government information.

To establish trust with end users in the cloud environment, all organizations should address these fraud-related risks. They also need to align their users' perceptions with their policies. Efforts should be made to develop a standardized approach to trust and risk assessment across different domains to reduce the burden on users who seek to better understand and compare policies and practices across cloud provider organizations. This standardized approach will also aid organizations that engage in contractual sharing of consumer information, making it easier to assess risks across organizations and monitor practices for compliance with contracts, policies and law.

During the fraud risk assessment process, CFEs need to advise their individual corporate clients to mandate that any cloud-based activity in which they participate be conducted fairly and address their privacy concerns. By ensuring this fairness and respecting privacy, organizations give their customers the confidence to disclose personal information on the cloud and to allow that information subsequently to be used to create consumer profiles for business use. Thus, organizations that understand the roles of trust and risk should be advised to continuously monitor user perceptions to understand their relation to risk aversion and risk management. Managers should not rely solely on technical control measures. Security researchers have tended to focus on the hard issues of cryptography and system design. By contrast, issues revolving around the use of computers by lay users and the creation of active incentives to avoid fraud have been relatively neglected. Many ACFE-led studies have shown that human errors are the main cause of information security incidents.

Piecemeal approaches to controlling security issues in cloud environments fail simply because they are usually driven by haphazard occurrence: reaction to the most recent incident or the most recently publicized threat. In other words, managing information security in cloud environments requires collaboration among experts from different disciplines, including computer scientists, engineers, economists, lawyers and anti-fraud assurance professionals like CFEs, to forge common approaches.

The Human Financial Statement

A finance professor of mine in graduate school at the University of Richmond was fond of saying, in relation to financial statement fraud, that as staff competence goes down, the risk of fraud goes up. What she meant was that the best operated, most flawless control ever put in place can be tested and tested and tested again and score perfectly every time. But it's still no match for the employee who doesn't know, or perhaps doesn't even care, how to operate that control; or for the manager who doesn't read the output correctly; or for the executive who hides part of a report and changes the numbers in the rest. That's why CFEs and the members of any fraud risk assessment team (especially our client managers, who actually own the process and its results) should always take a careful look at the human component of risk: the real-world actions, and lack thereof, taken by real-life employees in addressing the day-to-day duties of their jobs.

ACFE training emphasizes that client management must evaluate whether it has implemented anti-fraud controls that adequately address the risk that a material misstatement in the financial statements will not be prevented or detected in a timely manner, and then focus on fixing or developing controls to fill any gaps. The guidance offers several specific suggestions for conducting top-down, risk-based, anti-fraud focused evaluations, and many of them require the active participation of staff drawn from all over the assessed enterprise. The ACFE documentation also recommends that management consider whether a control is manual or automated, its complexity, the risk of management override, and the judgment required to operate it. Moreover, it suggests that management consider the competence of the personnel who perform the control or monitor its performance.

That's because the real risk of financial statement misstatement lies not in a company's processes or the controls around them, but in the people behind the processes and controls who make the organization's control environment such a dynamic, challenging piece of the corporate puzzle. Reports and papers that analyze fraud and misstatement risk use words like "mistakes" and "improprieties." Automated controls don't do anything "improper." Properly programmed record-keeping and data management processes don't make "mistakes." People make mistakes, and people commit improprieties. Of course, human error has always been and will always be part of the fraud examiner's universe, and an SEC-encouraged, top-down, risk-based assessment of a company's control environment, with a view toward targeting the control processes that pose the greatest misstatement risk, falls nicely within most CFEs' existing operational ambit. The elevated role for CFEs, whether on staff or in independent private practice, in optionally conducting fraud risk evaluations offers our profession yet another chance to show its value.

Focusing on the human element of misstatement fraud risk is one important way our client companies can make significant progress in identifying their true financial statement and other fraud exposures. It also represents an opportunity for management to identify the weak links that could ultimately result in a misstatement, as well as for CFEs to make management’s evaluation process a much simpler task. I can remember reading many articles in the trade press these last years in which commentators have opined that dramatic corporate meltdowns like Wells Fargo are still happening today, under today’s increased regulatory strictures, because the controls involved in those frauds weren’t the problem, the people were. That is certainly true. Hence, smart risk assessors are integrating the performance information they come across in their risk assessments on soft controls into management’s more quantitative, control-related evaluation data to paint a far more vivid picture of what the risks look like. Often the risks will wear actual human faces. The biggest single factor in calculating restatement risk as a result of a fraud relates to the complexity of the control(s) in question and the amount of human judgment involved. The more complex a control, the more likely it is to require complicated input data and to involve highly technical calculations that make it difficult to determine from system output alone whether something is wrong with the process itself. Having more human judgment in the mix gives rise to greater apparent risk.

A computer will do exactly what you tell it to over and over; a human may not, but that’s what makes humans special, special and risky. In the case of controls, especially fraud prevention related controls, our human uniqueness can manifest as simple afternoon sleepiness or family financial troubles that prove too distracting to put aside during the workday. So many things can result in a mistaken judgment, and simple mistakes in judgment can be extremely material to the final financial statements.

CFEs, of course, aren't in the business of grading client employees, or even of commenting to them about their performance. But whether the fraud risk assessment in question is related to financial report integrity or to any other issue, CFEs making such assessments at management's request need to consider the experience, training, quality, and capabilities of the people performing the most critical controls.

You can have a well-designed control, but if the person in charge doesn't know, or care, what to do, that control won't operate. And whether such a lack of ability, or of concern, is at play is a judgment call that assessing CFEs shouldn't be afraid to make. A negative characterization of an employee's capability doesn't mean that employee is a bad worker, of course. It may simply mean he or she is new to the job, or it may reveal training problems in that employee's department. CFEs proactively involved in fraud risk assessment need to keep in mind that, in some instances, competence may be so low that it results in greater risk. Both the complexity of a control and the judgment required to operate it are important. The ability to weave notions of good and bad judgment into the fabric of a company's overall fraud risk picture comes from CFEs' experience doing exactly that on fraud examinations. A critical employee's intangibles, like conscientiousness, commitment, ethics and morals, and honesty, all come into play and either contribute to a stronger fraud control environment or cause it to deteriorate. CFEs need to be able, while acting as professional risk assessors, to challenge to management the quality, integrity, and motivation of employees at all levels of the organization.

Many companies conduct fraud-specific tests as a component of the fraud prevention program, and many of the most common forms of fraud can be detected by basic controls already in place. Indeed, fraud is a common concern throughout all routine audits, as opposed to the conduct of separate fraud-only audits. It can be argued that every internal control is a fraud deterrent control. But fraud still exists.

What CFEs have to offer to the risk assessment of financial statement and other frauds is their overall proficiency in fraud detection and the reality that they are well-versed in, and cognizant of, the risk of fraud in every given business process of the company; they are, therefore, well positioned to apply their best professional judgment to the assessment of the degree of risk of financial statement misstatement that fraud represents in any given client enterprise.

Forensic Data Analysis

As a long-term advocate of big-data based solutions to investigative challenges, I have been interested to see the recent application of such approaches to the ever-growing problem of data breaches. More data is stored electronically than ever before: financial data, marketing data, customer data, vendor listings, sales transactions, email correspondence, and more, and evidence of fraud can be located anywhere within those mountains of data. Unfortunately, fraudulent data often looks like legitimate data when viewed in the raw. Taking a sample and testing it might not uncover fraudulent activity. Fortunately, today's fraud examiners have the ability to sort through piles of information by using special software and data analysis techniques. These methods can identify future trends within a certain industry, and they can be configured to identify breaks in audit control programs and anomalies in accounting records.

In general, fraud examiners perform two primary functions to explore and analyze large amounts of data: data mining and data analysis. Data mining is the science of searching large volumes of data for patterns. Data analysis refers to any statistical process used to analyze data and draw conclusions from the findings. These terms are often used interchangeably. If properly used, data analysis processes and techniques are powerful resources. They can systematically identify red flags and perform predictive modeling, detecting a fraudulent situation long before many traditional fraud investigation techniques would be able to do so.
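One classic example of such a red-flag test is a first-digit (Benford's Law) screen over disbursement or claim amounts. The sketch below is a minimal illustration, not a complete analytic: a significant deviation from the expected digit distribution is only an indicator for further review, and the sample amounts and the 0.05 deviation threshold are hypothetical.

```python
# Minimal sketch of a first-digit (Benford's Law) screen over payment amounts.
# Deviations are red flags for review, not proof of fraud. The sample amounts,
# which cluster suspiciously just under $1,000, and the threshold are hypothetical.
import math
from collections import Counter

def leading_digit(amount):
    amount = abs(amount)
    while amount < 1:
        amount *= 10
    return int(str(amount)[0])

def benford_screen(amounts, threshold=0.05):
    """Flag leading digits whose observed frequency departs from Benford's expectation."""
    digits = [leading_digit(a) for a in amounts if a]
    counts = Counter(digits)
    n = len(digits)
    flags = {}
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)
        observed = counts.get(d, 0) / n
        if abs(observed - expected) > threshold:
            flags[d] = {"expected": round(expected, 3), "observed": round(observed, 3)}
    return flags

amounts = [912.40, 87.12, 1125.00, 9804.10, 910.55, 98.20, 930.00, 41.87, 905.16, 77.45]
print(benford_screen(amounts))  # on a real engagement, run this over the full disbursement file
```

On a handful of records the test is noisy; its value comes from running it across an entire population of transactions, which is precisely the advantage data analysis has over sampling.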

Big data are high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery, and process optimization. Simply put, big data is information of extreme size, diversity, and complexity. In addition to thinking of big data as a single set of data, fraud investigators and forensic accountants are thinking about the way data grow when data sets that might not normally be connected are connected together. Big data represents the continuous expansion of data sets, the size, variety, and speed of generation of which make it difficult for investigators and client managements to manage and analyze.

Big data can be instrumental to the evidence gathering phase of an investigation. Distilled down to its core, how do fraud examiners gather data in an investigation? They look at documents and financial or operational data, and they interview people. The challenge is that people often gravitate to the areas with which they are most comfortable. Attorneys will look at documents and email messages and then interview individuals. Forensic accounting professionals will look at the accounting and financial data (structured data). Some people are strong interviewers. The key is to consider all three data sources in unison.

Big data helps to make it all work together to bring the complete picture into focus. With the ever-increasing size of data sets, data analytics has never been more important or useful. Big data requires the use of creative and well-planned analytics due to its size and complexity. One of the main advantages of using data analytics in a big data environment is that it allows the investigator to analyze an entire population of data rather than having to choose a sample and risk drawing erroneous conclusions in the event of a sampling error.

To conduct an effective data analysis, a fraud examiner must take a comprehensive approach. Any direction can (and should) be taken when applying analytical tests to available data. The more creative fraudsters get in hiding their breach-related schemes, the more creative the fraud examiner must become in analyzing data to detect these schemes. For this reason, it is essential that fraud investigators consider both structured and unstructured data when planning their engagements.

Data are either structured or unstructured. Structured data is the type of data found in a database, consisting of recognizable and predictable structures. Examples of structured data include sales records, payment or expense details, and financial reports. Unstructured data, by contrast, is data not found in a traditional spreadsheet or database. Examples of unstructured data include vendor invoices, email and user documents, human resources files, social media activity, corporate document repositories, and news feeds. When using data analysis to conduct a fraud examination, the fraud examiner might use structured data, unstructured data, or a combination of the two. For example, conducting an analysis on email correspondence (unstructured data) among employees might turn up suspicious activity in the purchasing department. Upon closer inspection of the inventory records (structured data), the fraud examiner might uncover that an employee has been stealing inventory and covering her tracks in the record.
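A rough sketch of that kind of combined analysis, with hypothetical file names, column names, and keywords, might first scan email bodies (unstructured) for purchasing-related phrases and then pull the inventory adjustments (structured) posted by the flagged senders:

```python
# A sketch of combining unstructured and structured data in one test.
# File names, column names, and keywords are hypothetical; it assumes the
# email 'sender' and inventory 'posted_by' fields use the same identifier.
import pandas as pd

emails = pd.read_csv("emails.csv")                     # sender, subject, body
inventory = pd.read_csv("inventory_adjustments.csv")   # sku, qty_adjusted, posted_by

keywords = ["write off", "shrinkage", "adjust the count", "don't record"]
pattern = "|".join(keywords)

flagged_emails = emails[emails["body"].str.contains(pattern, case=False, na=False)]
flagged_senders = flagged_emails["sender"].unique()

# Cross-reference: adjustments posted by employees whose email was flagged
suspect_adjustments = inventory[inventory["posted_by"].isin(flagged_senders)]
print(suspect_adjustments.sort_values("qty_adjusted").head(20))
```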

Recent reports of breach responses detailed in social media and the trade press indicate that investigators deploying advanced forensic data analysis tools across larger data sets gained better insight into the penetration, which led to more focused investigations, better root cause analysis, and more effective fraud risk management. Advanced technologies that incorporate data visualization, statistical analysis, and text-mining concepts, as compared to spreadsheets or relational database tools, can now be applied to massive data sets from disparate sources, enhancing breach response at all organizational levels.

These technologies enable our client companies to ask new compliance questions of their data that they might not have been able to ask previously. Fraud examiners can establish important trends in business conduct or identify suspect transactions among millions of records rather than being forced to rely on smaller samplings that could miss important transactions.

Data breaches bring enhanced regulatory attention. It’s clear that data breaches have raised the bar on regulators’ expectations of the components of an effective compliance and anti-fraud program. Adopting big data/forensic data analysis procedures into the monitoring and testing of compliance can create a cycle of improved adherence to company policies and improved fraud prevention and detection, while providing additional comfort to key stakeholders.

CFEs and forensic accountants are increasingly being called upon to be members of teams implementing or expanding big data/forensic data analysis programs so as to more effectively manage data breaches and a host of other instances of internal and external fraud, waste and abuse. To build a successful big data/forensic data analysis program, your client companies would be well advised to:

— begin by focusing on the low-hanging fruit: the priority of the initial project(s) matters. The first and immediately subsequent projects normally incur the largest cost associated with setting up the analytics infrastructure, so it’s important that these first few investigative projects, the low-hanging fruit, yield tangible results and recoveries.

— go beyond the usual rule-based, descriptive analytics. One of the key goals of forensic data analysis is to increase the detection rate of internal control noncompliance while reducing the risk of false positives. From a technology perspective, clients’ internal audit and other investigative groups need to move beyond rule-based spreadsheets and database applications and embrace both structured and unstructured data sources, along with data visualization, text-mining, and statistical analysis tools (a minimal sketch of one such statistical test appears after this list).

— see that successes are communicated. Share information on early successes across divisional and departmental lines to gain broad business process support. Once validated, success stories will generate internal demand for the outputs of the forensic data analysis program. Try to construct a multi-disciplinary team, including information technology, business users (i.e., end-users of the analytics) and functional specialists (i.e., those involved in the design of the analytics and day-to-day operations of the forensic data analysis program). Communicate across multiple departments to keep key stakeholders assigned to the fraud prevention program updated on forensic data analysis progress under a defined governance program. Don’t just seek to report instances of noncompliance; seek to use the data to improve fraud prevention and response. Obtain investment incrementally based on success, and not by attempting to involve the entire client enterprise all at once.

— leadership support will get the big data/forensic data analysis program funded, but regular interpretation of the results by experienced or trained professionals is what will make the program successful. Keep the analytics simple and intuitive; don’t try to cram too much information into any one report. Invest in new, updated versions of tools to make the analytics sustainable. Develop and acquire staff professionals with the required skill sets to sustain and leverage the forensic data analysis effort over the long term.

Finally, enterprise-wide deployment of forensic data analysis takes time; clients shouldn’t be led to expect overnight adoption. An analytics integration is a journey, not a destination. Quick-hit projects might take four to six weeks, but the full program and integration can take one to two years or more.
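As referenced in the list above, here is a minimal sketch of a statistical, rather than rule-based, test: scoring each payment against its own vendor’s payment history with a z-score. The file and column names are hypothetical, and the 3-sigma cutoff is purely illustrative:

```python
# A sketch of a statistical test: flag payments that are unusually large
# relative to each vendor's own history. File/column names are hypothetical.
import pandas as pd

payments = pd.read_csv("payments.csv")           # vendor_id, invoice_no, amount

stats = (payments.groupby("vendor_id")["amount"]
                 .agg(["mean", "std"])
                 .rename(columns={"mean": "vendor_mean", "std": "vendor_std"}))
scored = payments.join(stats, on="vendor_id")
scored["z_score"] = (scored["amount"] - scored["vendor_mean"]) / scored["vendor_std"]

# The 3-sigma threshold is illustrative; tune it to the population being tested.
outliers = scored[scored["z_score"].abs() > 3].sort_values("z_score", ascending=False)
print(outliers[["vendor_id", "invoice_no", "amount", "z_score"]])
```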

Our client companies need to look at a broader set of risks, incorporate more data sources, and move away from lightweight, end-user desktop tools toward real-time or near-real-time analysis of increased data volumes. Organizations that embrace these potential areas for improvement can deliver more effective and efficient compliance programs that are highly focused on identifying and containing the damage associated with hacker activity and other exploitation of key, high fraud-risk business processes.

Authority Figures

As fraud examiners and forensic accountants intimately concerned with the on-going state of health of our clients’ fraud management programs, we find ourselves constantly looking at the integrity of the critical data that’s truly (as much as financial capital) the life blood of today’s organizations. We’re constantly evaluating the network of anti-fraud controls we hope will help keep those pesky, uncontrolled, random data-driven vulnerabilities to fraud to a minimum. Every little bit of critical financial information that gets mishandled or falls through the cracks, every transaction that doesn’t get recorded, every anti-fraud policy or procedure that’s misapplied has some effect on the client’s overall fraud management picture and on our challenge.

When it comes to managing its client, financial and payment data, almost every small to medium-sized organization has a Sandy. Sandy’s the person to whom everyone goes to get answers about data, and about the state of the system(s) that process it; quick answers that no one else ever seems to have. That’s because Sandy is an exceptional employee with years of detailed hands-on experience in daily financial system operations and maintenance. Sandy is also an example of the extraordinary level of dependence that many organizations have today on a small handful of their key employees. The now unlamented great recession, during which enterprises relied on retaining the experienced employees they had rather than on traditional hiring and cross-training practices, only exacerbated an existing, ever growing trend. The very real threat to the Enterprise Fraud Management system that the Sandys of the corporate data world pose is not so much that they will commit fraud themselves (although that’s an ever-present possibility) but that they will retire or get another job across town or out of state, taking their vital knowledge of company systems and data with them.

The day after Sandy’s retirement party and, to an increasing degree thereafter, it will dawn on Sandy’s management that it’s lost a large amount of information about the true state of its data and financial processing system(s). Management will also become aware, if it isn’t already, of its lack of a large amount of system-critical data documentation that’s been carried around nowhere else but in Sandy’s head. The point is that, for some smaller organizations, their reliance on a few key employees for day-to-day, operationally related information goes well beyond what’s appropriate and constitutes an unacceptable level of risk to their entire fraud prevention programs. Today’s newspapers and the internet are full of stories about hacking and large-scale data breaches that only reinforce the importance of vulnerable data, and of the completeness of its documentation, to the on-going operational viability of our client organizations.

Anyone who’s investigated frauds involving large-scale financial systems (insurance claims, bank records, client payment information) is painfully aware that when the composition of data changes (field definitions or content) surprisingly little of that change-related information is formally documented. Most of the information is stored in the heads of some key employees, and those key employees aren’t necessarily involved in everyday, routine data management projects. There’s always a significant level of detail that’s gone undocumented, left out or to chance, and it becomes up to the analyst of the data (be s/he an auditor, a management scientist, a fraud examiner or other assurance professional) to find the anomalies and question them. The anomalies might be in the form of missing data, changes in data field definitions, or changes in the content of the fields; the possibilities are endless. Without proper, formal documentation, the immediate or future significance of these types of anomalies for the fraud management system and for the overall fraud risk assessment process itself becomes almost impossible to determine.

If our auditor or fraud examiner, operating under today’s typical budget or time constraints, is not very thorough and misses the identification of some of these anomalies, they can end up never being addressed. How many times as analysts have we all tried to explain something (like apparently duplicate transactions) about the financial system that just doesn’t look right, only to be told, “Oh, yeah. Sandy made that change back in February before she retired; we don’t have too many details on it.” In other words, undocumented changes to transactions and data, the details of which now exist only in Sandy’s no-longer-available head. When a data-driven system is built on incomplete information, the system can be said to have failed in its role as a component of the organization’s fraud prevention program. The cycle of incomplete information gets propagated to future decisions, and the cost of the missing or inadequately explained data can be high. What can’t be seen can’t ever be managed or even explained.
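One way to surface these undocumented changes is to profile a field over time and ask about sudden shifts, rather than waiting for someone to remember what Sandy did back in February. The sketch below, with hypothetical file and column names, tracks the null rate and distinct-value count of a single field month by month:

```python
# A sketch of profiling one field over time to surface undocumented changes:
# a jump in the null rate or distinct-value count around a given month.
# File and column names are hypothetical.
import pandas as pd

txns = pd.read_csv("transactions.csv", parse_dates=["posted_date"])  # posted_date, claim_code

profile = (txns
           .assign(month=txns["posted_date"].dt.to_period("M"))
           .groupby("month")["claim_code"]
           .agg(null_rate=lambda s: s.isna().mean(),
                distinct_values="nunique",
                rows="size"))

# A sudden shift between adjacent months is worth asking about, even when the
# data still "looks right" row by row.
print(profile)
print(profile.diff().abs().sort_values("distinct_values", ascending=False).head())
```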

In summary, it’s a truly humbling experience to be confronted with how much critical financial information resides in the fading (or absent) memories of past or present key employees; what the ACFE calls authority figures. As fraud examiners, we should attempt to foster a culture among our clients supportive of the development of concurrent systems of transaction-related documentation and of the consistent sharing of knowledge about all systems, but especially regarding the recording of changes to critical financial systems. One nice benefit of this approach, which I brought to the attention of one of my audit clients not too long ago, would be to free up the time of one of these key employees to work on more productive fraud control projects rather than serving as the encyclopedia for the rest of the operational staff.

The Client Requested Recommendation

We fraud examiners must be very circumspect about drawing conclusions. But who among us has not found him or herself in a discussion with a corporate counsel who wants a recommendation from us about how best to prevent the occurrence of a fraud in the future? In most situations, the conclusions from a well-conducted examination should be self-evident and should not need to be pointed out in the report. If the conclusions are not obvious, the report might need to be clarified. Our job as fraud examiners is to obtain sufficient relevant and reliable evidence to determine the facts with a reasonable degree of forensic certainty. Assuming facts without obtaining sufficient relevant and reliable evidence is generally inappropriate.

Opinions regarding technical matters, however, are permitted if the fraud examiner is qualified as an expert in the matter being considered (many fraud examiners are certified not only as CFEs but also as CPAs, CIAs, or CISAs). For example, a permissible expert opinion, and accompanying client-requested recommendation, might address the relative adequacy of an entity’s internal controls. Another opinion (and accompanying follow-on recommendation) might discuss whether financial transactions conform to generally accepted accounting principles. Recommended remedial measures to prevent future occurrences of similar frauds are also essentially opinions, but they are acceptable in fraud examination reports.

While examiners should always be cautious in complying with client requests for examination-related recommendations regarding future fraud prevention, there is no question that such well-considered recommendations can greatly strengthen any client’s fraud prevention program. But requested recommendations can also become a point of contention with management, as they may suggest additional procedures for staff or offend members of management if not presented sensitively and correctly. Therefore, examiners should take care to consider how to communicate to the various affected stakeholders the ways in which their recommendations will help fix gaps in fraud prevention and mitigate fraud risks. Management and the stakeholders themselves will have to evaluate whether the CFE’s recommendations are worth the investment of time and resources required to implement them (cost vs. benefit).

Broadly, an examination recommendation (whether included in the final report or not) is either a suggestion to fix an unacceptable scenario or a suggestion for improvement regarding a business process. At management’s request, fraud examination reports can provide recommendations to fix unacceptable fraud vulnerabilities because such vulnerabilities are easy to identify and less likely to be disputed by the business process owner. However, recommendations to fix gaps in a process only take the process to where it is expected to be, not to where it ideally could be. The value of the fraud examiner’s solicited recommendation can lie not only in providing solutions to existing vulnerability issues but in instigating thought-provoking discussions. Recommendations can also include suggestions that move the process, or the department being examined, to the next level of anti-fraud efficiency. When recommendations aimed at future prevention improvements are included, examination reports can become an additional tool in shaping the strategic fraud prevention direction of the client being examined.

An examiner can shape requested recommendations for fraud prevention improvement using sources both inside and outside the client organization. Internal sources of recommendations require a tactful approach as process owners may not be inclined to share unbiased opinions with a contracted CFE, but here, corporate counsel can often smooth the way with a well-timed request for cooperation. External sources include research libraries maintained by the ACFE, AICPA and other professional organizations.

It’s a good practice, if you expect to receive a request for improvement recommendations from management, to jot down fraud prevention recommendation ideas as soon as they come to mind, even though they may or may not find a place in the final report. Even if examination testing does not result in a specific finding, the CFE may still recommend improvements to the general fraud prevention process.

If requested, the examiner should spend sufficient time brainstorming potential recommendations and choosing their wording carefully to ensure their audience has complete understanding. Client requested recommendations should be written simply and should:

–Address the root cause if a control deficiency is the basis of the fraud vulnerability;
–Address the business process rather than a specific person;
–Include bullets or numbering if describing a process fraud vulnerability that has several steps;
–Include more than one way of resolving an issue identified in the observation, if possible. For example, sometimes a short-term manual control is suggested as an immediate fix in addition to a recommended automated control that will involve considerable time to implement;
–Position the most important observation or fraud risk first and the rest in descending order of risk;
–Indicate a suggested priority of implementation based on the risk and the ease of implementation;
–Explain how the recommendation will mitigate the fraud risk or vulnerability in question;
–List any recommendations separately that do not link directly to an examination finding but seek to improve anti-fraud processes, policies, or systems.

The ACFE warns that recommendations, even if originally requested by client management, will go nowhere if they turn out to be unvalued by that management. Therefore, the process of obtaining management feedback on proposed anti-fraud recommendations is critical to making them practical. Ultimately, process owners may agree with a recommendation, agree with only part of it, or agree in principle but find that technological or personnel resource constraints won’t allow them to implement it. They also may choose to revisit the recommendation at a future date because the risk is not imminent, or disagree with the recommendation because of varying perceptions of risk or mitigating controls.

It’s my experience that management in the public sector can be averse to recommendations because of public exposure of their reports. Therefore, CFEs should clearly state in their reports if their recommendations do not correspond to any examination findings but are simply suggested improvements. More proposed fraud prevention recommendations do not necessarily mean there are more faults with the process, and this should be communicated clearly to the process owners.

Management responses should be added to the recommendations with identified action items and implementation timelines whenever possible. Whatever management’s response, a recommendation should not be changed if doing so would dilute the examiner’s objectivity and independence and make the recommendation merely representative of management’s opinions and concerns. It is the examiner’s prerogative to provide recommendations that the client has requested, regardless of whether management agrees with them. Persuasive and open-minded discussions with the appropriate levels of client management are important to achieving agreeable and implementable requested fraud prevention recommendations.

The journey from a client request for a fraud prevention recommendation to a final recommendation (whether included in the examination report or not) is complex and can be influenced by every stakeholder and constraint in the examination process, be it the overall posture of the organization toward change in general, its philosophy regarding fraud prevention, the scope of the individual fraud examination itself, the views of the affected business process owner, the experience and exposure of the examination staff, or the available technology. However, CFEs understand that every thought may add value to the client’s fraud prevention program and deserves consideration by the examination team. The questions at the end of every examination should be: did this examination align with the organization’s anti-fraud strategy and direction? How does our examination compare with the quality of practice seen elsewhere? And finally, to what degree have the fraud prevention recommendations we were asked to make added value?