The Other Assets Dance

Studies by the ACFE and various academics have revealed over the years that, while not as common as cash schemes, employee misappropriation of corporate assets other than cash can prove even more disastrous for the organizations that suffer it.  The median loss associated with noncash schemes is generally higher than that of cash schemes: $100,000 as opposed to $60,000.

The other-asset category includes assets such as inventories of all kinds, i.e., inventory for sale, supplies and equipment, and some categories of fixed assets; in short, the term inventory and other assets is generally meant to encompass misappropriation schemes involving any assets held by an enterprise other than cash.  The theft of non-cash assets is generally classified by the ACFE into three groups: inventory schemes, supplies schemes and other asset schemes.  Of these, inventory-related schemes account for approximately 70% of the losses and misappropriation of company supplies accounts for another 20%; the remaining losses are associated with several types of fixed assets, equipment, and corporate information.

Those who study these types of fraud generally lump non-cash assets together when describing how such assets are misappropriated, since the methods of misappropriation don’t vary much among the asset types.  The asset, no matter what it is, can be misused (or “borrowed”) or it can be stolen outright.  Assets that are misused rather than stolen include company-assigned vehicles, company supplies of all kinds, computers, and other office equipment.  As a frequently occurring example, a company executive might make use of a company car while on an assignment away from the home office, providing false documentation (written or verbal) to the company regarding the nature of her use of the vehicle.  At the end of the trip, the car is returned intact and the cost to the fraudster’s company is only a few hundred dollars at most; but what we have here is, nonetheless, an instance of fraud, because a false statement or declaration accompanied the use.

In contrast, inventory misuse schemes can be very costly.  To many employees, some kinds of inventory fraud are perceived not as a crime but as “borrowing” and, in truth, the actual cost of borrowing a laptop to do personal computing at home may often be immaterial if the asset is returned undamaged.  On the other hand, if the employee uses the laptop to operate a side business during and after normal work hours, the consequences can be more serious for the company, especially if the employee’s business competes with that of the employer.  Since the employee is not performing his or her assigned work duties, the employer suffers a loss of productivity and is defrauded of that portion of the employee’s wages related to the fraud.  If the employee’s low productivity continues for any length of time, the employer might have to engage additional employees to compensate, which means more capital diverted to wages.  As noted above, if the employee’s business is like the employer’s, lost business for the employer is an additional cost of the scheme: had the employee not contracted the work through his or her own company, the business would presumably have gone to the employer. Unauthorized use of company equipment can also mean additional wear and tear, causing company-owned equipment to break down sooner than it would have under normal operating conditions.

So, what about prevention?  There are preventive measures for control of other-asset-related frauds which, if properly installed and operating, may help prevent employee exploits directed against the many types of inventories maintained by a typical business.
For each type of asset inventory (for sale, supplies, equipment, etc.), the following items (as appropriate) should be pre-numbered and controlled:

–requisitions
–receiving reports
–perpetual records
–raw materials requisitions
–shipping documents
–job cost sheets
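The value of pre-numbering is that it makes gaps and duplicates detectable by simple analytics. As a minimal sketch (not from the post, and with hypothetical document numbers), a routine like the following can flag missing or duplicated numbers in any pre-numbered series, such as receiving reports or requisitions:

```python
def find_sequence_gaps(doc_numbers):
    """Return (missing, duplicates) for a series of pre-numbered documents.

    Gaps in a pre-numbered sequence and duplicate numbers are classic
    red flags that warrant follow-up by the examiner.
    """
    nums = sorted(doc_numbers)
    full_range = set(range(nums[0], nums[-1] + 1))
    missing = sorted(full_range - set(nums))

    seen, duplicates = set(), []
    for n in doc_numbers:
        if n in seen:
            duplicates.append(n)
        seen.add(n)
    return missing, sorted(set(duplicates))

# Hypothetical receiving-report numbers logged for the month
reports = [1001, 1002, 1004, 1005, 1005, 1008]
missing, dupes = find_sequence_gaps(reports)
print(missing)  # [1003, 1006, 1007]
print(dupes)    # [1005]
```

A flagged gap is not proof of fraud, of course; it is simply an exception the control surfaces for explanation.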

The following duties related to the distinct types of asset inventories should be handled by different employees:

–requisition of inventory
–receipt of inventory
–disbursement of inventory
–conversion of inventory to scrap
–receipt of proceeds from disposal of scrap.
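Checking this separation can itself be automated. The sketch below (an illustration under assumed duty names and a hypothetical roster, not an implementation from the post) flags any employee holding more than one of the duties listed above:

```python
# Duties that the control above says should be held by different employees
CONFLICTING_DUTIES = {"requisition", "receipt", "disbursement",
                      "scrap_conversion", "scrap_proceeds"}

def find_sod_conflicts(assignments):
    """Flag employees assigned more than one inventory-related duty.

    `assignments` maps employee -> set of duty names.  Holding two or
    more duties from CONFLICTING_DUTIES is a segregation-of-duties
    conflict worth reviewing.
    """
    conflicts = {}
    for employee, duties in assignments.items():
        held = duties & CONFLICTING_DUTIES
        if len(held) > 1:
            conflicts[employee] = sorted(held)
    return conflicts

# Hypothetical assignment roster
roster = {
    "alice": {"requisition"},
    "bob": {"receipt", "disbursement"},               # conflict
    "carol": {"scrap_conversion", "scrap_proceeds"},  # conflict
}
print(find_sod_conflicts(roster))
```

Run periodically against the access-control or HR system, a check like this turns the segregation-of-duties policy into a monitored control rather than a one-time design decision.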

Someone independent of the purchasing or warehousing function should conduct physical observation of all asset inventories according to defined schedules.  Personnel conducting physical observations of these types of assets should be knowledgeable about the inventory, i.e., what types of material it should contain, where the material should physically be, etc.  All company owned merchandise should be physically guarded and locked; and access should be limited to authorized personnel only.

Threat Assessment & Cyber Security

One rainy Richmond evening last week I attended the monthly dinner meeting of one of the professional organizations of which I’m a member.  Our guest speaker’s presentation was outstanding and, in my opinion, well worth sharing with fellow CFEs, especially as we find more and more of our clients grappling with the reality of ever-evolving cyber threats.

Our speaker started by indicating that, according to a wide spectrum of current thinking, technology issues in isolation should be but one facet of the overall cyber defense strategy of any enterprise. A holistic view on people, process and technology is required in any organization that wants to make its chosen defense strategy successful and, to be most successful, that strategy needs to be supplemented with a good dose of common sense creative thinking. That creative thinking proved to be the main subject of her talk.

Ironically, the sheer size, complexity and geopolitical diversity of the modern-day enterprise can constitute an inherent obstacle for its goal of achieving business objectives in a secured environment.  The source of the problem is not simply the cyber threats themselves, but threat agents. The term “threat agent,” from the Open Web Application Security Project (OWASP), is used to indicate an individual or group that can manifest a threat. Threat agents are represented by the phenomena of:

–Hacktivism;
–Corporate Espionage;
–Government Actors;
–Terrorists;
–Common Criminals (individual and organized).

Irrespective of the type of threat, the threat agent takes advantage of an identified vulnerability and exploits it in the attempt to negatively impact the value the individual business has at risk. The attempt to execute the threat in combination with the vulnerability is called hacking. When this attempt is successful and the threat agent can negatively impact the value at risk, it can be concluded that the vulnerability was successfully exploited. So, essentially, enterprises are trying to defend against hacking and, more importantly, against the threat agent that is the hacker in his or her many guises. The ACFE identifies hacking as the single activity that has resulted in the greatest number of cyber breaches in the past decade.

While there is no one-size-fits-all standard to build and run a sustainable security defense in a generic enterprise context, most companies currently deploy something resembling the individual components of the following general framework:

–Business Drivers and Objectives;
–A Risk Strategy;
–Policies and Standards;
–Risk Identification and Asset Profiling;
–People, Process, Technology;
–Security Operations and Capabilities;
–Compliance Monitoring and Reporting.

Most IT risk and security professionals would be able to identify this framework and agree with the assertion that it’s a sustainable approach to managing an enterprise’s security landscape. Our speaker pointed out, however, that in her opinion, if the current framework were indeed working as intended, the number of security incidents would be expected to show a downward trend as most threats would fail to manifest into full-blown incidents. They could then be routinely identified by enterprises as known security problems and dealt with by the procedures operative in day-to-day security operations. Unfortunately for the existing framework, however, recent security surveys conducted by numerous organizations and trade groups clearly show an upward trend of rising security incidents and breaches (as every reader of daily press reports well knows).

The rising tide of security incidents and breaches is not surprising, since the trade press also reports an average of 35 new, major security failures on every day of the year.  Couple this fact with the ease of execution and ready availability of exploit kits on the Dark Web, and the threat grows in both probability of exploitation and magnitude of impact. With speed and intensity, each threat strikes the security structure of an enterprise and whittles away at management’s credibility to deal with the threat under the routine, daily operational regimen presently defined. Hence, most affected enterprises endure a growing trend of negative security incidents experienced and reported.

During the last several years, in response to all this, many firms have responded by experimenting with a new approach to the existing paradigm. These organizations have implemented emergency response teams to respond to cyber-threats and incidents. These teams are a novel addition to the existing control structure and have two main functions: real-time response to security incidents and the collection of concurrent internal and external security intelligence to feed predictive analysis. Being able to respond to security incidents via a dedicated response team boosts the capacity of the operational organization to contain and recover from attacks. Responding to incidents, however efficiently, is, in any case, a reactive approach to deal with cyber-threats but isn’t the whole story. This is where cyber-threat intelligence comes into play. Threat intelligence is a more proactive means of enabling an organization to predict incidents. However, this approach also has a downside. The influx of a great deal of intelligence information may limit the ability of the company to render it actionable on a timely basis.

Cyber threat assessments are an effective means to tame what can be this overwhelming influx of intelligence information. Cyber threat assessment is currently recognized in the industry as red teaming, which is the practice of viewing a problem from an adversary’s or competitor’s perspective. As part of an IT security strategy, enterprises can use red teams to test the effectiveness of the security structure as a whole and to provide a relevance factor to the intelligence feeds on cyber threats. This can help CEOs decide which threats are relevant and have higher exposure levels compared to others. The evolution of cyber threat response, cyber threat intelligence and cyber threat assessment (red teams) in conjunction with the existing IT risk framework can be used as an effective strategy to counter the agility of evolving cyber threats. The cyber threat assessment process assesses and challenges the structure of existing enterprise security systems, including designs, operational-level controls and the overall cyber threat response and intelligence process, to ensure they remain capable of defending against current relevant exploits.

Cyber threat assessment exercises can also be extremely helpful in highlighting the most relevant attacks and in quantifying their potential impacts. The word “adversary” in the definition of the term ‘red team’ is key in that it emphasizes the need to independently challenge the security structure from the viewpoint of an attacker.  Red team exercises should be designed to be independent of the scope, asset profiling, security, IT operations and coverage of existing security policies. Only then can an enterprise realistically apply the attacker’s perspective, measure the success of its risk strategy and see how it performs when challenged. It’s essential that red team exercises have the freedom to treat the complete security structure and to point to flaws in all components of the IT risk framework.

It’s a common notion that a red team exercise is a penetration test. This is not the case. Use of penetration test techniques by red teams is a means to identify the information required to replicate cyber threats and to create a controlled security incident. The technical shortfalls that are identified during standard penetration testing are mere symptoms of gaps that may exist in the governance of people, processes and technology. Hence, to make the organization more resilient against cyber threats, red team focus should be kept on addressing the root cause and not merely on fixing the security flaws discovered during the exercise.

Another key point is to include cyber threat response and threat monitoring in the scope of such assessments. This demands that red team exercises be executed, and only partially announced, with CEO-level approval. This ensures that the exercise challenges the end-to-end capabilities of an enterprise to cope with a real-time security incident. Lessons learned from red teaming can be documented to improve the overall security posture of the organization and as an aid in dealing with future threats.

Our speaker concluded by saying that as cyber threats evolve, one-hundred percent security for an active business is impossible to achieve. Business is about making optimum use of existing resources to derive the desired value for stakeholders. Cyber-defense cannot be an exception to this rule. To achieve optimized use of their security investments, CEOs should ensure that security spending for their organization is mapped to the real emerging cyber threat landscape. Red teaming is an effective tool to challenge the status quo of an enterprise’s security framework and to make informed judgements about the actual condition of its security posture today. Not only can the judgements resulting from red team exercises be used to improve cyber threat defense, they can also prove an effective mechanism to guide a higher return on cyber-defense investment.

A CDC for Cyber

I remember reading somewhere a few years back that Microsoft had commissioned a report which recommended that the U.S. government set up an entity akin to its Centers for Disease Control but for cyber security.  An intriguing idea.  The trade press talks about malware and computer viruses and infections to describe self-replicating malicious code in the same way doctors talk about metastasizing cancers or the flu; likewise, as with public health, rather than focusing on prevention and detection, we often blame those who have become infected and try to retrospectively arrest/prosecute (cure) those responsible (the cancer cells, hackers) long after the original harm is done. Regarding cyber, what if we extended this paradigm and instead viewed global cyber security as an exercise in public health?

As I recall, the report pointed out that organizations such as the Centers for Disease Control in Atlanta and the World Health Organization in Geneva have over decades developed robust systems and objective methodologies for identifying and responding to public health threats; structures and frameworks that are far more developed than those existent in today’s cyber-security community. Given the many parallels between communicable human diseases and those affecting today’s technologies, there is also much fraud examiners and security professionals can learn from the public health model, an adaptable system capable of responding to an ever-changing array of pathogens around the world.

With cyber as with matters of public health, individual actions can only go so far. It’s great if an individual has excellent techniques of personal hygiene, but if everyone in that person’s town has the flu, eventually that individual will probably succumb as well. The comparison is relevant to the world of cyber threats. Individual responsibility and action can make an enormous difference in cyber security, but ultimately the only hope we have as a nation in responding to rapidly propagating threats across this planetary matrix of interconnected technologies is to construct new institutions to coordinate our response. A trusted, international cyber World Health Organization could foster cooperation and collaboration across companies, countries, and government agencies, a crucial step required to improve the overall public health of the networks driving the critical infrastructures in both our online and our off-line worlds.

Such a proposed cyber CDC could go a long way toward counteracting the technological risks our country faces today and could serve a critical role in improving the overall public health of the networks driving the critical infrastructures of our world. A cyber CDC could fulfill many roles that are carried out today only on an ad hoc basis, if at all, including:

• Education — providing members of the public with proven methods of cyber hygiene to protect themselves;
• Network monitoring — detection of infection and outbreaks of malware in cyberspace;
• Epidemiology — using public health methodologies to study digital cyber disease propagation and provide guidance on response and remediation;
• Immunization — helping to ‘vaccinate’ companies and the public against known threats through software patches and system updates;
• Incident response — dispatching experts as required and coordinating national and global efforts to isolate the sources of online infection and treat those affected.

While there are many organizations, both governmental and non-governmental, that focus on the above tasks, no single entity owns them all. It is through these gaps in effort and coordination that cyber risks continue to mount. An epidemiological approach to our growing technological risks is required to get to the source of malware infections, as was the case in the fight against malaria. For decades, all medical efforts focused in vain on treating the disease in those already infected. But it wasn’t until epidemiologists realized the malady was spread by mosquitoes breeding in still pools of water that genuine progress was made in the fight against the disease. By draining the pools where mosquitoes and their larvae grow, epidemiologists deprived them of an important breeding ground, thus reducing the spread of malaria. What stagnant pools can we drain in cyberspace to achieve a comparable result? The answer represents the yet unanswered challenge.

There is another major challenge a cyber CDC would face: most of those who are sick have no idea they are walking around infected, spreading disease to others. Whereas malaria patients develop fever, sweats, nausea, and difficulty breathing, important symptoms of their illness, infected computer users may be completely asymptomatic. This significant difference is evidenced by the fact that the overwhelming majority of those with infected devices have no idea there is malware on their machines nor that they might have even joined a botnet army. Even in the corporate world, with the average time to detection of a network breach now at 210 days, most companies have no idea their most prized assets, whether intellectual property or a factory’s machinery, have been compromised. The only thing worse than being hacked is being hacked and not knowing about it. If you don’t know you’re sick, how can you possibly get treatment? Moreover, how can we prevent digital disease propagation if carriers of these maladies don’t realize they are infecting others?

Addressing these issues could be a key area of import for any proposed cyber CDC and fundamental to future communal safety and that of critical information infrastructures. Cyber-security researchers have pointed out the obvious Achilles’ heel of the modern technology-infused world: the fact that today everything is either run by computers (or will be) and that everything is reliant on these computers continuing to work. The challenge is that we must have some way of continuing to work even if all the computers fail. Were our information systems to crash on a mass scale, there would be no trading on financial markets, no taking money from ATMs, no telephone network, and no pumping gas. If these core building blocks of our society were to suddenly give way, what would humanity’s backup plan be? The answer is simple: we don’t currently have one.

Complicating all this from a law enforcement and fraud investigation perspective is that black hats generally benefit from technology long before defenders and investigators ever do. The successful ones have nearly unlimited budgets and don’t have to deal with internal bureaucracies, approval processes, or legal constraints. But there are other systemic issues that give criminals the upper hand, particularly around jurisdiction and international law. In a matter of minutes, the perpetrator of an online crime can virtually visit six different countries, hopping from server to server and continent to continent in an instant. But what about the police who must follow the digital evidence trail to investigate the matter?  As with all government activities, policies, and procedures, regulations must be followed. Trans-border cyber-attacks raise serious jurisdictional issues, not just for an individual police department, but for the entire institution of policing as currently formulated. A cop in Baltimore has no authority to compel an ISP in Paris to provide evidence, nor can he make an arrest on the Right Bank. That can only be done by request, government to government, often via mutual legal assistance treaties. The abysmally slow pace of international law means it commonly takes years for police to get evidence from overseas (years in a world in which digital evidence can be destroyed in seconds). Worse, most countries still do not even have cyber-crime laws on the books, meaning that criminals can act with impunity, making response through a coordinating entity like a cyber CDC more valuable to the U.S. specifically and to the world in general.

Experts have pointed out that we’re engaged in a technological arms race, an arms race between people who are using technology for good and those who are using it for ill. The challenge is that nefarious uses of technology are scaling exponentially in ways that our current systems of protection have simply not matched.  The point is, if we are to survive the progress offered by our technologies and enjoy their benefits, we must first develop adaptive mechanisms of security that can match or exceed the exponential pace of the threats confronting us. On this most important of imperatives, there is unambiguously no time to lose.

The Initially Immaterial Financial Fraud

At one point during our recent two-day seminar ‘Conducting Internal Investigations’ an attendee asked Gerry Zack, our speaker, why some types of fraud, particularly financial frauds, can go on so long without detection. A very good question, and one that Gerry eloquently answered.

First, consider the audit committee. Under modern systems of internal control and corporate governance, it’s the audit committee that’s supposed to be at the vanguard in the prevention and detection of financial fraud. What kinds of failures do we typically see at the audit committee level when financial fraud is given an opportunity to develop and grow undetected? According to Gerry, there is no single answer, but several audit committee inadequacies are candidates. One inadequacy potentially stems from the fact that the members of the audit committee are not always genuinely independent. To be sure, they’re required by the rules to attain some level of technical independence, but the subtleties of human interaction cannot always be effectively governed by rules. Even where technical independence exists, it may be that one or more members in substance, if not in form, have ties to the CEO or others that make any meaningful degree of independence awkward if not impossible.

Another inadequacy is that audit committee members are not always terribly knowledgeable, particularly in the ways that modern (often on-line, cloud based) financial reporting systems can be corrupted. Sometimes, companies that are most susceptible to the demands of analyst earnings expectations are new, entrepreneurial companies that have recently gone public and that have engaged in an epic struggle to get outside analysts just to notice them in the first place. Such a newly hatched public company may not have exceedingly sophisticated or experienced fiscal management, let alone the luxury of sophisticated and mature outside directors on its audit committee. Rather, the audit committee members may have been added to the board in the first place because of industry expertise, because they were friends or even relatives of management, or simply because they were available.

A third inadequacy is that audit committee members are not always clear on exactly what they’re supposed to do. Although modern audit committees seem to have a general understanding that their focus should be oversight of the financial reporting system, for many committee members that “oversight” can translate into listening to the outside auditor several times a year. A complicating problem is a trend in corporate governance involving the placement of additional responsibilities (enterprise risk management is a timely example) upon the shoulders of the audit committee even though those responsibilities may be only tangentially related, or not at all related, to the process of financial reporting.

Again, according to Gerry, some or all the previously mentioned audit committee inadequacies may be found in companies that have experienced financial fraud. Almost always there will be an additional one. That is that the audit committee, no matter how independent, sophisticated, or active, will have functioned largely in ignorance. It will not have had a clue as to what was happening within the organization. The reason is that a typical audit committee (and the problem here is much broader than newly public startups) will get most of its information from management and from the outside auditor. Rarely is management going to voluntarily reveal financial manipulations. And, relying primarily on the outside auditor for the discovery of fraud is chancy at best. Even the most sophisticated and attentive of audit committee members have had the misfortune of accounting irregularities that have unexpectedly surfaced on their watch. This unfortunate lack of access to candid information on the part of the audit committee directs attention to the second in the triumvirate of fraud preventers, the internal audit department.

It may be that the internal audit department has historically been one of the least understood, and most ineffectively used, of all vehicles to combat financial fraud. Theoretically, internal audit is perfectly positioned to nip an accounting irregularity problem in the bud. The internal auditors are trained in financial reporting and accounting. The internal auditors should have a vivid understanding as to how financial fraud begins and grows. Unlike the outside auditor, internal auditors work at the company full time. And, theoretically, the internal auditors should be able to plug themselves into the financial reporting environment and report directly to the audit committee the problems they have seen and heard. The reason these theoretical vehicles for the detection and prevention of financial fraud have not been effective is that, where massive financial frauds have surfaced, the internal audit department has often been somewhere between nonfunctional and nonexistent. Whatever the explanation (lack of independence, unfortunate reporting arrangements, under-staffing or under-funding), in many cases where massive financial fraud has surfaced, a viable internal audit function is often nowhere to be found.

That, of course, leaves the outside auditor, which, for most public companies, means some of the largest accounting firms in the world. Indeed, it is frequently the inclination of those learning of an accounting irregularity problem to point to a failure by the outside auditor as the principal explanation. Criticisms made against the accounting profession have included compromised independence, a transformation in the audit function away from data assurance, the use of immature and inexperienced audit staff for important audit functions, and the perceived use by the large accounting firms of audit as a loss leader rather than a viable professional engagement in itself. Each of these reasons is certainly worthy of consideration and inquiry, but the fundamental explanation for the failure of the outside auditor to detect financial fraud lies in the way that fraudulent financial reporting typically begins and grows. Most important is the fact that the fraud almost inevitably starts out very small, well beneath the radar screen of the materiality thresholds of a normal audit, and almost inevitably begins with issues of quarterly reporting. Quarterly reporting has historically been a subject of less intense audit scrutiny, for the auditor has been mainly concerned with financial performance for the entire year. The combined effect of the small size of an accounting irregularity at its origin and the fact that it begins with an allocation of financial results over quarters almost guarantees that, at least at the outset, the fraud will have a good chance of escaping outside auditor detection.

These two attributes of financial fraud at the outset are compounded by another problem that enables it to escape auditor detection. That problem is that, at root, massive financial fraud stems from a certain type of corporate environment. Thus, detection poses a challenge to the auditor. The typical audit may involve fieldwork at the company once a year. That once-a-year period may last for only a month or two. During the fieldwork, the individual accountants are typically sequestered in a conference room. In dealing with these accountants, moreover, employees are frequently on their guard. There exists, accordingly, limited opportunity for the outside auditor to get plugged into the all-important corporate environment and culture, which is where financial fraud has its origins.

As the fraud inevitably grows, of course, its materiality increases as does the number of individuals involved. Correspondingly, also increasing is the susceptibility of the fraud to outside auditor detection. However, at the point where the fraud approaches the thresholds at which outside auditor detection becomes a realistic possibility, deception of the auditor becomes one of the preoccupations of the perpetrators. False schedules, forged documents, manipulated accounting entries, fabrications and lies at all levels, each of these becomes a vehicle for perpetrating the fraud during the annual interlude of audit testing. Ultimately, the fraud almost inevitably becomes too large to continue to escape discovery, and auditor detection at some point is by no means unusual. The problem is that, by the time the fraud is sufficiently large, it has probably gone on for years. That is not to exonerate the audit profession, and commendable reforms have been put in place over the last decade. These include a greater emphasis on fraud, involvement of the outside auditor in quarterly data, the reduction of materiality thresholds, and a greater effort on the part of the profession to assess the corporate culture and environment. Nonetheless, compared to, say, the potential for early fraud detection possessed by the internal audit department, the outside auditor is at a noticeable disadvantage.

Having been missed for so long by so many, how does the fraud typically surface? There are several ways. Sometimes there’s a change in personnel, whether from a corporate acquisition or a change in management, and the new hires stumble onto the problem. Sometimes the fraud, which from quarter to quarter is mathematically incapable of staying the same size, grows to the point where it can no longer be hidden from the outside auditor. Sometimes detection results when the conscience of someone in the accounting department gets the better of him or her: all along the employee wanted to tell somebody, and it gets to the point where he or she can’t stand it anymore and does. Then you have a whistleblower. There are exceptions to all of this. But in almost any large financial fraud, as Gerry told us, one will see some or all of these elements. We need only change the names of the companies and of the industry.

Financing Death One BitCoin at a Time

Over the past decade, fanatic religious ideologists have evolved to become hybrid terrorists demonstrating exceptional versatility, innovation, opportunism, ruthlessness, and cruelty. Hybrid terrorists are a new breed of organized criminal. Merriam-Webster defines hybrid as “something that is formed by combining two or more things”. In the twentieth century, the military, intelligence forces, and law enforcement agencies each had a specialized skill-set to employ in response to respective crises involving insurgency, international terrorism, and organized crime. Military forces dealt solely with international insurgent threats to the government; intelligence forces dealt solely with international terrorism; and law enforcement agencies focused on their respective country’s organized crime entities. In the twenty-first century, greed, violence, and vengeance motivate the various groups of hybrid terrorists. Hybrid terrorists rely on organized crime such as money laundering, wire transfer fraud, drug and human trafficking, shell companies, and false identification to finance their organizational operations.

Last week’s horrific terror bombing in Manchester brings to the fore, yet again, the issue of such terrorist financing and the increasing role of forensic accountants in combating it. Two of the main tools of modern terror financing schemes are money laundering and virtual currency.

Law enforcement and government agencies in collaboration with forensic accountants play key roles in tracing the source of terrorist financing to the activities used to inflict terror on local and global citizens. Law enforcement agencies utilize investigative and predictive analytics tools to gather, dissect, and convey data to distinguish patterns leading to future terrorist events. Government agencies employ database inquiries of terrorist-related financial information to evaluate the possibilities of terrorist financing and activities. Forensic accountants review the data for patterns related to previous transactions by utilizing data analysis tools, which assist in tracking the source of the funds.

As we all know, forensic accountants combine accounting knowledge with investigative skills in litigation support and investigative accounting settings. Several types of organizations, agencies, and companies frequently employ forensic accountants to provide investigative services, among them public accounting firms, law firms, law enforcement agencies, the Internal Revenue Service (IRS), the Central Intelligence Agency (CIA), and the Federal Bureau of Investigation (FBI).

Locating and halting the source of terrorist financing involves two tactics: following the money and drying up the money. Obstructing terrorist financing requires an understanding of both the origin and the supply chain of the illicit funds. Because the financing is derived from both legal and illegal funding sources, terrorists may attempt to evade detection by funneling money through legitimate businesses, thus making it difficult to trace. Charitable organizations and reputable companies provide a legitimate conduit through which terrorists may pass money for illicit activities without drawing the attention of law enforcement agencies. Patrons of legitimate businesses are often unaware that their personal contributions may support terrorist activities. However, terrorists also obtain funds from obvious illegal sources, such as kidnapping, fraud, and drug trafficking. Terrorists often change daily routines to evade law enforcement agencies, as predictable patterns create trails that are easy for skilled investigators to follow. Forensic accountants and law enforcement agencies can trace audit trails from the donor source to the terrorist by tracking specific indicators. Audit trails reveal where the funds originate and whether the funds came from legal or illegal sources. The ACFE tells us that basic money laundering is a specific type of illegal funding source, one which provides a clear audit trail.

Money laundering is the process of obtaining and funneling illicit funds to disguise the connection with the original unlawful activity. Terrorists launder money to spend the unlawfully obtained money without drawing attention to themselves and their activities. To remain undetected by regulatory authorities, the illicit funds being deposited or spent need to be washed to give the impression that the money came from a seemingly reputable source. There are types of unusual transactions that raise red flags associated with money laundering in financial institutions. The more times an unusual transaction occurs, the greater the probability it is the product of an illicit activity. Money laundering may be quite sophisticated depending on the strategies employed to avoid detection. Some identifiers indicating a possible money-laundering scheme are:

–Lack of identification;
–Money wired to new locations;
–A customer who closes an account after wiring or transferring copious amounts of money;
–Out-of-the-ordinary business transactions;
–Transactions involving the customer’s own business or occupation;
–Transactions falling just below the threshold that triggers the financial institution’s obligation to file a report.
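Red flags like these lend themselves to automated screening. The sketch below is a minimal illustration, not production compliance logic; the transaction fields, the $10,000 currency-transaction-report threshold, and the 10 percent structuring margin are all assumptions chosen for the example:

```python
from dataclasses import dataclass

@dataclass
class Wire:
    customer: str
    amount: float
    new_destination: bool       # wired to a location not previously used
    account_closed_after: bool  # customer closed the account soon after

# U.S. banks must report cash transactions over $10,000; structuring
# keeps amounts just below that line, so we flag the band beneath it.
CTR_THRESHOLD = 10_000
STRUCTURING_MARGIN = 0.10  # flag anything within 10% below the threshold

def red_flags(w: Wire) -> list[str]:
    """Return the money-laundering indicators a wire transfer trips."""
    flags = []
    if CTR_THRESHOLD * (1 - STRUCTURING_MARGIN) <= w.amount < CTR_THRESHOLD:
        flags.append("just below reporting threshold")
    if w.new_destination:
        flags.append("wire to new location")
    if w.account_closed_after:
        flags.append("account closed after transfer")
    return flags
```

A wire of $9,500 to a new destination, for example, would trip two of the three indicators; in practice each hit would route the transaction to a human reviewer rather than trigger an automatic report.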

Money laundering takes place in three stages: placement, layering, and integration. In the placement stage, the cash proceeds from criminal activity enter the financial system by deposit. During the layering stage, the funds transfer into other accounts, usually offshore financial institutions, thus creating greater distance between the source and origin of the funds and its current location. Legitimate purchases help funnel the money back into the economy during the integration stage, the final stage.
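The three stages form a chain of transfers that an investigator tries to walk end to end. The toy sketch below, in which every account name and transfer is hypothetical, uses a breadth-first traversal to follow funds from the placement deposit through the layered accounts to the final integration purchase:

```python
from collections import deque

# Hypothetical ledger of transfers: (from_account, to_account).
transfers = [
    ("cash_deposit", "bank_A"),       # placement: cash enters the system
    ("bank_A", "offshore_1"),         # layering: hops between accounts
    ("offshore_1", "offshore_2"),
    ("offshore_2", "shell_co"),
    ("shell_co", "real_estate_llc"),  # integration: a legitimate purchase
]

def trace(source: str) -> list[str]:
    """Breadth-first walk of the transfer graph from a suspect source."""
    graph: dict[str, list[str]] = {}
    for sender, receiver in transfers:
        graph.setdefault(sender, []).append(receiver)
    seen, order, queue = {source}, [], deque([source])
    while queue:
        account = queue.popleft()
        order.append(account)
        for nxt in graph.get(account, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order
```

Tracing from the initial deposit reconstructs the full audit trail; in a real case the ledger would be assembled from subpoenaed bank records across several institutions.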

Complicating all of this for the investigator is virtual currency. Virtual currency, unlike traditional forms of money, does not leave a clear audit trail for forensic accountants to trace and investigate. Cases involving the use of virtual currency, e.g., Bitcoin and several rival currencies, create anonymity for the perpetrator and obstacles for investigators. Bitcoins have no physical form and provide a unique opportunity for terrorists to launder money across international borders without detection by law enforcement or government agencies. Bitcoin addresses are long strings of numbers and letters linked by mathematical encryption algorithms. A consumer uses a mobile phone or computer to create an online wallet with one or more Bitcoin addresses before commencing electronic transactions. Bitcoins may also be used to make legitimate purchases through various established online retailers.
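Those strings are the output of one-way hash functions, which is why an address by itself reveals nothing about its owner. Real Bitcoin derives an address from a public key via SHA-256, then RIPEMD-160, then Base58Check encoding; the simplified sketch below stops at a double SHA-256 purely to illustrate the one-way derivation:

```python
import hashlib

def toy_address(pubkey_hex: str) -> str:
    """Hash a (hex-encoded) public key twice with SHA-256.

    The result is stable for the same key but cannot be reversed to
    recover the key, mirroring why addresses are pseudonymous.
    """
    digest = hashlib.sha256(bytes.fromhex(pubkey_hex)).digest()
    return hashlib.sha256(digest).hexdigest()

# Two different (made-up) keys yield unrelated-looking addresses.
addr1 = toy_address("02" + "11" * 32)
addr2 = toy_address("03" + "22" * 32)
```

The same key always produces the same address, so investigators can cluster transactions by address even when they cannot identify the person behind it.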

Current international anti-money laundering laws aid in fighting the war against terrorist financing; however, those laws presuppose actual cash shipments between countries and criminal networks (or, at the very least, funds transfers between banks). They are largely inapplicable to virtual currency transactions, which involve no physical movement of cash. According to the website Bitcoin.org, “Bitcoin uses peer-to-peer technology to operate with no central authority or banks”.

In summary, terrorist organizations find virtual currency to be an effective method for raising illicit funds because, unlike cash transactions, cyber technology offers anonymity with less regulatory oversight. Due to the anonymity factor, Bitcoins are an innovative and convenient way for terrorists to launder money and sell illegal goods. Virtual currencies are appealing for terrorist financiers since funds can be swiftly sent across borders in a secure, cheap, and highly secretive manner. The obscurity of Bitcoin allows international funding sources to conduct exchanges without a trace of evidence. This commingling effect is like traditional money laundering but without the regulatory oversight. Government and law enforcement agencies must, as a result, be able to share information with public regulators when they become suspicious of terrorist financing.

Forensic accounting technology is most beneficial when used in conjunction with the analysis tools of law enforcement agencies to predict and analyze future terrorist activity. Even though some of the tools in a forensic accountant’s arsenal are useful in tracking terrorist funds, the ability to identify conceivable terrorist threats is limited. To identify the future activities of terrorist groups, forensic accountants and law enforcement agencies should cooperate with one another by mutually incorporating the analytical tools utilized by each. Agencies and government officials should become familiar with virtual currencies like Bitcoin. Because of the anonymity and lack of regulatory oversight, virtual currency offers terrorist groups a useful means to finance illicit activities on an international scale. In the face of the challenge, new governmental entities may be needed to tie together all the financial forensics efforts of the different stakeholder organizations so that information sharing is not compartmentalized.

Industrialized Theft

In at least one way you have to hand it to Ethically Challenged, Inc.; it sure knows how to innovate, and the recent spate of ransomware attacks proves it also knows how to make what’s old new again. Although society’s criminal opponents engage in constant business process improvement, they’ve proven again and again that they’re not limited to committing new crimes from scratch every time. In the age of Moore’s law, their criminal business processes have been readily automated and can run in the background at scale without the need for significant human intervention. Crime automations like the WannaCry virus allow transnational organized crime groups to gain the same efficiencies and cost savings that multinational corporations obtained by leveraging technology to carry out their core business functions. That’s why today it’s possible for hackers to rob not just one person at a time but 100 million or more, as the world saw with the Sony PlayStation and Target data breaches and now with the WannaCry worm.

As covered in our Chapter’s training event of last year, ‘Investigating on the Internet’, exploit tool kits like Blackhole and SpyEye commit crime “automagically” by minimizing the need for human labor, thereby dramatically reducing criminal costs. They also allow hackers to pursue the “long tail” of opportunity, committing millions of thefts in small amounts so that (in many cases) victims don’t report them and law enforcement has no way to track them. While high-value targets (companies, nations, celebrities, high-net-worth individuals) are specifically and individually targeted, the majority of the public is hacked by automated, scripted computer malware, one large digital fishing net that scoops up anything and everything online with a vulnerability that can be exploited. Given these obvious advantages, as of 2016 an estimated 61 percent of all online attacks were launched by fully automated crime tool kits, returning phenomenal profits for the Dark Web overlords who expertly orchestrated them. Modern crime has been reduced and distilled to a software program that anybody can run at tremendous profit.

Not only can botnets and other tools be used over and over to attack and offend, but they’re now enabling the commission of much more sophisticated crimes such as extortion, blackmail, and shakedown rackets. In an updated version of the old $500 million Ukrainian Innovative Marketing Solutions “virus detected” scam, fraudsters have unleashed a new torrent of malware that holds the victim’s computer hostage until a ransom is paid and the scammer provides an unlock code to restore access to the victim’s own files. Ransomware attack tools are included in a variety of Dark Net tool kits, such as WannaCry and Gameover Zeus. According to the ACFE, there are several varieties of this scam, including one that purports to come from law enforcement. Around the world, users who become infected with the Reveton Trojan suddenly have their computers lock up and their full screens covered with a notice, allegedly from the FBI. The message, bearing an official-looking large, full-color FBI logo, states that the user’s computer has been locked for reasons such as “violation of the federal copyright law against illegally downloaded material” or because “you have been viewing or distributing prohibited pornographic content.”

In the case of the Reveton Trojan, to unlock their computers, users are informed that they must pay a fine ranging from $200 to $400, payable only by a prepaid voucher from Green Dot’s MoneyPak, which victims are instructed they can buy at their local Walmart or CVS; victims of WannaCry are required to pay in Bitcoin. To further intimidate victims and drive home the fact that this is a serious police matter, the Reveton scammers prominently display the alleged violator’s IP address on their screen as well as snippets of video footage previously captured from the victim’s webcam. As with the current WannaCry exploit, the Reveton scam has successfully targeted tens of thousands of victims around the world, with the attack localized by country, language, and police agency. Thus, users in the U.K. see a notice from Scotland Yard, other Europeans get a warning from Europol, and victims in the United Arab Emirates see the threat, translated into Arabic, purportedly from the Abu Dhabi Police HQ.

WannaCry is even more pernicious than Reveton, though, in that it actually encrypts all the files on a victim’s computer so that they can no longer be read or accessed. Alarmingly, variants of this type of malware often present a ticking-bomb-style countdown clock advising users that they have only forty-eight hours to pay $300 or all of their files will be permanently destroyed. Akin to threatening “if you ever want to see your files alive again,” these ransomware programs gladly accept payment in Bitcoin. The message to these victims is no idle threat. Whereas previous ransomware might trick users by temporarily hiding their files, newer variants use strong 256-bit Advanced Encryption Standard cryptography to lock user files so that they become irrecoverable. These types of exploits earn tens of millions of dollars for the criminal programmers who develop and sell them on-line to other criminals.

Automated ransomware tools have even migrated to mobile phones, affecting Android handset users in certain countries. Not only have individuals been harmed by the ransomware scourge, so too have companies, nonprofits, and even government agencies; the most infamous example was the Swansea Police Department in Massachusetts, which some years back became infected when an employee opened a malicious e-mail attachment. Rather than losing its irreplaceable police case files to the scammers, the agency was forced to open a Bitcoin account and pay a $750 ransom to get its files back. The police lieutenant told the press he had no idea what a Bitcoin was or how the malware functioned until his department was struck in the attack.

As the ACFE and other professional organizations have told us, cybercrime has evolved highly sophisticated methods of operation to sell everything from methamphetamine to live-streamed child sexual abuse online. It has rapidly adopted existing tools of anonymity such as the Tor browser to establish Dark Net shopping malls, and criminal consulting services such as hacking and murder for hire are all available at the click of a mouse. Untraceable and anonymous digital currencies, such as Bitcoin, are breathing new life into the underground economy and allowing for the rapid exchange of goods and services. With these additional revenues, cyber criminals are becoming more disciplined and organized, significantly increasing the sophistication of their operations. Business models are being automated wherever possible to maximize profits, and botnets, easily trained on any target of the scammer’s choosing, can threaten legitimate global commerce. Fundamentally, it’s been done. As WannaCry demonstrates, the computing and Internet-based crime machine has been built. With these systems in place, the depth and global reach of cybercrime mean that crime now scales, and it scales exponentially. Yet, as bad as this threat is today, it is about to become much worse as we enter the age of ubiquitous computing and the Internet of Things, handing such scammers billions more targets to attack.

Analytics & Fraud Prevention

During our Chapter’s live training event last year, ‘Investigating on the Internet’, our speaker, Liseli Pennings, pointed out that, according to the ACFE’s 2014 Report to the Nations on Occupational Fraud and Abuse, organizations that have proactive, internet-oriented data analytics in place suffer a 60 percent lower median loss from fraud, roughly $100,000 lower per incident, than organizations that don’t use such technology. Further, the report went on, use of proactive data analytics cuts the median duration of a fraud in half, from 24 months to 12 months.

This is important news for CFE’s who are daily confronting more sophisticated frauds and criminals who are increasingly cyber based.  It means that integrating more mature forensic data analytics capabilities into a fraud prevention and compliance monitoring program can improve risk assessment, detect potential misconduct earlier, and enhance investigative field work. Moreover, forensic data analytics is a key component of effective fraud risk management as described in The Committee of Sponsoring Organizations of the Treadway Commission’s most recent Fraud Risk Management Guide, issued in 2016, particularly around the areas of fraud risk assessment, prevention, and detection.  It also means that, according to Pennings, fraud prevention and detection is an ideal big data-related organizational initiative. With the growing speed at which they generate data, specifically around their financial reporting and sales business processes, our larger CFE client organizations need ways to prioritize risks and better synthesize information using big data technologies, enhanced visualizations, and statistical approaches to supplement traditional rules-based investigative techniques supported by spreadsheet or database applications.

But with this analytics and fraud prevention integration opportunity comes a caution.  As always, before jumping into any specific technology or advanced analytics technique, it’s crucial to first ask the right risk or control-related questions to ensure the analytics will produce meaningful output for the business objective or risk being addressed:

–What business processes pose a high fraud risk? High-risk business processes include the sales (order-to-cash) cycle and the payment (procure-to-pay) cycle, as well as payroll, accounting reserves, travel and entertainment, and inventory processes.
–What high-risk accounts within the business process warrant scrutiny? Analysis could identify unusual account pairings, such as a debit to depreciation with an offsetting credit to a payable, or accounts with vague or open-ended “catch all” descriptions such as “miscellaneous,” “administrative,” or blank account names.
–Who recorded or authorized the transaction? Posting analysis or approver reports could help detect unauthorized postings or inappropriate segregation of duties by looking at the number of payments by name, minimum or maximum amounts, sum totals, or statistical outliers.
–When did transactions take place? Analyzing transaction activities over time could identify spikes or dips in activity, such as before and after period ends, or on weekends, holidays, and off-hours.
–Where does the CFE see geographic risks, based on previous events, the economic climate, cyber threats, recent growth, or perceived corruption? Further segmentation can be achieved by business units within regions and by the accounting systems on which the data resides.
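Several of these questions reduce to simple screens over journal-entry data. In the sketch below the record layout, account names, dates, and the 1.5-standard-deviation cutoff are all illustrative assumptions; it flags the unusual depreciation-to-payable pairing, weekend postings, and amount outliers described above:

```python
from datetime import date
from statistics import mean, pstdev

# Hypothetical journal entries: (poster, debit_acct, credit_acct, amount, posted)
entries = [
    ("kim", "depreciation", "accum_depreciation",  1_200, date(2017, 3, 15)),
    ("kim", "depreciation", "accounts_payable",    9_800, date(2017, 3, 20)),
    ("pat", "supplies",     "accounts_payable",      450, date(2017, 3, 19)),
    ("pat", "supplies",     "accounts_payable",      480, date(2017, 3, 21)),
    ("pat", "supplies",     "accounts_payable",   52_000, date(2017, 3, 22)),
]

# Unusual pairing: a debit to depreciation offset by a credit to a payable.
odd_pairings = [e for e in entries
                if e[1] == "depreciation" and e[2] == "accounts_payable"]

# Timing: postings dated on a Saturday (5) or Sunday (6).
weekend = [e for e in entries if e[4].weekday() >= 5]

# Statistical outliers: amounts more than 1.5 population std devs from the
# mean (the cutoff is illustrative; real programs tune it to the data).
amounts = [e[3] for e in entries]
m, s = mean(amounts), pstdev(amounts)
outliers = [e for e in entries if abs(e[3] - m) > 1.5 * s]
```

Each screen produces a short review list rather than a verdict; the CFE follows up on the flagged entries with supporting documentation and interviews.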

The benefits of implementing a forensic data analytics program must be weighed against challenges such as obtaining the right tools or professional expertise, combining data (both internal and external) across multiple systems, and the overall quality of the analytics output. To mitigate these challenges and build a successful program, the CFE should consider that the priority of the initial project matters. Because the first project often is used as a pilot for success, it’s important that the project address meaningful business or audit risks that are tangible and visible to client management. Further, this initial project should be reasonably attainable, with minimal dollar investment and actionable results. It’s best to select a first project that has big demand, has data that resides in easily accessible sources, with a compelling, measurable return on investment. Areas such as insider threat, anti-fraud, anti-corruption, or third-party relationships make for good initial projects.

In the health care insurance industry where I worked for many years, one of the key goals of forensic data analytics is to increase the detection rate of health care provider billing non-compliance, while reducing the risk of false positives. From a capabilities perspective, organizations need to embrace both structured and unstructured data sources that consider the use of data visualization, text mining, and statistical analysis tools. Since the CFE will usually be working as a member of a team, the team should demonstrate the first success story, then leverage and communicate that success model widely throughout the organization. Results should be validated before successes are communicated to the broader organization. For best results and sustainability of the program, the fraud prevention team should be a multidisciplinary one that includes IT, business users, and functional specialists, such as management scientists, who design the analytics around the day-to-day operations of the organization and keep them tied to the objectives of the fraud prevention program. It helps to communicate across multiple departments to update key stakeholders on the program’s progress under a defined governance regime. The team shouldn’t just report noncompliance; it should seek to improve the business by providing actionable results.

The forensic data analytics functional specialists should not operate in a vacuum; every project needs one or more business champions who coordinate with IT and the business process owners. Keep the analytics simple and intuitive; don’t cram so much information into one report that it becomes hard to understand. Finally, invest time in automation, not manual refreshes, to make the analytics process sustainable and repeatable. The best trends, patterns, or anomalies often emerge when multiple months of vendor, customer, or employee data are analyzed over time, not just in the aggregate. Also, keep in mind that enterprise-wide deployment takes time. While a quick project may take four to six weeks, integrating the entire program can easily take more than one or two years. Programs need to be refreshed as new risks emerge and business activities change, and staff need ongoing training, collaboration, and up-to-date technologies.

Research findings by the ACFE and others are providing more and more evidence of the benefits of integrating advanced forensic data analytics techniques into fraud prevention and detection programs. By helping increase their client organization’s maturity in this area, CFE’s can assist in delivering a robust fraud prevention program that is highly focused on preventing and detecting fraud risks.

The Internet & the Unforeseen

Liseli Pennings, last year’s speaker for our Central Virginia Chapter’s training event, ‘Investigating on the Internet’, made the comment during her presentation that on-line investigative tools are outstanding for working unforeseen fraud events.  When a potential fraud risk has been identified through routine risk assessment, what its effects would be can be discussed and hypothetically anticipated to some degree as part of the assessment.  However, Liseli pointed out, when catastrophic fraud events occur without warning, seemingly out of the blue, and no mitigation has been discussed or is even immediately possible, the results can be devastating to our clients. When these types of sudden, unforeseen fraud events occur, rapid information gathering can be critical to a successful investigative outcome and that’s where skillful use of the internet comes in.

Liseli’s comment got me to thinking about a key question.  Are these types of fraud events truly unforeseeable, or are they caused by a failure to gather adequate information on the front end to anticipate them and their effects? Unanticipated fraud events and their effects typically are associated with financial factors. However, as we’ve often discussed on this blog, some of the most catastrophic events can be non-financial in nature, such as damage to reputation, which also can lead to financial losses. As part of their proactive risk assessment processes, fraud examiners can play a vital role in monitoring the client’s environment and providing valuable information to management to help identify and mitigate these types of risks.  If an organization is not prepared for these types of sudden, catastrophic fraud events, the losses can sink the organization; one need only look at what happened to Martha Stewart Living Omnimedia because of its founder’s trading scandal, and to Target because of the overnight revelation of the hacking of its customer accounts, as well as to a host of others.

Viewed narrowly in hindsight, there seems to have been little these companies could realistically have done on the front end to mitigate the effects of such unforeseen events.  The only way to manage such events effectively is to convert them from unforeseen events into foreseeable events whose potential for catastrophic losses can be mitigated through anticipation and preparation. Anticipating the potential for such events is critical, requiring information that is current, forward-looking, frequent, comprehensive, reliable, and diversified; such information is available, to an ever-growing extent, to the CFE on the public internet.  Systematic use of the internet to broaden the scope of fraud risk assessment is a trend only now firmly taking hold.

Fraud prevention and mitigation related decision-making takes place in the present and affects the present but, more importantly, it affects the future. Historic information is valuable for some decisions but, to be effective, the information gathered for most decisions must be current and updated continuously. In this respect, CFE’s and risk managers should consider the nature of the information source and the frequency with which it is updated. For example, printed encyclopedias become dated quickly. Web and mobile sources may be considered the most current but, as Liseli pointed out last year, this is not always the case. The very abundance of internet-related resources requires those gathering on-line information to exercise extra care in specifying how information is verified, how often, and when and under what circumstances it is updated.  To have comprehensive and diversified information, examiners must accept that some information they uncover won’t be completely reliable. Knowing that, they must have a methodology for evaluating the degree of reliability of each source, gathering corroborating and refuting information, and discerning the truth among the conflicting information.

When assessing the potential for unforeseen fraud events within the context of a client environment, CFE’s and loss prevention managers should avoid the tendency to plan and act based solely on past events and risks. Internet-based scanning and assessment systems and processes ideally should be developed to anticipate the next wave of risks that might be carrying unforeseen events ever closer to the organization. It would be simple if dealing with one unforeseen fraud event eliminated all others, but fraud examiners especially are aware of how often one fraud spawns another.

In casting a wider, on-line based, risk assessment net forward looking examiners might ask questions like:

–What is the next wave of technological, societal, industrial, and environmental changes that could affect my client organization, and what will be their implications for the organization?

–Have organizations that have a “bring-your-own-device” policy for cell phones, tablets, and other devices considered all the potential implications of such a policy, including privacy issues and the potential risk to proprietary information?

–What information on these devices is discoverable in legal cases?

–Are these sources included in the fraud assessment process?

–How quickly are events changing within the organization and its environment?

How do CFE’s sift through this deluge of information to glean what is relevant to the organization? What filters are available within the media in use? Which sources have features available that push the information to the user based on chosen criteria?

Some such sources include:

–Industry and trade organizations, especially including websites, magazines, newsletters, forums, and roundtables.
–Social media.
–News outlets such as print, Internet, and cable television.
–Think tanks and consultants.
–Governmental and quasi-governmental organizations.
–Personnel using cutting-edge technology.

Unforeseen finance-related fraud events most often arise from a lack of information.  To be effective, information gathering must expand beyond those sources that are most familiar to risk assessment professionals and to others, like CFEs, involved in risk management; the more diverse the sources, the more effective the information gathering. Gathering information from only neutral sources may seem on the surface to be the most effective strategy, but this can create a severe deficit of information. Information from sources in competition with or in opposition to the client organization should be included, as should information from sources that have a different political stance, moral compass, or divergent viewpoint. Gathering information from governmental organizations should include a wide variety of domestic and international sources. Information gatherers must evaluate the political purpose behind the information, its slant, and its reliability.

Unforeseen fraud events can be devastating to an organization, not just because they are catastrophic, but because they are unexpected and initially mysterious in nature. But like all events, if they can be better understood and anticipated, their effects can be managed and mitigated so they will not be as damaging to the organization.  The use of as many information sources as possible, including those internet based,  is key to assessing their risk and potential impact.

Zack is Back on Internal Investigations!

Our Chapter is looking forward with anticipation to our next two-day training event (May 17th and 18th) when we will again have Gerry Zack, one of the ACFE’s best speakers, presenting on the topic ‘Conducting Internal Investigations’.  Gerry was last with us several years ago, when he taught ‘Introduction to Fraud Examination’ to an overflow crowd; judging from the number of early registrations, it looks like this year’s event will be an attendance repeat!

One of the training event segments Gerry presented in detail last time dealt with related party transactions internal to the organization and some of the unique challenges they can pose for fraud examiners.  Such ethical lapses take the form of schemes where individuals who approve one or more transactions for their organizations also benefit personally from them.  Per the ACFE, the business processes most affected by such scenarios are the loan function, the sales function and corporate purchases.

Regarding loan schemes, the key risks fraud examiners should look for are:

— The provision of loans to senior management, other employees, or board members at below-market interest rates or under terms not available in the marketplace;
— Failure to disclose the related party nature of the loan;
— The client organization providing guarantees for private loans made by employees or board members.

In these scenarios, the favorable terms benefit the employee at the expense of the employing organization.  To identify undisclosed loans to senior management, board members, and employees, the CFE can use data analysis to compare the names on all notes receivable and accounts receivable with employee names from payroll records and board member names from board minutes. If a match occurs, the CFE should assess whether the related-party transaction was appropriately authorized and disclosed in the accounting records and financial statements.  Examiners can also search for undisclosed related-party loans by examining the interest rates, due dates, and collateral terms of notes receivable.  Notes receivable carrying zero or unusually low interest rates, lacking due dates, or secured by insufficient collateral may indicate related-party transactions.  The CFE can also examine advances made to customers or others who owe money to the client organization; organizations generally do not advance money to parties who owe them money unless a related-party relationship exists.
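The name-matching step described above can be sketched in a few lines of code. This is a purely hypothetical illustration, assuming invented record layouts and names; a real examination would work against actual payroll, board, and receivables extracts and would use more robust name matching.

```python
# Hypothetical sketch: flag notes receivable whose borrower name matches
# an employee (payroll) or board member name. All records are invented.

def normalize(name):
    """Lowercase and strip basic punctuation for a crude name comparison."""
    return " ".join(name.lower().replace(".", "").replace(",", "").split())

def flag_related_party_notes(notes, employees, board_members):
    """Return the notes whose borrower matches a known insider name."""
    insiders = {normalize(n) for n in employees} | {normalize(n) for n in board_members}
    return [note for note in notes if normalize(note["borrower"]) in insiders]

notes = [
    {"borrower": "Acme Supply Co.", "rate": 0.07},
    {"borrower": "Jane Q. Doe", "rate": 0.00},  # zero rate is itself a red flag
]
employees = ["Jane Q Doe", "John Smith"]
board_members = ["R. Roe"]

hits = flag_related_party_notes(notes, employees, board_members)
# Each hit warrants review of authorization and disclosure, per the text above.
```

In practice a match on a normalized name is only a lead, not a finding; the examiner still verifies authorization and disclosure for each flagged note.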

Gerry’s presentation on related-party sales pinpointed red flags such as employees:

— Selling products or services significantly below market price or providing beneficial sales terms that ordinarily would not be granted to arms-length customers.
— Inflating sales for bonuses or stock options by using related parties to perpetrate the scheme. Either the sale never actually took place because the goods were not shipped, or an obligation to repurchase the goods sold meant the sale was incomplete.
— Approving excessive sales allowances or returns as well as accounts receivable adjustments or write-offs for related parties.

To cover up the related-party transaction, employees may deny reviewers access to customers to impede them from acquiring evidence concerning the related-party relationship.  Where the CFE suspects related-party sales, s/he should perform analytical procedures to compare price variations among customers and identify those who pay significantly below the average sales price. Examiners can also attempt to identify any customer who pays prices that differ from the approved price sheet. Customer contracts can be analyzed directly for unusual rights of return, obligations to repurchase goods sold, and unusually extended repayment terms. Analytical procedures to identify customers with excessive returns, sales allowances, accounts receivable adjustments, or write-offs may also be performed; variances in any of these areas might indicate undisclosed related-party transactions. Gerry also pointed out that data analysis can be used to efficiently compare employee addresses, telephone numbers, tax identification numbers, and birth dates with customer addresses, telephone numbers, tax identification numbers, and company organization dates. When creating a shell company, many individuals use their own contact information for convenience and their own birth date as the organization date because it is easy to remember. Any matches could indicate a related-party association and should be investigated closely.
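The contact-data comparison just described can be sketched as follows. The field names and records here are invented for illustration; an actual examination would pull these fields from the client's HR and customer master files.

```python
# Hypothetical sketch: compare an employee's address, phone, and birth date
# against customer records to surface possible shell companies.

def shared_identifiers(employee, customer):
    """Return the identifying fields the two records have in common."""
    checks = [
        ("address", employee["address"] == customer["address"]),
        ("phone", employee["phone"] == customer["phone"]),
        ("date", employee["birth_date"] == customer["organization_date"]),
    ]
    return [field for field, same in checks if same]

employee = {"address": "12 Oak St", "phone": "555-0100", "birth_date": "1980-03-14"}
customers = [
    {"name": "Globex Ltd", "address": "9 Pine Ave", "phone": "555-0199",
     "organization_date": "2001-06-01"},
    {"name": "Shellco Inc", "address": "12 Oak St", "phone": "555-0100",
     "organization_date": "1980-03-14"},  # suspicious overlap on all three fields
]

matches = [(c["name"], shared_identifiers(employee, c))
           for c in customers if shared_identifiers(employee, c)]
```

A single shared field can be coincidence; overlap across several identifiers, as in the second record, is the pattern the text warns about and the one to investigate first.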

For related-party purchase schemes, some of the key red flags are:

— the company paying prices significantly above market for goods or services;
— the company receiving significantly below average quality goods or services that are purchased at market prices for high quality goods or services;
— the company never actually receiving the purchased goods or services.

CFEs should consider comparing cost variations among vendors to identify those whose costs significantly exceed the average. For identified variances, examiners should determine why the variations occurred and assess whether a related-party relationship exists. As with the examination steps for customers, it’s important to compare the employee’s address, telephone number, tax identification number, and birth date to vendors’ information to see whether a relationship exists. CFEs can also assess the use of sales intermediaries for products the company could purchase directly from the manufacturer at lower cost.
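The vendor cost comparison above can be sketched as a simple outlier test. The data and the 1.5x threshold are invented for illustration; a real analysis would set its threshold from the actual cost distribution and purchase volumes.

```python
# Hypothetical sketch: flag vendors whose average unit cost significantly
# exceeds the overall average across all purchases.

from statistics import mean

def outlier_vendors(purchases, threshold=1.5):
    """purchases: list of (vendor, unit_cost) pairs. Flag vendors whose
    mean cost exceeds `threshold` times the mean cost of all purchases."""
    overall = mean(cost for _, cost in purchases)
    by_vendor = {}
    for vendor, cost in purchases:
        by_vendor.setdefault(vendor, []).append(cost)
    return sorted(v for v, costs in by_vendor.items()
                  if mean(costs) > threshold * overall)

purchases = [("A Corp", 10.0), ("A Corp", 11.0), ("B LLC", 9.5), ("C Inc", 30.0)]
flagged = outlier_vendors(purchases)  # C Inc's cost far exceeds the average
```

As with the other procedures, a flagged vendor is a lead: the examiner then asks why the variance exists and whether a related-party relationship explains it.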

For the comprehensive review of all this information, Gerry stressed that the level and quality of client company documentation is critical.  In reviewing a client organization’s documentation, the CFE may find that the organization has no policies or procedures prohibiting related-party relationships or transactions without prior approval. The organization also may not provide training to employees on related-party relationships and transactions, or even require employees to certify whether they are involved in any conflicts of interest with the organization. CFEs should recommend, as a component of the fraud prevention program, that their client organization maintain written policies and procedures defining the process for obtaining approval for related-party relationships and transactions.

Key risks exist if:

— Written related-party policy and procedures are nonexistent or insufficient;
— Employees are not required to certify regularly whether they have a conflict of interest;
— Related-party transactions are not approved in accordance with established organizational policies and procedures;
— Related-party transactions are approved with exceptions to organizational policies and procedures.

The CFE should review approved related-party policies and procedures documentation. If related-party policies or procedures don’t exist, or if they don’t sufficiently mitigate the risk of unauthorized or inappropriate related-party relationships or transactions, the examiner should consult with senior management and the board, if necessary, to offer guidance on a proactive basis toward the development of such policies and procedures as a key fraud prevention measure.  The CFE should also review conflict of interest statements. If an employee documents a conflict of interest in his or her statement, the examiner should assess whether the conflict was appropriately authorized and whether the process recognizes and discloses conflicts of interest.

Related-party transactions are but a single topic of many to be covered by Gerry at our May event.  If you are called upon by your employer to investigate instances of fraud, waste, and abuse, both within your parent company and within related business affiliates, this is a seminar for you.  A well-run internal investigation can enhance an enterprise’s well-being and can help detect the source of lost funds, identify responsible parties, and recover losses. It can also provide a defense to legal charges by terminated or disgruntled employees. But perhaps most importantly, an internal investigation will signal to other employees that the company will not tolerate fraud. This seminar will prepare you for every step of an internal investigation into potential fraud, from receiving the initial allegation to testifying as a witness. Learn to lead an internal investigation with accuracy and confidence by gaining knowledge about key topics, such as relevant legal aspects of internal investigations, using computers in an investigation, collecting and analyzing internal information, interviewing witnesses, and writing reports.

There are only 70 training slots available and our seminars fill up fast!  If you are interested in this vital investigative topic, you can find the seminar agenda, venue information, speaker bio and registration information at http://rvacfes.com/events/conducting-internal-investigations/.

Beyond the Sniff Test

Many years ago, I worked with a senior auditor colleague (who was also an attorney) who was always talking about applying what he called “the sniff test” to any financial transaction that might represent an ethical challenge.   Philosophical theories provide the bases for useful practical decision approaches and aids like my friend’s sniff test, although we can expect that most of the executives and professional accountants we work with as CFEs are unaware of exactly how and why this is so. Most seasoned directors, executives, and professional accountants, however, have developed tests and commonly used rules of thumb that can be used to assess the ethicality of decisions on a preliminary basis. To their minds, if these preliminary tests give rise to concerns, a more thorough analysis should be performed using any number of defined approaches and techniques.

After having heard him use the term several times, I asked my friend if he could define it.  He thought about it that morning and later, over lunch, boiled it down to a series of questions he would ask himself:

–Would I be comfortable as a professional if this action or decision of my client were to appear on the front page of a national newspaper tomorrow morning?
–Will my client be proud of this decision tomorrow?
–Would my client’s mother be proud of this decision?
–Is this action or decision in accord with the client corporation’s mission and code?
–Does this whole thing, in all its apparent aspects and ramifications, feel right to me?

Unfortunately, for their application in actual practice, although sniff tests and commonly used rules are based on ethical principles and are often preliminarily useful, they rarely, by themselves, represent a sufficiently comprehensive examination of the decision in question and so can leave the individuals and client corporations involved vulnerable to making unethical decisions.  For this reason, more comprehensive techniques involving the impact on client stakeholders should be employed whenever a proposed decision is questionable or likely to have significant consequences.

The ACFE tells us that many individual decision makers still don’t recognize the importance of stakeholders’ expectations of rightful conduct. If they did, the decisions made by corporate executives and by accountants and lawyers involved in Enron, Arthur Andersen, WorldCom, Tyco, Adelphia, and a whole host of others right up to the present day might have avoided the personal and organizational tragedies that occurred. Some executives were motivated by greed rather than by enlightened self-interest focused on the good of all. Others went along with unethical decisions because they did not recognize that they were expected to behave differently and had a duty to do so. Some reasoned that because everyone else was doing something similar, how could it be wrong? The point is that they forgot to consider sufficiently the ethical practice (and duties) they were expected to demonstrate. Where a fiduciary duty was owed to future shareholders and other stakeholders, the public and personal virtues expected (character traits such as integrity, professionalism, courage, and so on) were not sufficiently considered. In retrospect, it would have been wise to include the assessment of ethical expectations as a separate step in any Enterprise Risk Management (ERM) process to strengthen governance and risk management systems and guard against unethical, short-sighted decisions.

It’s also evident that employees who continually make decisions for the wrong reasons, even if the right consequences result, can represent a high governance risk.  Many examples exist where executives motivated solely by greed have slipped into unethical practices, and others have been misled by faulty incentive systems. Sears Auto Center managers sold repair services that customers did not need in order to increase their own commissions, ultimately causing the company to lose reputation and future revenue.  Many of the classic financial scandals of recent memory were caused by executives who sought to manipulate company profits to support or inflate the company’s share price to boost their own stock option gains. Motivation based too narrowly on self-interest can result in unethical decisions when proper self-guidance and/or external monitoring is lacking. Because external monitoring is unlikely to capture all decisions before implementation, it is important for all employees to clearly understand the broad motivation that will lead to their own and their organization’s best interest from a stakeholder perspective.

Consequently, decision makers should take the motivations and behavior expected by stakeholders specifically into account in any comprehensive ERM approach, and organizations should hold employees accountable to those expectations through governance mechanisms. Several aspects of ethical behavior have been identified as being indicative of mens rea (a guilty mind).  If personal or corporate behavior does not meet stakeholder ethical expectations, there will probably be a negative impact on reputation and on the ability to reach strategic objectives on a sustained basis in the medium and long term.

The stakeholder impact assessment broadens the criteria of the preliminary sniff test by offering an opportunity to assess the motivations that underlie the proposed decision or action. Although it is unlikely that an observer will be able to know with precision the real motivations that go through a decision maker’s mind, it is quite possible to project the perceptions that stakeholders will have of the action. In the minds of stakeholders, perceptions will determine reputational impacts whether those perceptions are correct or not. Moreover, it is possible to infer from remuneration and other motivational systems in place whether the decision maker’s motivation is likely to be ethical or not. To ensure a comprehensive ERM approach, in addition to projecting perceptions and evaluating motivational systems, the decisions or actions should be challenged by asking such questions as:

Does the decision or action involve and exhibit the integrity, fairness, and courage expected? Alternatively, does the decision or action involve and exhibit the motivation, virtues, and character expected?

Beyond the simple sniff test, stakeholder impact analysis offers a formal way of bringing into a decision the needs of an organization and its individual constituents (society). Trade-offs are difficult to make, and can benefit from such advances in technique. It is important not to lose sight of the fact that the concepts of stakeholder impact analysis need to be applied together as a set, not as stand-alone techniques. Only then will a comprehensive analysis be achieved and an ethical decision made.

Depending on the nature of the decision to be faced, and the range of stakeholders to be affected, a proper analysis could be based on any of the historical approaches to ethical decision making as elaborated by ACFE training and discussed so often in this blog.  A professional CFE can use stakeholder analysis in making decisions about financial fraud investigations, fraud related accounting issues, auditing procedures, and general practice matters, and should be ready to prepare or assist in such analyses for employers or clients just as is currently the case in other areas of fraud examination. Although many hard-numbers-oriented executives and accountants will be wary of becoming involved with the “soft” subjective analysis that typifies stakeholder and ethical expectations analysis, they should bear in mind that the world is changing to put a much higher value on non-numerical information. They should be wary of placing too much weight on numerical analysis lest they fall into the trap of the economist, who, as Oscar Wilde put it: “knew the price of everything and the value of nothing.”