
New Rules for New Tools

I’ve been struck these last months by several articles in the trade press about CFEs increasingly applying advanced analytical techniques in support of their work as full-time employees of private and public-sector enterprises. This is gratifying to learn because CFEs have been bombarded for some time now with warnings about the risks presented by cloud computing, social media, big data analytics, and mobile devices, and told they need to address those risks in their investigative practice. Now there is mounting evidence of CFEs doing just that: using these new technologies to change the actual practice of fraud investigation and forensic accounting, shaping how they understand and monitor fraud risk, plan and manage their work, test transactions against fraud scenarios, and report the results of their assessments and investigations to management. This demonstrates what we’ve all known: CFEs, especially those dually certified as CPAs, CIAs, or CISAs, can bring a unique mix of leveraged skills to any employer’s fraud prevention or detection program.

Some examples …

Social Media — Following a fraud involving several of the financial consultants who work in its branches and help customers select accounts and other investments, a large multi-state bank asked a staff CFE to identify ways of spotting disgruntled employees who might be prone to fraud. The effort mattered to management not only for fraud prevention but because when the bank lost an experienced financial consultant for any reason, it also lost the relationships that individual had established with the bank’s customers, affecting revenue adversely. The staff CFE suggested that the bank use social media analytics software to mine employees’ email and posts to its internal social media groups. That enabled the bank to identify (reportedly with about 33 percent accuracy) the financial consultants who were not currently satisfied with their jobs and were considering leaving. Management was able to talk individually with these employees and address their concerns, retaining many of them and leaving them less likely to express their frustration through ethically questionable behavior. Our CFE’s awareness that many organizations use social media analytics to monitor what their customers say about them, their products, and their services (a technique often referred to as sentiment analysis or text analytics) allowed her to suggest an approach that delivered value. This text analytics effort also gave the employer the experience to develop routines that identify email and other employee and customer chatter that might be red flags for future fraud or intrusion attempts.
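
The idea behind such a screen can be illustrated with a deliberately minimal, lexicon-based sketch. Real sentiment-analysis platforms use trained language models and far richer dictionaries; the word lists, field names, and threshold below are illustrative assumptions, not the bank’s actual tooling.

```python
# Minimal lexicon-based sentiment screen for internal messages.
# A sketch only: production text-analytics tools use trained models;
# the word lists and threshold here are purely illustrative.

NEGATIVE = {"frustrated", "unfair", "quit", "leaving", "ignored", "underpaid"}
POSITIVE = {"great", "happy", "thanks", "proud", "excited"}

def sentiment_score(text: str) -> int:
    """Net negative word count: higher means more apparent dissatisfaction."""
    words = [w.strip(".,!?;:'\"").lower() for w in text.split()]
    return sum(w in NEGATIVE for w in words) - sum(w in POSITIVE for w in words)

def flag_messages(messages, threshold=2):
    """Return (author, score) pairs whose net negativity meets the threshold."""
    flagged = []
    for author, text in messages:
        score = sentiment_score(text)
        if score >= threshold:
            flagged.append((author, score))
    return flagged

msgs = [
    ("consultant_a", "Feeling ignored and underpaid; thinking about leaving."),
    ("consultant_b", "Great quarter, proud of the team!"),
]
print(flag_messages(msgs))  # [('consultant_a', 3)]
```

In practice the flagged output would feed a human review step, not an automated HR action; the value of the technique lies in surfacing candidates for a conversation.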

Analytics — A large international bank was concerned about potential money laundering, especially because regulators were not satisfied with the quality of its related internal controls. At a CFE employee’s recommendation, it invested in state-of-the-art business intelligence solutions that run “in-memory,” a technique that can make analytics and other software run dramatically (reportedly up to 300,000 times) faster, to monitor 100 percent of its transactions for patterns and fraud scenarios indicating potential problems.

Mobile — In the wake of an identified fraud on which he worked, an employed CFE recommended that a global software company upgrade its enterprise fraud risk management system so senior managers could view real-time strategy and risk dashboards on their mobile devices (tablets and smartphones). The executives can monitor risks to both corporate and personal objectives and strategies and take corrective action as necessary. In addition, when a risk level rises above a defined target, the managers and the risk officer receive an alert.

Collaboration — The fraud prevention and information security team at a U.S. company wanted to increase employee acceptance of, and compliance with, its fraud prevention and information security policy. The CFE-certified security officer posted a new policy draft to a collaboration area available to every employee and encouraged comments and suggestions for improving it. Through this crowd-sourcing technique, the company received many comments and ideas, a number of which were incorporated into the draft. When the completed policy was published, the company found that acceptance increased significantly because employees felt part ownership of it.

As these examples demonstrate, there is a wonderful opportunity for CFEs employed in the private and public sectors to use enterprise applications to enhance both their own and their employer’s investigative efficiency and effectiveness. Since their organizations are already investing heavily in a wide variety of innovative technologies to transform the way they deliver products to and communicate with customers, as well as how they operate, manage, and direct the business, there is no reason CFEs can’t use these same tools to transform each stage of their examination and fraud prevention work.

A risk-based fraud prevention approach requires staff CFEs to build and maintain a fraud prevention plan that addresses the risks that matter to the organization, and then update that plan as risks change. In these turbulent times, dominated by cyber, risks change frequently, and it’s essential that fraud prevention teams understand the changes and continuously update their approach for addressing them. This requires monitoring to identify and assess both new risks and changes in previously identified risks.

Some of the recent technologies used by organizations’ financial and operational analysts, marketing and communications professionals, and others to understand changes both within and outside the business can also be used to great advantage by loss prevention staff for risk monitoring. The benefits of leveraging this same software are that the organization already has experts in place to teach CFEs how to use it, the IT department is already providing technical support, and the software already runs against the very data enterprise fraud prevention professionals like staff CFEs want to analyze.

A range of enhanced analytics software, such as business intelligence, analytics (including predictive and mobile analytics), visual intelligence, sentiment analysis, and text analytics, enables fraud prevention teams to monitor and assess risk levels. In some cases, the software monitors transactions against predefined rules to identify potential concerns, such as heightened fraud risks in a given business process or set of business processes (the inventory or financial cycles). For example, a loss prevention team headed by a staff CFE can monitor credit memos in the first month of each quarter to detect potential revenue accounting fraud. Another use is to identify trends associated with known fraud scenarios, such as changes in profit margins or the level of employee turnover, that might indicate changes in risk levels.
For example, the level of emergency changes to enterprise applications can be analyzed to identify a heightened risk of poor testing and implementation protocols associated with a higher vulnerability to cyber penetration.
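
The credit memo rule above is easy to picture as code. The sketch below assumes a simple transaction dictionary and a calendar fiscal year; real monitoring would run inside a business intelligence platform against the general ledger, and the field names here are hypothetical.

```python
# Sketch of a rules-based monitor: flag credit memos booked in the
# first month of a fiscal quarter (months 1, 4, 7, 10), a pattern
# sometimes associated with reversing prior-quarter revenue inflation.
# Field names and the fiscal calendar are illustrative assumptions.
from datetime import date

def first_month_of_quarter(d: date) -> bool:
    return d.month in (1, 4, 7, 10)

def flag_credit_memos(transactions):
    """Return credit memos posted in a quarter's opening month."""
    return [t for t in transactions
            if t["type"] == "credit_memo" and first_month_of_quarter(t["date"])]

txns = [
    {"id": 101, "type": "credit_memo", "date": date(2024, 4, 3), "amount": 9_800},
    {"id": 102, "type": "invoice",     "date": date(2024, 4, 5), "amount": 12_000},
    {"id": 103, "type": "credit_memo", "date": date(2024, 5, 9), "amount": 4_200},
]
print([t["id"] for t in flag_credit_memos(txns)])  # [101]
```

A flagged memo is not evidence of fraud by itself; the rule simply narrows the population an examiner reviews by hand.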

Finally, innovative staff CFEs have used some interesting techniques to report fraud risk assessments and examination results to management and to boards. Some have adopted a more visually appealing representation in a one-page assessment report; others have moved from the traditional text presentation of Microsoft Word to the more visual capabilities of PowerPoint. New visualization technology, sometimes called visual analytics when allied with analytics solutions, provides more options for fraud prevention managers seeking to enhance or replace formal reports with pictures, charts, and dashboards. The executives and boards of their employing organizations are already managing their enterprises with dashboards and trend charts; effective loss prevention communications can make use of the same techniques. One CFE used charts and trend lines to illustrate how the time her employing company was taking to process small vendor contracts far exceeded acceptable levels, had contributed to fraud risk, and was continuing to increase. The graphic, generated by a business intelligence analysis combined with a visual analytics tool to build the chart, was inserted into a standard monthly loss prevention report.

CFE-headed loss prevention departments and their allied internal audit and IT departments have a rich selection of technologies that can be used individually or in combination to make them all more effective and efficient. It is questionable whether these three functions can remain relevant in an age of cyber, addressing and providing assurance on the risks that matter to the organization, without ever wider use of modern technology. Technology can enable the internal CFE to understand the changing business environment and the risks that can affect the organization’s ability to achieve its fraud prevention related objectives.

The world and its risks are evolving and changing all the time, and assurance professionals need to address the issues that matter now. CFEs need to review where the risk is going to be, not where it was when the anti-fraud plan was built. They increasingly need to have the ability to assess cyber fraud risk quickly and to share the results with the board and management in ways that communicate assurance and stimulate necessary change.

Technology must be part of the solution to that need. The technological tools CFEs currently use will continue to improve and will be joined by others over time. For example, solutions for augmented or virtual reality, in which a picture or view of the physical world is overlaid with data about it, will enable loss prevention professionals to point their phones at a warehouse and immediately access operational, personnel, safety, and other useful information. The future is a compound of both challenge and opportunity.

Threat Assessment & Cyber Security

One rainy Richmond evening last week I attended the monthly dinner meeting of one of the professional organizations of which I’m a member. Our guest speaker’s presentation was outstanding and, in my opinion, well worth sharing with fellow CFEs, especially as we find more and more of our clients grappling with the reality of ever-evolving cyber threats.

Our speaker started by indicating that, according to a wide spectrum of current thinking, technology issues in isolation should be but one facet of the overall cyber defense strategy of any enterprise. A holistic view on people, process and technology is required in any organization that wants to make its chosen defense strategy successful and, to be most successful, that strategy needs to be supplemented with a good dose of common sense creative thinking. That creative thinking proved to be the main subject of her talk.

Ironically, the sheer size, complexity and geopolitical diversity of the modern-day enterprise can constitute an inherent obstacle for its goal of achieving business objectives in a secured environment.  The source of the problem is not simply the cyber threats themselves, but threat agents. The term “threat agent,” from the Open Web Application Security Project (OWASP), is used to indicate an individual or group that can manifest a threat. Threat agents are represented by the phenomena of:

–Hacktivism;
–Corporate Espionage;
–Government Actors;
–Terrorists;
–Common Criminals (individual and organized).

Irrespective of the type of threat, the threat agent takes advantage of an identified vulnerability and exploits it in the attempt to negatively impact the value the individual business has at risk. The attempt to execute the threat in combination with the vulnerability is called hacking. When this attempt is successful, and the threat agent can negatively impact the value at risk, it can be concluded that the vulnerability was successfully exploited. So, essentially, enterprises are trying to defend against hacking and, more importantly, against the threat agent that is the hacker in his or her many guises. The ACFE identifies hacking as the single activity that has resulted in the greatest number of cyber breaches in the past decade.

While there is no one-size-fits-all standard to build and run a sustainable security defense in a generic enterprise context, most companies currently deploy something resembling the individual components of the following general framework:

–Business Drivers and Objectives;
–A Risk Strategy;
–Policies and Standards;
–Risk Identification and Asset Profiling;
–People, Process, Technology;
–Security Operations and Capabilities;
–Compliance Monitoring and Reporting.

Most IT risk and security professionals would be able to identify this framework and agree with the assertion that it’s a sustainable approach to managing an enterprise’s security landscape. Our speaker pointed out, however, that in her opinion, if the current framework were indeed working as intended, the number of security incidents would be expected to show a downward trend as most threats would fail to manifest into full-blown incidents. They could then be routinely identified by enterprises as known security problems and dealt with by the procedures operative in day-to-day security operations. Unfortunately for the existing framework, however, recent security surveys conducted by numerous organizations and trade groups clearly show an upward trend of rising security incidents and breaches (as every reader of daily press reports well knows).

The rising tide of security incidents and breaches is not surprising, since the trade press also reports an average of 35 new, major security failures every day of the year. Couple this with the ease of execution and ready availability of exploit kits on the Dark Web, and the threat grows in both probability of exploitation and magnitude of impact. With speed and intensity, each threat strikes the security structure of an enterprise and whittles away at management’s credibility to deal with the threat under the routine, daily operational regimen presently defined. Hence, most affected enterprises endure a growing trend of negative security incidents experienced and reported.

During the last several years, in response to all this, many firms have responded by experimenting with a new approach to the existing paradigm. These organizations have implemented emergency response teams to respond to cyber-threats and incidents. These teams are a novel addition to the existing control structure and have two main functions: real-time response to security incidents and the collection of concurrent internal and external security intelligence to feed predictive analysis. Being able to respond to security incidents via a dedicated response team boosts the capacity of the operational organization to contain and recover from attacks. Responding to incidents, however efficiently, is, in any case, a reactive approach to deal with cyber-threats but isn’t the whole story. This is where cyber-threat intelligence comes into play. Threat intelligence is a more proactive means of enabling an organization to predict incidents. However, this approach also has a downside. The influx of a great deal of intelligence information may limit the ability of the company to render it actionable on a timely basis.

Cyber threat assessments are an effective means to tame what can be this overwhelming influx of intelligence information. Cyber threat assessment is currently recognized in the industry as red teaming, which is the practice of viewing a problem from an adversary’s or competitor’s perspective. As part of an IT security strategy, enterprises can use red teams to test the effectiveness of the security structure as a whole and to provide a relevance factor to the intelligence feeds on cyber threats. This can help CEOs decide which threats are relevant and have higher exposure levels than others. The evolution of cyber threat response, cyber threat intelligence and cyber threat assessment (red teams), in conjunction with the existing IT risk framework, can be used as an effective strategy to counter the agility of evolving cyber threats. The cyber threat assessment process assesses and challenges the structure of existing enterprise security systems, including designs, operational-level controls and the overall cyber threat response and intelligence process, to ensure they remain capable of defending against current relevant exploits.

Cyber threat assessment exercises can also be extremely helpful in highlighting the most relevant attacks and in quantifying their potential impacts. The word “adversary” in the definition of the term ‘red team’ is key in that it emphasizes the need to independently challenge the security structure from the viewpoint of an attacker. Red team exercises should be designed to be independent of the scope, asset profiling, security, IT operations and coverage of existing security policies. Only then can an enterprise realistically apply the attacker’s perspective, measure the success of its risk strategy and see how it performs when challenged. It’s essential that red team exercises have the freedom to test the complete security structure and to point to flaws in all components of the IT risk framework.

It’s a common notion that a red team exercise is a penetration test. This is not the case. Red teams use penetration test techniques as a means to identify the information required to replicate cyber threats and to create a controlled security incident. The technical shortfalls identified during standard penetration testing are mere symptoms of gaps that may exist in the governance of people, processes and technology. Hence, to make the organization more resilient against cyber threats, red team focus should be kept on addressing root causes and not merely on fixing the security flaws discovered during the exercise. Another key point is to include cyber threat response and threat monitoring in the scope of such assessments. This demands that red team exercises be executed, and partially announced, with CEO-level approval, ensuring that the exercise challenges the end-to-end capability of the enterprise to cope with a real-time security incident. Lessons learned from red teaming can be documented to improve the overall security posture of the organization and as an aid in dealing with future threats.

Our speaker concluded by saying that as cyber threats evolve, one-hundred percent security for an active business is impossible to achieve. Business is about making optimum use of existing resources to derive the desired value for stakeholders. Cyber-defense cannot be an exception to this rule. To achieve optimized use of their security investments, CEOs should ensure that security spending for their organization is mapped to the real, emerging cyber threat landscape. Red teaming is an effective tool to challenge the status quo of an enterprise’s security framework and to make informed judgements about the actual condition of its security posture today. Not only can the judgements resulting from red team exercises be used to improve cyber threat defense, they can also prove an effective mechanism for guiding a higher return on cyber-defense investment.

Sock Puppets

The issue of falsely claimed identity in all its myriad forms has shadowed the Internet since the beginning of the medium.  Anyone who has used an on-line dating or auction site is all too familiar with the problem; anyone can claim to be anyone.  Likewise, confidence games, on or off-line, involve a range of fraudulent conduct committed by professional con artists against unsuspecting victims. The victims can be organizations, but more commonly are individuals. Con artists have classically acted alone, but now, especially on the Internet, they usually group together in criminal organizations for increasingly complex criminal endeavors. Con artists are skilled marketers who can develop effective marketing strategies, which include a target audience and an appropriate marketing plan: crafting promotions, product, price, and place to lure their victims. Victimization is achieved when this marketing strategy is successful. And falsely claimed identities are always an integral component of such schemes, especially those carried out on-line.

Such marketing strategies generally involve a specific target market, which is usually made up of affinity groups consisting of individuals grouped around an objective, bond, or association like Facebook or LinkedIn Group users. Affinity groups may, therefore, include those associated through age, gender, religion, social status, geographic location, business or industry, hobbies or activities, or professional status. Perpetrators gain their victims’ trust by affiliating themselves with these groups.  Historically, various mediums of communication have been initially used to lure the victim. In most cases, today’s fraudulent schemes begin with an offer or invitation to connect through the Internet or social network, but the invitation can come by mail, telephone, newspapers and magazines, television, radio, or door-to-door channels.

Once the mark receives and accepts the offer to connect, some sort of response or acceptance is requested. The response will typically include (in the case of Facebook or LinkedIn) clicking on a link included in a fraudulent follow-up post to visit a specified web site or to call a toll-free number.

According to one of Facebook’s own annual reports, up to 11.2 percent of its accounts are fake. Considering the world’s largest social media company has 1.3 billion users, that means up to 140 million Facebook accounts are fraudulent; these users simply don’t exist. With 140 million inhabitants, the fake population of Facebook would be the tenth-largest country in the world. Just as Nielsen ratings on television sets determine different advertising rates for one television program versus another, on-line ad sales are determined by how many eyeballs a Web site or social media service can command.

Let’s say a shyster wants 3,000 followers on Twitter to boost the credibility of her scheme? They can be hers for $5. Say she wants 10,000 satisfied customers on Facebook for the same reason? No problem: she can buy them on several websites for around $1,500. A million new friends on Instagram can be had for only $3,700. Whether the fraudster wants favorites, likes, retweets, up votes, or page views, all are for sale on Web sites like Swenzy, Fiverr, and Craigslist. These fraudulent social media accounts can then be freely used to falsely endorse a product, service, or company, all for just a small fee. Most of the work of fake account setup is carried out in the developing world, in places such as India and Bangladesh, where actual humans may control the accounts. In other locales, such as Russia, Ukraine, and Romania, the entire process has been scripted by computer bots, programs that carry out pre-encoded automated instructions, such as “click the Like button,” repeatedly, each time using a different fake persona.

Just as horror movie shape-shifters can physically transform themselves from one being into another, these modern screen shifters have their own magical powers, and organized groups are eager to employ them, studying their techniques and deploying them against easy marks for massive profit. In fact, many of these clicks are done for the purposes of “click fraud.” Businesses pay companies such as Facebook and Google every time a potential customer clicks on one of the ubiquitous banner ads or links online, but organized crime groups have figured out how to game the system to drive profits their way via so-called ad networks, which capitalize on all those extra clicks.

Painfully aware of this, social media companies have attempted to cut back on the number of fake profiles. As a result, thousands and thousands of identities have disappeared overnight from among the followers of many well-known celebrities and popular websites. If Facebook has 140 million fake profiles, there is no way they could have been created manually one by one. The process of creation is called sock puppetry, a reference to the children’s toy puppet created when a hand is inserted into a sock to bring the sock to life. In the online world, organized crime groups create sock puppets by combining computer scripting, web automation, and social networks to create legions of online personas. This can be done easily and cheaply enough to allow those with deceptive intentions to create hundreds of thousands of fake online citizens. One only needs to consult a readily available on-line directory of the most common names in any country or region. Have a scripted bot merely pick a first name and a last name, then choose a date of birth, and let the bot sign up for a free e-mail account. Next, scrape on-line photo sites such as Picasa, Instagram, Facebook, Google, and Flickr to choose an age-appropriate image to represent the new sock puppet.

Armed with an e-mail address, name, date of birth, and photograph, the fraudster signs the fake persona up for an account on Facebook, LinkedIn, Twitter, or Instagram. As a last step, the puppets are taught to talk by scripting them to reach out and send friend requests, repost other people’s tweets, and randomly like things they see online. The bots can even communicate and cross-post with one another. Before long, the fraudster has thousands of sock puppets at his or her disposal. It is these armies of sock puppets that criminals use as key constituents in their phishing attacks, to fake on-line reviews, to trick users into downloading spyware, and to commit a wide variety of financial frauds, all based on misplaced and falsely claimed identity.
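
Detection works by turning the sock puppet’s scripted behavior against it. The sketch below scores accounts on a few behavioral signals commonly cited as bot indicators; the thresholds, field names, and weights are illustrative assumptions, not any platform’s actual criteria.

```python
# Heuristic screen for bot-like accounts. Thresholds and fields are
# illustrative assumptions, not any platform's real detection logic.
def bot_score(acct):
    score = 0
    if acct["followers"] < 10 and acct["following"] > 1000:
        score += 2                      # follow-spam pattern
    if acct["account_age_days"] < 30 and acct["posts_per_day"] > 50:
        score += 2                      # brand-new account, scripted volume
    if acct["profile_photo_reused"]:
        score += 1                      # image scraped from another site
    return score

accounts = [
    {"name": "jsmith84", "followers": 3, "following": 4200,
     "account_age_days": 12, "posts_per_day": 90, "profile_photo_reused": True},
    {"name": "mei_chen", "followers": 240, "following": 310,
     "account_age_days": 900, "posts_per_day": 2, "profile_photo_reused": False},
]
suspect = [a["name"] for a in accounts if bot_score(a) >= 3]
print(suspect)  # ['jsmith84']
```

Real platforms combine hundreds of such signals with machine-learned models, but the underlying logic is the same: scripted accounts behave unlike humans in measurable ways.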

The fraudster’s environment has changed and is changing over time, from a face-to-face physical encounter to an anonymous on-line encounter in the comfort of the victim’s own home. While some consumers are unaware that a weapon is virtually right in front of them, others are victims who struggle to balance the many wonderful benefits offered by advanced technology against the painful effects of its consequences. The goal of law enforcement has not changed over the years: to block the roads and close the loopholes that perpetrators exploit, even as perpetrators continue to seek yet another avenue for committing fraud in an environment in which they can thrive. Today, the challenge for CFEs, law enforcement and government officials is to stay on the cutting edge of technology, which requires access to constantly updated resources and communication between organizations; the ability to gather information; and the capacity to identify and analyze trends, institute effective policies, and detect and deter fraud through restitution and prevention measures.

Now is the time for CFEs and other assurance professionals to continuously reevaluate all we take for granted in the modern technical world and to increasingly question our ever-growing dependence on the whole range of ubiquitous machines whose potential to facilitate fraud so few of our clients and the general public understand.

Bye-Bye Money

Miranda had responsibility for preparing personnel files for new hires, approval of wages, verification of time cards, and distribution of payroll checks. She “hired” fictitious employees, faked their records, and ordered checks through the payroll system. She deposited some checks in several personal bank accounts and cashed others, endorsing all of them with the names of the fictitious employees and her own. Her company’s payroll function created a large paper trail of transactions among which were individual earnings records, W-2 tax forms, payroll deductions for taxes and insurance, and Form 941 payroll tax reports. She mailed all the W-2 forms to the same post office box.

Miranda stole $160,000 by creating some “ghosts,” usually 3 to 5 out of 112 people on the payroll and paying them an average of $650 per week for three years. Sometimes the ghosts quit and were later replaced by others. But she stole “only” about 2 percent of the payroll funds during the period.

A tip from a fellow employee received by the company hotline resulted in the engagement of Tom Hudson, CFE. Tom’s objective was to obtain evidence of the existence and validity of payroll transactions on the control premise that different people should be responsible for hiring (preparing personnel files), approving wages, and distributing payroll checks. “Thinking like a crook” led Tom to see readily that Miranda could put people on the payroll and obtain their checks, just as the hotline caller alleged. In his test of controls, Tom audited for transaction authorization and validity. In this case random sampling was less likely to work because of the small number of alleged ghosts. So Tom looked for the obvious: he selected several weeks’ check blocks, accounted for numerical sequence (to see whether any checks had been removed), and examined canceled checks for two endorsements.

Tom reasoned that there might be no “balance” to audit for existence/occurrence, other than the accumulated total of payroll transactions, and that the total might not appear out of line with history because the tipster had indicated that the fraud was small in relation to total payroll and had been going on for years. He decided to conduct a surprise payroll distribution, then follow up by examining prior canceled checks for any missing employees and scanning personnel files for common addresses.

Both the surprise distribution and the scan for common addresses quickly turned up two or three exceptions. Both led to prior canceled checks (which Miranda had not removed and the bank reconciler had not noticed) that carried Miranda’s own name as endorser. Confronted, she confessed.
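
The common-address scan generalizes well to any payroll dataset. The sketch below groups payroll records by a chosen field (mailing address, deposit account) and reports values shared by two or more employees; the record structure is an illustrative assumption, not the format of any particular payroll system.

```python
# Sketch of the analytical step behind the common-address scan: group
# payroll records by a field and surface values shared by multiple
# employees, a classic ghost-employee red flag. Fields are illustrative.
from collections import defaultdict

def shared_values(records, field):
    """Map each value of `field` to the employees sharing it (2+ only)."""
    groups = defaultdict(list)
    for r in records:
        groups[r[field]].append(r["employee"])
    return {v: names for v, names in groups.items() if len(names) > 1}

payroll = [
    {"employee": "A. Real",  "address": "12 Oak St", "account": "111-222"},
    {"employee": "B. Ghost", "address": "PO Box 99", "account": "333-444"},
    {"employee": "C. Ghost", "address": "PO Box 99", "account": "333-444"},
]
print(shared_values(payroll, "address"))  # {'PO Box 99': ['B. Ghost', 'C. Ghost']}
print(shared_values(payroll, "account"))  # {'333-444': ['B. Ghost', 'C. Ghost']}
```

Shared addresses have innocent explanations (married couples, shared housing), so each hit still needs the kind of follow-up Tom performed against the canceled checks.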

The major risks in any payroll business cycle are:

• Paying fictitious “employees” (invalid transactions, employees do not exist);

• Overpaying for time or production (inaccurate transactions, improper valuation);

• Incorrect accounting for costs and expenses (incorrect classification, improper or inconsistent presentation and disclosure).

The assessment of payroll system control risk normally takes on added importance because most companies have fairly elaborate and well-controlled personnel and payroll functions. The transactions in this cycle are numerous during the year yet result in lesser amounts in balance sheet accounts at year-end. Therefore, in most routine outside auditor engagements, the review of controls, test of controls and audit of transaction details constitute the major portion of the evidence gathered for these accounts. On most annual audits, the substantive audit procedures devoted to auditing the payroll-related account balances are very limited which enhances fraud risk.

Control procedures for proper segregation of responsibilities should be in place and operating. Proper segregation involves authorization (personnel department hiring and firing, pay rate and deduction authorizations) by persons who do not have payroll preparation, paycheck distribution, or reconciliation duties. Payroll distribution (custody) is in the hands of persons who do not authorize employees’ pay rates or time, nor prepare the payroll checks. Recordkeeping is performed by payroll and cost accounting personnel who do not make authorizations or distribute pay. Combining two or more of the duties of authorization, payroll preparation and recordkeeping, and payroll distribution in one person, one office, or one computerized system may open the door to errors and frauds. In addition, the control system should provide for detailed control-checking activities. For example: (1) periodic comparison of the payroll register to the personnel department files to check hiring authorizations and for terminated employees not deleted, (2) periodic rechecking of wage rate and deduction authorizations, (3) reconciliation of time and production paid to cost accounting calculations, (4) quarterly reconciliation of YTD earnings records with tax returns, and (5) payroll bank account reconciliation.
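
Check (1) above is essentially a set comparison, and a minimal sketch makes the logic concrete. The employee IDs and the two-status personnel file below are illustrative assumptions; a real reconciliation would join the payroll register to the HR master file on employee ID.

```python
# Sketch of detail check (1): reconcile the payroll register against
# personnel files to catch unauthorized hires and terminated employees
# still being paid. IDs and file structures are illustrative.
def reconcile(payroll_ids, personnel):
    """personnel maps employee_id -> 'active' or 'terminated'."""
    not_on_file = [e for e in payroll_ids if e not in personnel]
    terminated_paid = [e for e in payroll_ids
                       if personnel.get(e) == "terminated"]
    return not_on_file, terminated_paid

personnel = {"E001": "active", "E002": "terminated", "E003": "active"}
payroll_ids = ["E001", "E002", "E004"]        # E004 has no personnel file
print(reconcile(payroll_ids, personnel))      # (['E004'], ['E002'])
```

An ID on the payroll with no personnel file is the exact signature of Miranda’s ghosts; an ID flagged as terminated but still paid points to a stale master file or a deliberate scheme.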

Payroll can amount to 40 percent or more of an organization’s total annual expenditures. Payroll taxes, Social Security, Medicare, pensions, and health insurance can add several percentage points in variable costs on top of wages. So, for every payroll dollar saved through forensic identification, bonus savings arise automatically from the on-top costs calculated on base wages. Different industries will exhibit different payroll risk profiles. For example, firms whose culture involves salaried employees who work longer hours may have a lower risk of payroll fraud and may not warrant a full forensic approach. Organizations may present greater opportunity for payroll fraud if their workforce patterns entail night shift work, variable shifts or hours, 24/7 on-call coverage, and employees who are mobile, unsupervised, or work across multiple locations. Payroll-related risks include over-claimed allowances, overused extra pay for weekend or public holiday work, fictitious overtime, vacation and sick leave taken but not deducted from leave balances, continued payment of employees who have left the organization, ghost employees arising from poor segregation of duties, the vulnerability of the payment data transmitted to the bank, and roster dysfunction. Yet the personnel assigned to administer the complexities of payroll are often qualified more by experience than by formal finance, legal, or systems training, thereby creating a competency bias in how payroll is managed. On top of that, payroll is normally shrouded in secrecy because of the inherently private nature of employee and executive pay. Underpayment errors are more likely to be corrected than overpayment errors, because affected employees complain when they are underpaid; overpayments are far less likely to be reported. These systemic biases further increase the risk of unnoticed payroll error and fraud.
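The on-top cost effect described above is simple arithmetic; the 25 percent on-cost rate below is an assumed figure for illustration only, not a benchmark:

```python
# Every dollar of base wages recovered also recovers the on-top costs
# (payroll taxes, pension, insurance) calculated on those wages.
base_wage_savings = 10_000.00   # wages recovered via forensic review
on_cost_rate = 0.25             # assumed on-top cost rate for illustration
total_savings = base_wage_savings * (1 + on_cost_rate)
print(total_savings)            # 12500.0
```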

Payroll data analysis can reveal individuals or entire teams who are unusually well-remunerated because team supervisors turn a blind eye to payroll malpractice, as well as low-remunerated personnel who represent excellent value to the organization. For example, it can identify the night shift worker who is paid extra for weekend or holiday work plus overtime while actually working only half the contracted hours, or workers who claim higher duty or tool allowances to which they are not entitled. In addition to providing management with new insights into payroll behaviors, which may in turn become part of ongoing management reporting, the total payroll cost distribution analysis can point forensic accountants toward urgent payroll control improvements.

The detail inside payroll and personnel databases can reveal hidden information to the forensic examiner. Who are the highest earners of overtime pay and why? Which employees gained the most from weekend and public holiday pay? Who consistently starts late? Finishes early? Who has the most sick leave? Although most employees may perform a fair day’s work, the forensic analysis may point to those who work less, sometimes considerably less, than the time for which they are paid. Joined-up query combinations to search payroll and human resources data can generate powerful insights into the organization’s worst and best outliers, which may be overlooked by the data custodians. An example of a query combination would be: employees with high sick leave + high overtime + low performance appraisal scores + negative disciplinary records. Or, reviewers could invert those factors to find the unrecognized exemplary performers.
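A joined-up query combination of the kind just described can be expressed as a single filter over a merged payroll/HR extract; the field names, sample records, and thresholds below are illustrative assumptions:

```python
import pandas as pd

# Hypothetical joined HR/payroll extract.
staff = pd.DataFrame({
    "emp_id":       [1, 2, 3, 4],
    "sick_days":    [22, 3, 18, 5],
    "overtime_hrs": [310, 40, 280, 25],
    "appraisal":    [1.5, 4.2, 2.0, 3.8],  # appraisal score, 1 (low) to 5 (high)
    "disciplinary": [2, 0, 1, 0],          # recorded incidents
})

# High sick leave + high overtime + low appraisal + disciplinary record.
outliers = staff[
    (staff["sick_days"] > 15)
    & (staff["overtime_hrs"] > 200)
    & (staff["appraisal"] < 2.5)
    & (staff["disciplinary"] > 0)
]
print(outliers["emp_id"].tolist())  # → [1, 3]
```

Inverting the comparison operators in the same query surfaces the unrecognized exemplary performers mentioned above.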

Where predication suggests fraud concerns about identified employees, CFEs can add value by triangulating time sheet claims against external data sources such as site access biometric data, company cell phone logs, phone number caller identification, GPS data, company email, Internet usage, company motor fleet vehicle tolls, and vehicle refueling data, most of which contain useful date and time-of-day parameters.  The data buried within these databases can reveal employee behavior, including what they were doing, where they were, and who they were interacting with throughout the work day.
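The triangulation step can be sketched as a comparison of claimed hours against badge-derived hours on site; the records and the one-hour tolerance below are invented for illustration:

```python
from datetime import datetime

# Hypothetical data: hours claimed on the time sheet vs. site access log.
timesheet_hours = {"E07": 10.0}
badge_log = {
    "E07": (datetime(2017, 5, 12, 22, 0), datetime(2017, 5, 13, 3, 0)),
}

for emp, (entry, exit_) in badge_log.items():
    on_site = (exit_ - entry).total_seconds() / 3600
    gap = timesheet_hours[emp] - on_site
    if gap > 1.0:  # tolerance threshold is an assumption
        print(f"{emp}: claimed {timesheet_hours[emp]}h, on site {on_site}h")
```

The same comparison generalizes to cell phone logs, GPS data, or vehicle toll records, each of which carries its own date and time-of-day parameters.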

Common findings include:

–Employees who wrongly leave work during their shift;
–Employees who work fewer hours and take sick time during the week to shift the workload to weekends and public holidays to maximize pay;
–Employees who use company property excessively for personal purposes during working hours;
–Employees who visit vacation destinations while on sick leave;
–Employees who take leave but whose managers do not log the paperwork, thereby not deducting leave taken and overstating leave balances;
–Employees who moonlight in businesses on the side during normal working hours, sometimes using the organization’s equipment to do so.

Well-researched and documented forensic accounting fieldwork can support management action against those who may have defrauded the organization or work teams that may be taking inappropriate advantage of the payroll system. Simultaneously, CFEs and forensic accountants, working proactively, can partner with management to recover historic costs, quantify future savings, reduce reputational and political risk, improve the organization’s anti-fraud policies, and boost the productivity and morale of employees who knew of wrongdoing but felt powerless to stop it.

The Who, the What, the When

CFEs and forensic accountants are seekers. We spend our days searching for the most relevant information about our client-requested investigations from an ever-growing and increasingly tangled data sphere and trying to make sense of it. Somewhere hidden in our client’s computers, networks, databases, and spreadsheets are signs of the alleged fraud, accompanying control weaknesses and unforeseen risks, as well as possible opportunities for improvement. And the more data the client organization has, the harder all this is to find. Although most computer-assisted forensic audit tests focus on the numeric data contained within structured sources, such as financial and transactional databases, unstructured or text-based data, such as e-mail, documents, and Web-based content, represents an estimated 80 percent of enterprise data within the typical medium to large-sized organization. When assessing written communications or correspondence about fraud-related events, CFEs often find themselves limited to reading large volumes of data, with few automated tools to help synthesize, summarize, and cluster key information points to aid the investigation.

Text analytics is a relatively new investigative tool for CFEs in actual practice, although some report having used it extensively for at least the last five years. According to the ACFE, the software itself stems from a combination of developments in our sister fields of litigation support and electronic discovery, from counterterrorism and surveillance technology, from customer relationship management, and from research into the life sciences, specifically artificial intelligence. Indeed, the application of text analytics in data review and criminal investigations dates to the mid-1990s.

Generally, CFEs increasingly use text analytics to examine three main elements of investigative data: the who, the what, and the when.

The Who: According to many recent studies, well over half of businesspeople prefer e-mail to the telephone. Most fraud-related business transactions or events, then, will likely have at least some e-mail communication associated with them. Unlike telephone messages, e-mail contains rich metadata (information stored about the data, such as its author, origin, version, and date accessed) and can be documented easily. For example, to monitor who is communicating with whom in a targeted sales department, and conceivably to identify whether any alleged relationships therein might signal anomalous activity, a forensic accountant might wish to analyze metadata in the “to,” “from,” “cc,” or “bcc” fields in departmental e-mails. Many technologies for parsing e-mail with text analytics capabilities are available on the market today, some stemming from civil investigations and related electronic discovery software. These technologies are similar to the social network diagrams used in law enforcement and counterterrorism efforts.
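Mining the “to,” “from,” and “cc” fields to see who communicates with whom can be sketched with Python’s standard e-mail parser; the two messages below are invented examples standing in for a departmental mailbox export:

```python
from collections import Counter
from email import message_from_string
from email.utils import getaddresses

# Invented sample messages; a real review would parse an exported mailbox.
raw_messages = [
    "From: alice@example.com\nTo: bob@example.com\n"
    "Subject: Q3 numbers\n\nSee attached.",
    "From: alice@example.com\nTo: bob@example.com\nCc: carol@example.com\n"
    "Subject: Re: Q3 numbers\n\nRevised figures.",
]

# Count sender -> recipient edges, the raw material of a network diagram.
edges = Counter()
for raw in raw_messages:
    msg = message_from_string(raw)
    sender = msg["From"]
    recipients = msg.get_all("To", []) + msg.get_all("Cc", [])
    for _, addr in getaddresses(recipients):
        edges[(sender, addr)] += 1

print(edges)
```

The resulting edge counts feed directly into the social-network-style diagrams mentioned above.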

The What: The ever-present ambiguity inherent in human language presents significant challenges to the forensic investigator trying to understand the circumstances and actions surrounding the text based aspects of a fraud allegation. This difficulty is compounded by the tendency of people within organizations to invent their own words or to communicate in code. Language ambiguity can be illustrated by examining the word “shred”. A simple keyword search on the word might return not only documents that contain text about shredding a document, but also those where two sports fans are having a conversation about “shredding the defense,” or even e-mails between spouses about eating Chinese “shredded pork” for dinner. Hence, e-mail research analytics seeks to group similar documents according to their semantic context so that documents about shredding as concealment or related to covering up an action would be grouped separately from casual e-mails about sports or dinner, thus markedly reducing the volume of e-mail requiring more thorough ocular review. Concept-based analysis goes beyond traditional search technology by enabling users to group documents according to a statistical inference about the co-occurrence of similar words. In effect, text analytics software allows documents to describe themselves and group themselves by context, as in the shred example. Because text analytics examines document sets and identifies relationships between documents according to their context, it can produce far more relevant results than traditional simple keyword searches.
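The idea of grouping documents by the co-occurrence of similar words, rather than by a single keyword hit, can be illustrated with a toy cosine-similarity comparison; the three documents are invented, and real concept-based tools are far more sophisticated than this word-count sketch:

```python
import math
from collections import Counter

# Three toy documents: two about shredding as concealment, one about sports.
docs = {
    "doc1": "please shred the audit file before the review",
    "doc2": "shred those invoices and the backup file tonight",
    "doc3": "their defense got shredded in the second half",
}

def cosine(a: str, b: str) -> float:
    """Cosine similarity between simple word-count vectors."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm

sim_12 = cosine(docs["doc1"], docs["doc2"])  # shared context: shred, file
sim_13 = cosine(docs["doc1"], docs["doc3"])  # little shared context
print(round(sim_12, 2), round(sim_13, 2))
```

Even this crude measure groups the two concealment e-mails together and leaves the sports chatter apart, which is the intuition behind concept-based clustering.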

Using text analytics before filtering with keywords can be a powerful strategy for quickly understanding the content of a large corpus of unstructured, text-based data, and for determining what is relevant to the search. After viewing concepts at an elevated level, subsequent keyword selection becomes more effective by enabling users to better understand the possible code words or company-specific jargon. They can develop the keywords based on actual content, instead of guessing relevant terms, words, or phrases up front.

The When: In striving to understand the time frames in which key events took place, CFEs often need not only to identify the chronological order of documents (e.g., sorted by or limited to dates), but also to link related communication threads, such as e-mails, so that similar threads and communications can be identified and plotted over time. A thread comprises a set of messages connected by various relationships; each message is either the first message in the set or a reply to, or forward of, another message in it. Messages within a thread are connected by relationships that identify notable events, such as a reply versus a forward, or changes in correspondents. Quite often, e-mails accumulate long threads with similar subject headings, authors, and message content over time. These threads ultimately may lead to a decision, such as approval to proceed with a project or to take some other action. The approval may be critical to understanding business events that led up to a particular journal entry. Seeing those threads mapped over time can be a powerful tool when trying to understand the business logic of a complex financial transaction.
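Thread reconstruction of this kind typically keys on the Message-ID and In-Reply-To headers; the messages below are invented examples, and a production tool would also handle missing parents and forked threads:

```python
# Rebuild reply chains from invented header data, then order them by date.
messages = [
    {"id": "<m3>", "in_reply_to": "<m2>", "date": "2017-03-03"},
    {"id": "<m1>", "in_reply_to": None,   "date": "2017-03-01"},
    {"id": "<m2>", "in_reply_to": "<m1>", "date": "2017-03-02"},
]

by_id = {m["id"]: m for m in messages}

def thread_root(msg: dict) -> str:
    # Walk the reply chain back to the first message in the thread.
    while msg["in_reply_to"] in by_id:
        msg = by_id[msg["in_reply_to"]]
    return msg["id"]

threads = {}
for m in sorted(messages, key=lambda m: m["date"]):
    threads.setdefault(thread_root(m), []).append(m["id"])

print(threads)  # {'<m1>': ['<m1>', '<m2>', '<m3>']}
```

Plotting each reconstructed thread against its dates produces the over-time view described above.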

In the context of fraud risk, text analytics can be particularly effective when threads and keyword hits are examined with a view to the familiar fraud triangle: the premise that all three components (incentive/pressure, opportunity, and rationalization) are present when fraud exists. This fraud-triangle-based analysis can be applied in a variety of business contexts where increases in the frequency of certain keywords related to incentive/pressure, opportunity, and rationalization can indicate an elevated level of fraud risk.
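A fraud-triangle keyword scan can be sketched as counting hits against three keyword lists, one per leg of the triangle; the lists and the sample text below are assumptions that would be tuned to the client’s own vocabulary and code words:

```python
# Toy fraud-triangle keyword scan over a single message.
triangle = {
    "pressure":        {"deadline", "quota", "bonus"},
    "opportunity":     {"override", "password", "alone"},
    "rationalization": {"deserve", "owed", "temporary"},
}

text = "i deserve that bonus, i'll override the check, it's only temporary"
words = set(text.replace(",", " ").replace(".", " ").split())

# Hits per leg; risk rises when all three legs register at once.
hits = {leg: len(words & keywords) for leg, keywords in triangle.items()}
all_three = all(n > 0 for n in hits.values())
print(hits, all_three)
```

Tracking how these per-leg frequencies trend over time, rather than scoring single messages, is what makes the approach useful at scale.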

Some caveats are in order.  Considering the overwhelming amount of text-based data within any modern enterprise, assurance professionals could never hope to analyze all of it; nor should they. The exercise would prove expensive and provide little value. Just as an external auditor would not reprocess or validate every sales transaction in a sales journal, he or she would not need to look at every related e-mail from every employee. Instead, any professional auditor would take a risk-based approach, identifying areas to test based on a sample of data or on an enterprise risk assessment. For text analytics work, the reviewer may choose data from five or ten individuals to sample from a high-risk department or from a newly acquired business unit. And no matter how sophisticated the search and information retrieval tools used, there is no guarantee that all relevant or high-risk documents will be identified in large data collections. Moreover, different search methods may produce differing results, subject to a measure of statistical variation inherent in probability searches of any type. Just as a statistical sample of accounts receivable or accounts payable in the general ledger may not identify fraud, analytics reviews are similarly limited.

Text analytics can be a powerful fraud examination tool when integrated with traditional forensic data-gathering and analysis techniques such as interviews, independent research, and existing investigative tests involving structured, transactional data. For example, an anomaly identified in the general ledger related to the purchase of certain capital assets may prompt the examiner to review e-mail communication traffic among the key individuals involved, providing context around the circumstances and timing of events before the entry date. Furthermore, the forensic accountant may conduct interviews or perform additional independent research that may support or conflict with his or her investigative hypothesis. Integrating all three of these components to gain a complete picture of the fraud event can yield valuable information. While text analytics should never replace the traditional rules-based analysis techniques that focus on the client’s financial accounting systems, it is equally important to consider the communications surrounding key events, which typically reside in unstructured data rather than in the financial systems.

Fraud Risk Assessing the Trusted Insider

A bank employee accesses her neighbor’s accounts on-line and discloses this information to another person living in the neighborhood; soon everyone seems to be talking about the neighbor’s financial situation. An employee of a mutual fund company accesses his father-in-law’s accounts without a legitimate reason or permission from the unsuspecting relative and uses the information to pressure his wife into making a bad investment, from which the father-in-law, using money from the fund account, ultimately pays to extricate his daughter. Initially out of curiosity, an employee at a local hospital accesses the admission records of a high-profile athlete whom he recognized in the emergency room, but then shares that information (for a price) with a tabloid newspaper reporter who prints a story.

Each of these is an actual case, and each is a serious violation of various Federal privacy laws. None of these three scenarios was the work of an anonymous intruder lurking in cyberspace or of an identity thief who compromised a data center. Rather, this database browsing was perpetrated by a trusted insider, an employee whose daily duties required access to vast databases housing financial, medical and educational information. From the comfort and anonymity of their workstations, similar employees are increasingly capable of accessing personal information for non-business reasons and, sometimes, to support the accomplishment of actual frauds. The good news is that CFEs can help with targeted fraud risk assessments specifically tailored to assess the probability of this threat type and then to advise management on an approach to its mitigation.

The Committee of Sponsoring Organizations of the Treadway Commission’s (COSO’s) 2013 update of the Internal Control Integrated Framework directs organizations to conduct a fraud risk assessment as part of their overall risk assessment. The discussion of fraud in COSO 2013 centers on Principle 8: “The organization considers the potential for fraud in assessing risks to the achievement of objectives.” Under the 1992 COSO framework, most organizations viewed fraud risk primarily in terms of satisfying the U.S. Sarbanes-Oxley Act of 2002 requirements to identify fraud controls to prevent or detect fraud risk at the transaction level. In COSO 2013, fraud risk becomes a specific component of the overall risk assessment that focuses on fraud at the entity and transaction levels. COSO now requires a strong internal control foundation that addresses fraud broadly to encompass company objectives as part of its strategy, operations, compliance, and reporting. Principle 8 describes four specific areas: fraudulent financial reporting, fraudulent nonfinancial reporting, misappropriation of assets, and illegal acts. The inclusion of non-financial reporting is a meaningful change that addresses sustainability, health and safety, employment activity and similar reports.

One useful document for performing a fraud risk assessment is Managing the Business Risk of Fraud: A Practical Guide, produced by the American Institute of Certified Public Accountants, and by our organization, the Association of Certified Fraud Examiners, as well as by the Institute of Internal Auditors. This guide to establishing a fraud risk management program includes a sample fraud policy document, fraud prevention scorecard, and lists of fraud exposures and controls. Managing the Business Risk of Fraud advises organizations to view fraud risk assessment as part of their corporate governance effort. This commitment requires a tone at the top that embraces strong governance practices, including written policies that describe the expectations of the board and senior management regarding fraud risk. The Guide points out that as organizations continue to automate key processes and implement technology, thus allowing employees broad access to sensitive data, misuse of that data becomes increasingly difficult to detect and prevent. By combining aggressive data collection strategies with innovative technology, public and private sector organizations have enjoyed dramatic improvements in productivity and service delivery that have contributed to their bottom line. Unfortunately, while these practices have yielded major societal benefits, they have also created a major challenge for those charged with protecting confidential data.

CFEs proactively assessing client organizations which use substantial amounts of private customer information (PCI) for fraud risk should expect to see the presence of controls related to data access surveillance. Data surveillance is the systematic monitoring of information maintained in an automated environment, usually a database. The kinds of controls CFEs should look for are the presence of a privacy strategy that combines the establishment of a comprehensive policy, an awareness program that reinforces the consequences of non-business accesses, a monitoring tool that provides for ongoing analysis of database activity, an investigative function to resolve suspect accesses, and a disciplinary component to hold violators accountable.

The creation of an enterprise confidentiality policy on the front end of the implementation of a data surveillance program is essential to its success. An implementing organization should establish a data access policy that clearly explains the relevant prohibitions, provides examples of prohibited activity and details the consequences of non-business accesses. This policy must apply to all employees, regardless of their title, seniority or function. The AICPA/ACFE Guide recommends that all employees, beginning with the CEO, be required to sign an annual acknowledgment affirming that they have received and read the confidentiality policy and understand that violations will result in the imposition of disciplinary action. No employees are granted access to any system housing confidential data until they have first signed the acknowledgment.

In addition to issuing a policy, it is imperative that organizations formally train employees regarding its various provisions and caution them on the consequences of accessing data for non-business purposes. During the orientation process for new hires, all employees should receive specialized training on the confidentiality policy. As an added reminder, prior to logging on to any database that contains personal information, employees should receive an electronic notice stating that their activities are being monitored and that all accesses must be related to an official business purpose. Employees are not granted access into the system until they electronically acknowledge this notice.

Given that data surveillance is a process of ongoing monitoring of database activity, it is necessary for individual accesses to be captured and maintained in a format conducive to analysis. There are many commercially available software tools which can be used to monitor access to relational databases on a real-time basis. Transaction tracking technology, as one example, can dynamically generate Structured Query Language (SQL), based upon various search criteria, and provides the capability for customized analyses within each application housing confidential data. The search results are available in Microsoft Excel, PDF and table formats, and may be printed, e-mailed and archived.
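The kind of dynamically generated SQL such tools produce can be approximated with a small query against an access log; the schema, the after-hours rule, and the sample rows below are illustrative assumptions rather than the output of any particular product:

```python
import sqlite3

# Toy access log with an assumed schema; case_ref records the business
# justification (NULL means no documented reason for the access).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE access_log (
    emp_id TEXT, account_id TEXT, access_time TEXT, case_ref TEXT)""")
con.executemany(
    "INSERT INTO access_log VALUES (?, ?, ?, ?)",
    [
        ("E1", "A100", "2017-05-10 14:02", "CASE-77"),
        ("E2", "A200", "2017-05-10 23:45", None),   # after hours, no case
        ("E2", "A200", "2017-05-11 23:50", None),
    ],
)

# Repeated unjustified accesses outside an assumed 08:00-18:00 window.
rows = con.execute("""
    SELECT emp_id, account_id, COUNT(*) AS n
    FROM access_log
    WHERE case_ref IS NULL
      AND (strftime('%H', access_time) < '08'
           OR strftime('%H', access_time) >= '18')
    GROUP BY emp_id, account_id
    HAVING n > 1
""").fetchall()
print(rows)  # [('E2', 'A200', 2)]
```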

Our CFE client organizations that establish a data access policy and formally notify all employees of the provisions of that policy, institute an ongoing awareness program to reinforce the policy and implement technology to track individual accesses of confidential data have taken the initial steps toward safeguarding data. These are necessary components of a data surveillance program and serve as the foundation upon which the remainder of the process may be based. That said, it is critical that organizations not rely solely on these components, as doing so will result in an unwarranted sense of security. Without an ongoing monitoring process to detect questionable database activity and a comprehensive investigative function to address unauthorized accesses, the impact of the foregoing measures will be marginal.

The final piece of a data surveillance program is the disciplinary process. The ACFE tells us that employees who willfully violate the policy prohibiting nonbusiness access of confidential information must be disciplined; the exact nature of which discipline should be determined by executive management. Without a structured disciplinary process, employees will realize that their database browsing, even if detected, will not result in any consequence and, therefore, they will not be deterred from this type of misconduct. Without an effective disciplinary component, an organization’s privacy protection program will ultimately fail.

The bottom line is that our client organizations that maintain confidential data need to develop measures to protect this asset from internal as well as from external misuse, without imposing barriers that restrict their employees’ ability to perform their duties. In today’s environment, those who are perceived as being unable to protect the sensitive data entrusted to them will inevitably experience an erosion of consumer confidence, and the accompanying consequences. Data surveillance deployed in conjunction with a clear data access policy, an ongoing employee awareness program, an innovative monitoring process, an effective investigative function and a standardized disciplinary procedure are the component controls the CFE should look for when conducting a proactive fraud risk assessment of employee access to PCI.

Financing Death One BitCoin at a Time

Over the past decade, fanatic religious ideologists have evolved to become hybrid terrorists demonstrating exceptional versatility, innovation, opportunism, ruthlessness, and cruelty. Hybrid terrorists are a new breed of organized criminal. Merriam-Webster defines hybrid as “something that is formed by combining two or more things”. In the twentieth century, the military, intelligence forces, and law enforcement agencies each had a specialized skill-set to employ in response to respective crises involving insurgency, international terrorism, and organized crime. Military forces dealt solely with international insurgent threats to the government; intelligence forces dealt solely with international terrorism; and law enforcement agencies focused on their respective country’s organized crime entities. In the twenty-first century, greed, violence, and vengeance motivate the various groups of hybrid terrorists. Hybrid terrorists rely on organized crime such as money laundering, wire transfer fraud, drug and human trafficking, shell companies, and false identification to finance their organizational operations.

Last week’s horrific terror bombing in Manchester brings to the fore, yet again, the issue of such terrorist financing and the increasing role of forensic accountants in combating it. Two of the main tools of modern terror financing schemes are money laundering and virtual currency.

Law enforcement and government agencies in collaboration with forensic accountants play key roles in tracing the source of terrorist financing to the activities used to inflict terror on local and global citizens. Law enforcement agencies utilize investigative and predictive analytics tools to gather, dissect, and convey data to distinguish patterns leading to future terrorist events. Government agencies employ database inquiries of terrorist-related financial information to evaluate the possibilities of terrorist financing and activities. Forensic accountants review the data for patterns related to previous transactions by utilizing data analysis tools, which assist in tracking the source of the funds.

As we all know, forensic accountants use a combination of accounting knowledge combined with investigative skills in litigation support and investigative accounting settings. Several types of organizations, agencies, and companies frequently employ forensic accountants to provide investigative services. Some of these organizations are public accounting firms, law firms, law enforcement agencies, The Internal Revenue Service (IRS), The Central Intelligence Agency (CIA), and The Federal Bureau of Investigations (FBI).

Locating and halting the source of terrorist financing involves two tactics: following the money and drying up the money. Obstructing terrorist financing requires an understanding of both the original and supply source of the illicit funds. As the financing is derived from both legal and illegal funding sources, terrorists may attempt to evade detection by funneling money through legitimate businesses, thus making it difficult to trace. Charitable organizations and reputable companies provide a legitimate source through which terrorists may pass money for illicit activities without drawing the attention of law enforcement agencies. Patrons of legitimate businesses are often unaware that their personal contributions may support terrorist activities. However, terrorists also obtain funds from obvious illegal sources, such as kidnapping, fraud, and drug trafficking. Terrorists often change daily routines to evade law enforcement agencies, as predictable patterns create trails that are easy for skilled investigators to follow. Audit trails can be traced from the donor source to the terrorist by forensic accountants and law enforcement agencies tracking specific indicators. Audit trails reveal where the funds originate and whether the funds came from legal or illegal sources. The ACFE tells us that basic money laundering is a specific type of illegal funding source, which provides a clear audit trail.

Money laundering is the process of obtaining and funneling illicit funds to disguise the connection with the original unlawful activity. Terrorists launder money to spend the unlawfully obtained money without drawing attention to themselves and their activities. To remain undetected by regulatory authorities, the illicit funds being deposited or spent need to be washed to give the impression that the money came from a seemingly reputable source. There are types of unusual transactions that raise red flags associated with money laundering in financial institutions. The more times an unusual transaction occurs, the greater the probability it is the product of an illicit activity. Money laundering may be quite sophisticated depending on the strategies employed to avoid detection. Some identifiers indicating a possible money-laundering scheme are: lack of identification, money wired to new locations, customer closes account after wiring or transferring copious amounts of money, executed out-of-the-ordinary business transactions, executed transactions involving the customer’s own business or occupation, and executed transactions falling just below the threshold trigger requiring the financial institution to file a report.
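The last red flag, transactions falling just below the reporting threshold (often called structuring), can be screened for with a simple filter; the threshold amount, the 10 percent “just below” band, and the repetition rule below are assumptions for illustration:

```python
# Toy structuring screen: repeated deposits just under an assumed
# reporting threshold are a classic money-laundering red flag.
THRESHOLD = 10_000
deposits = [9_900, 9_850, 4_000, 9_950, 12_000]

just_below = [d for d in deposits if THRESHOLD * 0.9 <= d < THRESHOLD]
flagged = len(just_below) >= 3  # the more often it occurs, the stronger the flag
print(just_below, flagged)      # → [9900, 9850, 9950] True
```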

Money laundering takes place in three stages: placement, layering, and integration. In the placement stage, the cash proceeds from criminal activity enter the financial system by deposit. During the layering stage, the funds transfer into other accounts, usually offshore financial institutions, thus creating greater distance between the source and origin of the funds and its current location. Legitimate purchases help funnel the money back into the economy during the integration stage, the final stage.

Complicating all this for the investigator is virtual currency. Virtual currency, unlike traditional forms of money, does not leave a clear audit trail for forensic accountants to trace and investigate. Cases involving the use of virtual currency, i.e., Bitcoin and several rival currencies, create anonymity for the perpetrator and create obstacles for investigators. Bitcoins have no physical form and provide a unique opportunity for terrorists to launder money across international borders without detection by law enforcement or government agencies. Bitcoin addresses are long strings of numbers and letters generated by cryptographic algorithms. A consumer uses a mobile phone or computer to create an online wallet with one or more Bitcoin addresses before commencing electronic transactions. Bitcoins may also be used to make legitimate purchases through various established online retailers.

Current international anti-money laundering laws aid in fighting the war against terrorist financing; however, international laws require actual cash shipments between countries and criminal networks (or at the very least funds transfers between banks). International laws are not applicable to virtual currency transactions, as they do not consist of actual cash shipments. According to the website Bitcoin.org, “Bitcoin uses peer-to-peer technology to operate with no central authority or banks”.

In summary, terrorist organizations find virtual currency to be an effective method for raising illicit funds because, unlike cash transactions, cyber technology offers anonymity with less regulatory oversight. Due to the anonymity factor, Bitcoins are an innovative and convenient way for terrorists to launder money and sell illegal goods. Virtual currencies are appealing for terrorist financiers since funds can be swiftly sent across borders in a secure, cheap, and highly secretive manner. The obscurity of Bitcoin allows international funding sources to conduct exchanges without a trace of evidence. This co-mingling effect is like traditional money laundering but without the regulatory oversight. Government and law enforcement agencies must, as a result, be able to share information with public regulators when they become suspicious of terrorist financing.

Forensic accounting technology is most beneficial when used in conjunction with the analysis tools of law enforcement agencies to predict and analyze future terrorist activity. Even though some of the tools in a forensic accountant’s arsenal are useful in tracking terrorist funds, the ability to identify conceivable terrorist threats is limited. To identify the future activities of terrorist groups, forensic accountants and law enforcement agencies should cooperate with one another by mutually incorporating the analytical tools each employs. Agencies and government officials should become familiar with virtual currencies like Bitcoin. Because of the anonymity and lack of regulatory oversight, virtual currency offers terrorist groups a useful means to finance illicit activities on an international scale. In the face of this challenge, new governmental entities may be needed to tie together the financial forensics efforts of the different stakeholder organizations so that information sharing is not compartmentalized.

RVACFES May 2017 Event Sold-Out!

On May 17th and 18th, the Central Virginia ACFE Chapter and our partners, the Virginia State Police and the Association of Certified Fraud Examiners (ACFE), were joined by an overflow crowd of audit and assurance professionals for the ACFE’s training course ‘Conducting Internal Investigations’. The sold-out May 2017 seminar was the ninth that our Chapter has hosted with the Virginia State Police over the years, drawing on a distinguished list of certified ACFE instructor-practitioners.

Our internationally acclaimed instructor for the May seminar was Gerard Zack, CFE, CPA, CIA, CCEP. Gerry has provided fraud prevention and investigation, forensic accounting, and internal and external audit services for more than 30 years. He has worked with commercial businesses, not-for-profit organizations, and government agencies throughout North America and Europe. Prior to starting his own practice in 1990, Gerry was an audit manager with a large international public accounting firm. As founder and president of Zack, P.C., he has led numerous fraud investigations and designed customized fraud risk management programs for a diverse client base. Through Zack, P.C., he also provides outsourced internal audit services, compliance and ethics programs, enterprise risk management, fraud risk assessments, and internal control consulting services.

Gerry is a Certified Fraud Examiner (CFE) and Certified Public Accountant (CPA) and has focused most of his career on audit and fraud-related services. Gerry serves on the faculty of the Association of Certified Fraud Examiners (ACFE) and is the 2009 recipient of the ACFE’s James Baker Speaker of the Year Award. He is also a Certified Internal Auditor (CIA) and a Certified Compliance and Ethics Professional (CCEP).

Gerry is the author of Financial Statement Fraud: Strategies for Detection and Investigation (published 2013 by John Wiley & Sons), Fair Value Accounting Fraud: New Global Risks and Detection Techniques (2009 by John Wiley & Sons), and Fraud and Abuse in Nonprofit Organizations: A Guide to Prevention and Detection (2003 by John Wiley & Sons). He is also the author of numerous articles on fraud and teaches seminars on fraud prevention and detection for businesses, government agencies, and nonprofit organizations. He has provided customized internal staff training on specialized auditing issues, including fraud detection in audits, for more than 50 CPA firms.

Gerry is also the founder of the Nonprofit Resource Center, through which he provides antifraud training and consulting and online financial management tools specifically geared toward the unique internal control and financial management needs of nonprofit organizations. Gerry earned his M.B.A at Loyola University in Maryland and his B.S.B.A at Shippensburg University of Pennsylvania.

To some degree, organizations of every size, in every industry, and in every city experience internal fraud. No entity is immune. Furthermore, any member of an organization can carry out fraud, whether it is committed by the newest customer service employee or by an experienced and highly respected member of upper management. The fundamental reason for this is that fraud is a human problem, not an accounting problem. As long as organizations are employing individuals to perform business functions, the risk of fraud exists.

While some organizations aggressively adopt strong zero tolerance anti-fraud policies, others simply view fraud as a cost of doing business. Despite varying views on the prevalence of, or susceptibility to, fraud within a given organization, all must be prepared to conduct a thorough internal investigation once fraud is suspected. Our ‘Conducting Internal Investigations’ event was structured around the process of investigating any suspected fraud from inception to final disposition and beyond.

What constitutes an act that warrants an examination can vary from one organization to another and from jurisdiction to jurisdiction. It is often resolved based on a definition of fraud adopted by an employer or by a government agency. There are numerous definitions of fraud, but a popular example comes from the joint ACFE-COSO publication, Fraud Risk Management Guide:

Fraud is any intentional act or omission designed to deceive others, resulting in the victim suffering a loss and/or the perpetrator achieving a gain.

However, many law enforcement agencies have developed their own definitions, which might be more appropriate for organizations operating in their jurisdictions. Consequently, fraud examiners should determine the appropriate legal definition in the jurisdiction in which the suspected offense was committed.

Fraud examination is a methodology for resolving fraud allegations from inception to disposition. More specifically, fraud examination involves:

–Assisting in the detection and prevention of fraud;
–Initiating the internal investigation;
–Obtaining evidence and taking statements;
–Writing reports;
–Testifying to findings.

A well-run internal investigation can enhance a company’s overall well-being and can help detect the source of lost funds, identify responsible parties, and recover losses. It can also provide a defense to legal charges by terminated or disgruntled employees. But perhaps most importantly, an internal investigation can signal to every company employee that the company will not tolerate fraud.

Our two-day seminar agenda included Gerry’s in depth look at the following topics:

–Assessment of the risk of fraud within an organization and responding when it is identified;
–Detection and investigation of internal frauds with the use of data analytics;
–The collection of documents and electronic evidence needed during an investigation;
–The performance of effective information gathering and admission seeking interviews;
–The wide variety of legal and regulatory concerns related to internal investigations.

Gerry did his usual tremendous job in preparing the professionals in attendance to deal with every step in an internal fraud investigation, from receiving the initial allegation to testifying as a witness. The participants learned to lead an internal investigation with accuracy and confidence by gaining knowledge about topics such as the relevant legal aspects impacting internal investigations, the use of computers and analytics during the investigation, collecting and analyzing internal and external information, interviewing witnesses, and writing effective reports.

Industrialized Theft

In at least one way you have to hand it to Ethically Challenged, Inc.: it sure knows how to innovate, and the recent spate of ransomware attacks proves it also knows how to make what’s old new again. Although society’s criminal opponents engage in constant business process improvement, they’ve proven again and again that they’re not limited to committing new crimes from scratch every time. In the age of Moore’s law, criminal tasks have been readily automated and can run in the background at scale without the need for significant human intervention. Crime automations like the WannaCry virus allow transnational organized crime groups to gain the same efficiencies and cost savings that multinational corporations obtained by leveraging technology to carry out their core business functions. That’s why today it’s possible for hackers to rob not just one person at a time but 100 million or more, as the world saw with the Sony PlayStation and Target data breaches and now with the WannaCry worm.

As covered in our Chapter’s training event of last year, ‘Investigating on the Internet’, exploit tool kits like Blackhole and SpyEye commit crime “automagically” by minimizing the need for human labor, thereby dramatically reducing criminal costs. They also allow hackers to pursue the “long tail” of opportunity, committing millions of thefts in small amounts so that (in many cases) victims don’t report them and law enforcement has no way to track them. While high-value targets (companies, nations, celebrities, high-net-worth individuals) are specifically and individually targeted, the way the majority of the public is hacked is by automated scripted computer malware, one large digital fishing net that scoops up anything and everything online with a vulnerability that can be exploited. Given these obvious advantages, as of 2016 an estimated 61 percent of all online attacks were launched by fully automated crime tool kits, returning phenomenal profits for the Dark Web overlords who expertly orchestrated them. Modern crime has become reduced and distilled to a software program that anybody can run at tremendous profit.

Not only can botnets and other tools be used over and over to attack and offend, but they’re now enabling the commission of much more sophisticated crimes such as extortion, blackmail, and shakedown rackets. In an updated version of the old $500 million Ukrainian Innovative Marketing Solutions “virus detected” scam, fraudsters have unleashed a new torrent of malware that holds the victim’s computer hostage until a ransom is paid and the scammer provides an unlock code to restore access to the victim’s own files. Ransomware attack tools are included in a variety of Dark Net tool kits, such as WannaCry and Gameover Zeus. According to the ACFE, there are several varieties of this scam, including one that purports to come from law enforcement. Around the world, users who become infected with the Reveton Trojan suddenly have their computers lock up and their full screens covered with a notice, allegedly from the FBI. The message, bearing an official-looking large, full-color FBI logo, states that the user’s computer has been locked for reasons such as “violation of the federal copyright law against illegally downloaded material” or because “you have been viewing or distributing prohibited pornographic content.”

In the case of the Reveton Trojan, to unlock their computers, users are informed that they must pay a fine ranging from $200 to $400, accepted only via a prepaid voucher from Green Dot’s MoneyPak, which victims are instructed they can buy at their local Walmart or CVS; victims of WannaCry are required to pay in Bitcoin. To further intimidate victims and drive home the fact that this is a serious police matter, the Reveton scammers prominently display the alleged violator’s IP address on the screen as well as snippets of video footage previously captured from the victim’s Webcam. As with the current WannaCry exploit, the Reveton scam has successfully targeted tens of thousands of victims around the world, with the attack localized by country, language, and police agency. Thus, users in the U.K. see a notice from Scotland Yard, other Europeans get a warning from Europol, and victims in the United Arab Emirates see the threat, translated into Arabic, purportedly from the Abu Dhabi Police HQ.

WannaCry is even more pernicious than Reveton, though, in that it actually encrypts all the files on a victim’s computer so that they can no longer be read or accessed. Alarmingly, variants of this type of malware often present a ticking-bomb-type countdown clock advising users that they have only forty-eight hours to pay $300 or all of their files will be permanently destroyed. Akin to threatening “if you ever want to see your files alive again,” these ransomware programs gladly accept payment in Bitcoin. The message to these victims is no idle threat. Whereas previous ransomware might trick users by temporarily hiding their files, newer variants use strong 256-bit Advanced Encryption Standard cryptography to lock user files so that they become irrecoverable. These types of exploits earn tens of millions of dollars for the criminal programmers who develop and sell them online to other criminals.
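The claim that files locked with 256-bit AES are irrecoverable without the key is easy to check with back-of-envelope arithmetic. The attack rate below is an invented, deliberately generous assumption; even so, exhaustive key search is hopeless:

```python
# Why brute-forcing a 256-bit AES key is infeasible.
# Assumes (very generously) a rig testing 10**12 keys per second.
keyspace = 2 ** 256            # number of possible 256-bit keys
rate = 10 ** 12                # hypothetical keys tested per second
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (rate * seconds_per_year)
print(f"Expected search time: about {years:.2e} years")
```

The answer is on the order of 10^57 years, which is why the only practical ways out for a victim are backups, a recovered key, or paying the ransom.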

Automated ransomware tools have even migrated to mobile phones, affecting Android handset users in certain countries. Not only have individuals been harmed by the ransomware scourge, so too have companies, nonprofits, and even government agencies, the most infamous of which was the Swansea Police Department in Massachusetts some years back, which became infected when an employee opened a malicious e-mail attachment. Rather than losing its irreplaceable police case files to the scammers, the agency was forced to open a Bitcoin account and pay a $750 ransom to get its files back. The police lieutenant told the press he had no idea what a Bitcoin was or how the malware functioned until his department was struck in the attack.

As the ACFE and other professional organizations have told us, within its world, cybercrime has evolved highly sophisticated methods of operation to sell everything from methamphetamine to child sexual abuse live streamed online. It has rapidly adopted existing tools of anonymity such as the Tor browser to establish Dark Net shopping malls, and criminal consulting services such as hacking and murder for hire are all available at the click of a mouse. Untraceable and anonymous digital currencies, such as Bitcoin, are breathing new life into the underground economy and allowing for the rapid exchange of goods and services. With these additional revenues, cyber criminals are becoming more disciplined and organized, significantly increasing the sophistication of their operations. Business models are being automated wherever possible to maximize profits, and botnets, easily trained on any target of the scammer’s choosing, can threaten legitimate global commerce. Fundamentally, it’s been done. As WannaCry demonstrates, the computing and Internet based crime machine has been built. With these systems in place, the depth and global reach of cybercrime mean that crime now scales, and it scales exponentially. Yet, as bad as this threat is today, it is about to become much worse as we hand such scammers billions more targets to attack in the age of ubiquitous computing and the Internet of Things.

In Plain Sight

By Rumbi Petrozzello, CPA/CFF, CFE
2017 Vice-President – Central Virginia Chapter ACFE

Recently, I was listening to one of my favorite podcasts, Radiolab, and they were discussing a series on Audible called “Ponzi Supernova”. Reporter Steve Fishman hounded the infamous Ponzi schemer Bernie Madoff for several years. One day, Bernie called Steve, collect, and thus began the conversations between Madoff and Fishman that make this telling of the Madoff Ponzi scheme like none other.

The tale is certainly compelling (how can a story of the largest known Ponzi scheme not be fascinating?) and hearing Bernie Madoff talking about what he did, and what he says motivated him, made this series something I listened to from beginning to end, almost without taking a break. Through it all, as has happened just about every time I read or heard about Madoff, I was amazed that he was able to perpetrate his fraud for as long as he did, which, depending on whom you believe, started somewhere between the early 1960s and 1992 (even Madoff gives different dates for when he started). This is no surprise. All too often, when fraudsters are caught, they try to minimize the extent of their wrongdoing. If they know that you’ve found $1,000, they’ll tell you that $1,000 was all they took. If you go on to find more, then the story will change a little to include what you’ve found. It’s very rare that a fraudster will confess to the full extent of her crime at the first go around (or even at the second or third).

As I listened to the series, something became very apparent. Often when people discuss the Madoff Ponzi scheme, one tends to get the feeling that, for decades, he took money from new investors to pay off old investors and carried on his multi-billion-dollar scheme without a single soul blowing the whistle on him. But that’s not the case. In a 477-page report from the U.S. Securities and Exchange Commission Office of Investigations (OIG) entitled “Investigation of Failure of the SEC to Uncover Bernard Madoff’s Ponzi Scheme – Public Version”, between June 1992 and December 2008, the Securities and Exchange Commission (SEC) received “six substantive complaints” regarding Madoff’s company and some of these complaints were submitted more than once.

One complaint mentioned in the report was received three times, with versions submitted in 2000, 2001 and 2005; the 2005 version was even entitled “The World’s Largest Hedge Fund is a Fraud”. This complaint series was submitted by Madoff’s most well-known nemesis, the whistleblower Harry Markopolos. But there were at least five other individuals who shared their concerns and suspicions about Madoff with the SEC. Three of these specifically used the words “Ponzi scheme”, including the first complaint, in 1992. Based on these complaints, the SEC conducted two investigations and three examinations and, even though the complaints explicitly stated that they suspected that Madoff Investments was a Ponzi scheme, none of the investigations or examinations concluded that Madoff was operating a Ponzi scheme. To add to this, the SEC was aware of two articles that questioned Madoff’s returns. Over the years, several investment companies performed their own due diligence and decided that Madoff’s company did not make sense, concluding that investing with Madoff would be a violation of their fiduciary duty to their clients. Despite all of this, none of the SEC’s investigations or exams contained a finding of fraud.

Whether you’re a Certified Fraud Examiner (CFE) or a CPA, Certified in Financial Forensics (CFF), the work that you do is governed by a set of professional standards that help establish a performance baseline. This begins with competence, meaning that those taking on an assignment should be able to complete the assignment successfully. This does not necessarily mean that whoever is leading the job needs to know how to do everything. It does mean that they should ensure that the right skill set is working on the job, even if it means the use of referrals or consultation. Too many times, the OIG report mentions a lack of experience. Listening to Ponzi Supernova, I learnt that at least one examiner was only three weeks out of school. The OIG report stated that, for one examination, because the person leading the investigation had no knowledge of how to investigate a suspected Ponzi scheme, they decided simply not to investigate that claim; they decided instead to investigate what they knew, and that was front running (though even that investigation was carried out poorly).

Another ACFE professional standard is that of due professional care. Due professional care “requires diligence, critical analysis and professional skepticism”. It also means that any conclusion a CFE reaches must be supported by evidence that is relevant, sufficient and competent. Several times during the various investigations and examinations, SEC staff would ask Madoff or his employees questions and then accept any answers they were given without seeking any third-party confirmation. Sometimes, even when third-party confirmation was sought, the questions asked of those third parties were not the correct ones. Madoff himself tells the story of how, in 2006, he testified that he settled trades for his advisory clients through his personal Depository Trust Company (DTC) account, and he even gave the SEC his DTC account information. At this point Madoff was sure that, once the SEC checked this out, his fraud would be discovered. Instead, the SEC merely asked the DTC if Madoff had an account, and nothing more. Had they asked about account activity, they would have discovered that Madoff’s account, even though it existed, did not trade anywhere near the volume purported by his statements. This brings up other aspects of due professional care: adequate planning and supervision. With proper supervision, the less experienced can be trained not just to ask questions, but to ask, and get adequate answers to, the correct questions. The person reviewing their work would be able to ask them, “Did the answer that you got from the DTC answer the question that we are asking? Can we now confirm not merely that Madoff has an account with the DTC but, instead, that he is trading billions of dollars through these accounts?”

Time and time again, in the OIG report, the SEC stated that it did not have experienced and adequate staff for its examinations and investigations of Madoff. This excuse was used to explain why, for instance, staff did not send out requests for third-party confirmations, even after drafting them. In one case, staff stated that they did not send out a request to the National Association of Securities Dealers (NASD) because it would have been too time-consuming to review the data received. Adequate planning would have made sure that there was sufficient, qualified staffing to review the data. Adequate supervision would have ensured that this excuse for not sending out the request was squashed. However, it is not the case that no third-party confirmation requests were sent out. Some were, and some of those sent out received responses. Responses were received from the NASD and other financial institutions. These entities all claimed that there was no activity with Madoff on the dates that the examiners were asking about. Even with that information, there was no follow-up on the part of the examiners. At every turn, there seemed to be a lot of trust and just about no verification. This is even more surprising when you hear that the examiners would write notes about how Madoff was obviously lying and how many people had reported to the SEC that Madoff was running a dishonest business. Even with so much distrust, and so many whistleblowers, it turned out that those sent to shine a light on Madoff’s operations all seemed to be looking in all the wrong places.

Part of planning an investigation is determining what is being investigated and how the investigation is going to be executed. A very important part of the process is determining, beforehand, what will be done with negative results. When third-party responses were received and they all stated Madoff had not done business with them as claimed, the responses appear to have been filed and no further action taken. When responses were not received, the SEC did not follow up to find out why nothing had been returned. They likely would have found that the institution had not responded to the inquiry because there was nothing to respond about. There does not appear to have been a defined protocol on what to do when the answer to the question, “did this happen” was “No.”

I urge you to, at the very least, read the executive summary of the OIG report. For me at least, what Madoff could get away with, time and time again, with each subsequent SEC examination or investigation, is jaw-dropping. The fact that 1) several whistleblowers shared their concerns and even accompanied them with a great deal of detail, 2) articles were written questioning Madoff’s returns, and yet 3) those with access to the information that could prove, with very little effort, that Madoff was not doing what he claimed to be doing found nothing of concern is something I struggle to comprehend. This whole sad history underlines the importance of referring to, and abiding by, our professional standards, to minimize the risk of missing a fraud like this one. Most importantly, it reduces the risk that someone might get an aneurysm trying to wrap their mind around how, even when so many others could see that something was amiss, the watchdog missed it all!