
Client’s Card Security

Our Chapter recently received a question from a reader of this blog about data privacy; specifically, she asked whether a client's compliance with the requirements of the Payment Card Industry Data Security Standard (PCI DSS) would provide reasonable assurance that the client organization's customer data privacy controls and procedures are adequate. The question came up in the course of a credit card fraud examination in which our reader's small CPA firm was involved. A very good question indeed! The short answer, in my opinion, is that although PCI DSS compliance audits cover some aspects of data privacy, they are limited to credit card data and would not, in themselves, be sufficient to convince a jury that data privacy is adequately protected throughout a whole organization. The question is interesting because of its bearing on the fraud risk assessments CFEs routinely conduct. It is important because CFEs should understand the scope (and limitations) of PCI DSS compliance activities within their client organizations and communicate the differences when reviewing corporate-wide data privacy for fraud prevention purposes. This understanding will also tend to prevent potential misunderstandings over duplication of review efforts with business process owners and fraud examination clients.

Given all the IT breaches and intrusions happening daily, consumers are rightly cynical these days about businesses' ability to protect their personal data. Consumers report that they're much more willing to do business with companies that have independently verified privacy policies and procedures. In-depth privacy fraud risk assessments can help organizations assess their preparedness for the outside review that inevitably follows a major customer data privacy breach. As I'm sure all the readers of this blog know, data privacy generally applies to information that can be associated with a specific individual or that has identifying characteristics that might be combined with other information to indicate a specific person. Such personally identifiable information (PII) is defined as any piece of data that can be used to uniquely identify, contact, or locate a single person. Information can be considered private without being personally identifiable. Sensitive personal data includes individual preferences, confidential financial or health information, or other personal information. An assessment of data privacy fraud risk encompasses the policies, controls, and procedures in place to protect PII.

In planning a fraud risk assessment of data privacy, CFEs should evaluate or consider, based on risk:

–The consumer and employee PII that the client organization collects, uses, retains, discloses, and discards.
–Privacy contract requirements and risk liabilities for all outsourcing partners, vendors, contractors, and other third parties involving sharing and processing of the organization’s consumer and employee data.
–Compliance with privacy laws and regulations impacting the organization’s specific business and industry.
–Previous privacy breaches within the organization and its third-party service providers, and reported breaches for similar organizations noted by clearinghouses like Dun & Bradstreet and in the client industry's trade press.
–The CFE should also consult with the client’s corporate legal department before undertaking the review to determine whether all or part of the assessment procedure should be performed at legal direction and protected as “attorney-client privileged” work products.

The next step in a privacy fraud risk assessment is selecting a framework for the review. Two frameworks to consider are the American Institute of Certified Public Accountants (AICPA) Privacy Framework and The IIA's Global Technology Audit Guide: Managing and Auditing Privacy Risks. For ACFE training purposes, one CFE working for a well-known online retailer reported organizing her fraud assessment report around the AICPA framework. She chose that methodology because it would be easily understood and supported by management, external auditors, and the audit committee. The AICPA's ten-component framework was useful both in developing standards for the organization and as an assessment framework:

–Management. The organization defines, documents, communicates, and assigns accountability for its privacy policies and procedures.
–Notice. The organization provides notice about its privacy policies and procedures and identifies the purposes for which PII is collected, used, retained, and disclosed.
–Choice and Consent. The organization describes the choices available to the individual customer and obtains implicit or explicit consent with respect to the collection, use, and disclosure of PII.
–Collection. The organization collects PII only for the purposes identified in the Notice.
–Use, Retention, and Disposal. The organization limits the use of PII to the purposes identified in the Notice and for which the individual customer has provided implicit or explicit consent. The organization retains these data for only as long as necessary to fulfill the stated purposes or as required by laws or regulations, and thereafter disposes of such information appropriately.
–Access. The organization provides individual customers with access to their PII for review and update.
–Disclosure to Third Parties. The organization discloses PII to third parties only for the purposes identified in the Notice and with the implicit or explicit consent of the individual.
–Security for Privacy. The organization protects PII against unauthorized physical and logical access.
–Quality. The organization maintains accurate, complete, and relevant PII for the purposes identified in the Notice.
–Monitoring and Enforcement. The organization monitors compliance with its privacy policies and procedures and has procedures to address privacy complaints and disputes.
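For CFEs who want to operationalize the framework above, the ten components lend themselves to a simple assessment checklist. The sketch below shows one hypothetical way to score each component on a maturity scale and surface gaps; the 0-to-3 scale, the threshold, and the sample scores are my own illustrative assumptions, not part of the AICPA framework.

```python
# Illustrative sketch: scoring a privacy fraud risk assessment against
# the AICPA's ten privacy components. The 0-3 maturity scale, threshold,
# and sample scores are assumptions, not part of the framework itself.

AICPA_COMPONENTS = [
    "Management", "Notice", "Choice and Consent", "Collection",
    "Use, Retention, and Disposal", "Access", "Disclosure to Third Parties",
    "Security for Privacy", "Quality", "Monitoring and Enforcement",
]

def summarize(scores, threshold=2):
    """Return (components scoring below the threshold, average maturity)."""
    gaps = [c for c in AICPA_COMPONENTS if scores.get(c, 0) < threshold]
    avg = sum(scores.get(c, 0) for c in AICPA_COMPONENTS) / len(AICPA_COMPONENTS)
    return gaps, round(avg, 2)

# Hypothetical scores gathered from walkthroughs and interviews
scores = {c: 3 for c in AICPA_COMPONENTS}
scores["Use, Retention, and Disposal"] = 1
scores["Monitoring and Enforcement"] = 1
gaps, avg = summarize(scores)  # gaps lists the two weak components
```

The value of such an artifact is in structuring discussion with process owners and the audit committee, not in the arithmetic itself.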

Using the detailed assessment procedures in the framework, the CFE, working with internal client staff, developed specific testing procedures for each component, which were performed over a two-month period. Procedures included traditional walkthroughs of processes, interviews with individuals responsible for IT security, technical testing of IT security and infrastructure controls, and review of physical storage facilities for documents containing PII. Technical scanning, performed independently by the retailer's IT staff, identified PII on servers and on some individual personal computers erroneously excluded from compliance monitoring. Facilitated sessions with the CFE and the individuals responsible for PII helped identify problem areas. The fraud risk assessment dramatically increased awareness of data privacy and identified several opportunities to strengthen ownership, accountability, controls, procedures, and training. As a result of the assessment, the retailer implemented a formal data classification scheme and increased IT security controls. Several of the vulnerabilities and required enhancements involved controls over hard-copy records containing PII. Management reacted positively to the overall report and requested that the CFE schedule recurring future reviews of privacy breach fraud vulnerability.
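The technical scanning described above can be approximated, in miniature, with pattern matching. The following sketch flags candidate PII (here, just SSN-shaped and email-shaped strings) in text; real scanning tools use far richer pattern libraries and validation logic, so treat this only as an illustration of the idea.

```python
# Illustrative sketch of a PII scan: flag strings shaped like US SSNs
# and email addresses. The patterns are deliberately simplified; real
# scanners validate matches and cover many more PII types.
import re

PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_text(text):
    """Return a dict mapping PII type to the matches found in the text."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

sample = "Contact jdoe@example.com; SSN on file: 123-45-6789."
```

In an engagement like the one above, hits from a scan of file servers and workstations would feed directly into the facilitated sessions with the individuals responsible for PII.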

Fraud risk assessments of client privacy programs can help make the business case within any organization for focusing on privacy now, and for promoting organizational awareness of privacy issues and threats. This is one of the most significant opportunities for fraud examiners to help assess risks and identify potential gaps that are daily proving so devastating if left unmanaged.

Analytic Reinforcements

Rumbi's post of last week on ransomware got me thinking, on a long drive back from Washington, about what an excellent tool the AICPA's new Cybersecurity Risk Management Reporting Framework is, not only for CPAs but for CFEs and for all our client organizations. As the seemingly relentless wave of cyberattacks continues with no sign of letting up, organizations are under intense pressure from key stakeholders and regulators to implement and enhance their cyber security and fraud prevention programs to protect customers, employees and all the types of valuable information in their possession.

According to research from the ACFE, the average total cost per company, per event of a data breach is $3.62 million. Initial damage estimates of a single breach, while often staggering, may not take into account less obvious and often undetectable threats such as the theft of intellectual property, espionage, destruction of data, attacks on core operations or attempts to disable critical infrastructure. The knock-on effects can persist for years and have devastating financial, operational and brand ramifications.

Given the present broad regulatory pressure to tighten cyber security controls and the visibility surrounding cyberrisk, a number of proposed regulations focused on improving cyber security risk management programs have been introduced in the United States over the past few years by our various governing bodies. One of the more prominent is a regulation by the New York Department of Financial Services (NYDFS) that prescribes certain minimum cyber security standards for entities regulated by the NYDFS. Based on an entity's risk assessment, the NYDFS regulation has specific requirements around data encryption, data protection and retention, third-party information security, application security, incident response and breach notification, board reporting, and required annual re-certifications.

However, organizations continue to report to the ACFE that they struggle to systematically report to stakeholders on the overall effectiveness of their cyber security risk management programs. In response, the AICPA in April of last year released a new cyber security risk management reporting framework intended to help organizations expand cyberrisk reporting to a broad range of internal and external users, including management and the board of directors. The AICPA's new reporting framework is designed to address the need for greater stakeholder transparency by providing in-depth, easily consumable information about the state of an organization's cyberrisk management program. The cyber security risk management examination uses an independent, objective reporting approach and employs broader and more flexible criteria. For example, it allows for the selection and use of any suitable, available control framework in establishing the entity's basic cyber security objectives and in developing and maintaining controls within the entity's cyber security risk management program, regardless of whether the standard is the US National Institute of Standards and Technology (NIST)'s Cybersecurity Framework, the International Organization for Standardization (ISO)'s ISO 27001/2 and related frameworks, or even an internally developed framework based on a combination of sources. The examination is voluntary and applies to all types of entities, but it should be considered by CFEs as a leading practice that provides management, boards and other key stakeholders with clear insight into the current state of an organization's cyber security program while identifying gaps or pitfalls that leave organizations vulnerable to cyber fraud and other intrusions.

What stakeholders might benefit from a client organization’s cyber security risk management examination report? Clearly, we CFEs as we go about our routine fraud risk assessments; but such a report, most importantly, can be vital in helping an organization’s board of directors establish appropriate oversight of a company’s cyber security risk program and credibly communicate its effectiveness to stakeholders, including investors, analysts, customers, business partners and regulators. By leveraging this information, boards can challenge management’s assertions around the effectiveness of their cyberrisk management and fraud prevention programs and drive more effective decision making. Active involvement and oversight from the board can help ensure that an organization is paying adequate attention to cyberrisk management and displaying due diligence. The board can help shape expectations for reporting on cyberthreats while also advocating for greater transparency and assurance around the effectiveness of the program.

The cyber security risk management report, in its initial and follow-up iterations, can be invaluable in providing overview guidance to CFEs and forensic accountants in targeting both fraud prevention and fraud detection/investigative analytics. We know from our ACFE training that data analytics need to be fully integrated into the investigative process. Ensuring that data analytics are embedded in the detection/investigative process requires support from all levels, starting with the managing CFE. It will be easier and more coherent for management to support that integration if management already supports cyber security risk management reporting. Management will also have an easier time reinforcing the use of analytics generally, although the data analytics function supporting fraud examination will still have to market its services, team leaders will still be challenged by management, and team members will still have to be trained to employ the newer analytical tools effectively.

The presence of a robust cyber security risk management reporting process should also assist the lead CFE in establishing goals for the implementation and use of data analytics in every investigation, and these goals should be communicated to the entire investigative team. It should be made clear at every level of the client organization that data analytics will support the investigative planning process for every detected fraud. The identification of business processes, IT systems, data sources, and potential analytic routines should be discussed and considered not only during planning, but also throughout every stage of the engagement. Key to obtaining the buy-in of all is to include investigative team members in identifying areas or tests that the analytics group will target in support of the field work. Initially, it will be important to highlight success stories and educate managers and team leaders about what is possible. Improving on the traditional investigative approach of document review, interviewing, transaction review, and the like, investigators can benefit from data analytics that allow more precise identification of the control deficiencies, instances of noncompliance with policies and procedures, and mis-assessments of high-risk areas that contributed to the development of the fraud in the first place. These same analytics can then be used to ensure that appropriate post-fraud management follow-up has occurred, by elevating the identified deficiencies to the cyber security risk management reporting process and by implementing enhanced fraud prevention procedures in areas of higher fraud risk. This process would be especially useful in responding to and following up on data breaches.
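To make the idea of an embedded analytic routine concrete, here is a deliberately simple example of one such test: flagging payments whose amounts deviate sharply from the population baseline. The z-score cutoff and the sample payment data are illustrative assumptions only.

```python
# Minimal example of an analytic routine embedded in an examination:
# flag payments whose amounts sit far from the baseline. The cutoff
# and sample data are illustrative assumptions.
import statistics

def flag_outliers(amounts, cutoff=2.5):
    """Return (index, amount) pairs more than `cutoff` population
    standard deviations from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [(i, a) for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > cutoff]

payments = [120, 135, 128, 110, 142, 125, 9800, 131, 118]
# flag_outliers(payments) singles out the 9800 payment for follow-up
```

In practice, routines like this would be parameterized per business process and run continuously, with hits routed to the investigative team rather than reviewed ad hoc.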

Once patterns are gathered and centralized, analytics can be employed to measure frequency of occurrence, bit sizes, the quantity of files executed, and average time of use. The math involved allows an examiner to grasp the big picture. Individuals, including examiners, are normally overwhelmed by the sheer volume of information, but automation of pattern-recognition techniques makes big data a tractable investigative resource. The larger the sample size, the easier it is to determine patterns of normal and abnormal behavior. Network haystacks can be combed by algorithms that notify the CFE, as information archeologist, about the probes of an insider threat, for example.
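One classic frequency-of-occurrence technique of the kind described here is comparing the leading-digit distribution of reported amounts against Benford's law; fabricated figures often fail to follow the expected logarithmic curve. A minimal sketch, with the client's transaction data standing in as a plain list of amounts:

```python
# Benford's law check: observed first-digit frequencies vs. the expected
# logarithmic distribution. The inputs would be the client's transaction
# amounts; everything here is illustrative.
import math
from collections import Counter

def first_digit_freq(amounts):
    """Observed relative frequency of leading digits 1-9."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    counts = Counter(digits)
    return {d: counts.get(d, 0) / len(digits) for d in range(1, 10)}

def benford_expected(d):
    """Benford's law: P(first digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

# benford_expected(1) is about 0.301: roughly 30 percent of naturally
# occurring amounts should lead with a 1; large gaps between observed
# and expected frequencies are a signal to drill down, not proof of fraud.
```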

Without analytics, enterprise-level fraud examination and risk assessment is a diminished discipline, limited in scope and effectiveness. Without an educated investigative workforce, armed with a programming language for automation and an accompanying data-mining philosophy and skill set, the control needs of management leaders at the enterprise level will go unmet; leaders will have neither the data needed for fraud prevention on a large scale nor a workforce capable of getting them that data in the emergency following a breach or penetration.

The beauty of analytics, from a security and fraud prevention perspective, is that it allows the investigative efforts of the CFE to align with the critical functions of corporate business. It can be used to discover recurring risks, incidents and common trends that might otherwise have been missed. Establishing numerical baselines on quantified data can supplement a normal investigator's tasks and enhance the examiner's ability to see beneath the surface of what is presented in an examination. Good communication of analyzed data gives decision makers a better view of their systems through a holistic approach, which can aid in the creation of enterprise-level goals. Analytics and data mining add dimension and depth to the CFE's examination process at the enterprise level and dovetail with, and are supported beautifully by, the AICPA's cyber security risk management reporting initiative.

CFEs should encourage the staffs of client analytics support functions to possess:

–understanding of the employing enterprise’s data concepts (data elements, record types, database types, and data file formats).
–understanding of logical and physical database structures.
–the ability to communicate effectively with IT and related functions to achieve efficient data acquisition and analysis.
–the ability to perform ad hoc data analysis as required to meet specific fraud examiner and fraud prevention objectives.
–the ability to design, build, and maintain well-documented, ongoing automated data analysis routines.
–the ability to provide consultative assistance to others who are involved in the application of analytics.

Targeting the Blockchain

Both the blockchain and the digital engineering support structures underlying the digital currencies, fast becoming the financial and transactional media of choice for the nefarious, are increasingly finding themselves under various modes of fraudster attack.

Bitcoins, the most familiar blockchain application, were invented in 2009 by a mysterious person (or group of people) using the alias Satoshi Nakamoto, and the coins are created or ‘mined’ by solving increasingly difficult mathematical equations, requiring extensive computing power. The system is designed to ensure no more than twenty-one million Bitcoins are ever generated, thereby preventing a central authority from flooding the market with new Bitcoins. Most Bitcoins are purchased on third-party exchanges with traditional currencies, such as dollars or euros, or with credit cards. The exchange rates against the dollar for Bitcoin fluctuate wildly and have ranged from fifty cents per coin around the time of its introduction to over $1,240 in 2013 to around $600 today.
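The twenty-one million cap is not an arbitrary constant written into the software; it falls out of the halving schedule. The block subsidy began at 50 bitcoins and halves every 210,000 blocks, and summing that geometric series (in integer satoshis, as the protocol itself does) gives a total supply just under 21 million:

```python
# The ~21 million cap follows from the halving schedule: the block
# subsidy started at 50 BTC and halves every 210,000 blocks. Summing
# the series in integer satoshis, as the protocol does, gives the total.
SATOSHIS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000

def total_supply_btc():
    subsidy = 50 * SATOSHIS_PER_BTC   # initial block reward, in satoshis
    total = 0
    while subsidy > 0:
        total += HALVING_INTERVAL * subsidy
        subsidy //= 2                 # integer halving, so it reaches zero
    return total / SATOSHIS_PER_BTC

# total_supply_btc() comes out just under 21,000,000
```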

The whole point of using a blockchain is to let people, in particular people who don't trust one another, share valuable data in a secure, tamper-proof way. That's because blockchains store data using sophisticated math and innovative software rules that are extremely difficult for attackers to manipulate. But as cases like the Mt. Gox Bitcoin hack demonstrate, the security of even the best-designed blockchain and associated support systems can fail at the places where the fancy math and software rules come into contact with humans, humans who are skilled fraudsters, in the real world, where things quickly get messy. For CFEs to understand why, start with what makes blockchains "secure" in principle. Bitcoin is a good example. In Bitcoin's blockchain, the shared data is the history of every Bitcoin transaction ever made: it's a plain old accounting ledger. The ledger is stored in multiple copies on a network of computers, called "nodes." Each time someone submits a transaction to the ledger, the nodes check to make sure the transaction is valid, that whoever spent a bitcoin had a bitcoin to spend. A subset of the nodes competes to package valid transactions into "blocks" and add them to a chain of previous blocks. The owners of these nodes are called miners. Miners who successfully add new blocks to the chain earn bitcoins as a reward.

What makes this system theoretically tamperproof is two things: a cryptographic fingerprint unique to each block, and a consensus protocol, the process by which the nodes in the network agree on a shared history. The fingerprint, called a hash, takes a lot of computing time and energy to generate initially. It thus serves as proof that the miner who added the block to the blockchain did the computational work to earn a bitcoin reward (for this reason, Bitcoin is said to employ a proof-of-work protocol). It also serves as a kind of seal, since altering the block would require generating a new hash. Verifying whether or not the hash matches its block, however, is easy, and once the nodes have done so they update their respective copies of the blockchain with the new block. This is the consensus protocol.

The final security element is that the hashes also serve as the links in the blockchain: each block includes the previous block’s unique hash. So, if you want to change an entry in the ledger retroactively, you have to calculate a new hash not only for the block it’s in but also for every subsequent block. And you have to do this faster than the other nodes can add new blocks to the chain. Consequently, unless you have computers that are more powerful than the rest of the nodes combined (and even then, success isn’t guaranteed), any blocks you add will conflict with existing ones, and the other nodes will automatically reject your alterations. This is what makes the blockchain tamperproof, or immutable.
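The chaining property just described can be illustrated with a toy example: each block stores the previous block's hash, so altering any historical entry invalidates every block after it. (A real blockchain adds proof-of-work, a consensus protocol, and much more; this sketch shows only the tamper-evidence of the hash links.)

```python
# Toy illustration of hash-linking: each block stores the previous
# block's hash, so changing any block breaks the links after it.
# This demonstrates only the chaining property, nothing more.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 fingerprint of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(transactions):
    chain, prev = [], "0" * 64          # genesis predecessor is all zeros
    for tx in transactions:
        block = {"tx": tx, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain):
    """Re-verify every link: each block must cite its predecessor's hash."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["A pays B 5", "B pays C 2", "C pays D 1"])
# Editing any earlier block's transaction makes is_valid(chain) False,
# because every later block still cites the old hash.
```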

The reality, as experts are increasingly pointing out, is that implementing blockchain theory in actual practice is difficult. The mere fact that a system works like Bitcoin, as many copycat cryptocurrencies do, doesn’t mean it’s just as secure as Bitcoin. Even when developers use tried and true cryptographic tools, it’s easy to accidentally put them together in ways that are not secure. Bitcoin has been around the longest, so it’s just the most thoroughly battle-tested.

As the ACFE and others have indicated, fraudsters have also found creative ways to cheat. It's been shown that there is a way to subvert a blockchain even if you have less than half the mining power of the other miners. The details are somewhat technical, but essentially a "selfish miner" can gain an unfair advantage by fooling other nodes into wasting time on already-solved crypto-puzzles.

The point is that no matter how tamperproof a blockchain protocol is, it does not exist in a vacuum. The cryptocurrency hacks driving recent headlines are usually failures at places where blockchain systems connect with the real world, for example, in software clients and third-party applications. Hackers can, for instance, break into hot wallets, internet-connected applications for storing the private cryptographic keys that anyone who owns cryptocurrency requires in order to spend it. Wallets owned by online cryptocurrency exchanges have become prime targets. Many exchanges claim they keep most of their users' money in cold hardware wallets, storage devices disconnected from the internet. But as the recent heist of more than $500 million worth of cryptocurrency from a Japan-based exchange showed, that's not always the case.

Perhaps the most complicated touchpoints between blockchains and the real world are smart contracts, computer programs stored in certain kinds of blockchain that can automate financial and other contract-related business transactions. Several years ago, hackers exploited an unforeseen quirk in a smart contract written on Ethereum's blockchain to steal 3.6 million Ether, worth around $80 million at the time, from a new kind of blockchain-based investment fund. Since the investment fund's code lived on the blockchain, the Ethereum community had to push a controversial software upgrade called a hard fork to get the money back, essentially creating a new version of history in which the money was never stolen. According to a number of experts, researchers are scrambling to develop other methods for ensuring that smart contracts won't malfunction.

An important supposed security guarantee of a blockchain system is decentralization. If copies of the blockchain are kept on a large and widely distributed network of nodes, there’s no one weak point to attack, and it’s hard for anyone to build up enough computing power to subvert the network. But recent reports in the trade press indicate that neither Bitcoin nor Ethereum is as decentralized as the public has been led to believe. The reports indicate that the top four bitcoin-mining operations had more than 53 percent of the system’s average mining capacity per week. By the same measure, three Ethereum miners accounted for 61 percent of Ethereum transactions.

Some experts say alternative consensus protocols, perhaps ones that don’t rely on mining, could be more secure. But this hypothesis hasn’t been tested at a large scale, and new protocols would likely have their own security problems. Others see potential in blockchains that require permission to join, unlike in Bitcoin’s case, where anyone who downloads the software can join the network.

Such consensus systems are anathema to the antihierarchical ethos of cryptocurrencies, but the approach appeals to financial and other institutions looking to exploit the advantages of a shared cryptographic database. Permissioned systems, however, raise their own questions. Who has the authority to grant permission? How will the system ensure that the validators are who they say they are? A permissioned system may make its owners feel more secure, but it really just gives them more control, which means they can make changes whether or not other network participants agree, something true believers would see as violating the very idea of blockchain.

So, in the end, for CFEs, the word ‘secure’ ends up being very hard to define in the context of blockchains. Secure from whom? Secure for what?

A final thought for CFEs and forensic accountants. There are no real names stored on the Bitcoin blockchain, but it records every transaction a user makes; every time the currency is used, the user risks exposing information that can tie his or her identity to those actions. It is known from documents leaked by Edward Snowden that the US National Security Agency has sought ways of connecting activity on the Bitcoin blockchain to people in the physical world. Should governments seek to create and enforce blacklists, they will find that the power to decide which transactions to honor may lie in the hands of just a few Bitcoin miners.

Fraud Prevention Oriented Data Mining

One of the most useful components of our Chapter's recently completed two-day seminar on Cyber Fraud & Data Breaches was our speaker, Cary Moore's, observations on the fraud-fighting potential of management's creative use of data mining. For CFEs and forensic accountants, the benefits of data mining go much deeper than its use as just a tool to help our clients combat traditional fraud, waste and abuse. In its simplest form, data mining provides automated, continuous feedback to ensure that systems and anti-fraud related internal controls operate as intended and that transactions are processed in accordance with policies, laws and regulations. It can also provide our client managements with timely information that can permit a shift from traditional retrospective/detective activities to the proactive/preventive activities so important to today's concept of what effective fraud prevention should be. Data mining can put the organization out front of potential fraud vulnerability problems, giving it an opportunity to act to avoid or mitigate the impact of negative events or financial irregularities.

Data mining tests can produce “red flags” that help identify the root cause of problems and allow actionable enhancements to systems, processes and internal controls that address systemic weaknesses. Applied appropriately, data mining tools enable organizations to realize important benefits, such as cost optimization, adoption of less costly business models, improved program, contract and payment management, and process hardening for fraud prevention.
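As a concrete illustration of the sort of "red flag" test meant here, consider one of the oldest data mining routines in the fraud examiner's kit: the same vendor paid the same amount against the same invoice number more than once. The record layout and the sample data below are illustrative assumptions.

```python
# Minimal sketch of a classic red-flag test: duplicate payments, where
# the same (vendor, invoice, amount) key appears more than once. The
# record layout and sample data are illustrative assumptions.
from collections import Counter

def duplicate_payments(payments):
    """Return (vendor, invoice, amount) keys that occur more than once."""
    keys = Counter((p["vendor"], p["invoice"], p["amount"]) for p in payments)
    return [k for k, n in keys.items() if n > 1]

payments = [
    {"vendor": "Acme", "invoice": "1001", "amount": 2500.00},
    {"vendor": "Acme", "invoice": "1001", "amount": 2500.00},  # duplicate
    {"vendor": "Best", "invoice": "A-77", "amount": 410.00},
]
```

A hit from a test like this is a starting point for inquiry, not a conclusion; legitimate recurring charges must be ruled out before anything is escalated.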

In its most complex, modern form, data mining can be used to:

–Inform decision-making
–Provide predictive intelligence and trend analysis
–Support mission performance
–Improve governance capabilities, especially dynamic risk assessment
–Enhance oversight and transparency by targeting areas of highest value or fraud risk for increased scrutiny
–Reduce costs especially for areas that represent lower risk of irregularities
–Improve operating performance

Cary emphasized that leading, successful organizational implementers have tended to take a measured approach initially when embarking on a fraud prevention-oriented data mining initiative, starting small and focusing on particular “pain points” or areas of opportunity to tackle first, such as whether only eligible recipients are receiving program funds or targeting business processes that have previously experienced actual frauds. Through this approach, organizations can deliver quick wins to demonstrate an early return on investment and then build upon that success as they move to more sophisticated data mining applications.

So, according to ACFE guidance, what are the ingredients of a successful data mining program oriented toward fraud prevention? There are several steps, which should be helpful to any organization in setting up such an effort with fraud, waste, abuse identification/prevention in mind:

–Avoid problems by adopting commonly used data mining approaches and related tools.

This is essentially a cultural transformation for any organization that has either not understood the value these tools can bring or has viewed their implementation as someone else's responsibility. Given the cyber fraud and breach related challenges faced by all types of organizations today, it should be easier for fraud examiners and forensic accountants to convince management of the need to use these tools to prevent problems and to improve the ability to focus on cost-effective means of better controlling fraud-related vulnerabilities.

–Understand the potential that data mining provides to the organization to support day-to-day management of fraud risk and strategic fraud prevention.

Understanding both the value of data mining and how to use the results is at the heart of effectively leveraging these tools. The CEO and corporate counsel can play an important educational and support role for a program that must ultimately be owned by line managers who have responsibility for their own programs and operations.

–Adopt a version of an enterprise risk management program (ERM) that includes a consideration of fraud risk.

An organization must thoroughly understand its risks and establish a risk appetite across the enterprise. In this way, it can focus on those areas of highest value to the organization. An organization should take stock of its risks and ask itself fundamental questions, such as:

-What do we lose sleep over?
-What do we not want to hear about us on the evening news or read about in the print media or on a blog?
-What do we want to make sure happens and happens well?

Data mining can be an integral part of an overall program for enterprise risk management. Both are premised on establishing a risk appetite and incorporating a governance and reporting framework. This framework in turn helps ensure that day-to-day decisions are made in line with the risk appetite, and are supported by data needed to monitor, manage and alleviate risk to an acceptable level. The monitoring capabilities of data mining are fundamental to managing risk and focusing on issues of importance to the organization. The application of ERM concepts can provide a framework within which to anchor a fraud prevention program supported by effective data mining.

–Determine how your client is going to use the data mined information in managing the enterprise and safeguarding enterprise assets from fraud, waste and abuse.

Once an organization is on top of the data, using it effectively becomes paramount and should be considered as the information requirements are being developed. As Cary pointed out, getting the right data has been cited as being the top challenge by 20 percent of ACFE surveyed respondents, whereas 40 percent said the top challenge was the “lack of understanding of how to use analytics”. Developing a shared understanding so that everyone is on the same page is critical to success.

–Keep building and enhancing the application of data mining tools.

As indicated above, a tried-and-true approach is to begin with the low-hanging fruit, something that will get your client started and will provide an opportunity to learn on a smaller scale. The experience gained will help enable the expansion and the enhancement of data mining tools. While this may be done gradually, it should be a priority and not viewed as the “management reform initiative of the day.” There should be a clear game plan for building data mining capabilities into the fiber of management’s fraud and breach prevention effort.

–Use data mining as a tool for accountability and compliance with the fraud prevention program.

It is important to hold managers accountable not only for helping institute robust data mining programs, but for the results of these programs. Has the client developed performance measures that clearly demonstrate the results of using these tools? Do they reward those managers who are in the forefront in implementing these tools? Do they make it clear to those who don’t that resistance or hesitation is not acceptable?

–View this as a continuous process and not a “one and done” exercise.

Risks change over time. Fraudsters are always adjusting their targets and moving to exploit new and emerging weaknesses. They follow the money. Technology will continue to evolve, introducing both new risks and new opportunities and tools for management. This client management effort to protect against dangers and rectify errors is one that never ends, but also one that can pay benefits in preventing or managing cyber-attacks and breaches that far outweigh the costs if effectively and efficiently implemented.

In conclusion, the stark realities of today’s cyber related challenges at all levels of business, private and public, and the need to address ever rising service delivery expectations have raised the stakes for managing the cost of doing business and conducting the on-going war against fraud, waste and abuse. Today’s client-managers should want to be on top of problems before they become significant, and the strategic use of data mining tools can help them manage and protect their enterprises while saving money: a win/win opportunity for the client and for the CFE.

Threat Assessment & Cyber Security

One rainy Richmond evening last week I attended the monthly dinner meeting of one of the professional organizations of which I’m a member. Our guest speaker’s presentation was outstanding and, in my opinion, well worth sharing with fellow CFE’s, especially as we find more and more of our clients grappling with the reality of ever-evolving cyber threats.

Our speaker started by indicating that, according to a wide spectrum of current thinking, technology issues in isolation should be but one facet of the overall cyber defense strategy of any enterprise. A holistic view on people, process and technology is required in any organization that wants to make its chosen defense strategy successful and, to be most successful, that strategy needs to be supplemented with a good dose of common sense creative thinking. That creative thinking proved to be the main subject of her talk.

Ironically, the sheer size, complexity and geopolitical diversity of the modern-day enterprise can constitute an inherent obstacle for its goal of achieving business objectives in a secured environment.  The source of the problem is not simply the cyber threats themselves, but threat agents. The term “threat agent,” from the Open Web Application Security Project (OWASP), is used to indicate an individual or group that can manifest a threat. Threat agents are represented by the phenomena of:

–Hacktivism;
–Corporate Espionage;
–Government Actors;
–Terrorists;
–Common Criminals (individual and organized).

Irrespective of the type of threat, the threat agent takes advantage of an identified vulnerability and exploits it in the attempt to negatively impact the value the individual business has at risk. The attempt to execute the threat in combination with the vulnerability is called hacking. When this attempt is successful, and the threat agent can negatively impact the value at risk, it can be concluded that the vulnerability was successfully exploited. So, essentially, enterprises are trying to defend against hacking and, more importantly, against the threat agent that is the hacker in his or her many guises. The ACFE identifies hacking as the single activity that has resulted in the greatest number of cyber breaches in the past decade.

While there is no one-size-fits-all standard to build and run a sustainable security defense in a generic enterprise context, most companies currently deploy something resembling the individual components of the following general framework:

–Business Drivers and Objectives;
–A Risk Strategy;
–Policies and Standards;
–Risk Identification and Asset Profiling;
–People, Process, Technology;
–Security Operations and Capabilities;
–Compliance Monitoring and Reporting.

Most IT risk and security professionals would be able to identify this framework and agree with the assertion that it’s a sustainable approach to managing an enterprise’s security landscape. Our speaker pointed out, however, that in her opinion, if the current framework were indeed working as intended, the number of security incidents would be expected to show a downward trend as most threats would fail to manifest into full-blown incidents. They could then be routinely identified by enterprises as known security problems and dealt with by the procedures operative in day-to-day security operations. Unfortunately for the existing framework, however, recent security surveys conducted by numerous organizations and trade groups clearly show an upward trend of rising security incidents and breaches (as every reader of daily press reports well knows).

The rising tide of security incidents and breaches is not surprising since the trade press also reports an average of 35 new, major security failures on each and every day of the year.  Couple this fact with the ease of execution and ready availability of exploit kits on the Dark Web and the threat grows in both probability of exploitation and magnitude of impact. With speed and intensity, each threat strikes the security structure of an enterprise and whittles away at its management credibility to deal with the threat under the routine, daily operational regimen presently defined. Hence, most affected enterprises endure a growing trend of negative security incidents experienced and reported.

During the last several years, in response to all this, many firms have responded by experimenting with a new approach to the existing paradigm. These organizations have implemented emergency response teams to respond to cyber-threats and incidents. These teams are a novel addition to the existing control structure and have two main functions: real-time response to security incidents and the collection of concurrent internal and external security intelligence to feed predictive analysis. Being able to respond to security incidents via a dedicated response team boosts the capacity of the operational organization to contain and recover from attacks. Responding to incidents, however efficiently, is, in any case, a reactive approach to deal with cyber-threats but isn’t the whole story. This is where cyber-threat intelligence comes into play. Threat intelligence is a more proactive means of enabling an organization to predict incidents. However, this approach also has a downside. The influx of a great deal of intelligence information may limit the ability of the company to render it actionable on a timely basis.

Cyber threat assessments are an effective means to tame what can be this overwhelming influx of intelligence information. Cyber threat assessment is currently recognized in the industry as red teaming, which is the practice of viewing a problem from an adversary or competitor’s perspective. As part of an IT security strategy, enterprises can use red teams to test the effectiveness of the security structure as a whole and to provide a relevance factor to the intelligence feeds on cyber threats. This can help CEOs decide what threats are relevant and have higher exposure levels compared to others. The evolution of cyber threat response, cyber threat intelligence and cyber threat assessment (red teams) in conjunction with the existing IT risk framework can be used as an effective strategy to counter the agility of evolving cyber threats. The cyber threat assessment process assesses and challenges the structure of existing enterprise security systems, including designs, operational-level controls and the overall cyber threat response and intelligence process to ensure they remain capable of defending against current relevant exploits.

Cyber threat assessment exercises can also be extremely helpful in highlighting the most relevant attacks and in quantifying their potential impacts. The word “adversary” in the definition of the term ‘red team’ is key in that it emphasizes the need to independently challenge the security structure from the viewpoint of an attacker. Red team exercises should be designed to be independent of the scope, asset profiling, security, IT operations and coverage of existing security policies. Only then can enterprises realistically apply the attacker’s perspective, measure the success of its risk strategy and see how it performs when challenged. It’s essential that red team exercises have the freedom to test the complete security structure and to point to flaws in all components of the IT risk framework. It’s a common notion that a red team exercise is a penetration test. This is not the case. Use of penetration test techniques by red teams is a means to identify the information required to replicate cyber threats and to create a controlled security incident. The technical shortfalls that are identified during standard penetration testing are mere symptoms of gaps that may exist in the governance of people, processes and technology. Hence, to make the organization more resilient against cyber threats, red team focus should be kept on addressing the root cause and not merely on fixing the security flaws discovered during the exercise. Another key point is to include cyber threat response and threat monitoring in the scope of such assessments. This demands that red team exercises be executed, and partially announced, with CEO-level approval. This ensures that enterprises challenge the end-to-end capabilities of an enterprise to cope with a real-time security incident. Lessons learned from red teaming can be documented to improve the overall security posture of the organization and as an aid in dealing with future threats.

Our speaker concluded by saying that as cyber threats evolve, one-hundred percent security for an active business is impossible to achieve. Business is about making optimum use of existing resources to derive the desired value for stakeholders. Cyber-defense cannot be an exception to this rule. To achieve optimized use of their security investments, CEOs should ensure that security spending for their organization is mapped to the real emerging cyber threat landscape. Red teaming is an effective tool to challenge the status quo of an enterprise’s security framework and to make informed judgements about the actual condition of its security posture today. Not only can the judgements resulting from red team exercises be used to improve cyber threat defense, they can also prove an effective mechanism to guide a higher return on cyber-defense investment.

Analytics & Fraud Prevention

During our Chapter’s live training event last year, ‘Investigating on the Internet’, our speaker Liseli Pennings, pointed out that, according to the ACFE’s 2014 Report to the Nations on Occupational Fraud and Abuse, organizations that have proactive, internet oriented, data analytics in place have a 60 percent lower median loss because of fraud, roughly $100,000 lower per incident, than organizations that don’t use such technology. Further, the report went on, use of proactive data analytics cuts the median duration of a fraud in half, from 24 months to 12 months.

This is important news for CFE’s who are daily confronting more sophisticated frauds and criminals who are increasingly cyber based.  It means that integrating more mature forensic data analytics capabilities into a fraud prevention and compliance monitoring program can improve risk assessment, detect potential misconduct earlier, and enhance investigative field work. Moreover, forensic data analytics is a key component of effective fraud risk management as described in The Committee of Sponsoring Organizations of the Treadway Commission’s most recent Fraud Risk Management Guide, issued in 2016, particularly around the areas of fraud risk assessment, prevention, and detection.  It also means that, according to Pennings, fraud prevention and detection is an ideal big data-related organizational initiative. With the growing speed at which they generate data, specifically around their financial reporting and sales business processes, our larger CFE client organizations need ways to prioritize risks and better synthesize information using big data technologies, enhanced visualizations, and statistical approaches to supplement traditional rules-based investigative techniques supported by spreadsheet or database applications.

But with this analytics and fraud prevention integration opportunity comes a caution. As always, before jumping into any specific technology or advanced analytics technique, it’s crucial to first ask the right risk or control-related questions to ensure the analytics will produce meaningful output for the business objective or risk being addressed. What business processes pose a high fraud risk? High-risk business processes include the sales (order-to-cash) cycle and payment (procure-to-pay) cycle, as well as payroll, accounting reserves, travel and entertainment, and inventory processes. Which high-risk accounts within the business process could reveal unusual account pairings, such as a debit to depreciation and an offsetting credit to a payable, or accounts with vague or open-ended “catch all” descriptions such as “miscellaneous,” “administrative,” or blank account names? Who recorded or authorized the transaction? Posting analysis or approver reports could help detect unauthorized postings or inappropriate segregation of duties by looking at the number of payments by name, minimum or maximum amounts, sum totals, or statistical outliers. When did transactions take place? Analyzing transaction activities over time could identify spikes or dips in activity such as before and after period ends or weekend, holiday, or off-hours activities. Where does the CFE see geographic risks, based on previous events, the economic climate, cyber threats, recent growth, or perceived corruption? Further segmentation can be achieved by business units within regions and by the accounting systems on which the data resides.
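The “who” and “when” tests above lend themselves to very simple rules-based analytics before any advanced tooling is brought in. A minimal sketch of an off-hours timing test and a statistical-outlier test, using hypothetical journal-entry records whose field names, dates, and thresholds are all illustrative:

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical journal-entry extract; field names are illustrative only.
entries = [
    {"id": 1, "user": "jdoe",   "amount": 1200.00, "posted": "2023-03-06 14:22"},
    {"id": 2, "user": "jdoe",   "amount": 1150.00, "posted": "2023-03-07 09:10"},
    {"id": 3, "user": "asmith", "amount": 1300.00, "posted": "2023-03-08 16:45"},
    {"id": 4, "user": "jdoe",   "amount": 9800.00, "posted": "2023-03-11 23:50"},
    {"id": 5, "user": "asmith", "amount": 1250.00, "posted": "2023-03-09 11:05"},
]

def off_hours(ts):
    """Flag postings made on weekends or outside 07:00-19:00 business hours."""
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    return t.weekday() >= 5 or not (7 <= t.hour < 19)

def amount_outliers(rows, z=1.5):
    """Flag amounts more than z standard deviations from the mean
    (z is an illustrative tuning choice, not a standard)."""
    amts = [r["amount"] for r in rows]
    mu, sd = mean(amts), stdev(amts)
    return [r for r in rows if sd and abs(r["amount"] - mu) / sd > z]

timing_flags = [r["id"] for r in entries if off_hours(r["posted"])]
outlier_flags = [r["id"] for r in amount_outliers(entries)]
```

Here entry 4 trips both tests: it was posted late on a Saturday night and its amount sits well away from the population mean. In practice the same pattern scales up with a database or analytics platform behind it, but the questions being asked are identical.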

The benefits of implementing a forensic data analytics program must be weighed against challenges such as obtaining the right tools or professional expertise, combining data (both internal and external) across multiple systems, and the overall quality of the analytics output. To mitigate these challenges and build a successful program, the CFE should consider that the priority of the initial project matters. Because the first project often is used as a pilot for success, it’s important that the project address meaningful business or audit risks that are tangible and visible to client management. Further, this initial project should be reasonably attainable, with minimal dollar investment and actionable results. It’s best to select a first project that has strong demand, has data that resides in easily accessible sources, and has a compelling, measurable return on investment. Areas such as insider threat, anti-fraud, anti-corruption, or third-party relationships make for good initial projects.

In the health care insurance industry where I worked for many years, one of the key goals of forensic data analytics is to increase the detection rate of health care provider billing non-compliance, while reducing the risk of false positives. From a capabilities perspective, organizations need to embrace both structured and unstructured data sources that consider the use of data visualization, text mining, and statistical analysis tools. Since the CFE will usually be working as a member of a team, the team should demonstrate the first success story, then leverage and communicate that success model widely throughout the organization. Results should be validated before successes are communicated to the broader organization. For best results and sustainability of the program, the fraud prevention team should be a multidisciplinary one that includes IT, business users, and functional specialists, such as management scientists, who are involved in the design of the analytics associated with the day-to-day operations of the organization and hence related to the objectives of  the fraud prevention program. It helps to communicate across multiple departments to update key stakeholders on the program’s progress under a defined governance regime. The team shouldn’t just report noncompliance; it should seek to improve the business by providing actionable results.
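The billing-compliance goal described above is often approached as a peer-comparison problem: a provider whose billing pattern sits far from the norm for comparable providers is worth a second look, and the second-stage review is where false positives get weeded out. A minimal sketch under that assumption, with wholly invented provider names and amounts:

```python
from statistics import mean, stdev

# Hypothetical per-visit billed amounts by provider; names and figures invented.
claims = {
    "prov_a": [110, 120, 115, 108, 112],
    "prov_b": [105, 118, 111, 109, 114],
    "prov_c": [290, 310, 305, 298, 302],  # bills far above the peer norm
}

# Compare each provider's average billing to the peer-group distribution.
peer_means = {p: mean(v) for p, v in claims.items()}
mu, sd = mean(peer_means.values()), stdev(peer_means.values())

# Flag providers more than one standard deviation above peers (threshold is
# illustrative); flagged providers go to manual review, where legitimate
# explanations such as case mix or specialty reduce false positives.
flagged = sorted(p for p, m in peer_means.items() if sd and (m - mu) / sd > 1.0)
```

The statistical screen does the volume work; the multidisciplinary review team the paragraph describes is what keeps the flag list honest.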

The forensic data analytics functional specialists should not operate in a vacuum; every project needs one or more business champions who coordinate with IT and the business process owners. Keep the analytics simple and intuitive; don’t pack so much information into one report that it becomes hard to understand. Finally, invest time in automation, not manual refreshes, to make the analytics process sustainable and repeatable. The best trends, patterns, or anomalies often come when multiple months of vendor, customer, or employee data are analyzed over time, not just in the aggregate. Also, keep in mind that enterprise-wide deployment takes time. While quick projects may take four to six weeks, integrating the entire program can easily take more than one or two years. Programs need to be refreshed as risks and business activities change, and staff need updates to training, collaboration, and modern technologies.
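The over-time-versus-aggregate point is easy to demonstrate: an annual vendor total can look unremarkable while a single month hides a spike. A small sketch with hypothetical monthly payment totals (the 2x-median spike rule is an illustrative choice, not a standard):

```python
from statistics import median

# Hypothetical monthly payment totals to a single vendor; the annual
# aggregate looks ordinary, but the month-by-month view exposes a spike.
monthly = {"Jan": 10000, "Feb": 10500, "Mar": 9800,
           "Apr": 31000, "May": 10200, "Jun": 9900}

med = median(monthly.values())                            # typical monthly spend
spikes = [m for m, v in monthly.items() if v > 2 * med]   # months at 2x+ typical
```

An aggregate review would only see roughly $81,400 over six months; the monthly series immediately isolates April for follow-up, which is exactly the kind of pattern that disappears when data is analyzed only in the aggregate.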

Research findings by the ACFE and others are providing more and more evidence of the benefits of integrating advanced forensic data analytics techniques into fraud prevention and detection programs. By helping increase their client organization’s maturity in this area, CFE’s can assist in delivering a robust fraud prevention program that is highly focused on preventing and detecting fraud risks.