Category Archives: Data Breaches & Cyberfraud

The Know It All

As fraud examiners intimately concerned with the general ongoing state of health of fraud management and response systems, we find ourselves constantly looking at the integrity of the data that's truly the lifeblood of today's client organizations. We're constantly evaluating the network of anti-fraud controls we hope will help keep those pesky, uncontrolled, random data vulnerabilities to a minimum. Every little bit of critical information that gets mishandled or falls through the cracks, every transaction that doesn't get recorded, every anti-fraud policy or procedure that's misapplied has some effect on the client's overall fraud management picture.

When it comes to managing its client, financial and payment data, almost every organization has a Pauline. Pauline's the person everyone goes to for the answers about data, and about the state of the system(s) that process it, that no one else in her unit ever seems to have. That's because Pauline is an exceptional employee with years of detailed, hands-on experience in daily financial system operations and maintenance. Pauline is also an example of the extraordinary level of dependence that many organizations have today on a small handful of their key employees. The Great Recession, during which enterprises relied on retaining the experienced employees they already had rather than on traditional hiring and cross-training practices, only exacerbated a still existing, ever growing trend. The very real threat to the fraud management system that the Paulines of the corporate data world pose is not so much that they will commit fraud themselves (although that's an ever-present possibility) but that they will retire or get another job out of state, taking their vital knowledge of the company systems and data with them.

The day after Pauline’s retirement party and, to an increasing degree thereafter, it will dawn on  Pauline’s unit management that it’s lost a large amount of valuable information about the true state of its data and financial processing system(s), of its total lack of a large amount of system critical data documentation that’s been carried around nowhere but in Jane’s head.  The point is that, for some organizations, their reliance on a few key employees for day to day, operationally related information on their data goes well beyond what’s appropriate and constitutes an unacceptable level of risk to their fraud prevention system.  Today’s newspapers and the internet are full of stories about data breeches, only reinforcing the importance of vulnerable data and of its documentation to the on-going operational viability of our client organizations. 

Anyone who's investigated frauds involving large-scale financial systems (insurance claims, bank records, client payment information) is painfully aware that when the composition of data changes (field definitions or content), surprisingly little of that change-related information is ever formally documented. Most of the information is stored in the heads of a few key employees, and those key employees aren't necessarily the ones involved in everyday, routine data management projects. There's always a significant level of detail that's gone undocumented, left out or to chance, and it falls to the analyst of the data (be s/he an auditor, a management scientist, a fraud examiner or other assurance professional) to find the anomalies and question them. The anomalies might be in the form of missing data, changes in data field definitions, or changes in the content of the fields; the possibilities are endless. Without proper, formal documentation, the immediate or future significance of these types of anomalies for the fraud management system, and for the overall fraud risk assessment process itself, becomes almost impossible to determine.
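As a purely illustrative sketch (the file name, column names, and documented code list below are hypothetical), an examiner might script a first-pass scan for exactly these kinds of anomalies: missing values, fields whose value ranges shift suddenly, and field content that falls outside the documented code set.

```python
import pandas as pd

# Hypothetical extract of a client payment file; column names are assumptions.
df = pd.read_csv("payments_extract.csv", parse_dates=["post_date"])

# 1. Missing data: count blanks per field so unexplained gaps can be questioned.
missing = df.isna().sum()
print("Fields with missing values:\n", missing[missing > 0])

# 2. Possible field-definition changes: a field whose value range shifts sharply
#    after a given month may reflect an undocumented system change.
for month, grp in df.groupby(df["post_date"].dt.to_period("M")):
    print(month, "amount range:", grp["amount"].min(), "-", grp["amount"].max())

# 3. Unexpected field content: values outside the documented code set.
documented_codes = {"ACH", "WIRE", "CHECK", "CARD"}  # assumed code list
unexpected = df.loc[~df["pay_type"].isin(documented_codes), "pay_type"].value_counts()
print("Undocumented payment-type codes:\n", unexpected)
```

Each hit is a question for the client, not a finding; the point is simply to surface undocumented changes before they disappear into someone's head.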

If our auditor or fraud examiner, operating under today's typical budget or time constraints, is not thorough enough even to find some of these anomalies, they can end up never being addressed. How many times as an analyst have you tried to explain something (like apparently duplicate transactions) about the financial system that just doesn't look right, only to be told, "Oh, yeah. Pauline made that change back in February before she retired; we don't have too many details on it." In other words, undocumented changes to transactions and data, the details of which now exist only in Pauline's head. When a data-driven system is built on incomplete information, the system can be said to have failed in its role as a component of overall fraud management. The cycle of incomplete information gets propagated to future decisions, and the cost of the missing or inadequately explained data can be high. What can't be seen can't ever be managed, or even explained.
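The "apparently duplicate transactions" example lends itself to a quick test. The sketch below is only an illustration: the file name, column names, and seven-day matching window are assumptions, not any client's actual scheme. It flags transactions to the same payee for the same amount within a short window so the analyst can ask whether they reflect an undocumented system change or something worse.

```python
import pandas as pd

# Hypothetical transaction extract; matching keys and window are assumptions.
tx = pd.read_csv("transactions.csv", parse_dates=["tx_date"])

# Flag "apparent duplicates": same payee, same amount, posted within seven days.
dupes = (
    tx.sort_values("tx_date")
      .groupby(["payee_id", "amount"])
      .filter(lambda g: len(g) > 1 and
              (g["tx_date"].max() - g["tx_date"].min()).days <= 7)
)
print(dupes[["tx_id", "payee_id", "amount", "tx_date"]])
```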

It’s truly humbling for any practitioner to experience how much critical financial information resides in the fading (or absent) memories of past or present key employees.  As fraud examiners we should attempt to foster a culture among our clients supportive of the development of concurrent transaction related documentation and the sharing of knowledge on a consistent basis for all systems but especially in matters involving changes to critical financial systems.  One nice benefit of this approach, which I brought to the attention of one of my clients not too long ago, would be to free up the time of one of these key employees to work on more productive fraud control projects rather than constantly serving as the encyclopedia for the rest of the operational staff. 

Analytic Reinforcements

Rumbi’s post of last week on ransomware got me thinking on a long drive back from Washington about what an excellent tool the AICPA’s new Cybersecurity Risk Management Reporting Framework is, not only for CPAs but for CFEs as well as for all our client organizations. As the seemingly relentless wave of cyberattacks continues with no sign of let up, organizations are under intense pressure from key stakeholders and regulators to implement and enhance their cyber security and fraud prevention programs to protect customers, employees and all the types of valuable information in their possession.

According to research from the ACFE, the average total cost per company, per event of a data breach is $3.62 million. Initial damage estimates of a single breach, while often staggering, may not take into account less obvious and often undetectable threats such as the theft of intellectual property, espionage, destruction of data, attacks on core operations or attempts to disable critical infrastructure. These knock-on effects can last for years and have devastating financial, operational and brand ramifications.

Given the present broad regulatory pressures to tighten cyber security controls and the visibility surrounding cyberrisk, a number of proposed regulations focused on improving cyber security risk management programs have been introduced in the United States over the past few years by our various governing bodies. One of the more prominent is a regulation by the New York Department of Financial Services (NYDFS) that prescribes certain minimum cyber security standards for those entities regulated by the NYDFS. Based on an entity's risk assessment, the NYDFS law has specific requirements around data encryption, data protection and retention, third-party information security, application security, incident response and breach notification, board reporting, and required annual re-certifications.

However, organizations continue to report to the ACFE that they struggle to systematically report to stakeholders on the overall effectiveness of their cyber security risk management programs. In response, the AICPA in April of last year released a new cyber security risk management reporting framework intended to help organizations expand cyberrisk reporting to a broad range of internal and external users, including management and the board of directors. The AICPA's new reporting framework is designed to address the need for greater stakeholder transparency by providing in-depth, easily consumable information about the state of an organization's cyberrisk management program. The cyber security risk management examination uses an independent, objective reporting approach and employs broader and more flexible criteria. For example, it allows for the selection and utilization of any control framework considered suitable and available in establishing the entity's basic cyber security objectives and in developing and maintaining controls within the entity's cyber security risk management program, regardless of whether the standard is the US National Institute of Standards and Technology (NIST)'s Cybersecurity Framework, the International Organization for Standardization (ISO)'s ISO 27001/2 and related frameworks, or even an internally developed framework based on a combination of sources. The examination is voluntary, and applies to all types of entities, but should be considered by CFEs as a leading practice that provides management, boards and other key stakeholders with clear insight into the current state of an organization's cyber security program while identifying gaps or pitfalls that leave organizations vulnerable to cyber fraud and other intrusions.

What stakeholders might benefit from a client organization’s cyber security risk management examination report? Clearly, we CFEs as we go about our routine fraud risk assessments; but such a report, most importantly, can be vital in helping an organization’s board of directors establish appropriate oversight of a company’s cyber security risk program and credibly communicate its effectiveness to stakeholders, including investors, analysts, customers, business partners and regulators. By leveraging this information, boards can challenge management’s assertions around the effectiveness of their cyberrisk management and fraud prevention programs and drive more effective decision making. Active involvement and oversight from the board can help ensure that an organization is paying adequate attention to cyberrisk management and displaying due diligence. The board can help shape expectations for reporting on cyberthreats while also advocating for greater transparency and assurance around the effectiveness of the program.

The cyber security risk management report in its initial and follow-up iterations can be invaluable in providing overview guidance to CFEs and forensic accountants in targeting both fraud prevention and fraud detection/investigative analytics. We know from our ACFE training that data analytics need to be fully integrated into the investigative process. Ensuring that data analytics are embedded in the detection/investigative process requires support from all levels, starting with the managing CFE. It will be easier and more coherent for management to support such a process if management is already supporting cyber security risk management reporting. Management will also have an easier time reinforcing the use of analytics generally, although the data analytics function supporting fraud examination will still have to market its services, team leaders will still be challenged by management, and team members will still have to be trained to effectively employ the newer analytical tools.

The presence of a robust cyber security risk management reporting process should also prove of assistance to the lead CFE in establishing goals for the implementation and use of data analytics in every investigation, and these goals should be communicated to the entire investigative team. It should be made clear to every level of the client organization that data analytics will support the investigative planning process for every detected fraud. The identification of business processes, IT systems, data sources, and potential analytic routines should be discussed and considered not only during planning, but also throughout every stage of the entire investigative engagement. Key in obtaining the buy-in of all is to include investigative team members in identifying areas or tests that the analytics group will target in support of the field work. Initially, it will be important to highlight success stories and educate managers and team leaders about what is possible. Improving on the traditional investigative approach of document review, interviewing, transaction review, etc. investigators can benefit from the implementation of data analytics to allow for more precise identification of the control deficiencies, instances of noncompliance with policies and procedures, and mis-assessment of areas of high risk that contributed to the development of the fraud in the first place. These same analytics can then be used to ensure that appropriate post-fraud management follow-up has occurred by elevating the identified deficiencies to the cyber security risk management reporting process and by implementing enhanced fraud prevention procedures in areas of higher fraud risk. This process would be especially useful in responding to and following up data breaches.

Once patterns are gathered and centralized, analytics can be employed to measure the frequency of occurrence, the bit sizes, the quantity of files executed and the average time of use. The math involved allows an examiner to grasp the big picture. Individuals, including examiners, are normally overwhelmed by the sheer volume of information, but automation of pattern-recognition techniques makes big data a tractable investigative resource. The larger the sample size, the easier it is to determine patterns of normal and abnormal behavior. Algorithms working through the network haystack can, for example, notify the CFE, acting as information archaeologist, about the probes of an insider threat.
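As a minimal sketch of this kind of baselining (the log fields, file name, and the z-score cutoff of 3 are all assumptions chosen for illustration), one might measure each user's normal daily file-access volume and flag the days that deviate sharply from that user's own pattern:

```python
import pandas as pd

# Hypothetical file-access log; field names and the cutoff are assumptions.
log = pd.read_csv("file_access_log.csv", parse_dates=["timestamp"])

# Baseline: how many files each user touches per day, on average.
daily = (log.groupby(["user_id", log["timestamp"].dt.date])
            .size()
            .rename("files_per_day")
            .reset_index())
baseline = daily.groupby("user_id")["files_per_day"].agg(["mean", "std"])

# Flag days where a user's activity deviates sharply from his or her own normal.
scored = daily.join(baseline, on="user_id")
scored["z"] = (scored["files_per_day"] - scored["mean"]) / scored["std"]
print(scored[scored["z"] > 3])  # candidate insider-threat probes to examine further
```

The larger the history behind the baseline, the more stable the notion of "normal" becomes, which is exactly the point made above about sample size.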

Without analytics, enterprise-level fraud examination and risk assessment is a diminished discipline, limited in scope and effectiveness. Without an educated investigative workforce, armed with a programming language for automation and an accompanying data-mining philosophy and skill set, the control needs of management leaders at the enterprise level will go unmet; leaders will not have the data needed for fraud prevention on a large scale nor a workforce capable of getting them that data in the emergency following a breach or penetration.

The beauty of analytics, from a security and fraud prevention perspective, is that it allows the investigative efforts of the CFE to align with the critical functions of corporate business. It can be used to discover recurring risks, incidents and common trends that might otherwise have been missed. Establishing numerical baselines on quantified data can supplement a normal investigator’s tasks and enhance the auditor’s ability to see beneath the surface of what is presented in an examination. Good communication of analyzed data gives decision makers a better view of their systems through a holistic approach, which can aid in the creation of enterprise-level goals. Analytics and data mining always add dimension and depth to the CFE’s examination process at the enterprise level and dovetail with and are supported beautifully by the AICPA’s cyber security risk management reporting initiative.

CFEs should encourage the staffs of client analytics support functions to possess …

–understanding of the employing enterprise’s data concepts (data elements, record types, database types, and data file formats).
–understanding of logical and physical database structures.
–the ability to communicate effectively with IT and related functions to achieve efficient data acquisition and analysis.
–the ability to perform ad hoc data analysis as required to meet specific fraud examiner and fraud prevention objectives.
–the ability to design, build, and maintain well-documented, ongoing automated data analysis routines.
–the ability to provide consultative assistance to others who are involved in the application of analytics.

Fraud Prevention Oriented Data Mining

One of the most useful components of our Chapter's recently completed two-day seminar on Cyber Fraud & Data Breaches was our speaker, Cary Moore's, observations on the fraud-fighting potential of management's creative use of data mining. For CFEs and forensic accountants, the benefits of data mining go much deeper than simply serving as a tool to help our clients combat traditional fraud, waste and abuse. In its simplest form, data mining provides automated, continuous feedback to ensure that systems and anti-fraud related internal controls operate as intended and that transactions are processed in accordance with policies, laws and regulations. It can also provide our client managements with timely information that can permit a shift from traditional retrospective/detective activities to the proactive/preventive activities so important to today's concept of what effective fraud prevention should be. Data mining can put the organization out in front of potential fraud vulnerability problems, giving it an opportunity to act to avoid or mitigate the impact of negative events or financial irregularities.

Data mining tests can produce “red flags” that help identify the root cause of problems and allow actionable enhancements to systems, processes and internal controls that address systemic weaknesses. Applied appropriately, data mining tools enable organizations to realize important benefits, such as cost optimization, adoption of less costly business models, improved program, contract and payment management, and process hardening for fraud prevention.
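As one small, hedged example of such a "red flag" test (the file name, column names, and the $5,000 approval threshold below are assumptions, not any client's actual limits), a recurring data mining routine might look for payments split to stay just under an approval limit:

```python
import pandas as pd

# Hypothetical disbursement data; the $5,000 approval threshold is an assumption.
pay = pd.read_csv("disbursements.csv", parse_dates=["pay_date"])
THRESHOLD = 5000

# Red flag: several payments to one vendor, each just under the approval limit,
# within the same week, a classic pattern for split purchases.
near_limit = pay[(pay["amount"] > THRESHOLD * 0.9) & (pay["amount"] < THRESHOLD)]
weekly = near_limit.groupby(
    ["vendor_id", near_limit["pay_date"].dt.to_period("W")]).size()
print(weekly[weekly >= 3])  # vendors with three or more near-limit payments in a week
```

Run on a schedule, a test like this delivers exactly the automated, continuous feedback described above: the hits point to the root cause (an approval threshold being gamed) and to the control that needs hardening.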

In its most complex, modern form, data mining can be used to:

–Inform decision-making
–Provide predictive intelligence and trend analysis
–Support mission performance
–Improve governance capabilities, especially dynamic risk assessment
–Enhance oversight and transparency by targeting areas of highest value or fraud risk for increased scrutiny
–Reduce costs especially for areas that represent lower risk of irregularities
–Improve operating performance

Cary emphasized that leading, successful organizational implementers have tended to take a measured approach initially when embarking on a fraud prevention-oriented data mining initiative, starting small and focusing on particular “pain points” or areas of opportunity to tackle first, such as whether only eligible recipients are receiving program funds or targeting business processes that have previously experienced actual frauds. Through this approach, organizations can deliver quick wins to demonstrate an early return on investment and then build upon that success as they move to more sophisticated data mining applications.

So, according to ACFE guidance, what are the ingredients of a successful data mining program oriented toward fraud prevention? There are several steps, which should be helpful to any organization in setting up such an effort with fraud, waste, abuse identification/prevention in mind:

–Avoid problems by adopting commonly used data mining approaches and related tools.

This is essentially a cultural transformation for any organization that has either not understood the value these tools can bring or has viewed their implementation as someone else's responsibility. Given the cyber fraud and breach related challenges faced by all types of organizations today, it should be easier for fraud examiners and forensic accountants to convince management of the need to use these tools to prevent problems and to improve the ability to focus on cost-effective means of better controlling fraud-related vulnerabilities.

–Understand the potential that data mining provides to the organization to support day to day management of fraud risk and strategic fraud prevention.

Understanding, both the value of data mining and how to use the results, is at the heart of effectively leveraging these tools. The CEO and corporate counsel can play an important educational and support role for a program that must ultimately be owned by line managers who have responsibility for their own programs and operations.

–Adopt a version of an enterprise risk management program (ERM) that includes a consideration of fraud risk.

An organization must thoroughly understand its risks and establish a risk appetite across the enterprise. In this way, it can focus on those areas of highest value to the organization. An organization should take stock of its risks and ask itself fundamental questions, such as:

-What do we lose sleep over?
-What do we not want to hear about us on the evening news or read about in the print media or on a blog?
-What do we want to make sure happens and happens well?

Data mining can be an integral part of an overall program for enterprise risk management. Both are premised on establishing a risk appetite and incorporating a governance and reporting framework. This framework in turn helps ensure that day-to-day decisions are made in line with the risk appetite, and are supported by data needed to monitor, manage and alleviate risk to an acceptable level. The monitoring capabilities of data mining are fundamental to managing risk and focusing on issues of importance to the organization. The application of ERM concepts can provide a framework within which to anchor a fraud prevention program supported by effective data mining.

–Determine how your client is going to use the data mined information in managing the enterprise and safeguarding enterprise assets from fraud, waste and abuse.

Once an organization is on top of the data, using it effectively becomes paramount and should be considered as the information requirements are being developed. As Cary pointed out, getting the right data has been cited as the top challenge by 20 percent of ACFE-surveyed respondents, whereas 40 percent said the top challenge was the "lack of understanding of how to use analytics". Developing a shared understanding so that everyone is on the same page is critical to success.

–Keep building and enhancing the application of data mining tools.

As indicated above, a tried and true approach is to begin with the lower-hanging fruit, something that will get your client started and will provide an opportunity to learn on a smaller scale. The experience gained will help enable the expansion and enhancement of data mining tools. While this may be done gradually, it should be a priority and not viewed as the "management reform initiative of the day." There should be a clear game plan for building data mining capabilities into the fiber of management's fraud and breach prevention effort.

–Use data mining as a tool for accountability and compliance with the fraud prevention program.

It is important to hold managers accountable not only for helping institute robust data mining programs, but for the results of these programs. Has the client developed performance measures that clearly demonstrate the results of using these tools? Do they reward those managers who are in the forefront in implementing these tools? Do they make it clear to those who don't that their resistance or hesitation is not acceptable?

–View this as a continuous process and not a “one and done” exercise.

Risks change over time. Fraudsters are always adjusting their targets and moving to exploit new and emerging weaknesses. They follow the money. Technology will continue to evolve, and it will introduce both new risks and new opportunities and tools for management. This client-management effort to protect against dangers and rectify errors is one that never ends, but it is also one that can pay benefits in preventing or managing cyberattacks and breaches that far outweigh the costs if effectively and efficiently implemented.

In conclusion, the stark realities of today's cyber-related challenges at all levels of business, private and public, and the need to address ever-rising service delivery expectations have raised the stakes for managing the cost of doing business and conducting the ongoing war against fraud, waste and abuse. Today's client-managers should want to be on top of problems before they become significant, and the strategic use of data mining tools can help them manage and protect their enterprises whilst saving money…a win/win opportunity for the client and for the CFE.

Every Seat Taken!

Our Chapter’s thanks to all our attendees and to our partners, the Virginia State Police and national ACFE for the unqualified success of our May training event, Cyberfraud and Data Breaches! Our speaker, Cary Moore, CFE, CISSP, conducted a fully interactive, two-day session on one of the most challenging and relevant topics confronting practicing fraud examiners and forensic accountants today.

The event examined the potential avenues of data loss and guided attendees through the crucial strategies needed to mitigate the threat of malicious data theft and the risk of inadvertent data loss, recognizing that information is a valuable asset, and that management must take proactive steps to protect the organization’s intellectual property. As Cary forcefully pointed out, the worth of businesses is no longer based solely on tangible assets and revenue-making potential; the information the organization develops, stores, and collects accounts for a large share of its value.

A data breach occurs when there is a loss or theft of, or unauthorized access to, proprietary information that could result in compromising the data. It is essential that management understand the crisis its organization might face if its information is lost or stolen. Data breaches incur not only high financial costs but can also have a lasting negative effect on an organization’s brand and reputation.

Protecting information assets is especially important because the threats to such assets are on the rise, and the cost of a data breach increases with the number of compromised records. According to a 2017 study by the Ponemon Institute, data breaches involving fewer than 10,000 records caused an average loss of $1.9 million, while breaches with more than 50,000 compromised records caused an average loss of $6.3 million. However, before determining how to protect information assets, it is important to understand the nature of these assets and the many methods by which they can be breached.

Intellectual property is a catchall phrase for knowledge-based assets and capital, but it’s helpful to think of it as intangible proprietary information. Intellectual property (IP) is protected by law. IP law grants certain exclusive rights to owners of a variety of intangible assets. These rights incentivize individuals, company leaders, and investors to allocate the requisite resources to research, develop, and market original technology and creative works.

A trade secret is any idea or information that gives its owner an advantage over its competitors. Trade secrets are particularly susceptible to theft because they provide a competitive advantage. What constitutes a trade secret, however, depends on the organization, industry, and jurisdiction, but generally, to be classified as a trade secret, information must:

• Be secret: The information is not generally known to the relevant portion of the public.
• Confer some sort of economic benefit on its holder: The idea or information must give its owner an advantage over its competitors. The benefit conferred from the information, however, must stem from not being generally known, not just from the value of the information itself. The best test for determining what is confidential information is to determine whether the information would provide an advantage to the competition.
• Be the subject of reasonable efforts to maintain its secrecy: The owner must take reasonable steps to protect its trade secrets from disclosure. That is, a piece of information will not receive protection as a trade secret if the owner does not take adequate steps to protect it from disclosure.

Cary presented in-depth information on the various types of threats to data security including:

–Insiders
–Hackers
–Competitors
–Organized criminal groups
–Government-sponsored groups

Protecting proprietary information is a timely issue, but it is difficult. The event presented a list of common challenges faced when protecting information assets:

–Proprietary information is among the most valuable commodities, and attackers are doing everything in their power to steal as much of this information as possible.
–The risk of data breaches for organizations is high.
–New and emerging technologies create new risks and vulnerabilities.
–IT environments are becoming increasingly complex, making the management of them more expensive, difficult, and time consuming.
–There is a wider range of devices and access points, so businesses must proactively seek ways to combat the effects of this complexity.
–The rise in portable devices is creating more opportunities for data to “leak” from the business.
–The rise in Bring Your Own Device (BYOD) initiatives is generating new operational challenges and security problems.
–The rapidly expanding Internet of Things (IoT) has significantly increased the number of network connected things (e.g., HVAC systems, MRI machines, coffeemakers) that pose data security threats, many of which were inconceivable only a short time ago.
–The number of threats to corporate IT systems is on the rise.
–Malware is becoming more sophisticated.
–There is an increasing number of laws in this area, making information security an urgent priority.

Cary covered the entire gamut of challenges related to cyber fraud and data breaches, ranging from legal issues, corporate espionage, social engineering, and the use of social media to the bring-your-own-device phenomenon and the impact of cloud computing. The remaining portion of the event was devoted to addressing how enterprises can effectively respond when confronted by the challenges posed by these issues, including breach response team building and breach prevention techniques like conducting security risk assessments, staff awareness training and the incident response plan.

When an organization experiences a data breach, management must respond in an appropriate and timely manner. During the initial response, time is critical. To help ensure that an organization responds to data breaches in a timely and efficient way, management should have an incident response plan in place that outlines how to respond to such issues. Timely responses can help prevent further data loss, fines, and customer backlash. An incident response plan outlines the actions an organization will take when data breaches occur. More specifically, a response plan should guide the necessary action when a data breach is reported or identified. Because every breach is different, a response plan should not try to prescribe how the organization will respond in every instance. Instead, a response plan should help the organization manage its response and create an environment that minimizes risk and maximizes the potential for success. In short, a response plan should describe the plan fundamentals that the organization can deploy on short notice.

Again, our sincere thanks go out to all involved in the success of this most worthwhile training event!

Analytics Confronts the Normal

The Information Systems Audit and Control Association (ISACA) tells us that we produce and store more data in a day now than mankind did altogether in the last 2,000 years. The data produced daily is estimated to be one exabyte, which is the computer storage equivalent of one quintillion bytes, or one million terabytes. Not too long ago, about 15 years back, a terabyte of data was considered a huge amount; today the latest Swiss Army knife comes with a 1 terabyte flash drive.

When an interaction with a business is complete, the information from the interaction is only as good as the pieces of data that get captured during that interaction. A customer walks into a bank and withdraws cash. The transaction that just happened gets stored as a monetary withdrawal transaction with certain characteristics in the form of associated data. There might be information on the date and time when the withdrawal happened; there may be information on which customer made the withdrawal (if there are multiple customers who operate the same account). The amount of cash that was withdrawn, the account from which the money was extracted, the teller/ATM who facilitated the withdrawal, the balance on the account after the withdrawal, and so forth, are all typically recorded. But these are just a few of the data elements that can get captured in any withdrawal transaction. Just imagine all the different interactions possible on all the assorted products that a bank has to offer: checking accounts, savings accounts, credit cards, debit cards, mortgage loans, home equity lines of credit, brokerage, and so on. The data that gets captured during all these interactions goes through data-checking processes and gets stored somewhere internally or in the cloud.  The data that gets stored this way has been steadily growing over the past few decades, and, most importantly for fraud examiners, most of this data carries tons of information about the nuances of the individual customers’ normal behavior.
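Purely as an illustration of the kinds of data elements described above (the field names and types are assumptions, not any bank's actual schema), a single withdrawal interaction might be captured as a record like this:

```python
from dataclasses import dataclass
from datetime import datetime
from decimal import Decimal

# Illustrative record of a cash-withdrawal interaction; the fields mirror the
# elements described in the text, but the names and types are assumptions.
@dataclass
class WithdrawalTransaction:
    timestamp: datetime        # date and time the withdrawal happened
    customer_id: str           # which customer on the account made the withdrawal
    account_id: str            # account from which the money was extracted
    amount: Decimal            # amount of cash withdrawn
    channel_id: str            # teller or ATM that facilitated the withdrawal
    balance_after: Decimal     # balance on the account after the withdrawal
```

Every additional field captured at the moment of the interaction is one more dimension along which normal behavior can later be profiled.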

In addition to what the customer does, from the same data, by looking at a different dimension of the data, examiners can also understand what is normal for certain other related entities. For example, by looking at all the customer withdrawals at a single ATM, CFEs can gain a good understanding of what is normal for that particular ATM terminal. Understanding the normal behavior of customers is very useful in detecting fraud, since deviation from normal behavior is a primary indicator of fraud. Understanding non-fraud or normal behavior is important not only at the main account holder level but also at all the entity levels associated with that individual account. The same data presents completely different information when observed in the context of one entity versus another. In this sense, having all the data saved and then analyzed and understood is a key element in tackling the fraud threat to any organization.
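A minimal sketch of what profiling the same data along different entity dimensions might look like follows; the file name, column names, and the use of a simple z-score are all assumptions made for illustration, not a description of any production fraud engine:

```python
import pandas as pd

# Hypothetical withdrawal history; column names are assumptions.
wd = pd.read_csv("withdrawals.csv", parse_dates=["timestamp"])

# The same transactions, profiled along two different entity dimensions.
by_customer = wd.groupby("customer_id")["amount"].agg(["mean", "std"])
by_atm = wd.groupby("atm_id")["amount"].agg(["mean", "std"])

# Score each withdrawal against that customer's own normal behavior.
def deviation_score(row):
    base = by_customer.loc[row["customer_id"]]
    if pd.isna(base["std"]) or base["std"] == 0:
        return 0.0
    return (row["amount"] - base["mean"]) / base["std"]

wd["customer_z"] = wd.apply(deviation_score, axis=1)
print(wd.sort_values("customer_z", ascending=False).head(10))  # least "normal" withdrawals
```

The same scoring could just as easily be run against by_atm, which is the point: one data stream, several baselines of normal, each useful for spotting a different kind of deviation.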

Any systematic, numbers-based system for understanding the phenomenon of fraud as a past occurring event is dependent on an accurate description of exactly what happened through the data stream that accumulated before, during, and after the fraud scenario occurred. Allowing the data to speak is the key to the success of any model-based system. This data needs to be saved and interpreted very precisely for the examiner's models to make sense. The first crucial step in building a model is to define, understand, and interpret fraud scenarios correctly. At first glance, this seems like a very easy problem to solve. In practical terms, it is a lot more complicated than it seems.

The level of understanding of the fraud episode or scenario itself varies greatly among the different business processes involved with handling the various products and functions within an organization. Typically, fraud can have a significant impact on the bottom line of any organization. Looking at the level of specific information that is systematically stored and analyzed about fraud in financial institutions for example, one would arrive at the conclusion that such storage needs to be a lot more systematic and rigorous than it typically is today. There are several factors influencing this. Unlike some of the other types of risk involved in client organizations, fraud risk is a censored problem. For example, if we are looking at serious delinquency, bankruptcy, or charge-off risk in credit card portfolios, the actual dollars-at-risk quantity is very well understood. Based on past data, it is relatively straightforward to quantify precise credit dollars at risk by looking at how many customers defaulted on a loan or didn’t pay their monthly bill for three or more cycles or declared bankruptcy. Based on this, it is easy to quantify the amount at risk as far as credit risk goes. However, in fraud, it is virtually impossible to quantify the actual amount that would have gone out the door as the fraud is stopped immediately after detection. The problem is censored as soon as some intervention takes place, making it difficult to precisely quantify the potential risk.

Another challenge in the process of quantifying fraud is how well the fraud episode itself gets recorded. Consider the case of a credit card number getting stolen without the physical card getting stolen. During a certain period, both the legitimate cardholder and the fraudster are charging on the card. If the fraud detection system in the issuing institution doesn't identify the fraudulent transactions as they happen in real time, the fraud is typically identified when the cardholder gets the monthly statement, figures out that some of the charges were not made by him or her, and calls the issuer to report the fraud. In the not too distant past, all that used to get recorded by the bank was the cardholder's estimate of when the fraud episode began, even though additional details about the fraudulent transactions were likely shared by the cardholder. If all that gets recorded is the cardholder's estimate of when the fraud episode began, ambiguity is introduced regarding the granularity of the actual fraud episode, and the initial estimate of the fraud amount becomes a rough estimate at best. In cases in which the bank's fraud detection system was able to catch the fraud during the actual fraud episode, the fraudulent transactions tended to be recorded by a fraud analyst, and sometimes not too accurately. If a transaction was marked as fraud or non-fraud incorrectly, the problem was typically not corrected even after the correct information flowed in. When the transactions that were actually fraudulent were eventually identified using the actual postings of the transactions, relating them back to the authorization transactions was often not a straightforward process. Sometimes the amounts of the transactions varied slightly. For example, the authorization transaction for a restaurant charge is unlikely to include the tip the customer added to the bill, so the posted amount when the transaction gets reconciled will look slightly different from the authorized amount. All of this poses an interesting challenge when designing a data-driven analytical system to combat fraud.
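The authorization-to-posting reconciliation problem just described can be illustrated with a small sketch. The file names, join keys, and the 25 percent tip tolerance below are assumptions chosen only to show why exact matching on amount fails and why a tolerance band is needed:

```python
import pandas as pd

# Hypothetical authorization and posting extracts; keys and tolerance are assumptions.
auth = pd.read_csv("authorizations.csv", parse_dates=["auth_time"])
post = pd.read_csv("postings.csv", parse_dates=["post_time"])

# An exact join on amount misses restaurant charges where a tip was added at
# posting, so match on card and merchant first, then accept a posted amount
# within a tolerance band above the authorized amount.
candidates = auth.merge(post, on=["card_id", "merchant_id"], suffixes=("_auth", "_post"))
within_tolerance = (
    (candidates["amount_post"] >= candidates["amount_auth"]) &
    (candidates["amount_post"] <= candidates["amount_auth"] * 1.25)
)
matched = candidates[within_tolerance]
print(matched[["card_id", "merchant_id", "amount_auth", "amount_post"]])
```

In practice the matching logic is far messier than this, which is precisely the recording-quality problem described above.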

The level of accuracy associated with recording fraud data also tends to depend on whether the fraud loss is a liability for the customer or for the financial institution. To a significant extent, the answer to the question, "Whose loss is it?" really drives how well past fraud data is recorded. In the case of unsecured lending such as credit cards, most of the liability lies with the banks, and the banks tend to care a lot more about this type of loss. Hence, systems are put in place to capture this data on a historical basis reasonably accurately.

In the case of secured lending, ID theft, and so on, a significant portion of the liability is really on the customer, and it is up to the customer to prove to the bank that he or she has been defrauded. Interestingly, this shift of liability also tends to have an impact on the quality of the fraud data captured. In the case of fraud associated with automated clearing house (ACH) batches and domestic and international wires, the problem is twofold: The fraud instances are very infrequent, making it impossible for the banks to have a uniform method of recording frauds; and the liability shifts are dependent on the geography.  Most international locations put the onus on the customer, while in the United States there is legislation requiring banks to have fraud detection systems in place.  The extent to which our client organizations take responsibility also tends to depend on how much they care about the customer who has been defrauded. When a very valuable customer complains about fraud on her account, a bank is likely to pay attention.  Given that most such frauds are not large scale, there is less need to establish elaborate systems to focus on and collect the data and keep track of past irregularities. The past fraud information is also influenced heavily by whether the fraud is third-party or first-party fraud. Third-party fraud is where the fraud is committed clearly by a third party, not the two parties involved in a transaction. In first-party fraud, the perpetrator of the fraud is the one who has the relationship with the bank. The fraudster in this case goes to great lengths to prevent the banks from knowing that fraud is happening. In this case, there is no reporting of the fraud by the customer. Until the bank figures out that fraud is going on, there is no data that can be collected. Also, such fraud could go on for quite a while and some of it might never be identified. This poses some interesting problems. Internal fraud where the employee of the institution is committing fraud could also take significantly longer to find. Hence the data on this tends to be scarce as well.

In summary, one of the most significant challenges in fraud analytics is to build a sufficient database of normal client transactions. The normal transactions of any organization constitute the baseline from which abnormal (fraudulent or irregular) transactions can be identified and analyzed. The pinpointing of the irregular is thus foundational to the development of the transaction-processing edits that prevent irregular transactions embodying fraud from even being processed and paid on the front end, furnishing the key to modern, analytically based fraud prevention.

Cyberfraud & Data Breaches May 2018 Training Event

On May 16th and 17th, our Chapter, supported by our partners national, ACFE and the Virginia State Police, will present our sixteenth Spring training event, this time on the subject of CYBERFRAUD AND DATA BREACHES.  Our presenter will be CARY E. MOORE, CFE, CISSP, MBA; ACFE Presenter Board member and internationally renowned author and authority on every aspect of cybercrime.  CLICK HERE  to see an outline of the training, the agenda and Cary’s bio.  If you decide to do so, you may REGISTER HERE.  Attendees will receive 16 CPE credits, and a printed manual of over 300 pages detailing every subject covered in the training.  In addition, as a door prize, we will be awarding, by drawing, a printed copy of the 2017 Fraud Examiners Manual, a $200 value!

As the relentless wave of cyberattacks continues, all our client organizations are under intense pressure from key stakeholders and regulators to implement and enhance their anti-fraud programs to protect customers, employees and the valuable information in their possession. According to research from IBM Security and the Ponemon Institute, the average total cost per company, per event of a data breach is US $3.62 million. Initial damage estimates of a single breach, while often staggering, may not consider less obvious and often undetectable threats such as theft of intellectual property, espionage, destruction of data, attacks on core operations or attempts to disable critical infrastructure. These knock-on effects can last for years and have devastating financial, operational and brand ramifications.

Given the broad regulatory pressures to tighten anti-fraud cyber security controls and the visibility surrounding cyber risk, a number of proposed regulations focused on improving cyber security risk management programs have been introduced in the United States over the past few years by various governing bodies of which CFEs need to be aware. One of the more prominent is a regulation issued by the New York Department of Financial Services (NYDFS) that prescribes certain minimum cyber security standards for those entities regulated by the NYDFS. Based on the entity's risk assessment, the NYDFS law has specific requirements around data encryption, protection and retention, third-party information security, application security, incident response and breach notification, board reporting, and annual certifications.

However, organizations continue to struggle to report on the overall effectiveness of their cyber security risk management and anti-fraud programs. The American Institute of Certified Public Accountants (AICPA) has released a cyber security risk management reporting framework intended to help organizations expand cyber risk reporting to a broad range of internal and external users, including the C-suite and the board of directors (BoD). The AICPA's reporting framework is designed to address the need for greater stakeholder transparency by providing in-depth, easily consumable information about an organization's cyber risk management program. The cyber security risk management examination uses an independent, objective reporting approach and employs broader and more flexible criteria. For example, it allows for the selection and utilization of any control framework considered suitable and available in establishing the entity's cyber security objectives and developing and maintaining controls within the entity's cyber security risk management program, whether it is the US National Institute of Standards and Technology (NIST)'s Cybersecurity Framework, the International Organization for Standardization (ISO)'s ISO 27001/2 and related frameworks, or internally developed frameworks based on a combination of sources. The examination is voluntary, and applies to all types of entities, but should be considered a leading practice that provides the C-suite, boards and other key stakeholders clear insight into an organization's cyber security program and identifies gaps or pitfalls that leave organizations vulnerable.

Cyber security risk management examination reports are vital to the fraud control program of any organization doing business on-line.  Such reports help an organization’s BoD establish appropriate oversight of a company’s cyber security risk program and credibly communicate its effectiveness to stakeholders, including investors, analysts, customers, business partners and regulators. By leveraging this information, boards can challenge management’s assertions around the effectiveness of their cyber risk management programs and drive more effective decision making. Active involvement and oversight from the BoD can help ensure that an organization is paying adequate attention to cyber risk management. The board can help shape expectations for reporting on cyber threats and fraud attempts while also advocating for greater transparency and assurance around the effectiveness of the program.

Organizations that choose to utilize the AICPA's cyber security attestation reporting framework and perform an examination of their cyber security program may be better positioned to gain competitive advantage and enhance their brand in the marketplace. For example, an outsourced retail service provider (OSP) that can provide evidence that a well-developed and sound cyber security risk management program is in place in its organization can proactively provide the report to current and potential customers, evidencing that it has implemented appropriate controls to protect the sensitive IT assets and valuable data over which it maintains access. At the same time, current and potential retailer customers of an OSP want the third parties with whom they engage to also place a high level of importance on cyber security. Requiring a cyber security examination report as part of the selection criteria would offer transparency into outsourcers' cyber security programs and could be a determining factor in the selection process.

The value of addressing cyber security related fraud concerns and questions by CFEs before regulatory mandates are established or a crisis occurs is quite clear. The knowledgeable CFE can help our client organizations view the new cyber security attestation reporting frameworks as an opportunity to enhance their existing cyber security and anti-fraud programs and gain competitive advantage. The attestation reporting frameworks address the needs of a variety of key stakeholder groups and, in turn, limit the communication and compliance burden. CFE client organizations that view the cyber security reporting landscape as an opportunity can use it to lead, navigate and disrupt in today’s rapidly evolving cyber risk environment.

Please decide to join us for our May Training Event on this vital and timely topic!  YOU MAY REGISTER ON-LINE HERE.  You can pay with PayPal (you don't need a PayPal account; you can use any credit card) or just print an invoice and submit your payment by snail mail!