Tag Archives: cyber security

Client’s Card Security

Our Chapter recently got a question from a reader of this blog about data privacy; specifically, she asked whether a client's compliance with the Payment Card Industry Data Security Standard (PCI DSS) would provide reasonable assurance that the client organization's customer data privacy controls and procedures are adequate. The question came up in the course of a credit card fraud examination in which our reader's small CPA firm was involved. A very good question indeed! The short answer, in my opinion, is that although PCI DSS compliance audits cover some aspects of data privacy, they are limited to credit card data and would not, in themselves, be sufficient to convince a jury that data privacy is adequately protected throughout a whole organization. The question is interesting because of its bearing on the fraud risk assessments CFEs routinely conduct, and it is important because CFEs should understand the scope (and limitations) of PCI DSS compliance activities within their client organizations and communicate the differences when reviewing corporate-wide data privacy for fraud prevention purposes. This understanding will also tend to prevent potential misunderstandings over duplication of review efforts with business process owners and fraud examination clients.

Given all the IT breaches and intrusions happening daily, consumers are rightly cynical these days about businesses' ability to protect their personal data. They report that they're much more willing to do business with companies that have independently verified privacy policies and procedures. In-depth privacy fraud risk assessments can help organizations assess their preparedness for the outside review that inevitably follows a major customer data privacy breach. As I'm sure all the readers of this blog know, data privacy generally applies to information that can be associated with a specific individual or that has identifying characteristics that might be combined with other information to indicate a specific person. Such personally identifiable information (PII) is defined as any piece of data that can be used to uniquely identify, contact, or locate a single person. Information can be considered private without being personally identifiable; sensitive personal data includes individual preferences, confidential financial or health information, and other personal information. An assessment of data privacy fraud risk encompasses the policies, controls, and procedures in place to protect PII.

In planning a fraud risk assessment of data privacy, CFEs should evaluate or consider, based on risk:

–The consumer and employee PII that the client organization collects, uses, retains, discloses, and discards.
–Privacy contract requirements and risk liabilities for all outsourcing partners, vendors, contractors, and other third parties involving sharing and processing of the organization’s consumer and employee data.
–Compliance with privacy laws and regulations impacting the organization’s specific business and industry.
–Previous privacy breaches within the organization and its third-party service providers, and reported breaches for similar organizations noted by clearinghouses like Dun & Bradstreet and in the client industry's trade press.
–The CFE should also consult with the client’s corporate legal department before undertaking the review to determine whether all or part of the assessment procedure should be performed at legal direction and protected as “attorney-client privileged” work products.

The next step in a privacy fraud risk assessment is selecting a framework for the review. Two frameworks to consider are the American Institute of Certified Public Accountants (AICPA) Privacy Framework and The IIA's Global Audit Technology Guide: Managing and Auditing Privacy Risks. For ACFE training purposes, one CFE working for a well-known on-line retailer reported organizing her fraud assessment report based on the AICPA framework. The CFE chose that methodology because it would be understood and supported easily by management, external auditors, and the audit committee. The AICPA's ten-component framework was useful in developing standards for the organization as well as for an assessment framework:

–Management. The organization defines, documents, communicates, and assigns accountability for its privacy policies and procedures.
–Notice. The organization provides notice about its privacy policies and procedures and identifies the purposes for which PII is collected, used, retained, and disclosed.
–Choice and Consent. The organization describes the choices available to the individual customer and obtains implicit or explicit consent with respect to the collection, use, and disclosure of PII.
–Collection. The organization collects PII only for the purposes identified in the Notice.
–Use, Retention, and Disposal. The organization limits the use of PII to the purposes identified in the Notice and for which the individual customer has provided implicit or explicit consent. The organization retains these data for only as long as necessary to fulfill the stated purposes or as required by laws or regulations, and thereafter disposes of such information appropriately.
–Access. The organization provides individual customers with access to their PII for review and update.
–Disclosure to Third Parties. The organization discloses PII to third parties only for the purposes identified in the Notice and with the implicit or explicit consent of the individual.
–Security for Privacy. The organization protects PII against unauthorized physical and logical access.
–Quality. The organization maintains accurate, complete, and relevant PII for the purposes identified in the Notice.
–Monitoring and Enforcement. The organization monitors compliance with its privacy policies and procedures and has procedures to address privacy complaints and disputes.

Using the detailed assessment procedures in the framework, the CFE, working with internal client staff, developed specific testing procedures for each component, which were performed over a two-month period. Procedures included traditional walkthroughs of processes, interviews with individuals responsible for IT security, technical testing of IT security and infrastructure controls, and review of physical storage facilities for documents with PII. Technical scanning was performed independently by the retailer's IT staff, which identified PII on servers and some individual personal computers erroneously excluded from compliance monitoring. Facilitated sessions with the CFE and individuals responsible for PII helped identify problem areas. The fraud risk assessment dramatically increased awareness of data privacy and identified several opportunities to strengthen ownership, accountability, controls, procedures, and training. As a result of the assessment, the retailer implemented a formal data classification scheme and increased IT security controls. Several of the vulnerabilities and required enhancements involved controls over hard-copy records containing PII. Management reacted to the overall report positively and requested that the CFE schedule recurring future reviews of privacy breach fraud vulnerability.

Fraud risk assessments of client privacy programs can help make the business case within any organization for focusing on privacy now, and for promoting organizational awareness of privacy issues and threats. This is one of the most significant opportunities for fraud examiners to help assess risks and identify potential gaps that, if left unmanaged, are proving so devastating to organizations daily.

The Critical Twenty Percent

According to the Pareto Principle, for many phenomena, 80 percent of the consequences stem from 20 percent of the causes. Application of the principle to fraud prevention efforts, particularly those related to automated systems, seems increasingly apropos given the deluge of intrusions, data thefts, worms and other attacks which continue unabated, with organizations of all kinds losing productivity, revenue and customers every month. ACFE members report having asked the IT managers of numerous victimized organizations over the years what measures their organizations took, prior to the frauds they experienced, to secure their networks, systems, applications and data; the answer has typically involved a combination of traditional perimeter protection solutions (such as firewalls, intrusion detection, antivirus and antispyware) together with patch management, business continuance strategies, and access control methods and policies. As much sense as these traditional steps make at first glance, they clearly aren't proving sufficiently effective in preventing or even containing many of today's most sophisticated attacks.

The ACFE has determined that not only are some organizations vastly better than the rest of their industries at preventing and responding to cyber-attacks, but also that the difference between these and other organizations' effectiveness boils down to just a few foundational controls. And the most significant of these foundational controls are not rooted in standard forms of access control but, surprisingly, in monitoring and managing change. It turns out that for the best performing organizations there are six important control categories: access, change, resolution, configuration, version release and service levels. Each category has corresponding audit, operations and security performance measures, including security effectiveness, audit compliance, disruption levels, IT user satisfaction and unplanned work. By analyzing relationships between control objectives and corresponding performance indicators, numerous researchers have been able to differentiate which controls are actually most effective for consistently predictable service delivery, as well as for preventing and responding to security incidents and fraud-related exploits.

Of the twenty-one most important foundational controls used by the organizations most effective at controlling intrusions, two were used by virtually all of them. Both of these controls revolve around change management:

• Are systems monitored for unauthorized changes in real time?
• Are there defined consequences for intentional unauthorized changes?

These controls are supplemented by 1) a formal process for IT configuration management; 2) an automated process for configuration management; 3) a process to track change success rates (the percentage of changes that succeed without causing an incident, service outage or impairment); and 4) a process that provides relevant personnel with correct and accurate information on all current IT infrastructure configurations. Researchers found that these top six controls help organizations manage risks and respond to security incidents by giving them the means to look forward, averting the riskiest changes before they happen, and to look backward, identifying definitively the source of outages, fraud-associated abnormalities or service issues. Because they have a process that tracks and records all changes to their infrastructure and their associated success rates, the most effective organizations have a more informed understanding of their production environments and can rule out change as a cause very early in the incident response process. This means they can easily find the changes that caused the abnormal incident and remediate them quickly.
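To make the change-monitoring and success-rate ideas concrete, here is a minimal Python sketch. It assumes a hypothetical baseline of approved file hashes and a simple change log; the names and record layouts are illustrative only and do not reflect any particular configuration management product.

import hashlib
import os

def file_hash(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_unauthorized_changes(baseline, monitored_paths):
    """Compare current file hashes against an approved baseline.

    baseline: dict mapping path -> hash recorded when the change was approved.
    Returns paths that are missing, new, or altered outside the change process.
    """
    findings = []
    for path in monitored_paths:
        if not os.path.exists(path):
            findings.append((path, "missing"))
        elif path not in baseline:
            findings.append((path, "not in approved baseline"))
        elif file_hash(path) != baseline[path]:
            findings.append((path, "altered without an approved change record"))
    return findings

def change_success_rate(change_log):
    """Percentage of changes that succeeded without causing an incident.

    change_log: iterable of dicts like {"id": "CHG-1042", "caused_incident": False}.
    """
    changes = list(change_log)
    if not changes:
        return None
    successes = sum(1 for c in changes if not c.get("caused_incident"))
    return 100.0 * successes / len(changes)

Run against a schedule, the first routine gives the "look backward" view of what changed and when, while the second feeds the success-rate metric the researchers describe.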

The organizations that are most successful in preventing and responding to fraud-related security incidents are those that have mastered change management, thereby documenting and knowing the 'normal' state of their systems in the greatest possible detail. The organization must cultivate a culture of change management and causality throughout, with zero tolerance for unauthorized changes. As with any organizational culture, the culture of change management should start at the top, with leaders establishing a tone that all change, from the highest to the lowest levels of the organization, must follow an explicit, written change management policy and process. These same executives should establish concrete, well-publicized consequences for violating change management procedures. One component of an effective change management policy is the establishment of a governing body, such as a change advisory board, that reviews and evaluates all changes for risk before approving them. This board reinforces the written policy by requiring mandatory testing for each and every change and an explicit rollback plan for each in case of an unexpected result.

ACFE studies stress that post incident reviews are also crucial, so that the organization protects itself from repeating past mistakes. During these reviews, change owners should document their findings and work to integrate lessons learned into future anti-fraud operational practices.
Perhaps most important for responding to changes is having clear visibility into all change activities, not just those that are authorized. Automated controls that can maintain a change history reduce the risk of human error in managing and controlling the overall process.

So organizations that focus solely on access and reactive resolution controls at the expense of real-time change management process controls are, in today's environment, almost guaranteed to experience more security incidents, more damage from those incidents, and dramatically longer and less effective resolution times. On the other hand, organizations that foster a culture of disciplined change management and causality, with full support from senior management and zero tolerance for unauthorized change and abnormalities, will have a superior security posture with fewer incidents, dramatically less damage to the business from security breaches and much faster identification and resolution of incidents when they happen.

In conducting a cyber-fraud post-mortem, CFEs and other assurance professionals should not fail to focus on strengthening controls related to reducing 1) the amount of overall time the IT department devotes to unplanned work; 2) a high volume of emergency system changes; and 3) a high volume of failed system changes. All of these are red flags for cyber fraud risk and indicative of a low level of real-time system knowledge on the part of the client organization.

Analytic Reinforcements

Rumbi's post of last week on ransomware got me thinking, on a long drive back from Washington, about what an excellent tool the AICPA's new Cybersecurity Risk Management Reporting Framework is, not only for CPAs but for CFEs as well as for all our client organizations. As the seemingly relentless wave of cyberattacks continues with no sign of letting up, organizations are under intense pressure from key stakeholders and regulators to implement and enhance their cyber security and fraud prevention programs to protect customers, employees and all the types of valuable information in their possession.

According to research from the ACFE, the average total cost per company, per event, of a data breach is $3.62 million. Initial damage estimates of a single breach, while often staggering, may not take into account less obvious and often undetectable threats such as the theft of intellectual property, espionage, destruction of data, attacks on core operations or attempts to disable critical infrastructure. These effects can reverberate for years, with devastating financial, operational and brand ramifications.

Given the present broad regulatory pressure to tighten cyber security controls and the visibility surrounding cyberrisk, a number of proposed regulations focused on improving cyber security risk management programs have been introduced in the United States over the past few years by various governing bodies. One of the more prominent is a regulation by the New York Department of Financial Services (NYDFS) that prescribes certain minimum cyber security standards for entities regulated by the NYDFS. Based on an entity's risk assessment, the NYDFS rule has specific requirements around data encryption, data protection and retention, third-party information security, application security, incident response and breach notification, board reporting, and required annual re-certifications.

However, organizations continue to report to the ACFE that they struggle to systematically report to stakeholders on the overall effectiveness of their cyber security risk management programs. In response, the AICPA in April of last year released a new cyber security risk management reporting framework intended to help organizations expand cyberrisk reporting to a broad range of internal and external users, including management and the board of directors. The AICPA's new reporting framework is designed to address the need for greater stakeholder transparency by providing in-depth, easily consumable information about the state of an organization's cyberrisk management program. The cyber security risk management examination uses an independent, objective reporting approach and employs broader and more flexible criteria. For example, it allows for the selection and utilization of any suitable, available control framework in establishing the entity's basic cyber security objectives and in developing and maintaining controls within the entity's cyber security risk management program, regardless of whether the standard is the US National Institute of Standards and Technology (NIST) Cybersecurity Framework, the International Organization for Standardization's ISO 27001/2 and related frameworks, or even an internally developed framework based on a combination of sources. The examination is voluntary and applies to all types of entities, but it should be considered by CFEs as a leading practice that provides management, boards and other key stakeholders with clear insight into the current state of an organization's cyber security program while identifying gaps or pitfalls that leave organizations vulnerable to cyber fraud and other intrusions.

What stakeholders might benefit from a client organization’s cyber security risk management examination report? Clearly, we CFEs as we go about our routine fraud risk assessments; but such a report, most importantly, can be vital in helping an organization’s board of directors establish appropriate oversight of a company’s cyber security risk program and credibly communicate its effectiveness to stakeholders, including investors, analysts, customers, business partners and regulators. By leveraging this information, boards can challenge management’s assertions around the effectiveness of their cyberrisk management and fraud prevention programs and drive more effective decision making. Active involvement and oversight from the board can help ensure that an organization is paying adequate attention to cyberrisk management and displaying due diligence. The board can help shape expectations for reporting on cyberthreats while also advocating for greater transparency and assurance around the effectiveness of the program.

The cyber security risk management report in its initial and follow-up iterations can be invaluable in providing overview guidance to CFEs and forensic accountants in targeting both fraud prevention and fraud detection/investigative analytics. We know from our ACFE training that data analytics need to be fully integrated into the investigative process. Ensuring that data analytics are embedded in the detection/investigative process requires support from all levels, starting with the managing CFE. That support will come more easily and coherently if management already backs cyber security risk management reporting. Management will also have an easier time reinforcing the use of analytics generally, although the data analytics function supporting fraud examination will still have to market its services, team leaders will still be challenged by management, and team members will still have to be trained to effectively employ the newer analytical tools.

The presence of a robust cyber security risk management reporting process should also prove of assistance to the lead CFE in establishing goals for the implementation and use of data analytics in every investigation, and these goals should be communicated to the entire investigative team. It should be made clear to every level of the client organization that data analytics will support the investigative planning process for every detected fraud. The identification of business processes, IT systems, data sources, and potential analytic routines should be discussed and considered not only during planning, but also throughout every stage of the entire investigative engagement. Key to obtaining the buy-in of all is to include investigative team members in identifying areas or tests that the analytics group will target in support of the field work. Initially, it will be important to highlight success stories and educate managers and team leaders about what is possible. Improving on the traditional investigative approach of document review, interviewing, transaction review, and the like, investigators can benefit from the implementation of data analytics to allow for more precise identification of the control deficiencies, instances of noncompliance with policies and procedures, and mis-assessments of areas of high risk that contributed to the development of the fraud in the first place. These same analytics can then be used to ensure that appropriate post-fraud management follow-up has occurred by elevating the identified deficiencies to the cyber security risk management reporting process and by implementing enhanced fraud prevention procedures in areas of higher fraud risk. This process would be especially useful in responding to and following up on data breaches.

Once patterns are gathered and centralized, analytics can be employed to measure the frequency of occurrence, the sizes of files involved, the quantity of files executed and the average time of use. The math involved allows an examiner to grasp the big picture. Individuals, including examiners, are normally overwhelmed by the sheer volume of information, but automation of pattern-recognition techniques makes big data a tractable investigative resource. The larger the sample size, the easier it is to determine patterns of normal and abnormal behavior. Network haystacks can be combed by algorithms that notify the CFE, as information archeologist, about, for example, the probes of an insider threat.
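As a simple illustration of baselining and outlier detection, the Python sketch below (with made-up numbers and account names) scores today's per-account download volumes against historical activity; anything several standard deviations from the norm gets flagged for examiner follow-up. It is a toy example of the statistical idea, not a production monitoring tool.

import statistics

def baseline_stats(values):
    """Summarize 'normal' behavior for one metric (e.g., daily bytes downloaded per user)."""
    return {"mean": statistics.fmean(values), "stdev": statistics.pstdev(values)}

def flag_outliers(observations, baseline, threshold=3.0):
    """Return observations more than `threshold` standard deviations from the baseline mean."""
    flagged = []
    for who, value in observations:
        if baseline["stdev"] == 0:
            continue  # no variation in the baseline; nothing to score against
        z = abs(value - baseline["mean"]) / baseline["stdev"]
        if z > threshold:
            flagged.append((who, value, round(z, 1)))
    return flagged

# Hypothetical example: daily download volume (MB) per employee account.
history = [120, 95, 130, 110, 105, 98, 125, 115]            # past activity used as the baseline
today = [("acct_17", 118), ("acct_42", 2400), ("acct_08", 102)]
print(flag_outliers(today, baseline_stats(history)))          # acct_42 stands out for review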

Without analytics, enterprise-level fraud examination and risk assessment is a diminished discipline, limited in scope and effectiveness. Without an educated investigative workforce, armed with a programming language for automation and an accompanying data-mining philosophy and skill set, the control needs of management leaders at the enterprise level will go unmet; leaders will not have the data needed for fraud prevention on a large scale, nor a workforce capable of getting them that data in the emergency following a breach or penetration.

The beauty of analytics, from a security and fraud prevention perspective, is that it allows the investigative efforts of the CFE to align with the critical functions of the corporate business. It can be used to discover recurring risks, incidents and common trends that might otherwise have been missed. Establishing numerical baselines on quantified data can supplement a normal investigator's tasks and enhance the examiner's ability to see beneath the surface of what is presented in an examination. Good communication of analyzed data gives decision makers a better view of their systems through a holistic approach, which can aid in the creation of enterprise-level goals. Analytics and data mining add dimension and depth to the CFE's examination process at the enterprise level and dovetail with, and are supported beautifully by, the AICPA's cyber security risk management reporting initiative.

CFEs should encourage the staff of client analytics support functions to possess:

–understanding of the employing enterprise’s data concepts (data elements, record types, database types, and data file formats).
–understanding of logical and physical database structures.
–the ability to communicate effectively with IT and related functions to achieve efficient data acquisition and analysis.
–the ability to perform ad hoc data analysis as required to meet specific fraud examiner and fraud prevention objectives.
–the ability to design, build, and maintain well-documented, ongoing automated data analysis routines.
–the ability to provide consultative assistance to others who are involved in the application of analytics.

The Anti-Fraud Blockchain

Blockchain technology, the series of interlocking algorithms powering digital currencies like Bitcoin, is emerging as a potent fraud prevention tool. As every CFE knows, technology is enabling new forms of money and contracting, and the growing digital economy holds great promise to provide a full range of new financial tools, especially to the world's poor and unbanked. These emerging virtual currencies and financial techniques are often anonymous, and none has received quite as much press as Bitcoin, the decentralized peer-to-peer digital form of money.

Bitcoins were invented in 2009 by a mysterious person (or group of people) using the alias Satoshi Nakamoto, and the coins are created or "mined" by solving increasingly difficult mathematical equations, requiring extensive computing power. The system is designed to ensure no more than twenty-one million Bitcoins are ever generated, thereby preventing a central authority from flooding the market with new Bitcoins. Most people purchase Bitcoins on third-party exchanges with traditional currencies, such as dollars or euros, or with credit cards. The exchange rate against the dollar for Bitcoin fluctuates wildly and has ranged from fifty cents per coin around the time of its introduction to over $16,000 in December 2017. People can send Bitcoins, or fractions of a Bitcoin, to each other using computers or mobile apps, where coins are stored in digital wallets. Bitcoins can be directly exchanged between users anywhere in the world using unique alphanumeric identifiers, akin to e-mail addresses, and there are no transaction fees in the basic system, absent intermediaries.

Any time a purchase takes place, it is recorded in a public ledger known as the blockchain, which ensures no duplicate transactions are permitted. Cryptocurrencies are called such because they use cryptography to regulate the creation and transfer of money, rather than relying on central authorities. Bitcoin acceptance continues to grow rapidly, and it is possible to use Bitcoins to buy cupcakes in San Francisco, cocktails in Manhattan, and a Subway sandwich in Allentown.

Because Bitcoin can be spent online without the need for a bank account and no ID is required to buy and sell the cryptocurrency, it provides a convenient system for anonymous, or more precisely pseudonymous, transactions, where a user's true name is hidden. Though Bitcoin, like all forms of money, can be used for both legal and illegal purposes, its encryption techniques and relative anonymity make it highly attractive to fraudsters and criminals of all kinds. Because funds are not stored in a central location, accounts cannot readily be seized or frozen by police, and tracing the transactions recorded in the blockchain is significantly more complex than serving a subpoena on a local bank operating within traditionally regulated financial networks. As a result, nearly all the so-called Dark Web's illicit commerce is facilitated through alternative currency systems. People do not send paper checks or use credit cards in their own names to buy meth and pornography. Rather, they turn to anonymous digital and virtual forms of money such as Bitcoin.

A blockchain is, essentially, a way of moving information between parties over the Internet and storing that information and its transaction history on a disparate network of computers. Bitcoin, and all the other digital currencies, operates on a blockchain: as transactions are aggregated into blocks, each block is assigned a unique cryptographic signature called a “hash.” Once the validating cryptographic puzzle for the latest block has been solved by a coin mining computer, three things happen: the result is time-stamped, the new block is linked irrevocably to the blocks before and after it by its unique hash, and the block and its hash are posted to all the other computers that were attempting to solve the puzzle involved in the mining process for new coins. This decentralized network of computers is the repository of the immutable ledger of bitcoin transactions.  If you wanted to steal a bitcoin, you’d have to rewrite the coin’s entire history on the blockchain in broad daylight.
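The hash-linking idea can be illustrated with a short Python sketch. This is not how Bitcoin itself is implemented (it omits mining, proof-of-work and the peer-to-peer network entirely); it only shows how each block's hash depends on the block before it, so that editing any historical block breaks the chain's validity.

import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents, including the previous block's hash, so blocks chain together."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    """Append a new block whose hash depends on every block before it."""
    previous = chain[-1]["hash"] if chain else "0" * 64
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous,
    }
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def is_valid(chain):
    """Tampering with any block breaks its own hash and the link stored in its successor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block_hash(body) != block["hash"]:
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, [{"from": "A", "to": "B", "amount": 1.5}])
add_block(chain, [{"from": "B", "to": "C", "amount": 0.7}])
print(is_valid(chain))   # True until any historical block is edited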

While bitcoin and other digital currencies operate on a blockchain, they are not the blockchain itself. It’s an insight of many computer scientists that in addition to exchanging digital money, the blockchain can be used to facilitate transactions of other kinds of digitized data, such as property registrations, birth certificates, medical records, and bills of lading. Because the blockchain is decentralized and its ledger immutable, all these types of transactions would be protected from hacking; and because the blockchain is a peer-to-peer system that lets people and businesses interact directly with each other, it is inherently more efficient and  cheaper than current systems that are burdened with middlemen such as lawyers and regulators.

A CFE’s client company that aims to reduce drug counterfeiting could have its CFE investigator use the blockchain to follow pharmaceuticals from provenance to purchase. Another could use it to do something similar with high-end sneakers. Yet another, a medical marijuana producer, could create a blockchain that registers everything that has happened to a cannabis product, from seed to sale, letting consumers, retailers and government regulators know where everything came from and where it went. The same thing can be done with any normal crop so, in the same way that a consumer would want to know where the corn on her table came from, or the apple that she had at lunch originated, all stake holders involved in the medical marijuana enterprise would know where any batch of product originated and who touched it all along the way.

While a blockchain is not a full-on solution to fraud or hacking, its decentralized infrastructure ensures that there are no "honeypots" of data available, like financial or medical records on isolated company servers, for criminals to exploit. Still, touting a bitcoin-derived technology as an answer to cybercrime may seem a stretch considering the high-profile, and lucrative, thefts of cryptocurrency over the past few years. It's estimated that as of March 2015, a full third of all Bitcoin exchanges (where people store their bitcoin) up to then had been hacked, and nearly half had closed. There was, most famously, the 2014 pilferage of Mt. Gox, a Japan-based digital coin exchange, in which 850,000 bitcoins worth $460 million disappeared. Two years later another exchange, Bitfinex, was hacked and around $60 million in bitcoin was taken; the company's solution was to spread the loss across all its customers, including those whose accounts had not been drained.

Unlike money kept in a bank, cryptocurrencies are uninsured and unregulated. That is one of the consequences of a monetary system that exists, intentionally, beyond government control or oversight. It may be small consolation to those affected by these thefts that the bitcoin network itself and the blockchain have never been breached, which perhaps demonstrates the blockchain's resistance to hacking.

This security of the blockchain itself demonstrates how smart contracts can be written and stored on it. These are covenants, written in code, that specify the terms of an agreement. They are smart because as soon as their terms are met, the contract executes automatically, without human intervention. Once triggered, it can't be amended, tampered with, or impeded. This is programmable money. Such smart contracts are a tool with the potential to change how business is done. The concept, as with digital currencies, is based on computers synced together. Now imagine that rather than syncing a transaction, software is synced: every machine in the network runs the same small program. It could be something simple, like a loan: A sends B some money, and B's account automatically pays it back, with interest, a few days later. All parties agree to these terms, and the agreement is locked in using the smart contract. The parties have achieved programmable money.
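The loan example can be simulated in a few lines of Python. This is only a sketch of the smart-contract logic (real smart contracts run on a blockchain platform and are typically written in languages such as Solidity); the class and field names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class LoanContract:
    """Toy simulation of a self-executing loan agreement.

    Terms are fixed at creation; once the due date arrives, the repayment
    transfer happens without either party (or an intermediary) acting.
    """
    lender: str
    borrower: str
    principal: float
    interest_rate: float      # e.g. 0.05 for 5 percent
    due_day: int              # day number on which repayment triggers
    settled: bool = False

    def repayment_amount(self):
        return round(self.principal * (1 + self.interest_rate), 2)

    def tick(self, today, balances):
        """Called by every node each 'day'; executes automatically when the terms are met."""
        if not self.settled and today >= self.due_day:
            amount = self.repayment_amount()
            balances[self.borrower] -= amount
            balances[self.lender] += amount
            self.settled = True

balances = {"A": 1000.0, "B": 500.0}
loan = LoanContract(lender="A", borrower="B", principal=200.0, interest_rate=0.05, due_day=3)
balances["A"] -= loan.principal          # A sends B the principal up front
balances["B"] += loan.principal
for day in range(1, 6):
    loan.tick(day, balances)             # repayment fires on day 3 with no human intervention
print(balances)                          # {'A': 1010.0, 'B': 490.0}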

There is no doubt that smart contracts and the blockchain itself will augment the trend toward automation, though it is automation through lines of code, not robotics. For businesses looking to cut costs and reduce fraud, this is one of the main attractions of blockchain technology. The challenge is that, if contracts are automated, traditional firm control structures, processes, and intermediaries like lawyers, accountants and managers will see their roles radically change. Most blockchain advocates imagine them changing so radically as to disappear altogether, taking with them many of the costs currently associated with doing business. According to a recent report in the trade press, the blockchain could reduce banks' infrastructure costs attributable to cross-border payments, securities trading, and regulatory compliance by $15-20 billion per annum by 2022. Whereas most technologies tend to automate workers on the periphery, blockchain automates away the center. Instead of putting the taxi driver out of a job, blockchain puts Uber out of a job and lets the taxi drivers work with the customer directly.

Whether blockchain technology will be a revolution for good or one that continues what has come to seem technology’s inexorable, crushing ascendance will be determined not only by where it is deployed, but how. The blockchain could be used by NGOs to eliminate corruption in the distribution of foreign aid by enabling funds to move directly from giver to receiver. It is also a way for banks to operate without external oversight, encouraging other kinds of corruption. Either way, we as CFEs would be wise to remember that technology is never neutral. It is always endowed with the values of its creators. In the case of the blockchain and crypto-currency, those values are libertarian and mechanistic; trust resides in algorithmic rules, while the rules of the state and other regulatory bodies are often viewed with suspicion and hostility.

A CDC for Cyber

I remember reading somewhere a few years back that Microsoft had commissioned a report which recommended that the U.S. government set up an entity akin to its Centers for Disease Control but for cyber security. An intriguing idea. The trade press talks about malware and computer viruses and infections to describe self-replicating malicious code in the same way doctors talk about metastasizing cancers or the flu; likewise, as with public health, rather than focusing on prevention and detection, we often blame those who have become infected and try to retrospectively arrest/prosecute (cure) those responsible (the cancer cells, hackers) long after the original harm is done. Regarding cyber, what if we extended this paradigm and instead viewed global cyber security as an exercise in public health?

As I recall, the report pointed out that organizations such as the Centers for Disease Control in Atlanta and the World Health Organization in Geneva have over decades developed robust systems and objective methodologies for identifying and responding to public health threats; structures and frameworks that are far more developed than those existent in today’s cyber-security community. Given the many parallels between communicable human diseases and those affecting today’s technologies, there is also much fraud examiners and security professionals can learn from the public health model, an adaptable system capable of responding to an ever-changing array of pathogens around the world.

With cyber as with matters of public health, individual actions can only go so far. It’s great if an individual has excellent techniques of personal hygiene, but if everyone in that person’s town has the flu, eventually that individual will probably succumb as well. The comparison is relevant to the world of cyber threats. Individual responsibility and action can make an enormous difference in cyber security, but ultimately the only hope we have as a nation in responding to rapidly propagating threats across this planetary matrix of interconnected technologies is to construct new institutions to coordinate our response. A trusted, international cyber World Health Organization could foster cooperation and collaboration across companies, countries, and government agencies, a crucial step required to improve the overall public health of the networks driving the critical infrastructures in both our online and our off-line worlds.

Such a proposed cyber CDC could go a long way toward counteracting the technological risks our country faces today and could serve a critical role in improving the overall public health of the networks driving the critical infrastructures of our world. A cyber CDC could fulfill many roles that are carried out today only on an ad hoc basis, if at all, including:

• Education — providing members of the public with proven methods of cyber hygiene to protect themselves;
• Network monitoring — detection of infection and outbreaks of malware in cyberspace;
• Epidemiology — using public health methodologies to study digital cyber disease propagation and provide guidance on response and remediation;
• Immunization — helping to ‘vaccinate’ companies and the public against known threats through software patches and system updates;
• Incident response — dispatching experts as required and coordinating national and global efforts to isolate the sources of online infection and treat those affected.

While there are many organizations, both governmental and non-governmental, that focus on the above tasks, no single entity owns them all. It is through these gaps in effort and coordination that cyber risks continue to mount. An epidemiological approach to our growing technological risks is required to get to the source of malware infections, as was the case in the fight against malaria. For decades, all medical efforts focused in vain on treating the disease in those already infected. But it wasn't until epidemiologists realized the malady was spread by mosquitoes breeding in still pools of water that genuine progress was made in the fight against the disease. By draining the pools where mosquitoes and their larvae grow, epidemiologists deprived them of an important breeding ground, thus reducing the spread of malaria. What stagnant pools can we drain in cyberspace to achieve a comparable result? That remains the unanswered challenge.

There is another major challenge a cyber CDC would face: most of those who are sick have no idea they are walking around infected, spreading disease to others. Whereas malaria patients develop fever, sweats, nausea, and difficulty breathing, important symptoms of their illness, infected computer users may be completely asymptomatic. This significant difference is evidenced by the fact that the overwhelming majority of those with infected devices have no idea there is malware on their machines nor that they might have even joined a botnet army. Even in the corporate world, with the average time to detection of a network breach now at 210 days, most companies have no idea their most prized assets, whether intellectual property or a factory’s machinery, have been compromised. The only thing worse than being hacked is being hacked and not knowing about it. If you don’t know you’re sick, how can you possibly get treatment? Moreover, how can we prevent digital disease propagation if carriers of these maladies don’t realize they are infecting others?

Addressing these issues could be a key area of import for any proposed cyber CDC and fundamental to future communal safety and that of critical information infrastructures. Cyber-security researchers have pointed out the obvious Achilles' heel of the modern technology-infused world, the fact that today everything is either run by computers (or will be) and that everything is reliant on these computers continuing to work. The challenge is that we must have some way of continuing to work even if all the computers fail. Were our information systems to crash on a mass scale, there would be no trading on financial markets, no taking money from ATMs, no telephone network, and no pumping gas. If these core building blocks of our society were to suddenly give way, what would humanity's backup plan be? The answer is simple: we don't currently have one.

Complicating all this from a law enforcement and fraud investigation perspective is that black hats generally benefit from technology long before defenders and investigators ever do. The successful ones have nearly unlimited budgets and don't have to deal with internal bureaucracies, approval processes, or legal constraints. But there are other systemic issues that give criminals the upper hand, particularly around jurisdiction and international law. In a matter of minutes, the perpetrator of an online crime can virtually visit six different countries, hopping from server to server and continent to continent in an instant. But what about the police who must follow the digital evidence trail to investigate the matter? As with all government activities, policies, procedures and regulations must be followed. Trans-border cyber-attacks raise serious jurisdictional issues, not just for an individual police department, but for the entire institution of policing as currently formulated. A cop in Baltimore has no authority to compel an ISP in Paris to provide evidence, nor can he make an arrest on the Right Bank. That can only be done by request, government to government, often via mutual legal assistance treaties. The abysmally slow pace of international law means it commonly takes years for police to get evidence from overseas (years in a world in which digital evidence can be destroyed in seconds). Worse, most countries still do not even have cyber-crime laws on the books, meaning that criminals can act with impunity, making response through a coordinating entity like a cyber CDC more valuable to the U.S. specifically and to the world in general.

Experts have pointed out that we’re engaged in a technological arms race, an arms race between people who are using technology for good and those who are using it for ill. The challenge is that nefarious uses of technology are scaling exponentially in ways that our current systems of protection have simply not matched.  The point is, if we are to survive the progress offered by our technologies and enjoy their benefits, we must first develop adaptive mechanisms of security that can match or exceed the exponential pace of the threats confronting us. On this most important of imperatives, there is unambiguously no time to lose.

Help for the Little Guy

It’s clear to the news media and to every aware assurance professional that today’s cybercriminals are more sophisticated than ever in their operations and attacks. They’re always on the lookout for innovative ways to exploit vulnerabilities in every global payment system and in the cloud.

According to the ACFE, more consumer records were compromised in 2015-16 than in the previous four years combined. Data breach statistics from this year (2017) are projected to be even grimmer due to the growth of increasingly sophisticated attack methods, such as complex malware infections and system vulnerability exploits, which grew tenfold in 2016. With attacks coming in many different forms and from many different channels, consumers, businesses and financial institutions (often against their will) are being forced to gain a better understanding of how criminals operate, especially in ubiquitous channels like social networks. They then have a better chance of mitigating the risks and recognizing attacks before they do severe damage.

As your Chapter has pointed out over the years in this blog, understanding the mechanics of data theft and the conversion process of stolen data into cash can help organizations of all types better anticipate the exact ways criminals may exploit the system, so that organizations can put appropriate preventive measures in place. Classic examples of such criminal activity include masquerading as a trustworthy entity such as a bank or credit card company. These phishers send e-mails and instant messages that prompt users to reply with sensitive information such as usernames, passwords and credit card details, or to enter the information at a rogue web site. Other similar techniques include using text messaging (SMSishing or smishing), voice mail (vishing), or today's flood of offshore spam calls to lure victims into giving up sensitive information. Whaling is phishing targeted at high-worth accounts or individuals, often identified through social networking sites such as LinkedIn or Facebook. While it's impossible to anticipate or prevent every attack, one way to stay a step ahead of these criminals is to have a thorough understanding of how such fraudsters operate their enterprises.

Although most cyber breaches reported recently in the news have struck large companies such as Equifax and Yahoo, the ACFE tells us that small and mid-sized businesses suffer a far greater number of devastating cyber incidents. These breaches involve organizations of every industry type; all that's required for vulnerability is that they operate network servers attached to the internet. Although the number of breached records a small to medium sized business controls is in the hundreds or thousands, rather than in the millions, the cost of these breaches can be higher for the small business because it may not be able to effectively address such incidents on its own. Many small businesses have limited or no resources committed to cybersecurity, and many don't employ any assurance professionals apart from the small accounting firms performing their annual financial audit. For these organizations, the key questions are "Where should we focus when it comes to cybersecurity?" and "What are the minimum controls we must have to protect the sensitive information in our custody?" Fraud examiners and forensic accountants working with the attorneys who advise small businesses can assist in answering these questions by checking that these client organizations implement a few vital cybersecurity controls.

First, regardless of their industry, small businesses must ensure their network perimeter is protected. The first step is identifying the vulnerabilities by performing an external network scan at least quarterly. A small business can either hire an outside company to perform these scans, or, if it has small in-house or contracted IT, it can license off-the-shelf software to run the scans itself. Moreover, small businesses need a process in place to remedy the identified critical, high, and medium vulnerabilities within three months of the scan run date, while low vulnerabilities are less of a priority. The fewer vulnerabilities the perimeter network has, the less chance that an external hacker will breach the organization's network.
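For a small business tracking scan results in a spreadsheet or simple export, the remediation deadline logic might look something like the Python sketch below. The findings, hosts and dates are hypothetical; the point is simply to show priority findings sorted by their three-month due dates, with overdue items flagged.

from datetime import date, timedelta

# Hypothetical scan export: one dict per finding from the quarterly external scan.
findings = [
    {"host": "203.0.113.10", "issue": "Outdated TLS version", "severity": "high",
     "found_on": date(2017, 9, 1)},
    {"host": "203.0.113.12", "issue": "Default SNMP community string", "severity": "critical",
     "found_on": date(2017, 9, 1)},
    {"host": "203.0.113.15", "issue": "Verbose service banner", "severity": "low",
     "found_on": date(2017, 9, 1)},
]

REMEDIATION_WINDOW = timedelta(days=90)     # critical/high/medium due within three months
PRIORITY = {"critical", "high", "medium"}

def remediation_report(findings, today):
    """List priority findings with their due dates and flag anything overdue."""
    report = []
    for f in findings:
        if f["severity"] not in PRIORITY:
            continue  # low-severity items are tracked but not deadline-driven
        due = f["found_on"] + REMEDIATION_WINDOW
        report.append({**f, "due_by": due, "overdue": today > due})
    return sorted(report, key=lambda r: r["due_by"])

for item in remediation_report(findings, today=date(2017, 11, 15)):
    print(item["host"], item["issue"], item["severity"], "due", item["due_by"],
          "OVERDUE" if item["overdue"] else "")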

Educating employees about their cybersecurity responsibilities is not a simple check-sheet matter. Smaller businesses not only need help in implementing an effective information security policy, they also need to ensure employees are aware of the policy and of their responsibilities. The policy and training should cover:

–Awareness of phishing attacks;
–Training on ransomware management;
–Travel tips;
–Potential threats of social engineering;
–Password protection;
–Risks of storing sensitive data in the cloud;
–Accessing corporate information from home computers and other personal devices;
–Awareness of tools the organization provides for securely sending emails or sharing large files;
–Protection of mobile devices;
–Awareness of CEO spoofing attacks.

In addition, small businesses should verify employees’ level of awareness by conducting simulation exercises. These can be in the form of a phishing exercise in which organizations themselves send fake emails to their employees to see if they will click on a web link, or a social engineering exercise in which a hired individual tries to enter the organization’s physical location and steal sensitive information such as information on computer screens left in plain sight.

In small organizations, sensitive information tends to proliferate across various platforms and folders. For example, employees' personal information typically resides in human resources software or with a cloud service provider, but through various downloads and reports, the information can proliferate to shared drives and folders, laptops, emails, and even cloud folders like Dropbox or Google Drive. Assigned management should check that the organization has identified the sites of such proliferation so it has a good handle on the state of all its sensitive information:

–Inventory all sensitive business processes and the related IT systems. Depending on the organization’s industry, this information could include customer information, pricing data, customers’ credit card information, patients’ health information, engineering data, or financial data;
–For each business process, identify an information owner who has complete authority to approve user access to that information;
–Ensure that the information owner periodically reviews access to all the information he or she owns and updates the access list (a minimal sketch of such a review follows this list).
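A periodic access review by the information owner can be as simple as comparing the approved user list with the accounts that actually have access. The short Python sketch below illustrates the comparison; the user IDs and the payroll-share example are hypothetical.

def access_review(approved, actual):
    """Compare the information owner's approved user list with accounts that actually have access.

    approved, actual: sets of user IDs for one sensitive system or folder.
    Returns accounts to investigate: access nobody approved, and possibly stale approvals.
    """
    return {
        "unapproved_access": sorted(actual - approved),    # candidates for immediate removal
        "approved_but_unused": sorted(approved - actual),  # approvals that may be stale
    }

# Hypothetical example for a payroll share reviewed by its information owner.
approved = {"jsmith", "mlee", "payroll_svc"}
actual = {"jsmith", "mlee", "payroll_svc", "intern03"}
print(access_review(approved, actual))   # flags 'intern03' for follow-up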

Organizations should make it hard to get to their sensitive data by building layers or network segments. Although the network perimeter is an organization’s first line of defense, the probability of the network being penetrated is today at an all-time high. Management should check whether the organization has built a layered defense to protect its sensitive information. Once the organization has identified its sensitive information, management should work with the IT function to segment those servers that run its sensitive applications.  This segmentation will result in an additional layer of protection for these servers, typically by adding another firewall for the segment. Faced with having to penetrate another layer of defense, an intruder may decide to go elsewhere where less sensitive information is stored.

An organization’s electronic business front door also can be the entrance for fraudsters and criminals. Most of today’s malware enters through the network but proliferates through the endpoints such as laptops and desktops. At a minimum, internal small business management must ensure that all the endpoints are running anti-malware/anti-virus software. Also, they should check that this software’s firewall features are enabled. Moreover, all laptop hard drives should be encrypted.

In addition to making sure their client organizations have implemented these core controls, assurance professionals should advise small business client executives to consider other protective controls:

–Monitor the network. Network monitoring products and services can provide real-time alerts in case there is an intrusion;
–Manage service providers. Organizations should inventory all key service providers and review all contracts for appropriate security, privacy, and data breach notification language;
–Protect smart devices. Increasingly, company information is stored on mobile devices. Several off-the-shelf solutions can manage and protect the information on these devices. Small businesses should ensure they are able to wipe the sensitive information from these devices if they are lost or stolen;
–Monitor activity related to sensitive information. IT management should log activities against sensitive information and keep an audit log so that, if an incident occurs, the logs can be reviewed to evaluate it (a minimal logging sketch follows this list).
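A minimal illustration of such logging, in Python, appears below. It simply appends one JSON record per sensitive-data action to a local file; the field names and file names are illustrative, and a real deployment would forward these records to centralized or write-once storage.

import json
import time

def log_sensitive_access(log_path, user, action, resource):
    """Append one audit record per access to sensitive data, as a JSON line."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "user": user,
        "action": action,        # e.g. "read", "export", "delete"
        "resource": resource,    # e.g. "customer_cardholder_table"
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage after an export of a sensitive report.
log_sensitive_access("sensitive_access.log", "jsmith", "export", "patient_records_2017.csv")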

Combined with the controls listed above, these additional controls can help any small business reduce the probability of a data breach. But a security program is only as strong as its weakest link. Through their assurance and advisory work, CFEs and forensic accountants can proactively help identify these weaknesses and suggest ways to strengthen their smaller client organizations' anti-fraud defenses.

From Inside the Building

By Rumbi Petrozzello, CFE, CPA/CFF
2017 Vice-President – Central Virginia Chapter ACFE

Several months ago, I attended an ACFE session where one of the speakers had worked on the investigation of Edward Snowden. He shared that one of the ways Snowden had gained access to some of the National Security Agency (NSA) data that he downloaded was through the inadvertent assistance of his supervisor. According to this investigator, Snowden's supervisor shared his password with Snowden, giving Snowden access to information that was beyond his own level of authorization. In addition, when the security personnel reviewing employee downloads noticed that Snowden was downloading copious amounts of data, they approached Snowden's supervisor to question why this might be the case. The supervisor, while acknowledging the downloads, stated that Snowden wasn't really doing anything untoward.

At another ACFE session, a speaker shared information with us about how Chelsea Manning was able to download and remove data from a secure government facility. Manning would come to work wearing headphones, listening to music on a Discman. Security would hear the music blasting and scan the CDs. Day after day, it was the same scenario: Manning showed up to work, music blaring. Security staff grew so accustomed to Manning, the Discman and her CDs that when she came to work through security with a blank CD boldly labelled "LADY GAGA", security didn't blink. They should have, because it was that CD and ones like it, later carried home from work, that contained the data she eventually shared with WikiLeaks.

Both these high-profile disasters are notable examples of the bad outcomes arising from a realized internal threat. Both Snowden and Manning worked for organizations that had, and have, more rigorous security procedures and policies in place than most entities. Yet neither Snowden nor Manning needed to perform any magic tricks to sneak data out of the secure sites where the target data was held; it seems that all it took was audacity on the one side and trust and complacency on the other.

When organizations deal with outside parties, such as vendors and customers, they tend to spend a lot of time setting up the structures and systems that will guide how the organization will interact with those vendors and customers. Generally, companies will take these systems of control seriously, if only because of the problems they will have to deal with during annual external audits if they don't. The typical new employee will spend a lot of time learning what the steps are from the point when a customer places an order through to the point the customer's payment is received. There will be countless training manuals to refer to and many a reminder from co-workers who may be negatively impacted if the rookie screws up.

However, this scenario tends not to hold up when it comes to how employees typically share information and interact with each other. This is true despite the elevated risk that a rogue insider represents. Often, when we think about an insider causing harm to a company through fraudulent acts, we tend to imagine a villain, someone we could identify easily because s/he is obviously a terrible person. After all, only a terrible person could defraud their employer. In fact, as the ACFE tells us, the most successful fraudsters are the ones who gain our trust and who, therefore, don’t really have to do too much for us to hand over the keys to the kingdom. As CFEs and Forensic Accountants, we need to help those we work with understand the risks that an insider threat can represent and how to mitigate that risk. It’s important, in advising our clients, to guide them toward the creation of preventative systems of policy and procedure that they sometimes tend to view as too onerous for their employees. Excuses I often hear run along the lines of:

• “Our employees are like family here, we don’t need to have all these rules and regulations”

• “I keep a close eye on things, so I don’t have to worry about all that”

• “My staff knows what they are supposed to do; don’t worry about it.”

Now, if people can easily walk sensitive information out of locations that have documented systems and are known to be high-security operations, can you imagine what they can do at your client organizations? Especially if the employer is assuming that their employees magically know what they are supposed to do? This is the point we should be driving home with our clients. We should look to address the fact that both trust and complacency in organizations can be problems as well as assets. It's great to be able to trust employees, but we should also talk to our clients about the fraud triangle and how one aspect of it, pressure, can happen to any staff member, even the most trusted. With that in mind, it's important to institute controls so that, should pressure arise for an employee, there will be little opportunity open to that employee to act. Both Manning and Snowden have publicly spoken about the pressures they felt that led them to act in the way they did. The reason we even know about them today is that they had the opportunity to act on those pressures.

I've spent time consulting with large organizations, often for months at a time. During those times, I got to chat with many members of staff, including security. On a couple of occasions, I forgot my building pass and left it at home. Even though I was on a first-name basis with the security staff and had spent time chatting with them about our personal lives, they still asked me for identification and looked me up in the system. I'm sure they thought I was a nice and trustworthy enough person, but they knew to follow procedures and always checked on whether I was still authorized to access the building. The important point is that, despite knowing me, they knew to check and followed through.

Examples of controls employees should be reminded to follow are:

• Don’t share your password with a fellow employee. If that employee cannot access certain information with their own password, either they are not authorized to access that information or they should speak with an administrator to gain the desired access. Sharing a password seems like a quick and easy solution when under time pressures at work, but remind employees that when they share their login information, anything that goes awry will be attributed to them.

• Always follow procedures. Someone looking for an opportunity only needs one.

• When something looks amiss, thoroughly investigate it. Even if someone tells you that all is well, verify that this is indeed the case.

• Explain to staff and management why a specific control is in place and why it’s important. If they understand why they are doing something, they are more likely to see the control as useful and to apply it.

• Schedule training on a regular basis to remind staff of the controls in place and the systems they are to follow. You may believe that staff know what they are supposed to do, but reminding them reduces the risk of their relying on hearsay and secondhand information. Management is often surprised by the gap between what it thinks staff know and what staff really know.

It should be clear to your clients that they control who has access to sensitive information and when and how it leaves their control. It doesn't take much for an insider to gain access to this information. A face you see smiling at you daily is the face of a person you can grow comfortable with and with whom you can drop your guard. However, if you already have an adequate system and effective controls in place, you take the personal element out of the equation and everyone understands that we are all just doing our jobs.
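For examiners who want to turn these reminders into testable procedures, here is a minimal sketch of how the two red flags discussed above, credential sharing and unusually heavy downloads, might be surfaced from a client's access logs. It assumes a hypothetical CSV export with columns named user, workstation, and bytes_downloaded, and the thresholds are purely illustrative; a real review would be tuned to the client's environment.

```python
# Minimal sketch: flag two insider-threat red flags in an access-log export.
# Assumes a hypothetical CSV with columns: user, workstation, bytes_downloaded.
# Thresholds and field names are illustrative, not prescriptive.
import csv
from collections import defaultdict

workstations = defaultdict(set)   # user -> set of workstations used
downloaded = defaultdict(int)     # user -> total bytes downloaded

with open("access_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        workstations[row["user"]].add(row["workstation"])
        downloaded[row["user"]] += int(row["bytes_downloaded"])

# Red flag 1: one account active from many workstations (possible credential sharing).
for user, stations in workstations.items():
    if len(stations) > 3:
        print(f"Review: {user} logged in from {len(stations)} workstations")

# Red flag 2: download volume far above the population average (the "Snowden" signal).
if downloaded:
    avg = sum(downloaded.values()) / len(downloaded)
    for user, total in downloaded.items():
        if total > 10 * avg:
            print(f"Review: {user} downloaded {total} bytes vs. average {avg:.0f}")
```

Even a rough pass like this gives the examiner something concrete to discuss with the client's security staff before deciding whether a deeper log review is warranted.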

Sock Puppets

The issue of falsely claimed identity in all its myriad forms has shadowed the Internet since the beginning of the medium.  Anyone who has used an on-line dating or auction site is all too familiar with the problem; anyone can claim to be anyone.  Likewise, confidence games, on or off-line, involve a range of fraudulent conduct committed by professional con artists against unsuspecting victims. The victims can be organizations, but more commonly are individuals. Con artists have classically acted alone, but now, especially on the Internet, they usually group together in criminal organizations for increasingly complex criminal endeavors. Con artists are skilled marketers who can develop effective marketing strategies, which include a target audience and an appropriate marketing plan: crafting promotions, product, price, and place to lure their victims. Victimization is achieved when this marketing strategy is successful. And falsely claimed identities are always an integral component of such schemes, especially those carried out on-line.

Such marketing strategies generally involve a specific target market, which is usually made up of affinity groups consisting of individuals grouped around an objective, bond, or association, like Facebook or LinkedIn Group users. Affinity groups may, therefore, include those associated through age, gender, religion, social status, geographic location, business or industry, hobbies or activities, or professional status. Perpetrators gain their victims' trust by affiliating themselves with these groups. Historically, various media of communication have been used to make the initial approach to the victim. In most cases, today's fraudulent schemes begin with an offer or invitation to connect through the Internet or a social network, but the invitation can come by mail, telephone, newspapers and magazines, television, radio, or door-to-door channels.

Once the mark receives and accepts the offer to connect, some sort of response or acceptance is requested. The response will typically include (in the case of Facebook or LinkedIn) clicking on a link included in a fraudulent follow-up post to visit a specified web site or to call a toll-free number.

According to one of Facebook’s own annual reports, up to 11.2 percent of its accounts are fake. Considering the world’s largest social media company has 1.3 billion users, that means up to 140 million Facebook accounts are fraudulent; these users simply don’t exist. With 140 million inhabitants, the fake population of Facebook would be the tenth-largest country in the world. Just as Nielsen ratings on television sets determine different advertising rates for one television program versus another, on-line ad sales are determined by how many eyeballs a Web site or social media service can command.

Let's say a shyster wants 3,000 followers on Twitter to boost the credibility of her scheme? They can be hers for $5. Let's say she wants 10,000 satisfied customers on Facebook for the same reason? No problem, she can buy them on several websites for around $1,500. A million new friends on Instagram can be had for only $3,700. Whether the con artist wants favorites, likes, retweets, up votes, or page views, all are for sale on Web sites like Swenzy, Fiverr, and Craigslist. These fraudulent social media accounts can then be freely used to falsely endorse a product, service, or company, all for just a small fee. Most of the work of fake account setup is carried out in the developing world, in places such as India and Bangladesh, where actual humans may control the accounts. In other locales, such as Russia, Ukraine, and Romania, the entire process has been scripted by computer bots, programs that carry out pre-encoded automated instructions, such as "click the Like button," repeatedly, each time using a different fake persona.

Just as horror movie shape-shifters can physically transform themselves from one being into another, these modern screen shifters have their own magical powers, and criminal organizations are eager to employ them, studying their techniques and deploying them against easy marks for massive profit. In fact, many of these clicks are done for the purposes of "click fraud." Businesses pay companies such as Facebook and Google every time a potential customer clicks on one of the ubiquitous banner ads or links online, and organized crime groups have figured out how to game the system to drive profits their way via so-called ad networks, which capitalize on all those extra clicks.
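As an illustration of how an examiner or analyst might begin to look for click fraud in practice, the sketch below flags source addresses whose click volume dwarfs the rest of the traffic. The tallies, field names, and five-times-the-median threshold are assumptions made for the example, not a description of how any particular ad network actually screens its traffic.

```python
# Minimal sketch of one click-fraud red flag: an IP address that generates
# far more ad clicks than the rest of the traffic. Assumes a hypothetical
# dictionary of ip -> click tallies; the 5x-median threshold is illustrative only.
from statistics import median

def flag_click_outliers(click_counts, multiple=5):
    """Return IPs whose click volume exceeds `multiple` times the median."""
    counts = list(click_counts.values())
    if not counts:
        return []
    baseline = median(counts)
    return [ip for ip, n in click_counts.items() if n > multiple * max(baseline, 1)]

# Usage with made-up tallies (documentation-range IP addresses):
tallies = {"203.0.113.7": 4800, "198.51.100.2": 35, "192.0.2.9": 41}
print(flag_click_outliers(tallies))   # -> ['203.0.113.7']
```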

Painfully aware of this, social media companies have attempted to cut back on the number of fake profiles. As a result, thousands and thousands of identities have disappeared overnight from among the followers of many well-known celebrities and popular websites. If Facebook has 140 million fake profiles, there is no way they could have been created manually one by one. The process of creation is called sock puppetry, a reference to the children's toy puppet created when a hand is inserted into a sock to bring the sock to life. In the online world, organized crime groups create sock puppets by combining computer scripting, web automation, and social networks to create legions of online personas. This can be done easily and cheaply enough to allow those with deceptive intentions to create hundreds of thousands of fake online citizens. One need only consult a readily available on-line directory of the most common names in any country or region. Have a scripted bot merely pick a first name and a last name, then choose a date of birth and let the bot sign up for a free e-mail account. Next, scrape on-line photo sites such as Picasa, Instagram, Facebook, Google, and Flickr to choose an age-appropriate image to represent the new sock puppet.

Armed with an e-mail address, name, date of birth, and photograph, the fraudster signs up the fake persona for an account on Facebook, LinkedIn, Twitter, or Instagram. As a last step, the puppets are taught how to talk by scripting them to reach out and send friend requests, repost other people's tweets, and randomly like things they see online. The bots can even communicate and cross-post with one another. Before long, the fraudster has thousands of sock puppets at his or her disposal, to use as he or she sees fit. It is these armies of sock puppets that criminals use as key constituents in their phishing attacks, to fake on-line reviews, to trick users into downloading spyware, and to commit a wide variety of financial frauds, all based on misplaced trust and falsely claimed identity.
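For the examiner, the very uniformity of scripted puppets is what makes them detectable. The sketch below shows one simple red-flag test, grouping accounts that post byte-for-byte identical text, which legitimate users rarely do at scale and scripted puppets do constantly. The data format and the ten-account threshold are illustrative assumptions only.

```python
# Minimal sketch of one sock-puppet red flag: many "different" accounts posting
# identical text. Assumes a hypothetical list of (account, post_text) pairs;
# the threshold of 10 accounts per identical message is illustrative.
from collections import defaultdict

def identical_post_clusters(posts, min_accounts=10):
    """Group accounts by identical post text and return suspicious clusters."""
    by_text = defaultdict(set)
    for account, text in posts:
        by_text[text.strip().lower()].add(account)
    return {text: accounts for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}
```

A cluster flagged this way is only a lead, not proof; the examiner would still need to look at account creation dates, profile photos, and posting times before drawing any conclusion.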

The fraudster's environment has changed and continues to change, from a face-to-face physical encounter to an anonymous on-line encounter in the comfort of the victim's own home. While some consumers are unaware that a weapon is pointed virtually right at them, others are victims who struggle to balance the many genuine benefits offered by advanced technology against the painful consequences of its abuse. The goal of law enforcement has not changed over the years: to block the roads and close the loopholes used by perpetrators, even as perpetrators continue to search for yet another avenue to commit fraud in an environment in which they can thrive. Today, the challenge for CFEs, law enforcement, and government officials is to stay on the cutting edge of technology, which requires access to constantly updated resources and communication between organizations; the ability to gather information; and the capacity to identify and analyze trends, institute effective policies, and detect and deter fraud through restitution and prevention measures.

Now is the time for CFEs and other assurance professionals to continuously reevaluate all we take for granted in the modern technical world and to increasingly question our ever-growing dependence on the whole range of ubiquitous machines whose potential to facilitate fraud so few of our clients and the general public understand.

Industrialized Theft

In at least one way you have to hand it to Ethically Challenged, Inc.: it sure knows how to innovate, and the recent spate of ransomware attacks proves it also knows how to make what's old new again. Although society's criminal opponents engage in constant business process improvement, they've proven again and again that they don't need to commit each new crime from scratch. In the age of Moore's law, the core tasks of fraud have been readily automated and can run in the background, at scale, without the need for significant human intervention. Crime automations like the WannaCry virus allow transnational organized crime groups to gain the same efficiencies and cost savings that multinational corporations obtained by leveraging technology to carry out their core business functions. That's why today it's possible for hackers to rob not just one person at a time but 100 million or more, as the world saw with the Sony PlayStation and Target data breaches and now with the WannaCry worm.

As covered in our Chapter's training event of last year, 'Investigating on the Internet', exploit tool kits like Blackhole and SpyEye commit crime "automagically" by minimizing the need for human labor, thereby dramatically reducing criminal costs. They also allow hackers to pursue the "long tail" of opportunity, committing millions of thefts in amounts so small that (in many cases) victims don't report them and law enforcement has no way to track them. While high-value targets (companies, nations, celebrities, high-net-worth individuals) are specifically and individually targeted, the majority of the public is hacked by automated, scripted computer malware, one large digital fishing net that scoops up anything and everything online with a vulnerability that can be exploited. Given these obvious advantages, as of 2016 an estimated 61 percent of all online attacks were launched by fully automated crime tool kits, returning phenomenal profits for the Dark Web overlords who expertly orchestrate them. Modern crime has been reduced and distilled to a software program that anybody can run at tremendous profit.

Not only can botnets and other tools be used over and over to attack and offend, but they're now enabling the commission of much more sophisticated crimes such as extortion, blackmail, and shakedown rackets. In an updated version of the old $500 million Ukrainian Innovative Marketing Solutions "virus detected" scam, fraudsters have unleashed a new torrent of malware that holds the victim's computer hostage until a ransom is paid and the scammer provides an unlock code to restore access to the victim's own files. Ransomware attack tools are included in a variety of Dark Net tool kits, such as WannaCry and Gameover Zeus. According to the ACFE, there are several varieties of this scam, including one that purports to come from law enforcement. Around the world, users who become infected with the Reveton Trojan suddenly have their computers lock up and their full screens covered with a notice, allegedly from the FBI. The message, bearing an official-looking large, full-color FBI logo, states that the user's computer has been locked for reasons such as "violation of the federal copyright law against illegally downloaded material" or because "you have been viewing or distributing prohibited pornographic content."

In the case of the Reveton Trojan, to unlock their computers, users are informed that they must pay a fine ranging from $200 to $400, only accepted using a prepaid voucher from Green Dot’s MoneyPak, which victims are instructed they can buy at their local Walmart or CVS; victims of WannaCry are required to pay in BitCoin. To further intimidate victims and drive home the fact that this is a serious police matter, the Reveton scammers prominently display the alleged violator’s IP address on their screen as well as snippets of video footage previously captured from the victim’s Webcam. As with the current WannaCry exploit, the Reveton scam has successfully targeted tens of thousands of victims around the world, with the attack localized by country, language, and police agency. Thus, users in the U.K. see a notice from Scotland Yard, other Europeans get a warning from Europol, and victims in the United Arab Emirates see the threat, translated into Arabic, purportedly from the Abu Dhabi Police HQ.

WannaCry is even more pernicious than Reveton, though, in that it actually encrypts all the files on a victim's computer so that they can no longer be read or accessed. Alarmingly, variants of this type of malware often present a ticking-bomb-style countdown clock advising users that they have only forty-eight hours to pay $300 or all of their files will be permanently destroyed. Akin to threatening "if you ever want to see your files alive again," these ransomware programs gladly accept payment in Bitcoin. The message to these victims is no idle threat. Whereas previous ransomware might trick users by temporarily hiding their files, newer variants use strong 256-bit Advanced Encryption Standard cryptography to lock user files so that they become irrecoverable. These types of exploits earn scores of millions of dollars for the criminal programmers who develop and sell them on-line to other criminals.
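Because encrypted files look statistically random, one crude but useful detective control is to watch for a sudden population of files whose byte entropy approaches the theoretical maximum of 8 bits per byte. The sketch below is a minimal illustration of that idea; the paths, sample size, and 7.5 bits-per-byte threshold are assumptions made for the example, and a production control would track changes over time rather than scanning once.

```python
# Minimal sketch of one ransomware red flag: files whose contents are
# near-random (high byte entropy), as mass encryption produces.
# Paths, sample size, and the 7.5 bits/byte threshold are illustrative assumptions.
import math
import os

def byte_entropy(path, sample=65536):
    """Shannon entropy (bits per byte) of the first `sample` bytes of a file."""
    with open(path, "rb") as f:
        data = f.read(sample)
    if not data:
        return 0.0
    freq = [0] * 256
    for b in data:
        freq[b] += 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in freq if c)

def suspicious_files(root, threshold=7.5):
    """Walk a directory tree and list files with near-maximal entropy."""
    hits = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            try:
                if byte_entropy(p) > threshold:
                    hits.append(p)
            except OSError:
                pass   # unreadable file; skip
    return hits
```

Note that legitimately compressed or encrypted files (zip archives, media, disk images) will also score high, so the signal of interest is a sudden jump in the number of such files, not their mere presence.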

Automated ransomware tools have even migrated to mobile phones, affecting Android handset users in certain countries. Not only have individuals been harmed by the ransomware scourge, so too have companies, nonprofits, and even government agencies, the most infamous of which was the Swansea Police Department in Massachusetts some years back, which became infected when an employee opened a malicious e-mail attachment. Rather than losing its irreplaceable police case files to the scammers, the agency was forced to open a Bitcoin account and pay a $750 ransom to get its files back. The police lieutenant told the press he had no idea what a Bitcoin was or how the malware functioned until his department was struck in the attack.

As the ACFE and other professional organizations have told us, cybercrime has evolved highly sophisticated methods of operation, selling everything from methamphetamine to live-streamed child sexual abuse online. It has rapidly adopted existing tools of anonymity, such as the Tor browser, to establish Dark Net shopping malls, and criminal consulting services such as hacking and murder for hire are available at the click of a mouse. Untraceable and anonymous digital currencies, such as Bitcoin, are breathing new life into the underground economy and allowing for the rapid exchange of goods and services. With these additional revenues, cyber criminals are becoming more disciplined and organized, significantly increasing the sophistication of their operations. Business models are being automated wherever possible to maximize profits, and botnets that threaten legitimate global commerce can easily be trained on any target of the scammer's choosing. Fundamentally, it's been done. As WannaCry demonstrates, the computing and Internet-based crime machine has been built. With these systems in place, the depth and global reach of cybercrime mean that crime now scales, and it scales exponentially. Yet, as bad as this threat is today, it is about to become much worse as we hand such scammers billions more targets to attack in the age of ubiquitous computing and the Internet of Things.

The Caveat Emptor World of Cyber Security


Over the last five years it's become apparent that everything has changed on the internet: business activities, information technology, the communications environment, and the threat landscape, most especially as it bears on the corporate big data of our clients (Target recently and, I'm sure, many others to come). Retailers and hackers have become locked in a cyber arms race, and retail customers are the losers. The press and media at all levels are full of reports of government and business systems being infiltrated and thousands of terabytes of big data stolen. Consumers' computers, wireless modems and, increasingly, cell phones are being compromised, and it seems the fabric of cyberspace itself is under attack, with even nation states demonstrating their ability to take control of the internet seemingly at will.

All of this is the result of a number of fundamental shifts in technology, and those shifts require an equally fundamental shift in attitudes toward security by concerned players at all levels. This is because information technology has evolved for our clients from purely a means of system automation into an essential characteristic of society itself, an entity called cyberspace. Seemingly before our eyes, the kind of quality, reliability and availability traditionally associated only with power and water utilities has become absolutely essential for the protection and continued operation of the technology used to deliver every type of government and business service to an ever-expanding user public. The critical information service flows of cyberspace have become as essential to the continuity of our daily survival as water and power grid services.

Cloud computing, coupled with the internal big data on customers held by retail and financial services companies, enables individuals and organizations to access application services and data from anywhere via a web interface; this is essentially an information-service model of delivery with attitude. I used to produce this blog by running Microsoft FrontPage on a PC; now I use WordPress in the cloud. The economies possible through use of the cloud rather than internal IT solutions will inevitably see the majority of businesses (and governments) running in the cloud. This model substantially changes both the ways in which organizations deliver their IT functions and the ways in which they will have to manage the security of their systems.

As fraud examiners conducting fraud risk assessments and investigating cyber based incidents, we need to be aware that the security standards employed today by the bulk of our clients (and currently arrayed to protect their constantly growing hoards of customer and financial data) were developed in a world in which computers were subject to the frauds and other criminal activities perpetrated primarily by individuals inside, and to a lesser extent, outside their organizations.  That era is now long past.  What has changed is the rapid increase in organized cyber crime through the emergence of robot networks (bot-nets) enabling network penetrations and related criminal activity to be conducted on an unprecedented global scale.  These bot-networks can be used as force multipliers to deliver massive denial of service attacks on targeted businesses, essentially cutting the victims off from the global internet.

Cyber crime is now arguably a bigger issue than illegal drugs, given its potential to directly affect the lives (and livelihoods) of every customer of every business and of every citizen of every country. Yet the growing problem in cyberspace doesn't come from the threats alone but from the combination of threats and vulnerabilities. As fraud professionals on the front line, we need to make our clients aware that their vulnerabilities are neither more nor less than byproducts of the currently low or nonexistent level of quality in the personnel and products they use to provide themselves with cyber security. The establishment and official recognition of cyber security as a profession is long overdue, and recent thefts of big data from a host of companies prove it. We are rapidly leaving the era when it was cheaper for individual companies (and governments) to pay the cost of occasional cyber breaches than to invest in adequate security (the credit card industry and the European smart card chip are a case in point). It's no longer acceptable for professionals, trades people, products, and services that are critical to the continuing success of the vital cyber enterprise to operate on a basis of caveat emptor.

In order to assist security professionals in the fight against the ever-rising tide of cyber crime, fraud examiners and other control assurance professionals need to understand, for each of our client types:

–their business, the related strategic cyber objectives, the market, the stakeholders and what information (especially big data related to customers and products) the enterprise uses and shares;

–the business information flows, relationships and dependencies;

–the value of the business information in financial, strategic and operational terms;

–the impact of failure in information management (corruption, loss, or disclosure) and of failure in the service provided;

–what it takes to recover to a manageable position in the event of failure or cyber fraud and to understand where such a recovery is not possible (definition of loss boundary conditions).

With this type of information in hand, the fraud examiner can hope to realistically assist the security professional in the definition of risk profiles and related fraud scenarios, with the objective of moving toward the creation of appropriate cyber security threat countermeasures; the effort is guaranteed to add value.
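As a purely illustrative aid, the sketch below shows one way an examiner might capture the items in the list above in a simple, structured form, so that client risk profiles can be compared and revisited over time. The field names and the 1-to-5 scoring scale are assumptions made for the example, not a prescribed methodology.

```python
# Minimal sketch of a structured cyber-fraud risk profile built from the
# understanding items listed above. Field names and 1-5 scales are
# illustrative assumptions, not a prescribed methodology.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CyberFraudRiskProfile:
    client: str
    key_information_assets: List[str]       # e.g., customer big data, product data
    information_flows: List[str]            # relationships and dependencies
    information_value: int                  # 1 (low) to 5 (critical)
    impact_of_failure: int                  # corruption, loss, or disclosure, 1-5
    recoverability: int                     # 1 (easily recovered) to 5 (loss boundary)
    fraud_scenarios: List[str] = field(default_factory=list)

    def priority_score(self) -> int:
        """Crude ranking aid: higher scores suggest scenarios to examine first."""
        return self.information_value * self.impact_of_failure * self.recoverability

# Usage with made-up values:
profile = CyberFraudRiskProfile(
    client="Example Retailer",
    key_information_assets=["customer payment data", "loyalty program records"],
    information_flows=["third-party payment processor", "cloud CRM provider"],
    information_value=5, impact_of_failure=4, recoverability=3,
    fraud_scenarios=["insider exfiltration of card data"],
)
print(profile.priority_score())   # -> 60
```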

Please make plans to join us on April 16-17th, 2014 for the Central Virginia Chapter’s seminar on the Topic of Introduction to Fraud Examination for 16 CPE ($200.00 for early Registration)! For details see our Prior Post entitled, “Save the Date”!