A CDC for Cyber

I remember reading somewhere a few years back that Microsoft had commissioned a report which recommended that the U.S. government set up an entity akin to its Centers for Disease Control but for cyber security. An intriguing idea. The trade press talks about malware, computer viruses, and infections to describe self-replicating malicious code in the same way doctors talk about metastasizing cancers or the flu; likewise, as with public health, rather than focusing on prevention and detection, we often blame those who have become infected and try to retrospectively arrest/prosecute (cure) those responsible (the cancer cells, hackers) long after the original harm is done. Regarding cyber, what if we extended this paradigm and instead viewed global cyber security as an exercise in public health?

As I recall, the report pointed out that organizations such as the Centers for Disease Control in Atlanta and the World Health Organization in Geneva have over decades developed robust systems and objective methodologies for identifying and responding to public health threats; structures and frameworks far more developed than anything existing in today’s cyber-security community. Given the many parallels between communicable human diseases and those affecting today’s technologies, there is also much fraud examiners and security professionals can learn from the public health model, an adaptable system capable of responding to an ever-changing array of pathogens around the world.

With cyber as with matters of public health, individual actions can only go so far. It’s great if an individual has excellent techniques of personal hygiene, but if everyone in that person’s town has the flu, eventually that individual will probably succumb as well. The comparison is relevant to the world of cyber threats. Individual responsibility and action can make an enormous difference in cyber security, but ultimately the only hope we have as a nation in responding to rapidly propagating threats across this planetary matrix of interconnected technologies is to construct new institutions to coordinate our response. A trusted, international cyber World Health Organization could foster cooperation and collaboration across companies, countries, and government agencies, a crucial step required to improve the overall public health of the networks driving the critical infrastructures in both our online and our off-line worlds.

Such a proposed cyber CDC could go a long way toward counteracting the technological risks our country faces today. It could fulfill many roles that are carried out today only on an ad hoc basis, if at all, including:

• Education — providing members of the public with proven methods of cyber hygiene to protect themselves;
• Network monitoring — detection of infection and outbreaks of malware in cyberspace;
• Epidemiology — using public health methodologies to study digital cyber disease propagation and provide guidance on response and remediation;
• Immunization — helping to ‘vaccinate’ companies and the public against known threats through software patches and system updates;
• Incident response — dispatching experts as required and coordinating national and global efforts to isolate the sources of online infection and treat those affected.

While there are many organizations, both governmental and non-governmental, that focus on the above tasks, no single entity owns them all. It is through these gaps in effort and coordination that cyber risks continue to mount. An epidemiological approach to our growing technological risks is required to get to the source of malware infections, as was the case in the fight against malaria. For decades, all medical efforts focused in vain on treating the disease in those already infected. It wasn’t until epidemiologists realized the malady was spread by mosquitoes breeding in still pools of water that genuine progress was made in the fight against the disease. By draining the pools where mosquitoes and their larvae grow, epidemiologists deprived them of an important breeding ground, thus reducing the spread of malaria. What stagnant pools can we drain in cyberspace to achieve a comparable result? That question remains the unanswered challenge.

There is another major challenge a cyber CDC would face: most of those who are sick have no idea they are walking around infected, spreading disease to others. Whereas malaria patients develop fever, sweats, nausea, and difficulty breathing, important symptoms of their illness, infected computer users may be completely asymptomatic. This significant difference is evidenced by the fact that the overwhelming majority of those with infected devices have no idea there is malware on their machines nor that they might have even joined a botnet army. Even in the corporate world, with the average time to detection of a network breach now at 210 days, most companies have no idea their most prized assets, whether intellectual property or a factory’s machinery, have been compromised. The only thing worse than being hacked is being hacked and not knowing about it. If you don’t know you’re sick, how can you possibly get treatment? Moreover, how can we prevent digital disease propagation if carriers of these maladies don’t realize they are infecting others?

Addressing these issues would be a key task for any proposed cyber CDC and fundamental to the future safety of our communities and critical information infrastructures. Cyber-security researchers have pointed out the obvious Achilles’ heel of the modern technology-infused world: today everything is either run by computers (or soon will be), and everything relies on those computers continuing to work. The challenge is that we must have some way of carrying on even if all the computers fail. Were our information systems to crash on a mass scale, there would be no trading on financial markets, no taking money from ATMs, no telephone network, and no pumping gas. If these core building blocks of our society were to suddenly give way, what would humanity’s backup plan be? The answer, simply, is that we don’t have one.

Complicating all this from a law enforcement and fraud investigation perspective is that black hats generally benefit from technology long before defenders and investigators ever do. The successful ones have nearly unlimited budgets and don’t have to deal with internal bureaucracies, approval processes, or legal constraints. But there are other systemic issues that give criminals the upper hand, particularly around jurisdiction and international law. In a matter of minutes, the perpetrator of an online crime can virtually visit six different countries, hopping from server to server and continent to continent in an instant. But what about the police who must follow the digital evidence trail to investigate the matter? As with all government activities, policies, procedures, and regulations must be followed. Trans-border cyber-attacks raise serious jurisdictional issues, not just for an individual police department, but for the entire institution of policing as currently formulated. A cop in Baltimore has no authority to compel an ISP in Paris to provide evidence, nor can he make an arrest on the Right Bank. That can only be done by request, government to government, often via mutual legal assistance treaties. The abysmally slow pace of international law means it commonly takes years for police to get evidence from overseas (years in a world in which digital evidence can be destroyed in seconds). Worse, most countries still do not even have cyber-crime laws on the books, meaning that criminals can act with impunity, a gap that makes a coordinating entity like a cyber CDC all the more valuable to the U.S. specifically and to the world in general.

Experts have pointed out that we’re engaged in a technological arms race, an arms race between people who are using technology for good and those who are using it for ill. The challenge is that nefarious uses of technology are scaling exponentially in ways that our current systems of protection have simply not matched.  The point is, if we are to survive the progress offered by our technologies and enjoy their benefits, we must first develop adaptive mechanisms of security that can match or exceed the exponential pace of the threats confronting us. On this most important of imperatives, there is unambiguously no time to lose.

Help for the Little Guy

It’s clear to the news media and to every aware assurance professional that today’s cybercriminals are more sophisticated than ever in their operations and attacks. They’re always on the lookout for innovative ways to exploit vulnerabilities in every global payment system and in the cloud.

According to the ACFE, more consumer records were compromised in 2015-16 than in the previous four years combined. Data breach statistics from this year (2017) are projected to be even grimmer due to the growth of increasingly sophisticated attack methods, such as complex malware infections and system vulnerability exploits, which grew tenfold in 2016. With attacks coming in many different forms and through many different channels, consumers, businesses, and financial institutions (often against their will) are being forced to gain a better understanding of how criminals operate, especially in ubiquitous channels like social networks. They then have a better chance of mitigating the risks and recognizing attacks before severe damage is done.

As your Chapter has pointed out over the years in this blog, understanding the mechanics of data theft and the conversion process of stolen data into cash can help organizations of all types better anticipate the exact ways criminals may exploit the system, so that organizations can put appropriate preventive measures in place. Classic examples of such criminal activity include masquerading as a trustworthy entity such as a bank or credit card company. These phishers send e-mails and instant messages that prompt users to reply with sensitive information such as usernames, passwords, and credit card details, or to enter the information at a rogue web site. Other similar techniques include using text messaging (SMSishing or smishing), voice mail (vishing), or today’s flood of offshore spam calls to lure victims into giving up sensitive information. Whaling is phishing targeted at high-worth accounts or individuals, often identified through social networking sites such as LinkedIn or Facebook. While it’s impossible to anticipate or prevent every attack, one way to stay a step ahead of these criminals is to have a thorough understanding of how such fraudsters operate their enterprises.

Although most cyber breaches reported recently in the news have struck large companies such as Equifax and Yahoo, the ACFE tells us that small and mid-sized businesses suffer a far greater number of devastating cyber incidents. These breaches involve organizations of every industry type; all that’s required for vulnerability is that they operate network servers attached to the internet. Although the number of breached records a small to medium-sized business controls is in the hundreds or thousands, rather than in the millions, the cost of these breaches can be higher for the small business because it may not be able to effectively address such incidents on its own. Many small businesses have limited or no resources committed to cybersecurity, and many don’t employ any assurance professionals apart from the small accounting firms performing their annual financial audit. For these organizations, the key questions are “Where should we focus when it comes to cybersecurity?” and “What are the minimum controls we must have to protect the sensitive information in our custody?” Fraud examiners and forensic accountants working with client attorneys who assist small businesses can help answer these questions by checking that those organizations implement a few vital cybersecurity controls.

First, regardless of their industry, small businesses must ensure their network perimeter is protected. The first step is identifying vulnerabilities by performing an external network scan at least quarterly. A small business can either hire an outside company to perform these scans or, if it has a small in-house or contracted IT function, license off-the-shelf software to run the scans itself. Moreover, small businesses need a process in place to remedy the identified critical, high, and medium vulnerabilities within three months of the scan run date, while low vulnerabilities are less of a priority. The fewer vulnerabilities the perimeter network has, the less chance that an external hacker will breach the organization’s network.
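The remediation timelines described above can be tracked with a simple script. This is a minimal sketch, not a scanner: the finding IDs, severity labels, and remediation windows below are illustrative assumptions (the post specifies roughly three months for critical, high, and medium findings; the one-year window for low findings is a placeholder).

```python
from datetime import date, timedelta

# Assumed remediation windows, in days, by scan-finding severity.
REMEDIATION_DAYS = {"critical": 90, "high": 90, "medium": 90, "low": 365}

def overdue_findings(findings, today):
    """Return IDs of findings whose remediation window has lapsed.

    Each finding is a dict with 'id', 'severity', and 'scan_date'.
    """
    overdue = []
    for f in findings:
        window = timedelta(days=REMEDIATION_DAYS[f["severity"]])
        if today - f["scan_date"] > window:
            overdue.append(f["id"])
    return overdue

# Hypothetical quarterly scan output.
findings = [
    {"id": "CVE-2017-0144", "severity": "critical", "scan_date": date(2017, 1, 5)},
    {"id": "weak-cipher",   "severity": "low",      "scan_date": date(2017, 1, 5)},
]
print(overdue_findings(findings, today=date(2017, 6, 1)))  # ['CVE-2017-0144']
```

Even a lightweight tracker like this gives a small business an objective answer to "are we keeping up with our own remediation policy?" between scans.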

Educating employees about their cybersecurity responsibilities is not a simple check-sheet matter. Smaller businesses not only need help in implementing an effective information security policy, they also need to ensure employees are aware of the policy and of their responsibilities. The policy and training should cover:

–Awareness of phishing attacks;
–Training on ransomware management;
–Travel tips;
–Potential threats of social engineering;
–Password protection;
–Risks of storing sensitive data in the cloud;
–Accessing corporate information from home computers and other personal devices;
–Awareness of tools the organization provides for securely sending emails or sharing large files;
–Protection of mobile devices;
–Awareness of CEO spoofing attacks.

In addition, small businesses should verify employees’ level of awareness by conducting simulation exercises. These can be in the form of a phishing exercise in which organizations themselves send fake emails to their employees to see if they will click on a web link, or a social engineering exercise in which a hired individual tries to enter the organization’s physical location and steal sensitive information such as information on computer screens left in plain sight.
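A phishing simulation is only useful if someone tallies the results and follows up. The sketch below assumes the exercise produces a simple list of (employee, clicked) outcomes; the names and the log format are hypothetical.

```python
# Minimal sketch of scoring a simulated phishing campaign.
def campaign_summary(results):
    """results: list of (employee_name, clicked_link) pairs."""
    clicked = [name for name, clicked_link in results if clicked_link]
    rate = len(clicked) / len(results) if results else 0.0
    return {"click_rate": rate, "needs_training": sorted(clicked)}

results = [("ana", False), ("bo", True), ("cy", False), ("dee", True)]
summary = campaign_summary(results)
print(summary)  # {'click_rate': 0.5, 'needs_training': ['bo', 'dee']}
```

Tracking the click rate across successive campaigns is a simple way to show management whether awareness training is actually working.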

In small organizations, sensitive information tends to proliferate across various platforms and folders. For example, employees’ personal information typically resides in human resources software or with a cloud service provider, but through various downloads and reports, the information can spread to shared drives and folders, laptops, emails, and even cloud folders like Dropbox or Google Drive. Assigned management should check that the organization has identified these sites of proliferation so it has a good handle on the state of all its sensitive information:

–Inventory all sensitive business processes and the related IT systems. Depending on the organization’s industry, this information could include customer information, pricing data, customers’ credit card information, patients’ health information, engineering data, or financial data;
–For each business process, identify an information owner who has complete authority to approve user access to that information;
–Ensure that the information owner periodically reviews access to all the information he or she owns and updates the access list.
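The owner-review step above can be sketched as a simple cross-check of an access list against the information owner's approvals and the current HR roster. All the user names and rosters below are hypothetical.

```python
# Flag accounts on an access list that the information owner has not
# approved, or that belong to staff no longer with the organization.
def stale_access(access_list, approved, active_employees):
    return sorted(
        user for user in access_list
        if user not in approved or user not in active_employees
    )

access_list = {"alice", "bob", "carol"}   # who can reach the data today
approved = {"alice", "carol"}             # owner-approved users
active_employees = {"alice", "bob"}       # current HR roster
print(stale_access(access_list, approved, active_employees))  # ['bob', 'carol']
```

Run periodically, a check like this turns the owner's review from an honor-system exercise into a short, concrete exception list.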

Organizations should make it hard to get to their sensitive data by building layers or network segments. Although the network perimeter is an organization’s first line of defense, the probability of the network being penetrated is today at an all-time high. Management should check whether the organization has built a layered defense to protect its sensitive information. Once the organization has identified its sensitive information, management should work with the IT function to segment those servers that run its sensitive applications. This segmentation will result in an additional layer of protection for these servers, typically by adding another firewall for the segment. Faced with having to penetrate another layer of defense, an intruder may decide to go elsewhere, where less sensitive information is stored.

An organization’s electronic business front door can also be the entrance for fraudsters and criminals. Most of today’s malware enters through the network but proliferates through endpoints such as laptops and desktops. At a minimum, small business management must ensure that all endpoints are running anti-malware/anti-virus software. They should also check that this software’s firewall features are enabled. Moreover, all laptop hard drives should be encrypted.

In addition to making sure their client organizations have implemented these core controls, assurance professionals should advise small business client executives to consider other protective controls:

–Monitor the network. Network monitoring products and services can provide real-time alerts in case there is an intrusion;
–Manage service providers. Organizations should inventory all key service providers and review all contracts for appropriate security, privacy, and data breach notification language;
–Protect smart devices. Increasingly, company information is stored on mobile devices. Several off-the-shelf solutions can manage and protect the information on these devices. Small businesses should ensure they are able to wipe the sensitive information from these devices if they are lost or stolen;
–Monitor activity related to sensitive information. IT management should log activity against sensitive information and retain an audit log so that, if an incident occurs, the logs can be reviewed to evaluate it.
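The audit-log review in the last item can be sketched as a filter over logged events. The field names, the sensitive-file list, and the business-hours window below are assumptions for illustration, not a prescribed log schema.

```python
# Hypothetical sensitive assets an organization has inventoried.
SENSITIVE = {"payroll.xlsx", "customer_cards.db"}

def flag_events(log):
    """Flag access to sensitive files outside assumed business hours (7-19)."""
    flagged = []
    for event in log:
        after_hours = event["hour"] < 7 or event["hour"] >= 19
        if event["file"] in SENSITIVE and after_hours:
            flagged.append((event["user"], event["file"]))
    return flagged

log = [
    {"user": "pat", "file": "payroll.xlsx",      "hour": 23},
    {"user": "lee", "file": "notes.txt",         "hour": 23},
    {"user": "sam", "file": "customer_cards.db", "hour": 10},
]
print(flag_events(log))  # [('pat', 'payroll.xlsx')]
```

Real monitoring products apply far richer rules, but even a filter this simple illustrates why the log must exist before an incident, not after.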

Combined with the controls listed above, these additional controls can help any small business reduce the probability of a data breach. But a security program is only as strong as its weakest link. Through their assurance and advisory work, CFEs and forensic accountants can proactively help identify these weaknesses and suggest ways to strengthen their smaller client organizations’ anti-fraud defenses.

From Inside the Building

By Rumbi Petrozzello, CFE, CPA/CFF
2017 Vice-President – Central Virginia Chapter ACFE

Several months ago, I attended an ACFE session where one of the speakers had worked on the investigation of Edward Snowden. He shared that one of the ways Snowden gained access to some of the National Security Agency (NSA) data he downloaded was through the inadvertent assistance of his supervisor. According to this investigator, Snowden’s supervisor shared his password with Snowden, giving Snowden access to information beyond his own level of authorization. In addition, when the security personnel reviewing employee downloads noticed that Snowden was downloading copious amounts of data, they approached Snowden’s supervisor to ask why this might be the case. The supervisor, while acknowledging the downloads, stated that Snowden wasn’t really doing anything untoward.

At another ACFE session, a speaker shared information with us about how Chelsea Manning was able to download and remove data from a secure government facility. Manning would come to work wearing headphones, listening to music on a Discman. Security would hear the music blasting and scan the CDs. Day after day, it was the same scenario. Manning showed up to work, music blaring. Security staff grew so accustomed to Manning, the Discman, and her CDs that when she came to work through security with a blank CD boldly labelled “LADY GAGA”, security didn’t blink. They should have, because it was that CD and ones like it, later carried home from work, that contained the data she eventually shared with WikiLeaks.

Both these high-profile disasters are notable examples of the bad outcomes arising from a realized internal threat. Both Snowden and Manning worked for organizations that had, and have, more rigorous security procedures and policies in place than most entities. Yet neither Snowden nor Manning needed to perform any magic tricks to sneak data out of the secure sites where the target data was held; it seems that all it took was audacity on the one side and trust and complacency on the other.

When organizations deal with outside parties, such as vendors and customers, they tend to spend a lot of time setting up the structures and systems that will guide how the organization will interact with those vendors and customers. Generally, companies will take these systems of control seriously, if only because of the problems they will have to deal with during annual external audits if they don’t. The typical new employee will spend a lot of time learning what the steps are from the point when a customer places an order through to the point the customer’s payment is received. There will be countless training manuals to which to refer and many a reminder from co-workers who may be negatively impacted if the rookie screws up.

However, this scenario tends not to hold up when it comes to how employees typically share information and interact with each other. This is true despite the elevated risk that a rogue insider represents. Often, when we think about an insider causing harm to a company through fraudulent acts, we tend to imagine a villain, someone we could identify easily because s/he is obviously a terrible person. After all, only a terrible person could defraud their employer. In fact, as the ACFE tells us, the most successful fraudsters are the ones who gain our trust and who, therefore, don’t really have to do too much for us to hand over the keys to the kingdom. As CFEs and Forensic Accountants, we need to help those we work with understand the risks that an insider threat can represent and how to mitigate that risk. It’s important, in advising our clients, to guide them toward the creation of preventative systems of policy and procedure that they sometimes tend to view as too onerous for their employees. Excuses I often hear run along the lines of:

• “Our employees are like family here, we don’t need to have all these rules and regulations”

• “I keep a close eye on things, so I don’t have to worry about all that”

• “My staff knows what they are supposed to do; don’t worry about it.”

Now, if people can easily walk sensitive information out of locations that have documented systems and are known to be high-security operations, can you imagine what they can do at your client organizations? Especially if the employer is assuming that its employees magically know what they are supposed to do? This is the point that we should be driving home with our clients. We should look to address the fact that both trust and complacency in organizations can be problems as well as assets. It’s great to be able to trust employees, but we should also talk to our clients about the fraud triangle and how one aspect of it, pressure, can happen to any staff member, even the most trusted. With that in mind, it’s important to institute controls so that, should pressure arise with an employee, there will be little opportunity open to that employee to act. Both Manning and Snowden have publicly spoken about the pressures they felt that led them to act in the way they did. The reason we even know about them today is that they had the opportunity to act on those pressures.

I’ve spent time consulting with large organizations, often for months at a time. During those times, I got to chat with many members of staff, including security. On a couple of occasions, I forgot and left my building pass at home. Even though I was on a first-name basis with the security staff and had spent time chatting with them about our personal lives, they still asked me for identification and looked me up in the system. I’m sure they thought I was a nice and trustworthy enough person, but they knew to follow procedures and always checked on whether I was still authorized to access the building. The important point is that they, despite knowing me, knew to check and followed through.

Examples of controls employees should be reminded to follow are:

• Don’t share your password with a fellow employee. If that employee cannot access certain information with their own password, either they are not authorized to access that information or they should speak with an administrator to gain the desired access. Sharing a password seems like a quick and easy solution when under time pressures at work, but remind employees that when they share their login information, anything that goes awry will be attributed to them.

• Always follow procedures. Someone looking for an opportunity only needs one.

• When something looks amiss, thoroughly investigate it. Even if someone tells you that all is well, verify that this is indeed the case.

• Explain to staff and management why a specific control is in place and why it’s important. If they understand why they are doing something, they are more likely to see the control as useful and to apply it.

• Schedule training on a regular basis to remind staff of the controls in place and the systems they are to follow. You may believe that staff knows what they are supposed to do, but reminding them reduces the risk of their relying on hearsay and secondhand information. Management is often surprised by the gap between what they think staff knows and what staff really knows.

It should be clear to your clients that they have control over who has access to sensitive information and when and how it leaves their control. It doesn’t take much for an insider to gain access to this information. A face you see smiling at you daily is the face of a person you can grow comfortable with and with whom you can drop your guard. However, if you already have an adequate system and effective controls in place, you take the personal out of the equation and everyone understands that we are all just doing our job.

Sock Puppets

The issue of falsely claimed identity in all its myriad forms has shadowed the Internet since the beginning of the medium.  Anyone who has used an on-line dating or auction site is all too familiar with the problem; anyone can claim to be anyone.  Likewise, confidence games, on or off-line, involve a range of fraudulent conduct committed by professional con artists against unsuspecting victims. The victims can be organizations, but more commonly are individuals. Con artists have classically acted alone, but now, especially on the Internet, they usually group together in criminal organizations for increasingly complex criminal endeavors. Con artists are skilled marketers who can develop effective marketing strategies, which include a target audience and an appropriate marketing plan: crafting promotions, product, price, and place to lure their victims. Victimization is achieved when this marketing strategy is successful. And falsely claimed identities are always an integral component of such schemes, especially those carried out on-line.

Such marketing strategies generally involve a specific target market, which is usually made up of affinity groups consisting of individuals grouped around an objective, bond, or association, like Facebook or LinkedIn Group users. Affinity groups may, therefore, include those associated through age, gender, religion, social status, geographic location, business or industry, hobbies or activities, or professional status. Perpetrators gain their victims’ trust by affiliating themselves with these groups. Historically, various communication media have been used to make the initial lure. In most cases, today’s fraudulent schemes begin with an offer or invitation to connect through the Internet or a social network, but the invitation can come by mail, telephone, newspapers and magazines, television, radio, or door-to-door channels.

Once the mark receives and accepts the offer to connect, some sort of response or acceptance is requested. The response will typically include (in the case of Facebook or LinkedIn) clicking on a link included in a fraudulent follow-up post to visit a specified web site or to call a toll-free number.

According to one of Facebook’s own annual reports, up to 11.2 percent of its accounts are fake. Considering the world’s largest social media company has 1.3 billion users, that means up to 140 million Facebook accounts are fraudulent; these users simply don’t exist. With 140 million inhabitants, the fake population of Facebook would be the tenth-largest country in the world. Just as Nielsen ratings on television sets determine different advertising rates for one television program versus another, on-line ad sales are determined by how many eyeballs a Web site or social media service can command.

Let’s say a shyster wants 3,000 followers on Twitter to boost the credibility of her scheme. They can be hers for $5. Let’s say she wants 10,000 satisfied customers on Facebook for the same reason. No problem, she can buy them on several websites for around $1,500. A million new friends on Instagram can be had for only $3,700. Whether the con artist wants favorites, likes, retweets, up votes, or page views, all are for sale on Web sites like Swenzy, Fiverr, and Craigslist. These fraudulent social media accounts can then be freely used to falsely endorse a product, service, or company, all for just a small fee. Most of the work of fake account setup is carried out in the developing world, in places such as India and Bangladesh, where actual humans may control the accounts. In other locales, such as Russia, Ukraine, and Romania, the entire process has been scripted by computer bots, programs that carry out pre-encoded automated instructions, such as “click the Like button,” repeatedly, each time using a different fake persona.

Just as horror movie shape-shifters can physically transform themselves from one being into another, these modern screen shifters have their own magical powers, and organizations of men are eager to employ them, studying their techniques and deploying them against easy marks for massive profit. In fact, many of these clicks are done for the purposes of “click fraud.” Businesses pay companies such as Facebook and Google every time a potential customer clicks on one of the ubiquitous banner ads or links online, but organized crime groups have figured out how to game the system to drive profits their way via so-called ad networks, which capitalize on all those extra clicks.

Painfully aware of this, social media companies have attempted to cut back on the number of fake profiles. As a result, thousands and thousands of identities have disappeared overnight from among the followers of many well-known celebrities and popular websites. If Facebook has 140 million fake profiles, there is no way they could have been created manually one by one. The process of creation is called sock puppetry, a reference to the children’s toy puppet created when a hand is inserted into a sock to bring the sock to life. In the online world, organized crime groups create sock puppets by combining computer scripting, web automation, and social networks to create legions of online personas. This can be done easily and cheaply enough to allow those with deceptive intentions to create hundreds of thousands of fake online citizens. One only needs to consult a readily available on-line directory of the most common names in any country or region. Have a scripted bot merely pick a first name and a last name, then choose a date of birth, and let the bot sign up for a free e-mail account. Next, scrape on-line photo sites such as Picasa, Instagram, Facebook, Google, and Flickr to choose an age-appropriate image to represent the new sock puppet.

Armed with an e-mail address, name, date of birth, and photograph, you sign up your fake persona for an account on Facebook, LinkedIn, Twitter, or Instagram. As a last step, you teach your puppets how to talk by scripting them to reach out and send friend requests, repost other people’s tweets, and randomly like things they see online. Your bots can even communicate and cross-post with one another. Before fraudsters know it, they have thousands of sock puppets at their disposal to use as they see fit. It is these armies of sock puppets that criminals use as key constituents in their phishing attacks, to fake on-line reviews, to trick users into downloading spyware, and to commit a wide variety of financial frauds, all based on misplaced and falsely claimed identity.

The fraudster’s environment has changed, and keeps changing, from a face-to-face physical encounter to an anonymous on-line encounter in the comfort of the victim’s own home. While some consumers are unaware that a weapon sits virtually right in front of them, others are victims who struggle to balance the many wonderful benefits of advanced technology against the painful effects of its consequences. The goal of law enforcement has not changed over the years: to block the roads and close the loopholes available to perpetrators, even as perpetrators continue to seek yet another avenue for committing fraud in an environment in which they can thrive. Today, the challenge for CFEs, law enforcement, and government officials is to stay on the cutting edge of technology, which requires access to constantly updated resources and communication between organizations; the ability to gather information; and the capacity to identify and analyze trends, institute effective policies, and detect and deter fraud through restitution and prevention measures.

Now is the time for CFEs and other assurance professionals to continuously reevaluate all we take for granted in the modern technical world and to increasingly question our ever-growing dependence on the whole range of ubiquitous machines whose potential to facilitate fraud so few of our clients and the general public understand.

Industrialized Theft

In at least one way you have to hand it to Ethically Challenged, Inc.: it sure knows how to innovate, and the recent spate of ransomware attacks proves it also knows how to make what’s old new again. Although society’s criminal opponents engage in constant business process improvement, they’ve proven again and again that they needn’t commit each new crime from scratch. In the age of Moore’s law, criminal tasks are readily automated and can run in the background, at scale, without the need for significant human intervention. Crime automations like the WannaCry worm allow transnational organized crime groups to gain the same efficiencies and cost savings that multinational corporations obtained by leveraging technology to carry out their core business functions. That’s why today it’s possible for hackers to rob not just one person at a time but 100 million or more, as the world saw with the Sony PlayStation and Target data breaches and now with WannaCry.

As covered in our Chapter’s training event of last year, ‘Investigating on the Internet’, exploit tool kits like Blackhole and SpyEye commit crime “automagically” by minimizing the need for human labor, thereby dramatically reducing criminal costs. They also allow hackers to pursue the “long tail” of opportunity, committing millions of thefts in amounts so small that, in many cases, victims don’t report them and law enforcement has no way to track them. While high-value targets (companies, nations, celebrities, high-net-worth individuals) are specifically and individually targeted, the majority of the public is hacked by automated, scripted computer malware, one large digital fishing net that scoops up anything and everything online with a vulnerability that can be exploited. Given these obvious advantages, as of 2016 an estimated 61 percent of all online attacks were launched by fully automated crime tool kits, returning phenomenal profits for the Dark Web overlords who expertly orchestrate them. Modern crime has been reduced and distilled to a software program that anybody can run at tremendous profit.
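To see why the “long tail” pays, consider some rough, purely hypothetical numbers. Many small thefts, each too small for the victim to bother reporting, add up quickly when automation makes the marginal cost of one more victim effectively zero:

```python
# Hypothetical figures for illustration only; not from any actual case.
victims_per_campaign = 5_000_000
theft_per_victim = 2.50           # small enough that many victims never report it
marginal_cost_per_victim = 0.001  # automation makes each extra victim nearly free

gross = victims_per_campaign * theft_per_victim
cost = victims_per_campaign * marginal_cost_per_victim
print(f"gross: ${gross:,.0f}, cost: ${cost:,.0f}, profit: ${gross - cost:,.0f}")
# gross: $12,500,000, cost: $5,000, profit: $12,495,000
```

A bank robber risks prison for a five-figure haul; the scripted long tail quietly clears eight figures per campaign with almost no per-victim effort.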

Not only can botnets and other tools be used over and over to attack and offend, but they’re now enabling the commission of much more sophisticated crimes such as extortion, blackmail, and shakedown rackets. In an updated version of the old $500 million Ukrainian Innovative Marketing Solutions “virus detected” scam, fraudsters have unleashed a new torrent of malware that holds the victim’s computer hostage until a ransom is paid and the scammer provides an unlock code to regain access to the victim’s own files. Ransomware attack tools are included in a variety of Dark Net tool kits, such as those behind WannaCry and Gameover Zeus. According to the ACFE, there are several varieties of this scam, including one that purports to come from law enforcement. Around the world, users who become infected with the Reveton Trojan suddenly have their computers lock up and their full screens covered with a notice, allegedly from the FBI. The message, bearing an official-looking large, full-color FBI logo, states that the user’s computer has been locked for reasons such as “violation of the federal copyright law against illegally downloaded material” or because “you have been viewing or distributing prohibited pornographic content.”

In the case of the Reveton Trojan, to unlock their computers, users are informed that they must pay a fine ranging from $200 to $400, payable only with a prepaid voucher from Green Dot’s MoneyPak, which victims are instructed they can buy at their local Walmart or CVS; victims of WannaCry are required to pay in Bitcoin. To further intimidate victims and drive home the fact that this is a serious police matter, the Reveton scammers prominently display the alleged violator’s IP address on the screen, as well as snippets of video footage previously captured from the victim’s Webcam. As with the current WannaCry exploit, the Reveton scam has successfully targeted tens of thousands of victims around the world, with the attack localized by country, language, and police agency. Thus, users in the U.K. see a notice from Scotland Yard, other Europeans get a warning from Europol, and victims in the United Arab Emirates see the threat, translated into Arabic, purportedly from the Abu Dhabi Police HQ.

WannaCry is even more pernicious than Reveton, though, in that it actually encrypts all the files on a victim’s computer so that they can no longer be read or accessed. Alarmingly, variants of this type of malware often present a ticking-bomb-style countdown clock advising users that they have only forty-eight hours to pay $300 before all of their files are permanently destroyed. Akin to threatening “if you ever want to see your files alive again,” these ransomware programs gladly accept payment in Bitcoin. The message to these victims is no idle threat. Whereas previous ransomware might trick users by temporarily hiding their files, newer variants use strong 256-bit Advanced Encryption Standard cryptography to lock user files so that, without the key, they are irrecoverable. These types of exploits earn many millions of dollars for the criminal programmers who develop and sell them on-line to other criminals.

Automated ransomware tools have even migrated to mobile phones, affecting Android handset users in certain countries. Nor have individuals been the only ones harmed by the ransomware scourge; so too have companies, nonprofits, and even government agencies, the most infamous example being the Swansea Police Department in Massachusetts some years back, which became infected when an employee opened a malicious e-mail attachment. Rather than lose its irreplaceable police case files to the scammers, the agency was forced to open a Bitcoin account and pay a $750 ransom to get its files back. The police lieutenant told the press he had no idea what a Bitcoin was or how the malware functioned until his department was struck in the attack.

As the ACFE and other professional organizations have told us, the world of cybercrime has evolved highly sophisticated methods of operation for selling everything from methamphetamine to live-streamed child sexual abuse. It has rapidly adopted existing tools of anonymity, such as the Tor browser, to establish Dark Net shopping malls, and criminal consulting services such as hacking and murder for hire are all available at the click of a mouse. Untraceable and anonymous digital currencies, such as Bitcoin, are breathing new life into the underground economy and allowing for the rapid exchange of goods and services. With these additional revenues, cyber criminals are becoming more disciplined and organized, significantly increasing the sophistication of their operations. Business models are being automated wherever possible to maximize profits, and botnets, easily trained on any target of the scammer’s choosing, can threaten legitimate global commerce. Fundamentally, it’s been done. As WannaCry demonstrates, the computing and Internet based crime machine has been built. With these systems in place, the depth and global reach of cybercrime mean that crime now scales, and it scales exponentially. Yet, as bad as this threat is today, it is about to become much worse as we enter the age of ubiquitous computing and the Internet of Things, handing such scammers billions more targets to attack.

Analytics & Fraud Prevention

During our Chapter’s live training event last year, ‘Investigating on the Internet’, our speaker, Liseli Pennings, pointed out that, according to the ACFE’s 2014 Report to the Nations on Occupational Fraud and Abuse, organizations that have proactive, internet-oriented data analytics in place suffer a 60 percent lower median loss from fraud, roughly $100,000 lower per incident, than organizations that don’t use such technology. Further, the report went on, use of proactive data analytics cuts the median duration of a fraud in half, from 24 months to 12 months.

This is important news for CFEs, who daily confront more sophisticated frauds and criminals who are increasingly cyber based. It means that integrating more mature forensic data analytics capabilities into a fraud prevention and compliance monitoring program can improve risk assessment, detect potential misconduct earlier, and enhance investigative field work. Moreover, forensic data analytics is a key component of effective fraud risk management as described in the Committee of Sponsoring Organizations of the Treadway Commission’s most recent Fraud Risk Management Guide, issued in 2016, particularly around the areas of fraud risk assessment, prevention, and detection. It also means that, according to Pennings, fraud prevention and detection is an ideal big-data-related organizational initiative. With the growing speed at which they generate data, specifically around their financial reporting and sales business processes, our larger CFE client organizations need ways to prioritize risks and better synthesize information using big data technologies, enhanced visualizations, and statistical approaches to supplement traditional rules-based investigative techniques supported by spreadsheet or database applications.

But with this opportunity to integrate analytics into fraud prevention comes a caution. As always, before jumping into any specific technology or advanced analytics technique, it’s crucial to first ask the right risk- or control-related questions to ensure the analytics will produce meaningful output for the business objective or risk being addressed. Which business processes pose a high fraud risk? High-risk business processes include the sales (order-to-cash) cycle and payment (procure-to-pay) cycle, as well as payroll, accounting reserves, travel and entertainment, and inventory processes. Which high-risk accounts within the business process should be examined to identify unusual account pairings, such as a debit to depreciation and an offsetting credit to a payable, or accounts with vague or open-ended “catch-all” descriptions such as “miscellaneous,” “administrative,” or blank account names? Who recorded or authorized the transaction? Posting analysis or approver reports can help detect unauthorized postings or inappropriate segregation of duties by looking at the number of payments by name, minimum or maximum amounts, sum totals, or statistical outliers. When did transactions take place? Analyzing transaction activity over time can identify spikes or dips, such as before and after period ends, or weekend, holiday, and off-hours activity. Where does the CFE see geographic risks, based on previous events, the economic climate, cyber threats, recent growth, or perceived corruption? Further segmentation can be achieved by business unit within regions and by the accounting systems on which the data resides.
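The when-did-it-happen tests above are simple enough to sketch in plain Python. The journal-entry records, field names, and thresholds here are invented for illustration; real analytics would run against an extract from the client’s general ledger:

```python
from datetime import datetime

# Hypothetical journal entries; the field names are assumptions for this sketch.
entries = [
    {"id": 1, "user": "jdoe",   "amount": 1200.00,  "posted": "2017-03-14 10:22"},
    {"id": 2, "user": "jdoe",   "amount": 980.00,   "posted": "2017-03-18 02:47"},
    {"id": 3, "user": "asmith", "amount": 45000.00, "posted": "2017-03-31 23:55"},
    {"id": 4, "user": "asmith", "amount": 1100.00,  "posted": "2017-03-20 14:10"},
]

def timing_flags(entry):
    """Return the timing-based red flags raised by one posting."""
    ts = datetime.strptime(entry["posted"], "%Y-%m-%d %H:%M")
    flags = []
    if ts.weekday() >= 5:             # Saturday or Sunday
        flags.append("weekend")
    if ts.hour < 6 or ts.hour >= 22:  # outside normal business hours
        flags.append("off-hours")
    if ts.day >= 28:                  # activity clustered at period end
        flags.append("near period end")
    return flags

for e in entries:
    f = timing_flags(e)
    if f:
        print(e["id"], e["user"], f)  # entries 2 and 3 draw flags here
```

The same pattern extends to the who and which-account questions: group postings by user or account pairing and flag combinations that fall outside policy.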

The benefits of implementing a forensic data analytics program must be weighed against challenges such as obtaining the right tools or professional expertise, combining data (both internal and external) across multiple systems, and ensuring the overall quality of the analytics output. To mitigate these challenges and build a successful program, the CFE should remember that the choice of the initial project matters. Because the first project often serves as a pilot for success, it’s important that it address meaningful business or audit risks that are tangible and visible to client management. This initial project should also be reasonably attainable, with minimal dollar investment and actionable results. It’s best to select a first project that has strong demand, draws on data residing in easily accessible sources, and offers a compelling, measurable return on investment. Areas such as insider threat, anti-fraud, anti-corruption, or third-party relationships make for good initial projects.

In the health care insurance industry, where I worked for many years, one of the key goals of forensic data analytics is to increase the detection rate of health care provider billing non-compliance while reducing the risk of false positives. From a capabilities perspective, organizations need to embrace both structured and unstructured data sources and consider the use of data visualization, text mining, and statistical analysis tools. Since the CFE will usually be working as a member of a team, the team should demonstrate the first success story, then leverage and communicate that success model widely throughout the organization. Results should be validated before successes are communicated to the broader organization. For best results and sustainability of the program, the fraud prevention team should be a multidisciplinary one that includes IT, business users, and functional specialists, such as management scientists, who are involved in the design of the analytics associated with the day-to-day operations of the organization and hence with the objectives of the fraud prevention program. It helps to communicate across multiple departments to update key stakeholders on the program’s progress under a defined governance regime. The team shouldn’t just report noncompliance; it should seek to improve the business by providing actionable results.
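As a toy-scale illustration of the statistical side, here is a robust outlier screen over per-provider billing for a single procedure code. The figures, provider names, and the 3.5 cutoff are assumptions for the sketch; a median-based score is used deliberately, because ordinary mean-and-standard-deviation z-scores get dragged around by the very outliers being hunted, which hurts both detection rate and false-positive rate:

```python
from statistics import median

# Hypothetical average billed amount per provider for one procedure code.
billed = {
    "prov_A": 110.0, "prov_B": 95.0, "prov_C": 105.0,
    "prov_D": 100.0, "prov_E": 240.0,
}

vals = list(billed.values())
med = median(vals)
mad = median(abs(v - med) for v in vals)  # median absolute deviation

def modified_z(v):
    # Iglewicz-Hoaglin modified z-score: robust to the outliers we hunt for,
    # which helps hold down the false-positive rate on small peer groups.
    return 0.6745 * (v - med) / mad

# 3.5 is the conventional cutoff for this score; tune it to the data.
flagged = [p for p, v in billed.items() if modified_z(v) > 3.5]
print(flagged)  # ['prov_E']
```

In practice the peer group would be segmented by specialty and region before scoring, so that a legitimately expensive subspecialist is not flagged merely for being compared against the wrong peers.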

The forensic data analytics functional specialists should not operate in a vacuum; every project needs one or more business champions who coordinate with IT and the business process owners. Keep the analytics simple and intuitive; don’t cram so much information into one report that it becomes hard to understand. Finally, invest time in automation, not manual refreshes, to make the analytics process sustainable and repeatable. The best trends, patterns, or anomalies often emerge when multiple months of vendor, customer, or employee data are analyzed over time, not just in the aggregate. Also, keep in mind that enterprise-wide deployment takes time. While a quick project may take four to six weeks, integrating the entire program can easily take more than one or two years. Programs need to be refreshed as new risks emerge and business activities change, and staff need ongoing training, collaboration, and up-to-date technologies.

Research findings by the ACFE and others provide ever more evidence of the benefits of integrating advanced forensic data analytics techniques into fraud prevention and detection programs. By helping increase their client organizations’ maturity in this area, CFEs can assist in delivering a robust program that is highly focused on preventing and detecting fraud.

Offered & Bid

Our Chapter was contacted last week by an apparent victim of an on-line auction fraud scheme called shilling. Our victim bought an item at auction and subsequently received independent verification that the seller had used multiple IDs to artificially increase the high bid on the item she ultimately purchased. On-line consumer auctions have been a ubiquitous feature of the on-line landscape for the last two decades and, according to the ACFE, the number of scams involving them is ever increasing.

The Internet allows con artists to trade in an environment of anonymity, which makes fraud easier to perpetrate. So not only does every buyer have to worry about the item being in good condition, and every seller about being paid; both must also worry about whether the other party to the transaction is even legitimate. Common internet auction fraud complaints include products that never arrive, arrive damaged, or are worth less than originally promised. Many complaints also stem from sellers who deliver the product but never receive payment. Almost all auction sites have responded over the years by instituting policies to prevent these types of fraud and have suspended people who break the rules. eBay, for example, has implemented buyer protection and fraudulent-website protection programs, as well as several other safeguards to prevent fraudsters from abusing its auction services, but the abuses just seem to go on and on.

What apparently happened to our victim is called shilling. Shilling occurs when sellers arrange to have fictitious bids placed on their item to drive up the price. This is accomplished either through the seller’s own use of multiple user IDs (as our victim suspects of her seller) or by having partners in crime, typically friends or family members of the seller, artificially increase the high bid on the item. If the shiller sees a legitimately high bid that does not measure up to his or her expectations, s/he might burst in to give it a boost by raising the bid. This activity is one of the worst auction offenses and is cause for immediate and indefinite suspension of any seller caught at it by any legitimate auction site.
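For the examiner, shilling leaves a detectable signature: a bidder who keeps turning up in one seller’s auctions yet somehow never wins. A minimal sketch over an invented bid log follows; the record layout and the thresholds are assumptions for illustration, not any auction site’s actual data model:

```python
from collections import defaultdict

# Hypothetical bid log: (auction_id, seller, bidder, bidder_won).
bids = [
    ("a1", "sellerX", "shill01", False),
    ("a1", "sellerX", "buyer22", True),
    ("a2", "sellerX", "shill01", False),
    ("a2", "sellerX", "buyer35", True),
    ("a3", "sellerX", "shill01", False),
    ("a3", "sellerX", "buyer22", True),
    ("a4", "sellerY", "buyer22", True),
]

# Count, per (seller, bidder) pair, appearances and wins.
appearances = defaultdict(int)
wins = defaultdict(int)
for auction, seller, bidder, won in bids:
    appearances[(seller, bidder)] += 1
    if won:
        wins[(seller, bidder)] += 1

# A bidder who repeatedly bids in one seller's auctions yet never wins fits
# the classic shill profile; the 3-appearance threshold is an arbitrary example.
suspects = [
    (seller, bidder)
    for (seller, bidder), n in appearances.items()
    if n >= 3 and wins[(seller, bidder)] == 0
]
print(suspects)  # [('sellerX', 'shill01')]
```

Real screens add refinements, such as how close the suspect’s bids run to the eventual winning price, but the concentrate-and-never-win pattern is the core signal.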

A related ploy that also raises lots of complaints is called sniping. Sniping is a bid manipulation process in which an unscrupulous bidder bids during the last few seconds of an auction to gain the high bid just as time runs out, thus negating the ability of other bidders to answer with a still higher bid. Most bidders who successfully engage in this practice do so with the aid of sniping technology. In general, sniping is legal; however, many online auction sites have instituted no-sniping policies, as the practice is devious and may harm legitimate, honest bidders.

Then there’s bid shielding.  Bid shielding is a scam in which a group of dishonest bidders target an item and inflate the high bid value to discourage other real bidders. At the last moment, the highest bidder or other bidders will retract their bids, thereby shielding the lower bidder and allowing him to run away with the item at a desirable, and deceitful, price.

In the relentless drive for more customers, some sellers resort to bid siphoning, which occurs when fraudulent sellers lure bidders off legitimate sites by offering to sell the “same” item at a lower price. They intend to trick consumers into sending money without delivering the item. By going off-site, buyers lose any protections the original site may provide, such as insurance, feedback forms, or guarantees. This practice is often accompanied by sellers embellishing or distorting the descriptions of their wares. Borrowed images, ambiguous descriptions, and falsified facts are some of the tactics a seller will use to mislead a buyer with the aim of drawing her into a siphoning scheme.

The second chance scammer offers losing bidders of a closed auction a second chance to purchase the item that they lost in the auction. As with siphoning victims, second chance buyers lose any protections the original site may provide once they go off-site.

One of the most common complaints associated with on-line auctions is price manipulation. To avoid price manipulation, consumers need to understand the auction format before bidding. Sellers may set up the auction with questionable bidding rules that leave the winning buyer in an adverse situation. For example, say you win an auction. You bid $50, but the lowest successful bid is only $45. The seller congratulates you on your win and requests your high bid of $50 plus postage. As another example, say the highest bidder retracts his bid or the seller cancels it, which leaves you the highest bidder. The seller then wants you to pay the maximum bid amount, citing that the previous high bidder had outbid you. Finally, say you win a straight auction with a high bid of $85. The seller contacts you and instructs you to send your high bid plus shipping, packaging, and listing-fee costs, and numerous other charges.

Our last example relates to the practice of fee stacking, which refers to the addition of hidden charges to the total amount due from the winning bidder after the auction has concluded. Shipping and handling fees can vary greatly; therefore, the buyer should inquire before bidding to avoid unexpected costs. Typically, postage and handling fees are charged at a flat rate. However, some scheming sellers add separate charges for postage, packaging, handling, and shipping, and often devise other fees to tack on as well, leaving the buyer with a much higher purchase price than anticipated.

Then there’s the flat failure to ship the purchased merchandise. This is the one type of on-line auction fraud that most people have heard of, even if they don’t themselves participate in on-line auctions: a seller receives payment for the item sold but never ships the merchandise. If the merchandise does not arrive, the buyer should contact the seller for the item or request a refund, having hopefully kept a receipt of payment for the purchase. If the purchase was made with a credit card, the buyer can contact the credit card company to dispute the charges. If the buyer gets nowhere with the seller, the buyer should contact the U.S. Postal Inspection Service, as the failure to ship constitutes mail fraud.

On the other hand, fraudulent buyer claims of lost or damaged items are also considered mail fraud. Some buyers falsely claim that an item arrived damaged or did not arrive at all, and thus refuse payment. Sellers should insure the item during shipping and send it via certified mail, which requires a signature verifying receipt.

A related buyer scam is switch and return.  Let’s say you have successfully auctioned a vintage item. You, the seller, package it with care and ship it to the anxious buyer. But when the buyer receives it, he is not satisfied. You offer a refund. However, when the buyer returns the item, you get back an item that does not resemble the high-quality item that you shipped. The buyer has switched the high-quality item with a low-quality item and returned it to you. The buyer ends up with both the item and the refund.

The on-line market is awash in fakes. The seller “thinks” it is an original; but the buyer should think again. With the use of readily attainable computer graphics and imaging technology, a reproduction can be made to look almost identical to an original. Many fraudsters take full advantage of these capabilities to dupe unsuspecting or uninformed buyers into purchasing worthless items for high prices.

If you are a fraud examiner working with clients involved in the on-line auction market or a buyer or seller in those markets …

— Become familiar with the chosen auction site;
— Understand as much as possible about how internet auctions work, what the site obligations are toward a buyer or seller, and what the buyer’s or seller’s obligations are before bidding or selling;
— Find out what protections the auction site offers buyers;
— Try to determine the relative value of an item before bidding;
— Find out all you can about the seller, especially if the only information you have is an e-mail address.  If the seller is a business, check with the Better Business Bureau where the seller/buyer is located;
— Examine the feedback on the seller and use common sense. If a seller has a history of negative feedback, then do not deal with that seller;
— Consider whether the item comes with a warranty, and whether follow-up service is available if it is needed;
— Do not allow the seller or buyer to convince you to ignore the rules of a legitimate internet auction.


The CFE, Management & Cybersecurity

Strategic decisions affect the ultimate success or failure of any organization; thus, they are usually evaluated and made by top executives. Risk management can contribute meaningfully and consistently to the organization’s success as defined at those highest levels, but to realize that contribution, top executives first must believe there is substantial value to be gained by embracing risk management. The best way for CFEs and other risk management professionals to engage these executives is to align fraud risk management with achievement (or non-achievement) of the organization’s vital performance targets, and to use it to drive better decisions and outcomes with a higher degree of certainty.

Next, top management must trust its internal risk management professional as a peer who provides valuable perspective. Every risk assurance professional must earn trust and respect by consistently exhibiting insightful risk and performance management competence, and by evincing a deep understanding of the business and its strategic vision, objectives, and initiatives. He or she must simplify fraud risk discussions by focusing on uncertainty relative to strategic objectives and by categorizing these risks in a meaningful way. Moreover, the risk professional must always be willing to take a contrarian position, relying on objective evidence where readily available, rather than simply deferring to the subjective. Because CFEs share many of these same traits, the CFE can help internal risk executives gain that trust and respect within their client organizations.

In the past, many organizations integrated fraud risk into the evaluation of other controls. Today, per COSO guidance, the adequacy of anti-fraud controls is specifically assessed as part of the evaluation of the control activities related to identified fraud risks. Managements that identify a gap in their fraud risk assessments and work with CFEs to implement a robust assessment come away with an increased focus on the potential fraud scenarios specific to their organizations. Many such managements have implemented new processes, including CFE-facilitated sessions with operating management, that allow executives to consider fraud in new ways. The fraud risk assessment can also raise management’s awareness of opportunities for fraud outside its areas of responsibility.

The blurred line of responsibility between an entity’s internal control system and those of outsourced providers creates a need for more rigorous controls over communication between parties. Previously, many companies looked to contracts, service-level agreements, and service organization reports as their approach to managing service organizations. Today, there is a need to go further. Specifically, there is a need for focus on the service providers’ internal processes and tone at the top. Implementing these additional areas of fraud risk assessment focus can increase visibility into the vendor’s performance, fraud prevention and general internal control structure.

Most people view risk as something that should be avoided or reduced. However, CFEs and other risk professionals realize that risk is valued when it can help achieve a competitive advantage. ACFE studies show that investors and other stakeholders place a premium on management’s ability to limit the uncertainty surrounding its performance projections, especially regarding fraud risk. With Information Technology budgets shrinking and more being asked of IT, outsourcing key components of IT or critical business processes to third-party cloud based providers is now common. Management should obtain a report on all the enterprise’s critical business applications and the related data managed by such providers. Top management should make sure that the organization has appropriate agreements in place with all service providers and that an appropriate audit of each provider’s operations, such as Service Organization Controls (SOC) 1 and SOC 2 assurance reports, is performed regularly by an independent party.

It’s also imperative that client management understand the safe harbor clauses in data breach laws for the countries and U.S. states where the organization does business.  In the United States, almost every state has enacted laws requiring organizations to notify the state in case of a data breach. The criteria defining what constitutes a data breach are similar in each state, with slight variations.

CFE vulnerability assessments should impress on IT management the need to make upper management aware of all major breach attempts made against the organization, not just actual incidents. To see the importance of this, one need only open a newspaper and read about the serious data breaches occurring around the world on an almost daily basis. The definition of major may, of course, differ depending on the organization’s industry and on whether the organization is global, national, or local. Additionally, top management and the board should plan to meet with the organization’s chief information security officer (CISO) at least once a year. This meeting should supplement the CFE’s annual update of the fraud risk assessment by helping management understand the state of cybersecurity within the organization and enabling top managers and directors to discuss key cybersecurity topics. It’s also important that the CISO report at the appropriate level within the organization. Keep in mind that although many CISOs continue to report within the IT organization, the chief information officer’s agenda sometimes conflicts with the CISO’s. As such, the ACFE reports that a better arrangement for promoting independence is to migrate the reporting line to another officer, such as the general counsel, chief operating officer, chief risk officer (CRO), or even the CEO, depending on the industry and the organization’s degree of dependence on technology.

As a matter of routine, every organization should establish relationships with the appropriate national and local authorities responsible for cybersecurity or cybercrime response. For example, boards of U.S. companies should verify that management has protocols in place to guide contact with the Federal Bureau of Investigation (FBI) in case of a breach; the FBI has established its Key Partnership Engagement Unit, a targeted outreach program to senior executives of major private-sector corporations.

If there is a Chief Risk Officer (CRO) or equivalent, upper management and the board should, as with the CISO, meet with him or her quarterly or, at the least, annually and review all the fraud related risks that were either avoided or accepted. There are times when a business unit will identify a technology need that its executive is convinced is the right solution for the organization, even though the technology solution may have potential security risks. The CRO should report to the board about those decisions by business-unit executives that have the potential to expose the organization to additional security risks.

And don’t forget that management should verify that the organization’s cyber insurance coverage is sufficient to address potential cyber risks. To understand the total potential impact of a major data breach, the board should always ask management to provide the cost per record of a data breach.
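The arithmetic behind that board question is simple enough to sketch. The record count and per-record cost below are purely illustrative assumptions, not industry benchmarks:

```python
# Hypothetical back-of-the-envelope sketch of the board's question:
# total potential breach impact = records held x cost per record.
# Both figures below are illustrative assumptions, not benchmarks.

def breach_exposure(records_held: int, cost_per_record: float) -> float:
    """Estimate the total potential impact of a breach exposing all records."""
    return records_held * cost_per_record

exposure = breach_exposure(records_held=2_500_000, cost_per_record=150.0)
print(f"Potential exposure: ${exposure:,.0f}")  # → Potential exposure: $375,000,000
```

A board that knows both numbers can judge at a glance whether the cyber insurance limits discussed above are remotely adequate.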

No business can totally mitigate every fraud-related cyber risk it faces, but every business must focus on the vulnerabilities that present the greatest exposure. Cyber risk management is a multifaceted function that balances the acceptance and avoidance of risk against the actions necessary to operate the business for success and growth and to meet strategic objectives. Every business needs to regard risk management as an ongoing conversation between its management and supporting professionals, a conversation whose importance requires participation by the organization’s audit committee and other board members, with the CFE and the CISO serving increasingly important roles.

Cybersecurity – Is There a Role for Fraud Examiners?

At a cybersecurity fraud prevention conference I attended recently in California, one of the featured speakers addressed the difference between information security and cybersecurity and the complexity of assessing the fraud preparedness controls specifically directed against cyber fraud.  It seems the main difficulty is the lack of a standard to serve as the basis of a fraud examiner’s or auditor’s risk review. The National Institute of Standards and Technology’s (NIST) framework has become a de facto standard despite being more than a little light on specific details.  Though it’s not a standard, there really is nothing else at present against which to measure cybersecurity.  Moreover, the technology that must be the subject of a cybersecurity risk assessment is poorly understood and is mutating rapidly.  CFEs, and everyone else in the assurance community, are hard pressed to keep up.

To my way of thinking, a good place to start in all this confusion is for the practicing fraud examiner to consider the fundamental difference between information security and cybersecurity, the differing nature of the threat itself.   There is simply a distinction between protecting information against misuse of all sorts (information security) and an attack by a government, a terrorist group, or a criminal enterprise that has immense resources of expertise, personnel and time, all directed at subverting one individual organization (cybersecurity).  You can protect your car with a lock and insurance but those are not the tools of choice if you see a gang of thieves armed with bricks approaching your car at a stoplight. This distinction is at the very core of assessing an organization’s preparations for addressing the risk of cyberattacks and for defending itself against them.

As is true in so many investigations, the cybersecurity element of the fraud risk assessment process begins with the objectives of the review, which lead directly to the questions one chooses to ask. If an auditor only wants to know “Are we secure against cyberattacks?” then the answer should be up on a billboard in letters fifty feet high: No organization should ever consider itself safe against cyber attackers. They are too powerful and pervasive for any complacency. If major television networks can be stricken, if the largest banks can be hit, if governments are not immune, then the CFE’s client organization is not secure either.  Still, all anti-fraud reviewers can ask subtle and meaningful questions of client management, specifically focused on the data and software at risk of an attack. A fraud risk assessment process specific to cybersecurity might delve into the internals of database management systems and system software, requiring the considerable skills of a CFE supported by one or more tech-savvy consultants engaged to form the assessment team. Or it might call for just asking simple questions and applying basic arithmetic.

If the fraud examiner’s concern is the theft of valuable information, the simple corrective is to make the data valueless, which is usually achieved through encryption. The CFE’s question might be, “Of all your data, what percentage is encrypted?” If the answer is 100 percent, the follow-up question is whether the data are always encrypted: at rest, in transit and in use. If it cannot be shown that all data are secured all of the time, the next step is to determine what is not protected and under what circumstances. The assessment finding would consist of a flat statement of the amount of unencrypted data susceptible to theft and a recitation of the potential value to an attacker of stealing each category of unprotected data. The readers of this blog know that data must be decrypted in order to be used and so would be quick to point out that “universal” encryption in use is, ultimately, a futile dream. There are vendors who think otherwise, but let’s accept the fact that data will, at some time, be exposed within a computer’s memory. Is that a fault attributable to the data or to the memory and to the programs running in it? Experts say it’s the latter. In-memory attacks are fairly devious, but the solutions are not. Rebooting gets rid of them, and antimalware programs that scan memory can find them. So a CFE can ask, “How often is each system rebooted?” and “Does your anti-malware software scan memory?”
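The “percentage encrypted” question reduces to simple tallying once a data inventory exists. A minimal sketch, assuming a hypothetical inventory format in which each data store records which of the three states (at rest, in transit, in use) it encrypts:

```python
# Illustrative sketch only: the inventory entries, store names, and the
# three-state model below are assumptions about how a client might
# catalog its data stores, not a real client's data.

STATES = {"at_rest", "in_transit", "in_use"}

inventory = [
    {"store": "customer_db", "records": 900_000, "encrypted": {"at_rest", "in_transit"}},
    {"store": "hr_files",    "records": 50_000,  "encrypted": {"at_rest", "in_transit", "in_use"}},
    {"store": "legacy_dump", "records": 250_000, "encrypted": set()},
]

total = sum(item["records"] for item in inventory)
fully = sum(item["records"] for item in inventory if item["encrypted"] >= STATES)
print(f"Records encrypted in all states: {fully / total:.1%}")

# Anything short of full protection becomes a finding: what is exposed, and when.
for item in inventory:
    gaps = STATES - item["encrypted"]
    if gaps:
        print(f'{item["store"]}: {item["records"]:,} records unprotected {sorted(gaps)}')
```

The finding practically writes itself from the loop’s output: each unprotected store, its record count, and the circumstances of exposure.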

To the extent that software used for attacks is embedded in the programs themselves, the problem lies in a failure of malware protection or of change management. A CFE need not worry over this point; according to my California presenter, many auditors (and security professionals) have wrestled with this problem and not solved it either. All a CFE needs to ask is whether anyone would be able to know whether a program had been subverted. An audit of the change management process would often provide a bounty of findings, but would not answer the reviewer’s question. The solution lies in having a version of a program known to be free from flaws (such as newly released code) and an audit trail of known changes. It’s probably beyond the talents of a typical CFE to generate a hash total using a program as data and then to apply the known changes in order to see if the version running in production matches a recalculated hash total. But it is not beyond the skills of the IT experts the CFE can add to her team, or of the in-house IM staff responsible for keeping their employer’s programs safe. A CFE fraud risk reviewer need only find out whether anyone is performing such a check. If not, the CFE can simply conclude and report to the client that no one knows for sure whether the client’s programs have been penetrated or not.
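The hash-total check described above is conceptually simple, even if maintaining the known-good baseline is not. A hedged sketch of the mechanics, treating the program file as data; the file path and reference hash in the usage comment are hypothetical:

```python
# Sketch of a "hash total" integrity check: compute a SHA-256 digest over
# a program file and compare it to the digest of a known-clean release.
import hashlib
from pathlib import Path

def program_fingerprint(path: str) -> str:
    """Compute a SHA-256 hash total over a program file, treating it as data."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_program(path: str, known_good_hash: str) -> bool:
    """True only if the version on disk matches the known-clean version."""
    return program_fingerprint(path) == known_good_hash

# Hypothetical usage (path and baseline hash are placeholders):
# if not verify_program("/opt/app/payroll.bin", KNOWN_GOOD_HASH):
#     print("Production binary differs from the known-clean release - investigate.")
```

The hard part is organizational, not technical: someone must record the baseline hash at release time and re-verify it after each authorized change, which is exactly what the CFE should ask whether anyone is doing.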

Finally, a CFE might want to find out if the environment in which data are processed is even capable of being secured. Ancient software running on hardware or operating systems that have passed their end of life are probably not reliable in that regard. Here again, the CFE need only obtain lists and count. How many programs have not been maintained for, say, five years or more? Which operating systems that are no longer supported are still in use? How much equipment in the data center is more than 10 years old? All this is only a little arithmetic and common sense, not rocket science.
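The “lists and counting” the paragraph above calls for can be sketched in a few lines. The hosts, dates, and the unsupported-OS list below are hypothetical examples, not a real estate inventory:

```python
# Illustrative sketch: counting unmaintained software, unsupported operating
# systems, and aged hardware from a (hypothetical) asset inventory.
from datetime import date

UNSUPPORTED_OS = {"Windows Server 2003", "Solaris 9"}  # hypothetical EOL list

inventory = [
    {"host": "db01",  "os": "Windows Server 2003",
     "last_maintained": date(2012, 3, 1), "installed": date(2005, 6, 1)},
    {"host": "app07", "os": "RHEL 9",
     "last_maintained": date(2024, 9, 1), "installed": date(2021, 1, 15)},
]

today = date(2025, 1, 1)  # assessment date (example)

stale  = [h for h in inventory if (today - h["last_maintained"]).days > 5 * 365]
eol_os = [h for h in inventory if h["os"] in UNSUPPORTED_OS]
aged   = [h for h in inventory if (today - h["installed"]).days > 10 * 365]

print(f"Unmaintained 5+ years: {len(stale)}; "
      f"unsupported OS: {len(eol_os)}; hardware 10+ years old: {len(aged)}")
```

Three counts, three potential findings; as the text says, common sense and arithmetic, not rocket science.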

In conclusion, frauds associated with weakened or absent cybersecurity systems are not likely to become a less important feature of the corporate landscape over time. Instead, they are poised to become an increasingly important aspect of doing business for those who create automated applications and solutions, for those who attempt to safeguard them on the front end, and for those who investigate and prosecute crimes against them on the back end. While the ramifications of every cyber fraud prevention decision are broad and diverse, a few basic good practices can be defined which the CFE, the fraud expert, can help any client management implement:

  • Know your fraud risk and what it should be;
  • Be educated in management science and computer technology. Ensure that your education includes basic fraud prevention techniques and associated prevention controls;
  • Know your existing cyber fraud prevention decision model, including the shortcomings of those aspects of the model in current use and develop a schedule to address them;
  • Know your frauds. Understand the common fraud scenarios targeting your industry so that you can act swiftly when confronted with one of them.

We can conclude that the issues involving cybersecurity are many and complex, but that CFEs are equipped to bring much-needed, fraud-related experience to any management’s table as part of the team confronting them.

Special Chapter Event – Trends in Advanced Data Analytics

Our guest speaker, Kevin Jones, a Director at the health care analytics concern HMS, made a number of excellent points during a special presentation entitled ‘Trends in Advanced Data Analytics’ given yesterday at the Virginia State Police Training Academy in Richmond, Virginia.  Specifically, Kevin addressed the challenges facing organizations new to the utilization of big data as a fraud investigation tool.  Kevin rightly pointed out that even though every sophisticated computer-assisted fraud scheme leaves traces on the victimized system(s), those data are often overlooked, collected incorrectly, or analyzed ineffectively as a result of simple, wholly preventable ignorance.

Kevin provided a number of examples of the difficulties for the prosecutorial process arising if relevant evidence isn’t gathered appropriately at the very beginning of an investigation; evidence not gathered initially will likely become evidence foregone.  Not only do many organizations underestimate how often they may need to produce reliable fraud related evidence of exactly what happened in their information system during the commission of a fraud or irregularity, but they woefully underestimate the demands that the legal system will make in terms of ensuring the admissibility and reliability of any digital evidence.

Kevin emphasized that organizations new to the use of analytics in the complex process of fighting routine incidents of fraud, waste and abuse need to develop a detailed response plan to address those fraud-related incidents found to involve their data.   Lacking such a plan, much potential evidence will either never be collected or will be found worthless (as a result of contamination) in the support of any future prosecution.  To specifically address this type of enterprise risk, Kevin advises that fraud examiners and auditors get the concerned departments of the enterprise (like information management and financial operations) talking to each other about how they want to manage information security incidents, including the responsibilities and procedures they will jointly follow in responding to a fraud challenge.

The involvement of fraud examiners and auditors is especially important when developing the part of the response plan governing legal action to be taken against a person or organization after discovery of an information security incident.  This part of the plan needs to address exactly what evidence should be collected, retained and presented in conformance with the rules of evidence promulgated by the relevant jurisdictions.  Kevin pointed out that it’s often overlooked that the same sorts of defined plans need to be developed for collecting and presenting evidence supporting internal disciplinary actions.

According to Kevin, in most jurisdictions, the legal admissibility of digital evidence in a court of law is governed by three fundamental principles: relevance, reliability and sufficiency.  Evidence is relevant when it can prove or disprove an element of a specific case being investigated.  Reliability relates generally to the evidence being what it purports to be, i.e., that it has not been spoiled or altered in some way.  In many jurisdictions the concept of sufficiency means that enough evidence has been collected to prove or disprove the elements of the matter.

To reduce the threat of legal challenges, Kevin suggests that management consider providing assurance that the organization has performed due diligence by implementing security measures over its data.  If security is lax or non-existent, it can be hard to prove that any wrongdoing even occurred in the absence of well-defined rules.  There need to be reviews of the organization’s information security systems and procedures at defined intervals to determine whether their control objectives, controls, processes and procedures conform to the requirements of information security standards and relevant regulations.  Management needs to obtain assurance that its information security procedures are implemented and maintained effectively and that they are performing as expected.

Kevin’s message to fraud examiners is that we can help client management on a number of fronts to develop a coordinated approach to developing readiness to employ digital information and analytics as tools to combat the inter-related threats represented by fraud, waste and abuse:

– As fraud experts, do we feel that the organization has adequately identified the main fraud-related risk scenarios commonly faced in its industry or business sector?
– Has the client identified the types of evidence it’s going to need in a civil litigation or criminal proceeding and the steps it will need to take to secure those data?
– Has the client inventoried the data that it routinely collects and the ready availability of those data in a crisis?
– What’s the client’s degree of general familiarity with common legal issues and problems such as admissibility, data protection, human rights, limits to surveillance, and legal obligations to staff members and others?
– Is there an action plan in place to assure the coordinated action of organizational components in the face of a crisis involving major instance(s) of fraud, waste and abuse?

The sincere thanks of all our Chapter members to Kevin and to the members of his team at HMS for making this special RVA_CFES event on developing trends in advanced analytics and data mining a success!