
Sock Puppets

The issue of falsely claimed identity in all its myriad forms has shadowed the Internet since the medium’s beginning. Anyone who has used an on-line dating or auction site is all too familiar with the problem: anyone can claim to be anyone. Likewise, confidence games, on-line or off, involve a range of fraudulent conduct committed by professional con artists against unsuspecting victims. The victims can be organizations but more commonly are individuals. Con artists classically acted alone, but now, especially on the Internet, they usually band together in criminal organizations to pursue increasingly complex schemes. Con artists are skilled marketers who develop effective marketing strategies, complete with a target audience and an appropriate marketing plan: crafting promotion, product, price, and place to lure their victims. Victimization is achieved when this marketing strategy succeeds, and falsely claimed identities are always an integral component of such schemes, especially those carried out on-line.

Such marketing strategies generally involve a specific target market, usually made up of affinity groups: individuals grouped around an objective, bond, or association, like the users of a Facebook or LinkedIn group. Affinity groups may, therefore, include those associated through age, gender, religion, social status, geographic location, business or industry, hobbies or activities, or professional status. Perpetrators gain their victims’ trust by affiliating themselves with these groups. Historically, various media of communication have been used to make the initial lure. Today most fraudulent schemes begin with an offer or invitation to connect through the Internet or a social network, but the invitation can also come by mail, telephone, newspapers and magazines, television, radio, or door-to-door channels.

Once the mark receives the offer to connect, some sort of response or acceptance is requested. In the case of Facebook or LinkedIn, the response typically involves clicking on a link included in a fraudulent follow-up post to visit a specified web site or to call a toll-free number.

According to one of Facebook’s own annual reports, up to 11.2 percent of its accounts are fake. Considering the world’s largest social media company has 1.3 billion users, that means up to 140 million Facebook accounts are fraudulent; these users simply don’t exist. With 140 million inhabitants, the fake population of Facebook would be the tenth-largest country in the world. Just as Nielsen ratings on television sets determine different advertising rates for one television program versus another, on-line ad sales are determined by how many eyeballs a Web site or social media service can command.

Let’s say a shyster wants 3,000 followers on Twitter to boost the credibility of her scheme. They can be hers for $5. Say she wants 10,000 satisfied customers on Facebook for the same reason. No problem; she can buy them on several websites for around $1,500. A million new friends on Instagram can be had for only $3,700. Whether the con artist wants favorites, likes, retweets, up-votes, or page views, all are for sale on Web sites like Swenzy, Fiverr, and Craigslist. These fraudulent social media accounts can then be freely used to falsely endorse a product, service, or company, all for a small fee. Most of the work of fake-account setup is carried out in the developing world, in places such as India and Bangladesh, where actual humans may control the accounts. In other locales, such as Russia, Ukraine, and Romania, the entire process is scripted by computer bots: programs that carry out pre-encoded automated instructions, such as “click the Like button,” repeatedly, each time using a different fake persona.

Just as horror-movie shape-shifters can physically transform themselves from one being into another, these modern screen shifters have their own magical powers, and criminal organizations are eager to employ them, studying their techniques and deploying them against easy marks for massive profit. In fact, many of these clicks are performed for purposes of “click fraud.” Businesses pay companies such as Facebook and Google every time a potential customer clicks on one of the ubiquitous banner ads or links on-line, but organized crime groups have figured out how to game the system and drive profits their way via so-called ad networks, which capitalize on all those extra clicks.

Painfully aware of this, social media companies have attempted to cut back on the number of fake profiles. As a result, thousands upon thousands of identities have disappeared overnight from among the followers of many well-known celebrities and popular websites. If Facebook has 140 million fake profiles, there is no way they could have been created manually one by one. The process of creation is called sock puppetry, a reference to the children’s toy created when a hand is inserted into a sock to bring it to life. In the on-line world, organized crime groups create sock puppets by combining computer scripting, web automation, and social networks to create legions of on-line personas. This can be done easily and cheaply enough to allow those with deceptive intentions to create hundreds of thousands of fake on-line citizens. One need only consult a readily available on-line directory of the most common names in any country or region. A scripted bot merely picks a first name and a last name, chooses a date of birth, and signs up for a free e-mail account. Next, the bot scrapes on-line photo sites such as Picasa, Instagram, Facebook, Google, and Flickr to choose an age-appropriate image to represent the new sock puppet.
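
To see how little effort the first step takes, here is a deliberately inert sketch in Python. It only assembles persona records from short, made-up name lists (real scripts draw on public name directories) and goes no further: no e-mail signup, no scraping, no network access. Everything in it is an illustrative assumption.

```python
import random
from datetime import date, timedelta

# Tiny made-up name lists; actual scripts draw on public directories of
# the most common names in a target country or region.
FIRST_NAMES = ["James", "Maria", "Wei", "Aisha", "Ivan", "Priya"]
LAST_NAMES = ["Smith", "Garcia", "Chen", "Khan", "Petrov", "Patel"]

def fake_persona() -> dict:
    """Assemble the skeleton of one sock-puppet record (and nothing more)."""
    first, last = random.choice(FIRST_NAMES), random.choice(LAST_NAMES)
    # An age-appropriate date of birth, 18 to 60 years back.
    dob = date.today() - timedelta(days=random.randint(18 * 365, 60 * 365))
    handle = f"{first.lower()}.{last.lower()}{random.randint(1, 9999)}"
    return {"name": f"{first} {last}", "dob": dob.isoformat(), "handle": handle}

# A loop like this mints persona records by the hundred thousand in minutes,
# which is the whole point: the bottleneck is never the data.
for persona in (fake_persona() for _ in range(3)):
    print(persona)
```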

Armed with an e-mail address, name, date of birth, and photograph, the fraudster signs the fake persona up for an account on Facebook, LinkedIn, Twitter, or Instagram. As a last step, the puppets are taught to talk: scripted to reach out and send friend requests, repost other people’s tweets, and randomly like things they see on-line. The bots can even communicate and cross-post with one another. Before long, the fraudster has thousands of sock puppets at his or her disposal, for use as he or she sees fit. It is these armies of sock puppets that criminals use as key constituents in their phishing attacks, to fake on-line reviews, to trick users into downloading spyware, and to commit a wide variety of financial frauds, all based on misplaced trust and falsely claimed identity.

The fraudster’s environment has changed and keeps changing, from a face-to-face physical encounter to an anonymous on-line encounter in the comfort of the victim’s own home. Some consumers are unaware that a weapon is sitting virtually right in front of them; others are victims who struggle to balance the many wonderful benefits of advanced technology against the painful effects of its consequences. The goal of law enforcement has not changed over the years: to block the roads and close the loopholes of perpetrators even as perpetrators continue to seek yet another avenue for fraud in an environment in which they can thrive. Today, the challenge for CFEs, law enforcement, and government officials is to stay on the cutting edge of technology, which requires access to constantly updated resources and communication between organizations; the ability to gather information; and the capacity to identify and analyze trends, institute effective policies, and detect and deter fraud through restitution and prevention measures.

Now is the time for CFEs and other assurance professionals to continuously reevaluate all we take for granted in the modern technical world and to increasingly question our ever-growing dependence on the whole range of ubiquitous machines whose potential to facilitate fraud so few of our clients and the general public understand.

Where the Money Is

One of the followers of our Central Virginia Chapter’s group on LinkedIn is a bank auditor heavily engaged in his organization’s analytics-based fraud control program. He was kind enough to share some of his thoughts regarding his organization’s sophisticated anti-fraud data modeling program as material for this blog post.

Our LinkedIn connection reports that, in his opinion, getting fraud data accurately captured, categorized, and stored is the first, vitally important challenge in using data-driven technology to combat fraud losses. This might seem relatively easy to those not directly involved in the process, but experience quickly reveals that having fraud-related data stored reliably, over a long period of time, and in a readily accessible format represents a significant challenge, one requiring a systematic approach at every level of any organization serious about the effective application of analytically supported fraud management. The idea that any single piece of data may prove important to addressing a problem is a relatively new concept in the history of banking and of most other types of financial enterprise.

Accumulating accurate data starts with an overall vision of how the multiple steps in the process connect to affect the outcome. Every member of the fraud control team must understand how important each pre-defined process step is in capturing the information correctly, from the person responsible for risk management in the organization, to the people who run the fraud analytics program, to the person who designs the data layout, to the person who enters the data. Even a customer service analyst or a fraud analyst failing to mark a certain type of transaction correctly as fraud can have an ongoing impact on developing an accurate fraud control system. It really helps to establish rigorous data-entry processes on the front end and to explain to all players exactly why those processes are in place; process without communication and communication without process are both unlikely to produce desirable results. To appreciate the importance of recording fraud information correctly, everyone needs some general understanding, communicated by management, of how a data-driven detection system (whether based on simple rules or on sophisticated models) is developed. A disciplined labeling scheme enforced at the point of entry is a practical first step, as sketched below.
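
As one concrete illustration, here is a minimal Python sketch of a controlled labeling vocabulary. The disposition values and field names are assumptions for illustration, not our connection’s actual schema; the point is that free-text labels are rejected at the moment of entry, so every downstream model sees consistent fraud markings.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Disposition(Enum):
    """Controlled vocabulary: analysts pick a value; free text is impossible."""
    CONFIRMED_FRAUD = "confirmed_fraud"
    SUSPECTED_FRAUD = "suspected_fraud"
    CONFIRMED_GOOD = "confirmed_good"
    UNREVIEWED = "unreviewed"

@dataclass
class TransactionLabel:
    transaction_id: str
    disposition: Disposition
    labeled_by: str            # analyst identifier, for accountability
    labeled_at: datetime

def record_label(txn_id: str, raw_value: str, analyst: str) -> TransactionLabel:
    """Reject anything outside the controlled vocabulary at entry time."""
    try:
        disposition = Disposition(raw_value)
    except ValueError:
        allowed = [d.value for d in Disposition]
        raise ValueError(f"{raw_value!r} is not a valid disposition; use one of {allowed}")
    return TransactionLabel(txn_id, disposition, analyst, datetime.utcnow())

print(record_label("T-1009", "confirmed_fraud", "analyst_17"))
```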

Our connection goes on to say that even after an organization has implemented a fraud detection system based on sophisticated techniques, one that can execute effectively in real time, it’s important for the operational staff to use the system’s output recommendations effectively. There are three ways that fraud management can improve results, even within a highly sophisticated system like that of our LinkedIn connection.

The first strategy is never to allow operational staff to second-guess a sophisticated model at will. Very often, a model score of 900 (let’s say this indicates very high fraud risk), whether combined with some decision keys or on its own, can perform extremely well as a fraud predictor. It’s good practice to use scores in this high-risk range exactly as the tested model generates them and not to allow individual analysts to adjust them further. This policy has to be completely understood and controlled at the operational level. Using a well-developed fraud score as is, without watering it down, is one of the most important operational strategies for the long-term success of any model. Application of this rule also makes it simpler to identify instances of model scoring failure, because the scores remain free of subsequent analyst adjustments.

Second, fraud analysts have to be trained to use the scores and the reason codes (reason codes explain why the score is indicative of risk) effectively in operations. Typically, this is done by writing operational rules that incorporate the scores and reason codes as decision keys; in the fraud management world, these rules are generally referred to as strategies. It’s extremely important to ensure strategies are applied uniformly by all fraud analysts, and essential to closely monitor how the analysts are operating with the scores and strategies.
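
A minimal sketch of what such a strategy might look like in code. The score range (0–999), the 900 cutoff, the reason codes, and the dollar figures are all illustrative assumptions, not any particular institution’s scheme; note that the top tier returns a terminal decision no analyst can water down.

```python
HIGH_RISK_CUTOFF = 900  # assumed "very high risk" threshold

def apply_strategy(score: int, reason_codes: set[str], amount: float) -> str:
    """Map a model score plus reason codes to a uniform, non-negotiable action."""
    # Tier 1: very high scores stand on their own -- no analyst override.
    if score >= HIGH_RISK_CUTOFF:
        return "decline_and_refer"
    # Tier 2: moderate score combined with a risky reason code and amount.
    if score >= 650 and "R03_GEO_MISMATCH" in reason_codes and amount > 500:
        return "hold_for_review"
    return "approve"

print(apply_strategy(930, {"R01_VELOCITY"}, 42.00))       # decline_and_refer
print(apply_strategy(700, {"R03_GEO_MISMATCH"}, 812.50))  # hold_for_review
```

Because every analyst routes decisions through the same function, the strategies are applied uniformly, and analyst behavior can be monitored against the function’s expected output.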

Third, it’s very important to train the analysts to accurately mark, in their data store, transactions that are confirmed or reported as fraudulent by the organization’s customers.

All three of these strategies may seem straightforward to accomplish, but in practical terms they are not that easy without a lot of planning, time, and energy. A superior fraud detection system can be rendered almost useless if it is not used correctly, so it is extremely important to allow the right level of employee to exercise the right level of judgment. Again, individual fraud analysts should not be allowed to second-guess the efficacy of a fraud score that is the result of a sophisticated model. Similarly, planners of operations should take all practical limitations into account when devising fraud strategies (fraud scenarios). Ensuring that all of this gets done the right way, with the right emphasis, ultimately leads the organization to good, effective fraud management.

At the heart of any fraud detection system is a rule or a model that attempts to detect a behavior observed repeatedly, in various frequencies, in the past, and that classifies activity as fraud or non-fraud with a certain rank ordering. We would like to recognize this behavior scenario in advance and stop it in its tracks. What we observe from historical data and experience needs to be converted into some sort of rule that can be systematically applied to the data in real time going forward. We expect these rules or models to improve our chance of detecting aberrations in behavior and to help us distinguish between genuine customers and fraudsters in a timely manner. The goal is to stop the bleeding of cash from the account, as close to the start of the fraud episode as we can; if banks can accurately identify early indicators of on-going fraud, significant losses can be avoided. A velocity rule of the kind sketched below is a classic example of such a conversion.
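
For instance, a burst of transactions in a short window is a pattern repeatedly observed in past fraud episodes. A simple Python sketch of such a velocity rule, with an assumed window and threshold rather than parameters from any real bank, might look like this:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # assumed look-back window
MAX_TXNS_IN_WINDOW = 5           # assumed threshold from past fraud episodes

recent = defaultdict(deque)      # account_id -> recent transaction timestamps

def velocity_alert(account_id: str, ts: datetime) -> bool:
    """Flag the kind of transaction burst observed in past fraud episodes."""
    q = recent[account_id]
    q.append(ts)
    while q and ts - q[0] > WINDOW:   # drop timestamps outside the window
        q.popleft()
    return len(q) > MAX_TXNS_IN_WINDOW

# Six transactions inside ten minutes trips the rule on the sixth.
base = datetime(2016, 5, 1, 12, 0)
print([velocity_alert("acct-42", base + timedelta(minutes=i)) for i in range(6)])
```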

In statistical terms, what we define as a fraud scenario is the dependent variable, the variable we are trying to predict (or detect) using a model. We then use a few independent variables (so called even though, in real life, many of the variables used in a model have some dependency on one another) to detect fraud. Fundamentally, at this stage we are modeling the fraud scenario with these independent variables. Note that such a model typically attempts to detect fraud as opposed to predict it: we are not saying that fraud is likely to happen to this entity in the future; rather, we are trying to determine whether fraud is likely happening at the present moment, and the goal of the fraud model is to flag this as close to the time the fraud starts as possible.
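
To make the statistical framing concrete, here is a hedged sketch using scikit-learn’s logistic regression on synthetic data. Every number and feature name is invented for illustration; the point is only the shape of the exercise: independent variables in, a rank-ordering score out.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=7)
n = 20_000

# Synthetic stand-ins for independent variables: say, amount z-score,
# transactions in the past hour, distance from home. All invented.
X = rng.normal(size=(n, 3))

# Synthetic dependent variable: rare (~2%) fraud loosely driven by the
# three features -- illustrative only, not real bank data.
logit = -4.0 + 1.2 * X[:, 0] + 0.9 * X[:, 1] + 0.7 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)
model = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)

# The model emits a rank-ordering score; a cutoff turns it into detection.
scores = model.predict_proba(X_te)[:, 1]
print("mean score on fraud vs. good:",
      scores[y_te].mean().round(3), scores[~y_te].mean().round(3))
```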

Credit risk management, by contrast, tries to predict whether serious delinquency or default is likely in the future, based on the behavior the entity exhibits today. With respect to detecting fraud, building a model without accurate fraud data is akin to shooting at a range without knowing where the target is. If a model or rule is built on data that is only 75 percent accurate, the model’s accuracy and effectiveness will be suspect as well. There are two sides to this problem. Suppose we mark 25 percent of the fraudulent transactions inaccurately as non-fraud or good transactions. Not only do we miss the chance to learn from a significant portion of fraudulent behavior; by misclassifying it as non-fraud, we lead the model to assume the behavior is actually good. Misclassification thus hurts both sides of the equation, which is why accurate fraud data is fundamental to addressing the fraud problem effectively. The small simulation below makes the point concrete.
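
A small simulation of that 25 percent misclassification, again on synthetic data with invented parameters. Flipping a quarter of the true fraud labels to “good” before training typically degrades the model’s recall on the real fraud relative to clean labels, though the exact numbers vary from run to run.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(seed=11)
n = 20_000
X = rng.normal(size=(n, 3))
logit = -4.0 + 1.2 * X[:, 0] + 0.9 * X[:, 1] + 0.7 * X[:, 2]
y_true = rng.random(n) < 1 / (1 + np.exp(-logit))   # actual fraud flags

# Corrupt the training labels: mark 25% of true fraud as "good".
y_noisy = y_true.copy()
fraud_idx = np.flatnonzero(y_true)
flipped = rng.choice(fraud_idx, size=len(fraud_idx) // 4, replace=False)
y_noisy[flipped] = False

for name, labels in [("clean labels", y_true), ("25% mislabeled", y_noisy)]:
    model = LogisticRegression(class_weight="balanced").fit(X, labels)
    detected = model.predict(X)
    print(name, "-> recall on true fraud:",
          round(recall_score(y_true, detected), 3))
```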

So, in summary, collecting accurate fraud data is not the responsibility of just one set of people in any organization; the entire mind-set of the organization should be geared toward collecting, preserving, and using this valuable resource effectively. Interestingly, our LinkedIn connection concludes, the fraud data challenges faced by a number of other industries are very similar to those faced by financial institutions such as his own. Banks are probably further along in fraud management and can offer a number of pointers to other industries, but fundamentally the problem is the same everywhere. Hence, although most of his experience is bank-based, the techniques he details here apply across industries. As fraud examiners and forensic accountants, we will no doubt witness the application of analytically based fraud risk management by an ever-multiplying range of client industries.

Volunteering for Fraud

Our Chapter has a member who has just completed work on an interesting identity theft case. It seems the victim provided various items of highly specific, identifiable personal information to a local specialty retailer in exchange for a verbal agreement to provide a discount card and store credit. Whether the information was subsequently hacked or just carelessly shared, sold, or mishandled by the retailer is still unclear; what is certain is that this identical information was used by fraudsters, along with other metadata, to build two different, highly credible loan applications, one of which was approved by a financial institution.

Our member’s case is an example of the all too real risk posed by voluntarily shared information. In our desire to use services of various kinds, for efficiency, productivity, profit, or just for fun, we all seem to find ourselves agreeing to terms and conditions that we may never see or read or, if we read them, may not fully comprehend. A moment’s reflection would lead any knowledgeable auditor to the conclusion that this amounts to contractual sharing of data. The “contract” might not even refer to any direct exchange of consideration between the company and the patron, but merely to the use of the offeror’s infrastructure; the practice nonetheless yields trillions of elements of data that the infrastructure’s owner controls, aggregates, and uses for its own economic gain. While simple transactional data associated with the payment process have a definite life cycle, voluntarily supplied personal data become perpetual. The intentions behind the former are usually tacitly articulated and apply within the realm of the specific payment arrangements between the agreeing parties. In contrast, voluntarily supplied personal data are generally timeless; they can be “sliced and diced” using data mining and further masked and shared for the economic gain of the infrastructure owner, its business partners, and possibly its customers.

We can think of data volunteerism as the act of volunteering personal information when the user might not necessarily want or mean to do so. It is not so much consent to share personal data as lack of dissent in sharing it. Passivity or inertia on the part of the sharer plays an important role in data volunteerism’s pull: the immediate perceived benefits of the offered services outweigh anything the user only vaguely understands as the costs of accepting the provider’s terms and conditions.

Before clicking the “I agree” button on an agreement of use, how often have any of us paused and analyzed the agreement’s contents? Such agreements are generally long and filled with legalese, and we feel we are wasting time in getting to the services provided by the company or app that just popped the agreement onto our screen. According to the ACFE, under the prospect theory of decision-making behavior, losses are weighted more heavily than gains; and here we are, delaying the immediate gratification of using some cool phone service. And so we all become vulnerable to letting an apparently harmless verbal or written agreement stand between us and something we want to do right now. As in the case of our member’s client, people willingly share personal information when nudged to do so by a sales clerk or by a new app on their phone. The perceived immediate benefits seem to outweigh any dimly perceived costs of volunteering the information.

All of this has broad implications for fraud examination and for law enforcement. Every non-cash payment transaction involves the exchange of personal identifying information on some level. Bank checks, written contracts, account passwords, phone numbers, and a host of other identifying information are both the lifeblood of the financial system and the continuous targets of every type of thief. Nothing financial happens until personal data are exchanged, and the more aggregated elements of data fraudsters have about anyone at their command, the easier their job becomes.

As fraud examiners we should strive to make our clients aware of the general ground rules for the sharing of personal data promulgated by the ACFE and others:

  1. The giver must have knowingly consented to the collection, use or disclosure of personal information.
  2. Consent must be obtained in a meaningful way, generally requiring that organizations communicate the purposes for collection, so that the giver will reasonably know and understand how the information will be collected, used or disclosed.
  3. Organizations must create a higher threshold for consent by contemplating different forms of consent depending on the nature of information and its sensitivity.
  4. In a giver-receiver relationship, consent is dynamic and ongoing. The standing implication is that the giver grants the receiver the privilege of using the information, and that the privilege lasts only as long as the giver’s consent is not withdrawn.
  5. The receiver has a duty to adequately safeguard the personal data entrusted to it.

A legal definition of consent is hard to find. The common law context suggests that consent is a “freely given agreement,” and an agreement, contractual or by choice, implies a particular aim or object. While the force of laws and regulations is clearly necessary, in the end the behavior of the user matters just as much. Concepts and paradigms such as bounded rationality and prospect theory point to the vulnerability of human users in exercising consent; if that is where the failure occurs, privacy problems will only propagate, not improve. Finally, remember that privacy solutions embedded in technology to empower users to protect their privacy are only as good as the motivation, knowledge, and determination of the user.

As fraud examiners and assurance professionals we have to face the fact that not all our user-clients are equally technology savvy; not all users consider it worth their time to navigate privacy monitors in a retail store or an on-line app in order to feel safe. And all of us, in the end, are creatures of bounded rationality.

Costs of cyber crime in 2015 were an estimated US $1.52 billion in the US alone and US $221 billion globally. These criminals hit a bonanza if they can successfully perpetrate a data breach, breaking into a system and/or database to steal personally identifiable information (e.g., addresses, social security numbers, financial account numbers) or, better yet, data on credit/debit cards.

Data volunteerism nudges people to share more and more personal information, resulting in a huge pool of data across companies and institutions. Where hard surveillance, such as a camera watching over a parking lot, is concretely visible, soft surveillance remains buried in the technology, free to work on available data and metadata. As this use of data by app providers and others grows wider and stronger and related frauds proliferate, the public could lose trust in those providers, and the loss of trust would translate into lost sales. The best way for CFEs to address these issues for all stakeholders is through client education on the ACFE’s ground rules for self-protection in the sharing of personal information.

A Question of Privacy

Our Chapter recently got a question from a reader of this blog about data privacy; specifically, she asked about the Payment Card Industry Data Security Standard (PCI DSS) and whether a client’s compliance with that standard’s requirements would provide reasonable assurance that the client organization’s customer data privacy controls and procedures are adequate. The question came up in the course of a credit card fraud examination in which our reader’s small CPA firm was involved. A very good question indeed! The short answer, in my opinion, is that although PCI DSS compliance audits cover some aspects of data privacy, because they are limited to credit cards they would not, in themselves, be sufficient to convince a jury that data privacy is adequately protected throughout a whole organization. The question is interesting because of its bearing on the fraud risk assessments CFEs conduct. It is important because CFEs should understand the scope (and limitations) of PCI DSS compliance activities within their client organizations and communicate the differences when reviewing corporate-wide data privacy for fraud prevention purposes. This will also prevent potential misunderstandings over duplication of review effort with business process owners and fraud examination clients.

Given all the IT breaches and intrusions happening daily, consumers are rightly cynical these days about businesses’ ability to protect their personal data, and they report that they are much more willing to do business with companies that have independently verified privacy policies and procedures. In-depth privacy fraud risk assessments can help organizations assess their preparedness for the outside review that inevitably follows a major customer data privacy breach. As I’m sure all the readers of this blog know, data privacy generally applies to information that can be associated with a specific individual or that has identifying characteristics that might be combined with other information to indicate a specific person. Such personally identifiable information (PII) is defined as any piece of data that can be used to uniquely identify, contact, or locate a single person. Information can be considered private without being personally identifiable; sensitive personal data also includes individual preferences, confidential financial or health information, and other personal information. An assessment of data privacy fraud risk encompasses the policy, controls, and procedures in place to protect PII.

In planning a fraud risk assessment of data privacy, CFEs should evaluate or consider, based on risk:

–The consumer and employee PII that the client organization collects, uses, retains, discloses, and discards.
–Privacy contract requirements and risk liabilities for all outsourcing partners, vendors, contractors, and other third parties involving sharing and processing of the organization’s consumer and employee data.
–Compliance with privacy laws and regulations impacting the organization’s specific business and industry.
–Previous privacy breaches within the organization and its third-party service providers, and reported breaches for similar organizations noted by clearinghouses like Dun & Bradstreet and in the industry’s trade press.
–The CFE should also consult with the client’s corporate legal department before undertaking the review to determine whether all or part of the assessment procedure should be performed at legal’s direction and protected as “attorney-client privileged” work products.

The next step in a privacy fraud risk assessment is selecting a framework for the review. Two frameworks to consider are the American Institute of Certified Public Accountants (AICPA) Privacy Framework and The IIA’s Global Technology Audit Guide: Managing and Auditing Privacy Risks. For ACFE training purposes, one CFE working for a well-known on-line retailer reported organizing her fraud assessment report around the AICPA framework, chosen because it would be easily understood and supported by management, the external auditors, and the audit committee.

The AICPA’s ten-component framework was useful in developing standards for the organization as well as an assessment framework:

–Management. The organization defines, documents, communicates, and assigns accountability for its privacy policies and procedures.
–Notice. The organization provides notice about its privacy policies and procedures and identifies the purposes for which PII is collected, used, retained, and disclosed.
–Choice and Consent. The organization describes the choices available to the individual customer and obtains implicit or explicit consent with respect to the collection, use, and disclosure of PII.
–Collection. The organization collects PII only for the purposes identified in the Notice.
–Use, Retention, and Disposal. The organization limits the use of PII to the purposes identified in the Notice and for which the individual customer has provided implicit or explicit consent. The organization retains these data for only as long as necessary to fulfill the stated purposes or as required by laws or regulations, and thereafter disposes of such information appropriately.
–Access. The organization provides individual customers with access to their PII for review and update.
–Disclosure to Third Parties. The organization discloses PII to third parties only for the purposes identified in the Notice and with the implicit or explicit consent of the individual.
–Security for Privacy. The organization protects PII against unauthorized physical and logical access.
–Quality. The organization maintains accurate, complete, and relevant PII for the purposes identified in the Notice.
–Monitoring and Enforcement. The organization monitors compliance with its privacy policies and procedures and has procedures to address privacy complaints and disputes.

Using the detailed assessment procedures in the framework, the CFE, working with internal client staff, developed specific testing procedures for each component, which were performed over a two-month period. Procedures included traditional walkthroughs of processes, interviews with individuals responsible for IT security, technical testing of IT security and infrastructure controls, and review of physical storage facilities for documents containing PII. Technical scanning, performed independently by the retailer’s IT staff, identified PII on servers and on some individual personal computers erroneously excluded from compliance monitoring. Facilitated sessions between the CFE and the individuals responsible for PII helped identify problem areas. The fraud risk assessment dramatically increased awareness of data privacy and identified several opportunities to strengthen ownership, accountability, controls, procedures, and training. As a result, the retailer implemented a formal data classification scheme and increased IT security controls; several of the vulnerabilities and required enhancements involved controls over hard-copy records containing PII. Management reacted positively to the overall report and requested that the CFE perform recurring reviews of vulnerability to fraudulent privacy breaches.
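
To give a sense of what such technical scanning involves, here is a minimal, assumption-laden Python sketch of a pattern-based PII scanner. Real scanners are far more robust, adding checksum validation (e.g., Luhn for card numbers), contextual keywords, and crawling of file shares; the patterns below are illustrative only.

```python
import re

# Deliberately minimal patterns; production tools go much further.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_text(text: str) -> dict[str, int]:
    """Count candidate PII hits, by type, in a blob of text."""
    return {name: len(rx.findall(text)) for name, rx in PII_PATTERNS.items()}

sample = "Contact jane.doe@example.com, SSN 123-45-6789, card 4111 1111 1111 1111"
print(scan_text(sample))   # {'ssn': 1, 'credit_card': 1, 'email': 1}
```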

Fraud risk assessments of client privacy programs can help make the business case within any organization for focusing on privacy now, and for promoting organizational awareness of privacy issues and threats. This is one of the most significant opportunities for fraud examiners: helping to assess risks and identify the potential gaps that, left unmanaged, are daily proving so devastating.