Breaking News: USA and EU Reach Data Transfer Agreement to Replace Safe Harbor

Today, the European Commission announced that the United States and the European Union reached a trans-Atlantic data transfer agreement called “EU-US Privacy Shield” to replace Safe Harbor. While the written text of Privacy Shield has not yet been released, the agreement introduces a new regime for the trans-Atlantic transfer of Europeans’ data. We can expect commentary on the new Privacy Shield from the Article 29 Working Party as soon as Wednesday (February 3), when Privacy Shield will be presented to European Data Protection Authorities.

Key provisions include:

1. Binding assurances by the United States to the EU regarding clear limits, safeguards and oversight for national security-related access to Europeans’ data transferred under Privacy Shield.

2. Robust obligations on US companies importing Europeans’ personal data. The agreement appears to focus on how data is collected and processed and on guaranteeing individual rights, including requiring US companies to publish their policies for handling Europeans’ data so that those policies are enforceable by the FTC. The agreement also imposes more stringent obligations on US companies handling employee data from Europe by requiring compliance with decisions of European Data Protection Authorities.

3. Rights of redress for Europeans who consider their data to have been misused under Privacy Shield, including deadlines for companies to respond to complaints by Europeans, the ability of Data Protection Authorities to refer complaints to the U.S. Department of Commerce and the FTC, and free alternative dispute resolution for Europeans.

The US must complete several outstanding tasks to implement Privacy Shield effectively. For example, it will have to create a new ombudsperson position to oversee national security-related access to Europeans’ transferred data. Reportedly, the Department of Commerce also announced that it will hold briefings regarding the new obligations on US companies, which differ from those under Safe Harbor.

The full press release is available here.

FTC’s First PrivacyCon Event Reveals Cutting Edge Research in Key Data Privacy Issues and Hot Topics on the FTC’s Radar

The FTC held its first PrivacyCon event on January 14, 2016, bringing together scholars, researchers, and the FTC to discuss the latest privacy and data security research in five topic areas: 1) the state of online privacy, 2) consumers’ privacy expectations, 3) big data and algorithms, 4) the economics of privacy and data security, and 5) security and usability.

The research findings and discussions at PrivacyCon will shape the FTC’s enforcement objectives. The FTC declared that its enforcement policy must be guided by research and data, and it cited examples of research directly affecting its enforcement tactics, referencing its allegations against Oracle that the company failed to disclose that older, insecure versions of Java would not be removed as part of the software update process. The FTC’s action against Oracle grew out of security researchers’ discovery of malware exploits targeting older versions of Java, which spurred the FTC to investigate. By paying attention to current security research, companies can gain foresight into where the FTC will focus its enforcement efforts.

1. Online Privacy
Online tracking presents a major concern for online privacy. Some of the research presented shows tracking is increasing sharply: twice as many cookies were detected recently compared to 2012. With the widespread use of HTML5, researchers have observed an increase in both the amount of tracking and the percentage of cookies set by third-party hosts.
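
To make the first-party/third-party distinction concrete, here is a minimal sketch of how a measurement study might classify observed cookies by comparing each cookie’s host against the visited page’s domain. The crawl-record format is an assumption, and real studies use a public-suffix list rather than this naive two-label heuristic.

```python
# Minimal sketch of classifying observed cookies as first- or
# third-party by comparing the cookie host to the visited page's
# registrable domain. The sample crawl records are hypothetical;
# real measurement studies use a public-suffix list instead of
# this naive "last two labels" approximation.
def registrable_domain(host):
    """Naive approximation: last two labels of the hostname."""
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

def classify_cookies(crawl_records):
    """crawl_records: iterable of (page_host, cookie_host) pairs."""
    counts = {"first_party": 0, "third_party": 0}
    for page_host, cookie_host in crawl_records:
        same = registrable_domain(page_host) == registrable_domain(cookie_host)
        counts["first_party" if same else "third_party"] += 1
    return counts

print(classify_cookies([
    ("www.example.com", "example.com"),        # first-party
    ("www.example.com", "tracker.adnet.com"),  # third-party
]))
```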

Lack of transparency in web tracking is another major concern. Presenters advocated transparency as an effective mechanism to return control of web tracking to users and publishers. The research presented evaluated the effectiveness of privacy tools and browser protections such as ad blocking, Do Not Track, and third-party cookie blocking.

Researchers presented a critical assessment of the “notice and choice” model, which is based on the concept that a rational consumer negotiates for privacy protection by reading privacy policies and selecting services consistent with his or her preferences. Research presented at PrivacyCon shows individuals in the marketplace operate with a knowledge gap: consumers are under the (often inaccurate) impression that privacy rights are already enshrined in privacy policies and protected by law, so they perceive no need to bargain for privacy in the marketplace. In other words, consumers think they are already protected, so they see no value in reading privacy policies to learn about protections.

Other researchers argued that what looks like tradeoff behavior is actually Americans’ resignation about marketers’ use of their data. Americans want control over their data but believe they will never achieve it.

Researchers proposed that the FTC re-evaluate the use of privacy policies, given that consumers treat the mere existence of a privacy policy as a seal of protection. There was also a proposal for a marketing academic consultancy program for privacy to help the FTC better understand the limits of the “notice and choice” model.

We can expect the FTC to focus its attention on online tracking and the tools used to A) disclose this tracking and/or B) obtain consumer consent to this tracking. The FTC may take a critical view of the “notice and choice” model and could eventually endorse a new scheme for how consumers are informed about what is done with their data and how consumers provide consent. Perhaps we could even see the FTC impose limits on online tracking regardless of whether consumers consent to such tracking, shifting away entirely from the principles of “notice and choice.” That would deal a huge blow to data aggregators, given the substantial value derived from users’ conduct on the web.

2. Consumer Privacy Expectations
Related to online privacy, research presented showed that ordinary consumers expect websites to follow certain data practices that the sites do not actually follow. Many consumers do not read privacy policies because they are too long, so one proposal was to extract and highlight data practices that do not match user expectations in order to give consumers more effective notice of the actual use of their data.

Additional proposals included that the FTC provide guidelines to those who collect and/or use consumer data, keyed to the expectations of the “average consumer in the digital context” for each type of data collected (photos, location, click-stream data), while also requiring policies that are concise, readable, and approachable for consumers.

On a more specialized topic, genetic data presents specific problems because it is a unique identifier that cannot be fully de-identified. Direct-to-consumer genetic tests create concern because each test is a commercial transaction governed by a contract and the privacy policy of the company providing the test. The governing contract may permit sharing or selling of sequenced genetic data or other personal data, which is particularly problematic given that consumers often do not read contracts. Perhaps more concerning, online contracts often imply agreement with their terms of use based on the user’s access to or viewing of the website, without requiring explicit consent, and clauses often allow unilateral alteration of terms without notice to the consumer.

We can expect to see the FTC focus on endorsing or requiring better broadcasting to consumers of the way their data is being used. We may also see the FTC target certain practices within specialized industries or transactions as deceptive or unfair given the type of data at stake.

3. Big Data and Algorithms
Another problem addressed by researchers is that ad-tracking source code is opaque and is not shared with them. Despite lacking the source code, however, they were able to test how behavioral advertising programs target individuals and found gender-based discrimination in targeted advertising. The researchers called for accountability and corrective measures but did not propose a specific mechanism for correcting discrimination in targeted advertising.

Another study showed that data brokers can tell when a consumer is sick or depressed and then sell this information, and that credit companies are considering using Facebook data to make decisions about loans. Researchers proposed tools to enhance transparency and oversight, increasing users’ awareness of, and society’s oversight over, web services’ use of personal data. Examples of these tools, which reveal targeting, include X-ray, Sunlight, Data Observatory, and Hubble. The Sunlight tool detects what personal data is used for targeting and personalization by revealing which data triggers a specific type of output. Researchers also reported that testing Gmail with an ad observatory revealed targeting that violated Google’s own privacy statements, in which Google promised not to target ads based on sensitive information like race, religion, sexual orientation, health, or sensitive financial information.
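
The core idea behind a tool like Sunlight is differential testing: hold a test profile fixed, vary one input at a time, and observe which input changes the output (here, the ads served). Below is a minimal sketch of that methodology; the fetch_ads() helper and its simulated targeting are assumptions for illustration, not the actual tool’s code.

```python
# Minimal sketch of the differential-testing idea behind tools like
# Sunlight: vary one profile attribute across otherwise identical
# test profiles and compare the ads served. fetch_ads() is a
# hypothetical stand-in for driving real test accounts; here it
# simulates a service that targets on the "interest" field.
import random
from collections import Counter

def fetch_ads(profile):
    """Hypothetical stand-in for loading pages as a test account with
    the given profile and recording the ads served (simulated)."""
    if profile.get("interest") == "fitness":
        return [random.choice(["gym_ad", "shoe_ad", "generic_ad"])]
    return [random.choice(["generic_ad", "bank_ad"])]

def ad_distributions(base_profile, attribute, values, trials=200):
    """Compare ad distributions across profiles that differ only in
    `attribute`; diverging distributions suggest targeting on it."""
    results = {}
    for value in values:
        profile = dict(base_profile, **{attribute: value})
        counts = Counter()
        for _ in range(trials):
            counts.update(fetch_ads(profile))
        results[value] = counts
    return results

print(ad_distributions({"age": 30}, "interest", ["fitness", "cooking"]))
```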

We may see the FTC use tools developed by researchers, like Sunlight, to reveal what data is being used to target advertising to consumers. The FTC could foreseeably use the results generated by such tools in enforcement actions against companies that promised not to target advertising based on specified categories. We could also see the FTC go further by bringing enforcement actions alleging unfair practices based on advertising targeted on categories traditionally protected by anti-discrimination laws, regardless of whether a company promises in its privacy policies not to target ads based on those categories.

4. Economics of Privacy and Security
PrivacyCon included a presentation on the use of white hats (independent computer security assessors) to test for security vulnerabilities of a given website or company. Researchers assessed two models of using white hats. One is the company-sponsored model, where the company offers a “bounty” to an external actor who identifies vulnerabilities. The other is a white-hat-initiated model, where the white hats operate independently without invitation from a company. The researchers concluded it is still uncertain which model is more effective.

Other research looked at who benefits from targeted advertising, concluding that the consumer benefits only in certain circumstances while the advertiser and website publisher benefit more often, though in varying proportion. In sum, sharing of consumer information tends to benefit the platform and the advertiser.

Other research focused on informed consent and genetic testing. This research found that when individuals are given more control over how their private information is shared, willingness to participate in genetic testing increases. However, the research also found that informed consent requirements deter patients and hospitals from genetic testing, and that data usage policies had little effect on an individual’s choice whether to participate.

Another key focus was the cost of data breaches and privacy violations, in an effort to understand costs, impacts, and a company’s incentives to invest in cybersecurity controls. Though the research confirmed that data breaches and privacy litigation are increasing, the cost of most data breaches was less than $200,000, meaning annual losses from cyber events were relatively small compared to other categories of potential loss such as loss of intellectual property, cybercrime, insurance fraud, healthcare fraud, and retail shrinkage. The research suggests that, although data breaches and privacy litigation are increasing, the incentives to invest in cybersecurity remain weak because, for most companies, the costs of data breaches and privacy litigation are not as high as often believed.

The FTC may sharply increase the penalties sought in its enforcement actions in an effort to incentivize implementation of more effective (and more costly) cybersecurity measures given that the marketplace is not creating this result on its own.

5. Security and Usability
Smart devices and unpatched Internet of Things (IoT) devices were another topic of research. IoT device network traffic may leak user information, identify how a device is used, and identify user activity and behavior. The researchers reviewed several smart devices with security problems. They observed that it is difficult to enforce security standards because of the multiplicity of IoT manufacturers, low-capability devices, the use of non-standard protocols and ports, and the difficulty of maintaining and patching devices with a limited workforce and/or expertise. Researchers proposed solving these problems at the network level.
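
As a rough illustration of the leakage the researchers describe, the sketch below groups one device’s packets into activity bursts using only metadata (timestamps and packet sizes), which can reveal when and how a device is used even if payloads are encrypted. The scapy library, the capture file, and the device MAC address are assumptions for illustration.

```python
# Minimal sketch of inferring IoT device activity from traffic
# metadata alone (no payload inspection). Assumes a local capture
# file and the scapy library; the device MAC address and the idle
# gap threshold are illustrative values.
from scapy.all import rdpcap
from scapy.layers.l2 import Ether

DEVICE_MAC = "aa:bb:cc:dd:ee:ff"  # hypothetical smart-device MAC

def activity_bursts(pcap_path, gap_seconds=5.0):
    """Group a device's packets into bursts separated by idle gaps;
    burst timing and size often reveal when a device is in use."""
    packets = [p for p in rdpcap(pcap_path)
               if Ether in p and p[Ether].src == DEVICE_MAC]
    bursts, current = [], []
    for pkt in packets:
        if current and float(pkt.time) - float(current[-1].time) > gap_seconds:
            bursts.append(current)
            current = []
        current.append(pkt)
    if current:
        bursts.append(current)
    # Each burst summarized as (start time, total bytes sent).
    return [(float(b[0].time), sum(len(p) for p in b)) for b in bursts]

# bursts = activity_bursts("home.pcap")  # assumes a local capture file
```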

Another presentation covered efforts, so far only intermittently successful, to protect users from malicious and snooping mobile ads. According to the research presented, the standard web isolation policies used by browsers no longer prevent leakage of sensitive information in the mobile environment. The researcher shared his results with Google’s AdMob, which corrected the flaw, but other advertisers did not correct the vulnerabilities the researcher identified. The researcher hoped that sharing this information would spur other advertisers to correct the flaw.

The long-recognized problems with privacy policies (their length, complexity, difficulty to understand, and the fact that they are often not read) have yielded possible solutions like layered privacy notices, privacy nutrition labels, privacy icons, and machine-readable policies such as “do not track” (DNT), but industry has often pushed back against adopting those measures. Proposals to combat these problems more effectively include semi-automatic analysis of privacy policies to extract key data practices and convey them to users. Another researcher proposed a personal privacy assistant that can selectively inform users about privacy practices they care about but may not expect. The assistant would learn a user’s personalized privacy preferences by asking the user several questions, and would then recommend adjustments to the user’s settings in mobile apps to align with those preferences. The researcher also discussed how this same concept could apply to IoT devices.
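
As a rough sketch of what the semi-automatic analysis proposal might look like, the example below flags policy sentences that appear to describe key data practices using simple keyword patterns. Research prototypes rely on trained NLP models and annotated corpora; the categories and patterns here are illustrative assumptions only.

```python
# Illustrative sketch of semi-automatic privacy-policy analysis:
# flag sentences that appear to describe key data practices so
# they can be surfaced to users who will not read the full policy.
# The practice categories and patterns are illustrative only.
import re

PRACTICE_PATTERNS = {
    "third_party_sharing": r"\b(share|sell|disclose)\b.*\bthird[- ]part(y|ies)\b",
    "location_collection": r"\b(collect|use|track)\b.*\blocation\b",
    "data_retention":      r"\b(retain|store|keep)\b.*\b(data|information)\b",
}

def extract_practices(policy_text):
    """Return sentences that appear to describe each data practice."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    hits = {name: [] for name in PRACTICE_PATTERNS}
    for sentence in sentences:
        for name, pattern in PRACTICE_PATTERNS.items():
            if re.search(pattern, sentence, re.IGNORECASE):
                hits[name].append(sentence.strip())
    return hits

print(extract_practices(
    "We may share your information with third parties. "
    "We collect precise location data to improve our services."))
```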

The FTC already flagged its interest in IoT in its January 2015 report on the topic, emphasizing concerns with the security of IoT devices and highlighting the special problems of providing notice of privacy policies for IoT devices. PrivacyCon demonstrated that the FTC remains concerned with the security and privacy issues surrounding IoT devices. We may also see the FTC encourage semi-automated tools that help consumers implement their privacy preferences. The FTC may elect to increase its focus on companies and websites that fail to correct known flaws leaking sensitive information.

7th Circuit Mulls Whether Dismissal of P.F. Chang’s Breach Suit Was Premature

On January 13, 2016, during oral arguments, a 7th Circuit panel appeared to signal that it felt the lower court moved too quickly to dismiss a putative class action over a data breach at a number of P.F. Chang’s restaurants.

At issue is whether plaintiffs Lucas Kosner and John Lewert can sue for damages over the possibility their data was included in the breach. Kosner and Lewert each ate at one of the chain’s franchises before the company announced it had suffered a breach, but neither ate at one of the 33 affected restaurants P.F. Chang’s China Bistro Inc. identified, according to the suit.

Kosner and Lewert filed separate putative class actions in June 2014, after the restaurant chain announced that it had suffered a data breach. The plaintiffs contended in district court that they had incurred several types of damages from the security breach, including the increased risk of identity theft as well as the time and expense they allegedly incurred to monitor their accounts for potential identity theft. Plaintiffs argued that by accepting their non-public information (including the magnetic strip data and debit card PINs), the restaurant chain had entered into an implied contract requiring it to reasonably safeguard customers’ information. P.F. Chang’s countered that plaintiffs had not eaten at an affected restaurant, that they were at no risk of identity theft because their card data had not been stolen, and even if it had been, information from a credit card or a debit card does not “constitute one’s ‘identity.’” The restaurant chain’s attorneys further argued that affected customers would eventually be reimbursed under federal statutes for unauthorized debit and credit card charges, which plaintiffs argued was an inappropriate argument to raise on consideration of a motion to dismiss, where plaintiffs’ factual allegations must be accepted as true. U.S. District Judge John Darrah dismissed the complaints in December 2014, ruling that Kosner and Lewert failed to allege any harm and their speculation of future harm — if their identities were to be stolen after the breach — was not an actual injury sufficient to confer standing.

On appeal, P.F. Chang’s China Bistro Inc. asked the 7th Circuit to uphold a lower court’s dismissal of a proposed class action stemming from a data breach at the restaurant chain, saying that the plaintiffs were not damaged by the attacks because their debit card numbers had not been stolen.

At oral argument, Chief Circuit Judge Diane Wood and Circuit Judge David Hamilton suggested that while the possibility of a breach alone was not enough, the plaintiffs could potentially bring claims if their data could be identified among the information disclosed in the breach. This scenario would be consistent with the circuit’s prior holding in Remijas v. Neiman Marcus that costs associated with protecting one’s identity following a breach could constitute injury and therefore confer standing. According to Judge Hamilton, the plaintiffs’ suits, which claim they pre-emptively spent time and money monitoring their accounts and canceling debit cards after they were alerted to a potential breach at the restaurant chain, are not currently enough for standing.

“Is it a reasonable fear if P.F. Chang’s says 33 stores [were affected] and you didn’t dine at one of the 33 stores?” P.F. Chang’s attorney asked the panel. In response, plaintiffs’ counsel pointed to plaintiffs’ expenditures for credit monitoring services as evidence of actual harm, echoing a comment in the Remijas decision. “It sounds like you are describing modern life,” Judge Hamilton told plaintiffs’ counsel. “These things happen. It’s nice to find someone to pay for those modest expenses.” However, Judge Wood acknowledged that discovery could have revealed evidence that the breach had actually affected Kosner and Lewert because it was possible that the restaurants’ computers were interconnected nationwide and therefore data from the unaffected restaurants could have been disclosed as well.

It remains to be seen whether the 7th Circuit affirms the district court’s dismissal. Like the early dismissals of data breach claims against large retailers such as Michaels and eBay (and potentially Barnes & Noble), the P.F. Chang’s dismissal is part of a growing trend of requiring plaintiffs to allege actual, demonstrable injury, in the form of credit card or identity fraud or harm directly connected to the alleged breach, at the very outset of the litigation. However, the 7th Circuit may also conclude that early dismissal was premature and permit the plaintiffs to investigate through discovery whether the scope of the breach was greater than initially reported, in which case their pre-emptive credit monitoring may have been justified after all.

Cybersecurity Law Passed After Late Addition to Budget Bill

Late last month, President Barack Obama signed into law a budget bill passed by Congress. Included in the budget is a section entitled the “Cybersecurity Act of 2015,” which contains most of the language of the Cybersecurity Information Sharing Act, a bill introduced by Senator Richard Burr (R-NC) intended to enhance coordination between government agencies and private entities in identifying and preventing cybersecurity threats.

The law encourages corporations to share information with the Federal government on cybersecurity “threats” and “defensive measures.” A cybersecurity “threat” is any action on a corporate network that may result in an unauthorized attempt to adversely impact the network, and a “defensive measure” is any method of detecting, preventing, or mitigating a known or potential cybersecurity threat or other vulnerability.

Under the Act, when a corporation shares information about a cybersecurity threat with another corporation, the receiving entity is required to implement protective measures related to the shared threat. The sharing entity, for its part, must remove any individual’s private personal information “not directly related” to a cybersecurity threat or defensive measure. Thus, the burden falls on the corporation holding the private information to determine whether, and to what extent, the personal information is related.
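
As an illustration of that burden, a sharing entity might scrub personal fields from a threat indicator before submission along these lines. This is a minimal sketch: the record schema and the list of unrelated fields are hypothetical, and what counts as “directly related” is a legal judgment the code cannot make.

```python
# Illustrative sketch of scrubbing personal information from a threat
# indicator before sharing it, per the Act's requirement to remove
# personal data "not directly related" to the threat. The record
# schema and the set of unrelated fields are hypothetical.
UNRELATED_PII_FIELDS = {"customer_name", "email", "ssn", "home_address"}

def scrub_indicator(indicator: dict) -> dict:
    """Return a copy of the indicator with personal fields removed,
    keeping only technical details (hashes, IPs, signatures, etc.)."""
    return {k: v for k, v in indicator.items()
            if k not in UNRELATED_PII_FIELDS}

shared = scrub_indicator({
    "malware_hash": "d41d8cd98f00b204e9800998ecf8427e",
    "source_ip": "203.0.113.7",
    "customer_name": "Jane Doe",   # removed before sharing
    "email": "jane@example.com",   # removed before sharing
})
print(shared)
```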

When corporations share cybersecurity-related information with the government, the shared information is precluded from disclosure to the public. The only Federal entity authorized under the Act as a gateway for the submission of cybersecurity threats and defensive measures is the Department of Homeland Security. However, the law includes a section enabling the President to designate other Federal entities, including the National Security Agency, authorizing them to enact procedures for receiving such information. Importantly, the law deems a disclosure to any approved Federal entity to be voluntary, expressly exempting disclosures from requests under the Freedom of Information Act.

Corporations sharing information under the Act, whether with other corporations or the government, are afforded other protections so long as they comply with the relevant procedures. Antitrust violations are waived, and corporations retain legal rights and privileges, such as trade secret and other proprietary information protections. Further, the law states that sharing by a corporation with any other entity does not create a right or benefit. Most importantly, the law precludes any civil or criminal liability against the sharing entity for monitoring or sharing consistent with the procedures under the law.

Legislators supporting the law view it as a method for the rapid dispersal of awareness and knowledge of cybersecurity threats and defenses between corporations and the government, stressing its voluntary nature. However, some tech companies, security experts, academics, and government officials believe that recent changes could enable citizens’ private information disclosed by corporations to be used as a surveillance tool. Specifically, they view the President’s ability to create procedures for corporations to provide information to Federal entities other than the Department of Homeland Security, including the National Security Agency and other law enforcement agencies, as leaving open further opportunity for privacy violations. Additionally, they argue that allowing corporations to judge for themselves whether private information is related to a threat or defensive measure will lead to a natural tendency to over-include such information in order to minimize the risk of violating the law.

Within 60 days of enactment, interim guidelines and procedures are to be disseminated by the Attorney General and the Secretary of Homeland Security, with final versions to be completed within 180 days of enactment. Given the law’s almost clandestine inclusion in the budget bill and its subsequent passage, corporations with information on, or subject to, cybersecurity threats, or developing and implementing defensive measures, should initiate policies and procedures to maintain compliance with the law and should monitor for the dissemination of future institutional procedures.

No Harm, No Foul – FTC Claims Of Deficient Security Practices Dismissed Based on Insufficient Evidence of Actual Harm

Is it reckless for a bank to leave its vault unlocked? If you accept the reasoning of Federal Trade Commission (FTC) Chief Administrative Law Judge D. Michael Chappell, only if someone actually breaks in and steals something. On this premise, the FTC’s unfair data security practices case against LabMD, a Georgia-based clinical testing laboratory, was dismissed because the Commission failed to meet its burden of proving that the healthcare provider’s allegedly deficient security practices caused, or were likely to cause, substantial consumer injury.

BACKGROUND
LabMD was a privately held Georgia corporation formed by Michael J. Daugherty in 1996. Its primary business consisted of providing tissue sample analysis by pathologists specializing in prostate or bladder cancer. Urologists would send LabMD specimens for analysis from patients throughout the country, by which LabMD came into the possession of protected health information (PHI) belonging to thousands of patients.

In February 2008, Tiversa, a security firm based in Pittsburgh, Pennsylvania, discovered that a LabMD insurance report was being shared openly from a LabMD billing computer on the LimeWire peer-to-peer network. The report (referred to in the matter as the “1718 File”) contained PHI and personally identifiable information (PII) for approximately 9300 patients, including their names, dates of birth, Social Security numbers, CPT codes for laboratory tests conducted, and, in some cases, health insurance company names, addresses, and policy numbers. After discovering that the 1718 File contained patient PHI, Tiversa used the “browse host” function of LimeWire to obtain a list of all other files being shared by the LabMD billing computer. The 1718 File was among 950 shared files in the “My Documents” directory on the LabMD computer, most of which were music and video files. However, eighteen documents were also being shared at the same time, three of which also contained patient PHI.

Tiversa contacted LabMD in May 2008, disclosed its download of the 1718 File, and offered its remediation services. In July 2008, LabMD rejected Tiversa’s proposal, removed the file-sharing software, and re-assessed its network’s security (although the FTC later claimed that these remediation efforts were insufficient). Meanwhile, the 1718 File sat dormant until 2009, when the FTC served a Civil Investigative Demand (CID) on Tiversa’s affiliate, The Privacy Institute. Tiversa responded to the CID by producing a spreadsheet of companies that Tiversa claimed had exposed the personal information of 100 or more individuals. Among the names provided was LabMD, along with a copy of the 1718 File. This disclosure led the FTC to open an investigation of LabMD, which ultimately resulted in the action against the company for failing to implement reasonable security, an alleged “unfair” practice.

It is at this point in the narrative that the parties’ allegations (and consequently Judge Chappell’s Initial Decision) become mired in conspiracy theories. After the FTC began its action against LabMD, Richard Wallace, a forensics analyst hired by Tiversa in July 2007 who originally found the 1718 File, alleged that Tiversa had adopted a business practice of exaggerating how widely erroneously shared files had spread across peer-to-peer networks and, in some cases, of intentionally misrepresenting that files had been discovered at IP addresses associated with known or suspected identity thieves. Tiversa countered that Wallace’s claims were false and were motivated by his termination for cause during the pendency of the case against LabMD. Nevertheless, the claims resulted in a United States House Committee on Oversight and Government Reform investigation into Tiversa and its involvement with governmental entities.

Judge Chappell’s Initial Decision goes into great detail about the allegations of unethical practices by Tiversa, from which he concluded that Wallace (a witness for LabMD) was more credible than Robert Boback (CEO of Tiversa and a witness for the FTC). This finding had a profound effect on the outcome of the case, with Judge Chappell wholly discounting the testimony of one of the FTC’s consumer injury experts to the extent his conclusions were based in part on the testimony of Tiversa’s CEO. Judge Chappell also challenged the expert opinions of the FTC’s other consumer injury expert, stating that although he “did not expressly rely on the discredited and unreliable testimony from Tiversa’s CEO as to the ‘spread’ of the 1718 File for his opinions on the likelihood of medical identity theft, this evidence was clearly considered … and it cannot be assumed that [the] opinions were not influenced by his review of [the CEO’s] testimony.” Initial Decision, p. 67, footnote 31. There was also a potential red herring injected into the case: 40 LabMD paper “day sheets,” 9 patient checks, and 1 money order discovered in the possession of identity thieves in Sacramento, California in 2012, which led to a dispute over how the records had traveled from Georgia to California, with the FTC claiming that they must have been downloaded from LabMD’s insecure network but lacking evidence to prove this theory. Lost in this swirl of accusations was the crux of the case: that LabMD had openly shared a file containing the PHI of approximately 9300 patients on an open peer-to-peer network, which the FTC alleged was an “unfair” practice.

THE ANALYSIS
The FTC’s authority relating to data security derives from Section 5(n) of the Federal Trade Commission Act (“FTC Act”), which states that the Commission may declare any act or practice “unfair” that “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” The FTC’s complaint alleged that LabMD failed to provide reasonable security because the healthcare provider:

  • did not develop, implement, or maintain a comprehensive information security program to protect consumers’ personal information;
  • did not use readily available measures to identify commonly known or reasonably foreseeable security risks and vulnerabilities on its networks;
  • did not use adequate measures to prevent employees from accessing personal information not needed to perform their jobs;
  • did not adequately train employees to safeguard personal information;
  • did not require employees, or other users with remote access to the networks, to use common authentication-related security measures;
  • did not maintain and update operating systems of computers and other devices on its networks; and
  • did not employ readily available measures to prevent or detect unauthorized access to personal information on its computer networks.

Judge Chappell began his analysis by citing Congressional reports for the proposition that Section 5(n) of the FTC Act was intended to limit the scope of the FTC’s authority. However, rather than evaluating whether LabMD’s security was unreasonable as alleged, the Initial Decision focused solely on whether “substantial consumer injury” was at stake. The decision went to great lengths to attack the credibility of the FTC’s claims and evidence (largely by attacking Tiversa and its CEO as the FTC’s proxy), and discounted the potential harm of disclosing patient CPT (current procedural terminology) codes by noting that identity thieves would need to look them up on Google or the American Medical Association’s website to learn what tests had been performed on specific patients. Although the FTC presented consumer injury expert testimony as well as survey data to demonstrate that the disclosure of consumer PHI/PII could result in various forms of identity fraud and other harms to consumers, the Initial Decision remarked that “the absence of any evidence that any consumer has suffered harm as a result of [LabMD]’s alleged unreasonable data security, even after the passage of many years, undermines the persuasiveness of [the FTC]’s claim that such harm is nevertheless ‘likely’ to occur.” Initial Decision, p. 52. Ultimately, the Initial Decision concluded that because actual harm had not yet resulted from the allegedly unreasonable security practices, the practices were not “likely” to cause substantial consumer harm. Endorsing a narrow view that the “substantial consumer injury” required by Section 5(n) could not be satisfied by “hypothetical” or “theoretical” harm or “where the claim is predicated on expert opinion that essentially only theorizes how consumer harm could occur,” Judge Chappell opined that “[f]airness dictates that reality must trump speculation based on mere opinion.” Initial Decision, pp. 52, 64.

THE UNLOCKED VAULT
There is no dispute that a LabMD employee placed a file containing the PHI of approximately 9300 patients in a publicly shared folder on a billing computer. Anyone with LimeWire or any other Gnutella-based peer-to-peer file-sharing software (which was freely available in 2008) could have downloaded any of the 950 files being shared by the LabMD billing computer, including the four containing PHI. From an authentication perspective, this is the equivalent of making the confidential files available for download on a public website, with no username or password required for access. It is widely accepted, in both state and federal law, that the types of PHI/PII contained in the 1718 File should not be made publicly available in such a manner, particularly by a healthcare provider subject to the HIPAA/HITECH Security Rule.

The Initial Decision’s analysis focused solely on whether there was an actual or probable injury after the fact based on this specific incident (i.e., the 1718 File being downloaded by Tiversa), instead of whether the practice itself (i.e., openly sharing a file containing the PHI of 9300 patients on an open peer-to-peer network, which could have been downloaded by anyone) caused or was likely to cause substantial consumer injury. Actual or imminent injury is a requirement for standing in civil litigation, but the likelihood of substantial consumer harm is the proper standard for evaluating the FTC’s regulatory authority. In LabMD’s case, two windfall events saved the company from a much more disastrous result: 1) the 1718 File was found (so far as is known) only by Tiversa and not by identity thieves, and 2) Tiversa notified LabMD of the exposure shortly after its discovery, and the exposure was quickly corrected. Consider what would have happened if the 1718 File had instead been discovered by an identity thief rather than Tiversa: the outcome would have been different (and likely much worse) for reasons totally unrelated to the security practice itself (i.e., the practice of openly sharing PHI had little or no effect on who actually discovered the file). Evaluating the reasonableness of LabMD’s practices based on subsequent circumstances over which it had no control (i.e., the identity of the discoverer) judges the wrongfulness of the act solely by its accidental consequences; effectively, “no harm, no foul.”

The Initial Decision also contends that “to base unfair conduct liability upon proof of unreasonable data security alone would, on the evidence presented in this case, effectively expand liability to cases involving generalized or theoretical ‘risks’ of future injury, in clear contravention of Congress’ intent, in enacting Section 5(n), to limit liability for unfair conduct to cases of actual or ‘likely’ substantial consumer injury.” Initial Decision, p. 89. Here the Initial Decision attempts to graft the requirement of actual or imminent harm required in civil litigation onto the scope of the FTC’s authority, in contravention of the language of the FTC Act itself. This claim disregards the plain meaning of the terms “likely” and “risk,” both of which relate to the possibility or probability that an event may occur; if the event actually occurs, it ceases to be a “risk” or “likelihood” and becomes a “fact” or “certainty.” According to the United States Court of Appeals for the Third Circuit, “[a]lthough unfairness claims ‘usually involve actual and completed harms,’ … ‘they may also be brought on the basis of likely rather than actual injury.’ And the FTC Act expressly contemplates the possibility that conduct can be unfair before actual injury occurs.” FTC v. Wyndham Worldwide Corp., 799 F.3d 236, 246 (3d Cir. 2015) (quoting Int’l Harvester Co., 104 F.T.C. 949, 1061 (1984)). By extending the FTC’s authority to practices that either cause or are “likely” to cause substantial consumer injury, Congress granted the FTC authority to pre-emptively address unfair trade practices before innocent consumers are harmed.

To return to the opening of the article, LabMD’s storage of the 1718 File in a shared folder on a peer-to-peer network could be analogized to leaving the doors and vault of a bank unlocked when no one was inside; the critical question is whether such an act (or practice) is likely to cause substantial consumer injury. The answer is not dependent upon whether any money was actually stolen during the months it was left unlocked; the fault lies in leaving the protected assets vulnerable, so that the only thing separating a potential theft from an actual one is whether the wrong person checks if the doors are locked. The mere fact that a thief did not test the doors during that period does not absolve the bank of otherwise reckless behavior. Accordingly, a better analysis may be to focus on the conditions existing during the period that the bank was left unlocked (or the practice existed) and, based on those conditions, evaluate whether the practice was reasonable. In the case of data security, the analysis should consider the type of information that was exposed (i.e., PHI/PII v. public information), how that type of information could be used to harm consumers (i.e., susceptibility to abuse by identity thieves, extortionists, or others), what measures were taken to safeguard the information from exposure (i.e., was it a complex “hack” of a computer network involving exploitation of zero-day vulnerabilities v. downloading a file from a publicly-available website with no authentication requirements), and what security measures are reasonable under the circumstances (in terms of time, cost, manpower, and other factors). These are among the factors identified by the FTC’s expert Kim, but which the Initial Decision declined to consider by forgoing the reasonableness analysis altogether.

THE AFTERMATH
On November 24, the FTC filed notice that it would be appealing the Initial Decision. The parties have jointly moved for an extension, pursuant to which the FTC will have until December 23 to file its Appeal Brief and LabMD will have until February 5, 2016 to file its Answering Brief. However, a secondary battle emerged on November 20, when LabMD filed suit against three FTC attorneys in the U.S. District Court for the District of Columbia, alleging that they destroyed the company using false evidence illegally and unethically obtained by Tiversa. While the Commission’s de novo review of the Initial Decision could come as early as spring or summer of 2016, the war between these two parties is far from over.

The future of the FTC’s authority to pro-actively regulate cybersecurity depends upon its successful appeal. While the Third Circuit’s opinion in FTC v. Wyndham Worldwide Corp. previously recognized the FTC’s authority to actively challenge deficient cybersecurity practices without first announcing the standards to be implemented, that case involved multiple breaches and actual consequential harm to the customers whose personal information was exposed. At stake now is whether other federal courts will accept Judge Chappell’s analysis and similarly treat the absence of actual harm in the specific instance as determinative. That result would substantially impair the FTC’s goal of pro-actively policing deficient cybersecurity practices and would limit the FTC in this area to intervening only after consumers are demonstrably injured or such injury is deemed imminent. Unless reversed, the LabMD Initial Decision could also create a perception of the FTC’s vulnerability on the issue of its authority and lead other companies threatened with FTC action for deficient security practices to challenge the regulatory agency. What remains to be seen is whether the Commission and federal courts will reassert the distinction between the actual or imminent harm required for civil standing and the FTC’s regulatory authority to prevent likely consumer injuries before they occur.

Sponsored Social Media Posts Riskier than Ever

Five months after the Federal Trade Commission (FTC) issued updated guidance regarding paid endorsements, it is clearer than ever that the Commission plans to take increasing action against retailers that solicit reviews on social media. Those plans were reinforced on October 15, 2015, when FTC Commissioner Julie Brill, in a keynote address at the Better Business Bureau’s National Advertising Division Annual Conference, identified paid endorsements as a current priority for the Commission.

As the FTC begins targeting endorsements more aggressively, private actions brought under state laws are also likely to arise. In light of increased FTC enforcement and a higher risk of private lawsuits, retailers should make sure that their in-house and outside marketing teams are complying with applicable guidelines.

The FTC’s Endorsement Guides

For years, the FTC has considered it deceptive for an advertiser to solicit a review or endorsement that may lead consumers to believe that the review is unbiased. The “Guides Concerning the Use of Endorsements and Testimonials in Advertising” (Guides), which have been in effect since 1980, provide:

When there exists a connection between the endorser and the seller of the advertised product that might materially affect the weight or credibility of the endorsement (i.e., the connection is not reasonably expected by the audience), such connection must be fully disclosed.

16 C.F.R. § 255.5. The Guides were revised in 2009 to include examples of how this rule may apply to consumer-generated media, such as blogs and online message boards. On May 29, 2015, the FTC issued an updated version of “What People Are Asking,” a FAQ document created after the Guides were last revised (hereinafter, the FAQs). This update advises businesses on how to apply the FTC’s endorsement standards to evolving forms of digital marketing and promotion, many of which were in their infancy in 2009.

The updated guidance explains that an endorsement should disclose a “material connection” between the endorser and the advertiser, even where space is limited, whenever “knowing about that gift or incentive would affect the weight or credibility your readers give to your recommendation.” At a minimum, sponsored posts on Twitter, Instagram, Facebook and Pinterest should be accompanied by #ad or #sponsored (which, the FTC points out, require only three and ten characters, respectively).

The Guides also make clear that a company can have a material relationship with anyone who has an incentive to post about it, including: employees who discuss the company’s products on their personal social media pages; bloggers who receive free products (or money) to review them on their websites; reviewers who make money each time a visitor clicks an affiliate link on their website; and customers who post about a specific product in order to enter an advertiser’s contest. The disclosure requirement applies even where the reviewer agrees to do a review without agreeing that the review will be positive (and even where the review is ultimately negative).

The FTC has explained that where it does take action, it will in most cases not focus on the person who offered the endorsement. Instead, it will target the company whose goods or services are being advertised, and their ad agencies and public relations firms.

Although the Guides and FAQs don’t have the force of law, they offer guidance on practices that the FTC considers to violate the FTC Act. There are no fines for violations of the FTC Act, but law enforcement actions can result in orders requiring the advertiser to give up money it received as a result of the deceptive endorsement.

To satisfy the Guides, the disclosure must be clear and conspicuous. This means that consumers must be able to see and understand the disclosure easily; they shouldn’t have to look for it. Bloggers cannot satisfy this requirement by posting a single disclosure on their home page stating that many of the products they review are given to them for free by advertisers. For video endorsements, the FTC advises that the disclosure should be in the video itself, not the video’s text description. And where viewers are likely not to watch the video from start to finish, disclosures should be made throughout the video to ensure that they are seen.

Despite the thoroughness of the FAQs, several questions remain. First, how will the FTC decide whether a customer would care that the reviewer was given something for his or her review? For example, would a makeup company be liable if it gave a blogger a free lipstick to review? What if a chewing gum company offered free samples to reviewers?

Also, what kind of “endorsement” is material to consumers in the first place? For example, the FTC has noted that it doesn’t know how much stock a consumer puts into “likes” when deciding whether to patronize a business. (The FTC recognizes that Facebook’s “like” feature does not allow consumers to make a disclosure, and says that businesses should not encourage endorsements using features that don’t allow for disclosures.)

A broader question arises from the ambiguity of the FTC’s Section 5 authority over “unfair conduct.” For example, is it “unfair,” and thus actionable, for a company to ask customers to follow it on Facebook? What about if a company runs a contest on its social media channels that would require customers to participate in those channels in order to compete?

FTC Enforcement of the Guides

Since revising the Guides in 2009, the FTC has investigated a number of companies that have solicited positive reviews on social media or elsewhere:

  • The FTC’s first investigation under the revised Guides was in 2010, when the FTC looked into Ann Taylor’s promotion in which it gave gifts to bloggers the company expected would promote its Loft division. The FTC ultimately declined to take action, given the small size of the promotion, the fact that it was the first of its kind from Ann Taylor, and because the retailer responded to the FTC investigation by creating a policy to notify bloggers that they must disclose any material connection to the company in the future.
  • In August 2010, the FTC entered a settlement agreement with public relations agency Reverb Communications Inc., which agreed to remove any game reviews in the online iTunes stores that were posted by employees posing as ordinary consumers.
  • In 2011, Legacy Learning Systems Inc., maker of at-home guitar DVDs, agreed to pay $250,000 as part of a settlement with the FTC. The company allegedly paid affiliates a commission to promote the DVDs in articles, blog posts, and other online editorial material.
  • In December 2011, the FTC investigated gift certificates that were allegedly given to bloggers who promoted Hyundai Motor America’s then-upcoming Super Bowl ads. The FTC ultimately closed the investigation, noting that Hyundai had a policy in place calling for the bloggers to disclose the compensation they received (Hyundai’s advertising firm had hired the bloggers).
  • In April 2014, the FTC investigated a Cole Haan marketing campaign that asked customers to make Pinterest boards titled “Wandering Sole,” and to include Cole Haan shoes on that board; the retailer incentivized these boards by offering a $1,000 shopping spree for the best board, but did not require entries to label their boards as advertisements. Although the FTC ultimately decided not to pursue an enforcement action — largely because it had not previously publicly addressed whether entry into a contest is a form of material connection, or whether a pin on Pinterest may constitute an endorsement — it did issue a “closing letter” that warned Cole Haan that its campaign likely violated Section 5 of the FTC Act.
  • In November 2014, advertising agency Deutsch LA settled with the FTC in response to the FTC’s allegations that Deutsch LA encouraged its employees to use their personal Twitter accounts to generate buzz about the Sony PlayStation Vita without requiring the employees to disclose their affiliations with Deutsch LA or Sony.
  • In April 2015, the FTC approved a final consent order with AmeriFreight, an automobile shipment broker, based on the FTC’s claim that AmeriFreight compensated positive reviewers with discounts and other incentives, and then advertised its goods as being top-rated, based on those reviews.
  • Most recently, on September 2, 2015, the FTC announced a proposed settlement with Machinima Inc. (which touts itself as “the most notorious purveyor and cultivator of fandom and gamer culture”) for paying “influencers” up to $30,000 each to post YouTube videos endorsing Microsoft’s Xbox One system and several games.

Thus far, the FTC has looked kindly on retailers that either had policies in place calling for reviewers to disclose any material relationship, or that ended the allegedly deceptive practice soon after it occurred. In the Machinima case, for example, the FTC decided not to take action against Microsoft, largely because Microsoft had a “robust” compliance program in place that included specific guidance relating to the Guides; the FTC’s closing letter to Microsoft noted that the company offered training on the Guides to its personnel, vendors, and the employees of the advertising agency that managed the relationship between Microsoft and Machinima.

Developing a compliance program could therefore have a double benefit for companies looking to protect themselves. First, it would reduce the likelihood that the company violates the Guides in the first place. Second, a strong program may persuade the FTC not to take action if a violation ever does occur.

Risk of Private Action

Although private individuals cannot bring lawsuits under the FTC Act, retailers may also be subject to lawsuits under state laws or the Lanham Act for the same practices prohibited by the Guides.

As the FTC continues to shine a light on this issue, consumers may be tempted to bring false advertising suits against retailers that fail to make adequate disclosures. This trend is currently taking place in the area of deceptive pricing litigation, where the courts have found the FTC’s “Guides Against Deceptive Pricing” to be persuasive.

So far, the civil actions in this area have targeted companies or individuals that sell online review services, or competitors who use those services and thus benefit from allegedly unfair advertising. On October 16, 2015, for example, Amazon filed a lawsuit in Washington Superior Court against more than 1,100 individuals who have allegedly posted fake product reviews on the site (typically for $5 per review). This is the first suit to target individual reviewers, rather than the websites where these reviewers can be hired. This lawsuit was brought pursuant to Washington’s unfair competition law, and also claims breach of contract based on Amazon’s terms of service. Amazon filed a similar suit in April, also in Washington, against several websites that sell fake reviews.

“Fake review” lawsuits have also been brought against retailers that benefit from solicited reviews. In the last year or so, several lawsuits have been filed against Regal Assets, LLC and its affiliates over its “Affiliate Program,” which, the lawsuits allege, induces people to endorse Regal’s products and services and to disparage those of its competitors. On April 21, 2015, the Central District of California denied Regal’s motion to dismiss the claims brought pursuant to California’s False Advertising Law and Unfair Competition Law. That case was voluntarily dismissed pursuant to a settlement agreement in July.

Lawsuits targeting false review practices have also been brought outside of California and Washington, in states including Massachusetts, Texas, New York, Pennsylvania and Delaware. In 2013, the New York attorney general investigated and fined 19 companies for procuring or posting false reviews on websites such as Yelp, Google Local and City Search; together, the companies paid over $350,000.

Contractual Issues

Retailers should also keep in mind that several social media platforms impose their own requirements related to endorsements. Facebook, Instagram, YouTube, Pinterest and Twitter all require users to comply with all applicable laws and regulations, including those related to false advertising.

Beyond this general requirement that the law be followed, however, the extent of restrictions on endorsements varies by company. In response to the FTC’s increased activity, the video game streaming site Twitch updated its rules last fall to require reviewers to reveal when they’ve been paid to post a review of a game. Other companies have taken a less firm stance regarding disclosures on endorsed posts. Twitter, for example, tells users that while they “might want to consider” tagging their contest with the company name or #contest, this is not required.

Facebook’s Advertising Policies specifically prohibit the use of deceptive, false, or misleading content, including deceptive claims, offers or business practices. Facebook allows businesses to administer promotions using Pages or within apps, but prohibits companies from calling on users to participate in a promotion via their personal timelines or friend connections (“share on your Timeline to enter,” “share on your friend’s Timeline to get additional entries,” and “tag your friends in this post to enter” are not permitted).

Like Facebook, Pinterest prohibits users from posting content that is fraudulent or deceptive. Pinterest also has a specific rule against incentivizing users to take actions on Pinterest such as Pinning or following. Its terms explain: “We want people to Pin authentically. We believe that compensating people for doing specific things on Pinterest — like paying them to Pin — can promote inauthentic behavior and create spam-like outcomes.” While Pinterest allows a business to pay a guest blogger to curate a board, it prohibits the business from hiring “BuyMorePins.com” to artificially inflate the popularity of the business’s content. Pinterest also requires anyone using the site for commercial purposes to create a business account and agree to Pinterest’s Business Terms of Service.

YouTube requires users to identify any video that contains sponsored content or product placement by checking a box at the time of posting the video. Videos that contain any endorsement or product placement are subject to YouTube’s Ads Policy. YouTube specifically prohibits certain kinds of advertisements, such as embedding a commercial into the beginning of a video post.

Conclusion

Given the risk of FTC enforcement or expensive litigation, retailers should ensure that their marketing teams and external marketing companies are aware of and complying with the Guides. At a minimum, advertisers should require any endorsers to disclose whether the endorsement was sponsored or otherwise incentivized. Companies considering a social media marketing campaign should consult with counsel with expertise in this area to ensure that the campaign will not put them at risk.

TCPA Application Broadened by Third Circuit to Include a “Zone of Interest” of Protected Individuals

So, you answer your roommate’s telephone and it is a prerecorded marketing call from a bank, alarm company, energy provider, phone company, credit card company, charity, etc. Does that give you the right to sue the caller under the Telephone Consumer Protection Act (TCPA)? After all, it is not even your phone that was called. That was the issue presented to U.S. District Court Judge Susan Wigenton in Leyse v. Bank of America National Association, filed in the United States District Court for the District of New Jersey. The District Court granted Bank of America’s motion to dismiss the complaint for lack of standing because the bank had not intended to call the plaintiff; it had intended to call the plaintiff’s roommate, who owned the phone. The Third Circuit reversed, finding that the plaintiff fell within the TCPA’s “zone of interests.” Specifically, the Court stated that “[i]t is the actual recipient, intended or not, who suffers the nuisance and invasion of privacy.” Does that mean the TCPA applies to a visitor, house guest, or even a stranger who asks to use your phone? The line is hazy, but for now the Third Circuit has indicated that house guests and visitors who answer the owner’s phone fall outside the TCPA’s zone of interests. The focus is on the subjective connection between the individual who answers the phone and the phone’s owner, such that non-transient occupants of the residence are protected. The Third Circuit’s decision joins the Seventh and Eleventh Circuits in applying a “zone of interests” test, in contrast to decisions in other Circuits requiring direct standing as the owner of the phone.

A copy of the opinion can be found here.

EU Working Party Speaks on EU/US Safe Harbor Ruling

The Article 29 Working Party is an EU committee comprising representatives from the data protection regulators of each of the EU Member States. Its purpose is to advise on the protection of individuals’ data whilst giving effect to the harmonization of data protection regimes so as to encourage the free movement of data in the EU.

The Working Party met on 16 October to discuss the landmark ruling of the European Court of Justice in Maximilian Schrems v. Data Protection Commissioner (C-362/14) and issued a statement with its views, which can be found here. The committee endorsed the concerns of the ECJ that mass surveillance of EU citizens by the US authorities, through their jurisdiction over US data controllers and processors, is incompatible with the EU’s position on data privacy, with the result that the US was not a safe destination for data transfers.

The committee seemed to defer any decision on next steps until the Schrems case is resolved by the Irish courts, where the matter will now return for further hearing. It also signaled that it would wait while negotiations take place between the EU and US on a new Safe Harbour agreement, which may provide the necessary judicial oversight, transparency and other measures to give effect to the EU’s required level of respect for individuals’ data privacy. However, the Working Party set the end of January 2016 as a deadline. If a solution is not found by then, the clear indication is that the data protection regulators in each of the EU Member States will begin taking stronger enforcement measures against EU data controllers that transfer data to the US without adequate safeguards.

For those who wish to continue transferring data from the EU to the US in the meantime, the Working Party encouraged the use of Standard Contractual Clauses and Binding Corporate Rules as methods whereby data controllers and processors can remain compliant with the EU’s and EU Member States’ regulations. By entering into such agreements with their US-based counterparts now (assuming they have not already done so), an EU-based data controller or processor can hope to avoid any enforcement measures that may come in January 2016.
