No Harm, No Foul – FTC Claims Of Deficient Security Practices Dismissed Based on Insufficient Evidence of Actual Harm

Is it reckless for a bank to leave its vault unlocked? If you accept the reasoning of Federal Trade Commission (FTC) Chief Administrative Law Judge D. Michael Chappell – only if someone actually breaks in and steals something. On this premise, the FTC’s unfair data security practices case against LabMD, a Georgia-based clinical testing laboratory, was dismissed after the FTC failed to meet its burden of proving that the healthcare provider’s allegedly deficient security practices caused, or were likely to cause, substantial consumer injury.

LabMD was a privately held Georgia corporation formed by Michael J. Daugherty in 1996. Its primary business consisted of providing tissue sample analysis by pathologists specializing in prostate or bladder cancer. Urologists throughout the country sent LabMD patient specimens for analysis, through which LabMD came into possession of protected health information (PHI) belonging to thousands of patients.

In February 2008, Tiversa, a security firm based in Pittsburgh, Pennsylvania, discovered that a LabMD insurance report was being shared openly by a LabMD billing computer on the LimeWire peer-to-peer network. The report (referred to in the matter as the “1718 File”) was found to contain PHI and personally identifiable information (PII) on approximately 9,300 patients, including their names, dates of birth, Social Security numbers, CPT codes for laboratory tests conducted, and, in some cases, health insurance company names, addresses, and policy numbers. After discovering that the 1718 File contained patient PHI, Tiversa used the “browse host” function of LimeWire to obtain a list of all other files being shared by the LabMD billing computer. The 1718 File was one of approximately 950 files being shared from the “My Documents” directory on the LabMD computer, most of which were music and video files; eighteen other documents were also being shared at the same time, three of which also contained patient PHI.
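
To make the exposure mechanics concrete, here is a simplified sketch in Python (our own illustration, not LimeWire’s actual code; the share path and extension list are hypothetical) of how a Gnutella-style client builds its shared-file index by recursively walking a configured share folder, and why a “browse host” request then reveals every file in that folder to any peer, with no authentication:

```python
# Simplified illustration of Gnutella-style sharing (hypothetical paths
# and extensions; not LimeWire's actual implementation).
from pathlib import Path

MEDIA_EXTENSIONS = {".mp3", ".wav", ".avi", ".mpg", ".wmv"}

def build_share_index(share_root: str) -> list[Path]:
    """Recursively index every file under the configured share folder,
    as a peer-to-peer client would before advertising it to the network."""
    return [p for p in Path(share_root).rglob("*") if p.is_file()]

def answer_browse_host(index: list[Path]) -> list[str]:
    """Answer a 'browse host' request: the complete list of shared files,
    returned to any peer that asks, with no authentication."""
    return [p.name for p in index]

if __name__ == "__main__":
    # Sharing an entire "My Documents" folder exposes documents alongside media.
    index = build_share_index("C:/Users/billing/My Documents")  # hypothetical path
    shared = answer_browse_host(index)
    documents = [n for n in shared
                 if Path(n).suffix.lower() not in MEDIA_EXTENSIONS]
    print(f"{len(shared)} files shared; {len(documents)} are non-media documents")
```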

Tiversa contacted LabMD in May 2008, disclosed its download of the 1718 File, and offered its remediation services. In July 2008, LabMD rejected Tiversa’s proposal, removed the file-sharing software, and reassessed its network security (although the FTC later claimed that LabMD’s remediation efforts were also insufficient). Meanwhile, the 1718 File sat dormant until 2009, when the FTC served a Civil Investigative Demand (CID) on Tiversa’s affiliate, The Privacy Institute. Tiversa responded to the CID by producing a spreadsheet of companies that Tiversa claimed had exposed the personal information of 100 or more individuals. Among the names provided was LabMD, along with a copy of the 1718 File. This disclosure led the FTC to open an investigation of LabMD, which ultimately resulted in the action against the company for failing to implement reasonable security, an alleged “unfair” practice.

It is at this point in the narrative that the parties’ allegations (and consequently Judge Chappell’s Initial Decision) become mired in conspiracy theories. After the FTC began its action against LabMD, Richard Wallace, a forensics analyst hired by Tiversa in July 2007 who originally found the 1718 File, alleged that Tiversa had adopted a business practice of exaggerating how widely erroneously shared files had spread across peer-to-peer networks and, in some cases, of intentionally misrepresenting that files had been discovered at IP addresses associated with known or suspected identity thieves. Tiversa countered that Wallace’s claims were false and were motivated by his termination for cause during the pendency of the case against LabMD. Nevertheless, the claims resulted in a United States House Committee on Oversight and Government Reform investigation into Tiversa and its involvement with governmental entities. Judge Chappell’s Initial Decision goes into great detail about the allegations of unethical practices by Tiversa, and he concluded that Wallace (a witness for LabMD) was more credible than Robert Boback (CEO of Tiversa and a witness for the FTC). This finding had a profound effect on the outcome of the case, with Judge Chappell wholly discounting the testimony of one of the FTC’s consumer injury experts to the extent his conclusions were based in part on testimony of Tiversa’s CEO. Judge Chappell also challenged the expert opinions of the FTC’s other consumer injury expert, stating that although he “did not expressly rely on the discredited and unreliable testimony from Tiversa’s CEO as to the ‘spread’ of the 1718 File for his opinions on the likelihood of medical identity theft, this evidence was clearly considered … and it cannot be assumed that [the] opinions were not influenced by his review of [the CEO’s] testimony.” Initial Decision, p. 67, footnote 31. A potential red herring was also injected into the case: 40 LabMD paper “day sheets,” 9 patient checks, and 1 money order discovered in the possession of identity thieves in Sacramento, California in 2012, which resulted in a dispute over how the records had traveled from Georgia to California, with the FTC claiming that they must have been downloaded from LabMD’s insecure network but lacking evidence to prove this theory. Lost in this swirl of accusations was the crux of the case: that LabMD had openly shared a file containing the PHI of approximately 9,300 patients on an open peer-to-peer network, which the FTC alleged was an “unfair” practice.

The FTC’s authority relating to data security derives from Section 5(n) of the Federal Trade Commission Act (“FTC Act”), which states that the Commission may declare any act or practice “unfair” that “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” The FTC’s complaint alleged that LabMD failed to provide reasonable security because the healthcare provider:

  • did not develop, implement, or maintain a comprehensive information security program to protect consumers’ personal information;
  • did not use readily available measures to identify commonly known or reasonably foreseeable security risks and vulnerabilities on its networks;
  • did not use adequate measures to prevent employees from accessing personal information not needed to perform their jobs;
  • did not adequately train employees to safeguard personal information;
  • did not require employees, or other users with remote access to the networks, to use common authentication-related security measures;
  • did not maintain and update operating systems of computers and other devices on its networks; and
  • did not employ readily available measures to prevent or detect unauthorized access to personal information on its computer networks.

Judge Chappell began his analysis by citing Congressional reports for the proposition that Section 5(n) of the FTC Act was intended to limit the scope of the FTC’s authority. However, rather than evaluating whether LabMD’s security was unreasonable as alleged, the Initial Decision focused solely on the issue of whether “substantial consumer injury” was at stake. The decision went to great lengths to attack the credibility of the FTC’s claims and evidence (largely by attacking Tiversa and its CEO as the FTC’s proxy), and discounted the potential harm of disclosing patient CPT (current procedural terminology) codes by noting that identity thieves would need to look them up on Google or the American Medical Association’s website in order to learn what tests had been performed on specific patients. Although the FTC had presented consumer injury expert witness testimony as well as survey data to demonstrate that the disclosure of consumer PHI/PII could result in various forms of identity fraud and other harms to consumers, the Initial Decision remarked that “the absence of any evidence that any consumer has suffered harm as a result of [LabMD]’s alleged unreasonable data security, even after the passage of many years, undermines the persuasiveness of [the FTC]’s claim that such harm is nevertheless ‘likely’ to occur.” Initial Decision, p. 52. Ultimately, the Initial Decision concluded that because actual harm had not yet resulted from the allegedly unreasonable security practices, the practices were not “likely” to cause substantial consumer harm. Endorsing a narrow view that the “substantial consumer injury” required by Section 5(n) could not be satisfied by “hypothetical” or “theoretical” harm or “where the claim is predicated on expert opinion that essentially only theorizes how consumer harm could occur,” Judge Chappell opined that “[f]airness dictates that reality must trump speculation based on mere opinion.” Initial Decision, pp. 52, 64.

There is no dispute that a LabMD employee had placed a file containing the PHI of approximately 9,300 of its patients in a publicly shared folder on a billing computer. Anyone with LimeWire or any other Gnutella-based peer-to-peer file-sharing software (which was freely available in 2008) could have downloaded any of the 950 files being shared by the LabMD billing computer, including the four containing PHI. From a credential authentication perspective, this is the equivalent of making these confidential files available for download on a public website, without any requirement for a username or password in order to obtain access. It is widely accepted, in both state and federal law, that the types of PHI/PII contained in the 1718 File should not be made publicly available in such a manner, particularly by a healthcare provider subject to the HIPAA/HITECH Security Rule.
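
Among the “readily available measures to prevent or detect unauthorized access” referenced in the FTC’s complaint, even a rudimentary content scan of any folder exposed to file-sharing software would have flagged the problem. The sketch below is a minimal illustration of such a detective control; the share path and the Social Security number pattern are assumptions for demonstration, not a description of LabMD’s systems.

```python
# Minimal sketch of a "readily available" detective control: scan folders
# exposed to file-sharing software for SSN-like strings. The share path
# and pattern are illustrative assumptions, not LabMD's actual setup.
import re
from pathlib import Path

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., 123-45-6789

def scan_shared_folder(folder: str) -> list[Path]:
    """Return every file in the shared folder containing an SSN-like string."""
    flagged = []
    for path in Path(folder).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash the scan
        if SSN_PATTERN.search(text):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    for hit in scan_shared_folder("shared"):  # hypothetical share root
        print(f"WARNING: possible PHI/PII in shared file: {hit}")
```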

The Initial Decision’s analysis focused solely on whether there was an actual or probable injury after the fact based on this specific incident (i.e. the 1718 File being downloaded by Tiversa), instead of whether the practice itself (i.e. openly sharing a file containing the PHI of 9,300 patients on an open peer-to-peer network, where it could have been downloaded by anyone) caused or was likely to cause substantial consumer injury. Actual or imminent injury is a requirement for standing in civil litigation, but the likelihood of substantial consumer harm is the proper standard for evaluating the FTC’s regulatory authority. In LabMD’s case, two windfall events saved the company from a much more disastrous result: 1) the 1718 File was found (so far as is known) only by Tiversa and not by identity thieves, and 2) Tiversa notified LabMD of the exposure shortly after its discovery, and the exposure was quickly corrected. Consider what would have happened if the 1718 File had been discovered by an identity thief rather than Tiversa – the outcome would have been different (and likely much worse) for reasons totally unrelated to the security practice itself (i.e. the practice of openly sharing PHI had little or no effect on who actually discovered the file). To evaluate the reasonableness of LabMD’s practices in the first instance based on subsequent circumstances over which it had no control (i.e. the identity of the discoverer) is to judge the wrongfulness of the act solely by its accidental consequences – effectively, “no harm, no foul.”

The Initial Decision also contends that “to base unfair conduct liability upon proof of unreasonable data security alone would, on the evidence presented in this case, effectively expand liability to cases involving generalized or theoretical ‘risks’ of future injury, in clear contravention of Congress’ intent, in enacting Section 5(n), to limit liability for unfair conduct to cases of actual or ‘likely’ substantial consumer injury.” Initial Decision, p. 89. Here the Initial Decision attempts to graft the actual-or-imminent-harm requirement of civil litigation onto the scope of the FTC’s authority, in contravention of the language of the FTC Act itself. This claim disregards the plain meaning of the terms “likely” and “risk,” both of which relate to the possibility or probability that an event may occur – if the event actually occurs, it ceases to be a “risk” or “likelihood”; it becomes a “fact” or “certainty.” According to the United States Court of Appeals for the Third Circuit, “[a]lthough unfairness claims ‘usually involve actual and completed harms,’ … ‘they may also be brought on the basis of likely rather than actual injury.’ And the FTC Act expressly contemplates the possibility that conduct can be unfair before actual injury occurs.” FTC v. Wyndham Worldwide Corp., 799 F.3d 236, 246 (3d Cir. 2015) (quoting Int’l Harvester Co., 104 F.T.C. 949, 1061 (1984)). By extending the FTC’s authority to practices that are “likely” to cause substantial consumer injury, and not merely to those that have already caused it, Congress granted the FTC authority to pre-emptively address unfair trade practices before innocent consumers are harmed.

To return to the opening of this article, LabMD’s storage of the 1718 File in a shared folder on a peer-to-peer network can be analogized to leaving the doors and vault of a bank unlocked when no one was inside – the critical question is whether such an act (or practice) is likely to cause substantial consumer injury. The answer does not depend upon whether any money was actually stolen during the months the bank was left unlocked; the fault lies in leaving the protected assets vulnerable, so that the only thing standing between a potential and an actual theft is whether the wrong person checks the doors. The mere fact that a thief did not test the doors during that period does not absolve the bank of otherwise reckless behavior. Accordingly, a better analysis may be to focus on the conditions existing during the period that the bank was left unlocked (or the practice existed) and, based on those conditions, evaluate whether the practice was reasonable. In the case of data security, the analysis should consider the type of information that was exposed (i.e. PHI/PII v. public information), how that type of information could be used to harm consumers (i.e. its susceptibility to abuse by identity thieves, extortionists, or others), what measures were taken to safeguard the information from exposure (i.e. was it a complex “hack” of a computer network involving exploitation of zero-day vulnerabilities v. the downloading of a file from a publicly available website with no authentication requirements), and what security measures are reasonable under the circumstances (in terms of time, cost, manpower, and other factors). These are among the factors identified by the FTC’s expert Kim, but which the Initial Decision never reached because it declined to engage in the reasonableness analysis.

The FTC has not yet announced whether it will appeal the Initial Decision, although commentators have speculated that it will. While the Third Circuit’s opinion in FTC v. Wyndham Worldwide Corp. previously recognized the FTC’s authority to challenge deficient cybersecurity practices without first announcing the standards to be implemented, that case involved multiple breaches and actual consequential harm to the customers whose personal information was exposed. At stake now, for those on both sides of the issues of the scope of FTC authority and the impact of the LabMD harm analysis, is the extent to which federal courts may accept Judge Chappell’s analysis and similarly treat the absence of actual harm in the specific instance as determinative. That result would substantially impact the FTC’s goal of proactively policing deficient cybersecurity practices, limiting the agency in this area to intervening only after consumers are demonstrably injured or such injury is deemed imminent. Unless appealed and reversed, the LabMD Initial Decision could also create a perception of the FTC’s vulnerability on the issue of its authority and lead other companies threatened with FTC action for deficient security practices to challenge the regulatory agency. What remains to be seen is whether federal courts will reassert the distinction between the actual or imminent harm required for civil standing and the FTC’s regulatory authority to prevent likely consumer injuries before they occur.

Sponsored Social Media Posts Riskier than Ever

Five months after the Federal Trade Commission (FTC) issued updated guidance regarding paid endorsements, it is clearer than ever that the Commission plans to take increasing action against retailers that solicit reviews on social media. The FTC’s plans were reinforced on October 15, 2015, when FTC Commissioner Julie Brill, in a keynote address at the Better Business Bureau’s National Advertising Division Annual Conference, identified paid endorsements as a current priority for the Commission.

As the FTC begins targeting endorsements more aggressively, private actions brought under state laws are also likely to arise. In light of increased FTC enforcement and a higher risk of private lawsuits, retailers should make sure that their in-house and outside marketing teams are complying with applicable guidelines.

The FTC’s Endorsement Guides

For years, the FTC has considered it deceptive for an advertiser to solicit a review or endorsement in a manner that may lead consumers to believe that the review is unbiased. The “Guides Concerning the Use of Endorsements and Testimonials in Advertising” (Guides), which have been in effect since 1980, provide:

When there exists a connection between the endorser and the seller of the advertised product that might materially affect the weight or credibility of the endorsement (i.e., the connection is not reasonably expected by the audience), such connection must be fully disclosed.

16 C.F.R. § 255.5. The Guides were revised in 2009 to include examples of how this rule may apply to consumer-generated media, such as blogs and online message boards. On May 29, 2015, the FTC issued an updated version of “What People Are Asking,” an FAQ document created after the Guides were last revised (hereinafter, FAQ’s). This update advises businesses on how to apply the FTC’s endorsement standards to evolving forms of digital marketing and promotion, many of which were in their infancy in 2009.

The updated FAQ’s explain that an endorsement should always disclose a “material connection” between the endorser and the advertiser, even where space is limited, when “knowing about that gift or incentive would affect the weight or credibility your readers give to your recommendation.” At a minimum, sponsored posts on Twitter, Instagram, Facebook and Pinterest should be accompanied by #ad or #sponsored (which, the FTC points out, require only three and ten characters, respectively).
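
As a practical matter, a marketing team could automate this minimum check before a sponsored post goes out. The following toy pre-publication filter is our own illustration (the tag list and sample posts are hypothetical), not an FTC-provided tool:

```python
# Toy pre-publication check for sponsored posts (our own illustration,
# not an FTC tool): hold any post that lacks a disclosure hashtag.
import re

# Matches #ad or #sponsored as whole tags, case-insensitively.
DISCLOSURE_RE = re.compile(r"(?i)#(ad|sponsored)\b")

def has_disclosure(post_text: str) -> bool:
    """Return True if the post carries at least one disclosure tag."""
    return DISCLOSURE_RE.search(post_text) is not None

sponsored_queue = [                      # hypothetical sample posts
    "Loving my new running shoes! #ad",
    "These shoes changed my life!",      # sponsored but undisclosed
]
for post in sponsored_queue:
    if not has_disclosure(post):
        print(f"HOLD FOR REVIEW (no disclosure): {post!r}")
```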

The Guides also make clear that a company can have a material relationship with anyone with an incentive to post about it, including: employees who discuss the company’s products on their personal social media pages; bloggers who receive free products (or money) to do reviews on their websites; reviewers who make money each time a visitor clicks an affiliate link on their website; and customers who post about a specific product in order to enter an advertiser’s contest. The disclosure requirement applies even where the reviewer agrees to do a review without agreeing that the review will be positive (and even where the review is ultimately negative).

The FTC has explained that where it does take action, it will in most cases not focus on the person who offered the endorsement. Instead, it will target the company whose goods or services are being advertised, along with its ad agencies and public relations firms.

Although the Guides and FAQ’s don’t have the force of law, they do offer guidance on practices that the FTC considers to violate the FTC Act. There are no fines for violations of the FTC Act, but law enforcement actions can result in orders requiring the advertiser to give up money it received as a result of the deceptive endorsement.

To satisfy the Guides, the disclosure must be clear and conspicuous. This means that consumers must be able to see and understand the disclosure easily — they shouldn’t have to look for it. Bloggers cannot satisfy this requirement by posting a single disclosure on their home page stating that many of the products they review are given to them for free by advertisers. For video endorsements, the FTC advises that the disclosure should be in the video itself, not the video’s text description. And where it is likely that viewers may not watch the video from start to finish, disclosures should be made throughout the video to ensure that they are viewed.

Despite the thoroughness of the FAQ’s, several questions still remain. First, how will the FTC decide whether a customer would care that the reviewer was given something for his or her review? For example, would a makeup company be liable if it gave a blogger a free lipstick to review? What if a chewing gum company offered free samples to reviewers?

Also, what kind of “endorsement” is material to consumers in the first place? For example, the FTC has noted that it doesn’t know how much stock a consumer puts into “likes” when deciding whether to patronize a business. (The FTC realizes that Facebook’s “like” feature does not allow consumers to make a disclosure, and says that businesses should not encourage endorsements using features that don’t allow for disclosures).

A broader question arises from the ambiguity of the FTC’s Section 5 authority over “unfair conduct.” For example, is it “unfair,” and thus actionable, for a company to ask customers to follow it on Facebook? What if a company runs a contest on its social media channels that requires customers to participate in those channels in order to compete?

FTC Enforcement of the Guides

Since revising the Guides in 2009, the FTC has investigated a number of companies that have solicited positive reviews on social media or elsewhere:

  • The FTC’s first investigation under the revised Guides came in 2010, when the FTC looked into an Ann Taylor promotion in which the company gave gifts to bloggers who, it expected, would promote its Loft division. The FTC ultimately declined to take action, given the small size of the promotion, the fact that it was the first of its kind from Ann Taylor, and the fact that the retailer responded to the FTC investigation by creating a policy to notify bloggers that they must disclose any material connection to the company in the future.
  • In August 2010, the FTC entered a settlement agreement with public relations agency Reverb Communications Inc., which agreed to remove any game reviews in the online iTunes store that were posted by employees posing as ordinary consumers.
  • In 2011, Legacy Learning Systems Inc., maker of at-home guitar DVDs, agreed to pay $250,000 as part of a settlement with the FTC. The company allegedly paid affiliates a commission to promote the DVDs in articles, blog posts, and other online editorial material.
  • In December 2011, the FTC investigated gift certificates that were allegedly given to bloggers who promoted Hyundai Motor America’s then-upcoming Super Bowl ads. The FTC ultimately closed the investigation, noting that Hyundai had a policy in place calling for the bloggers to disclose the compensation they received (Hyundai’s advertising firm had hired the bloggers).
  • In April 2014, the FTC investigated a Cole Haan marketing campaign that asked customers to make Pinterest boards titled “Wandering Sole,” and to include Cole Haan shoes on that board; the retailer incentivized these boards by offering a $1,000 shopping spree for the best board, but did not require entries to label their boards as advertisements. Although the FTC ultimately decided not to pursue an enforcement action — largely because it had not previously publicly addressed whether entry into a contest is a form of material connection, or whether a pin on Pinterest may constitute an endorsement — it did issue a “closing letter” that warned Cole Haan that its campaign likely violated Section 5 of the FTC Act.
  • In November 2014, advertising agency Deutsch LA settled with the FTC in response to the FTC’s allegations that Deutsch LA encouraged its employees to use their personal Twitter accounts to generate buzz about the Sony PlayStation Vita without requiring the employees to disclose their affiliations with Deutsch LA or Sony.
  • In April 2015, the FTC approved a final consent order with AmeriFreight, an automobile shipment broker, based on the FTC’s claim that AmeriFreight compensated positive reviewers with discounts and other incentives, and then advertised its goods as being top-rated, based on those reviews.
  • Most recently, on September 2, 2015, the FTC announced a proposed settlement with Machinima Inc. (which touts itself as “the most notorious purveyor and cultivator of fandom and gamer culture”) for paying “influencers” up to $30,000 each to post YouTube videos endorsing Microsoft’s Xbox One system and several games.

Thus far, the FTC has looked kindly on retailers that either had policies in place calling for reviewers to disclose any material relationship, or that ended the allegedly deceptive practice soon after it occurred. In the Machinima case, for example, the FTC decided not to take action against Microsoft, largely because Microsoft had in place a “robust” compliance program that included specific guidance relating to the Guides; the FTC’s closing letter to Microsoft noted that the company offered training on the Guides to its personnel, its vendors, and the employees of the advertising agency that managed the relationship between Microsoft and Machinima.

Developing a compliance program could therefore have a double benefit for companies looking to protect themselves. First, it would reduce the likelihood that the company would violate the Guides in the first place. Second, a strong program may persuade the FTC not to take action should a violation ever occur.

Risk of Private Action

Although private individuals cannot bring lawsuits under the FTC Act, retailers may nevertheless be subject to lawsuits under state law or the Lanham Act for the same practices the Guides prohibit.

As the FTC continues to shine a light on this issue, consumers may be tempted to bring false advertising suits against retailers that fail to make adequate disclosures. This trend is currently taking place in the area of deceptive pricing litigation, where the courts have found the FTC’s “Guides Against Deceptive Pricing” to be persuasive.

So far, the civil actions in this area have targeted companies or individuals that sell online review services, or competitors who use those services and thus benefit from allegedly unfair advertising. On October 16, 2015, for example, Amazon filed a lawsuit in Washington Superior Court against more than 1,100 individuals who had allegedly posted fake product reviews on the site (typically for $5 per review). This is the first suit to target individual reviewers, rather than the websites where these reviewers can be hired. The lawsuit was brought pursuant to Washington’s unfair competition law and also claimed breach of contract based on Amazon’s terms of service. Amazon filed a similar suit in April, also in Washington, against several websites that sell fake reviews.

“Fake review” lawsuits have also been brought against retailers that benefit from solicited reviews. In the last year or so, several lawsuits have been filed against Regal Assets, LLC and its affiliates over its “Affiliate Program,” which, the lawsuits allege, induces people to endorse Regal’s products and services and to disparage those of its competitors. On April 21, 2015, the Central District of California denied Regal’s motion to dismiss the claims brought pursuant to California’s False Advertising Law and Unfair Competition Law. That case was voluntarily dismissed pursuant to a settlement agreement in July.

Lawsuits targeting false review practices have also been brought outside of California and Washington, in states including Massachusetts, Texas, New York, Pennsylvania and Delaware. In 2013, the New York attorney general investigated and fined 19 companies for procuring or posting false reviews on websites such as Yelp, Google Local and City Search; together, the companies paid over $350,000.

Contractual Issues

Retailers should also keep in mind that several social media platforms impose their own requirements related to endorsements. Facebook, Instagram, YouTube, Pinterest and Twitter all require users to comply with all applicable laws and regulations, including those related to false advertising.

Beyond this general requirement that the law be followed, however, the extent of restrictions on endorsements varies by company. In response to the FTC’s increased activity, the video game streaming site Twitch updated its rules last fall to require reviewers to reveal when they have been paid to post a review of a game. Other companies have taken a less firm stance regarding disclosures on endorsed posts. Twitter, for example, tells users that while they “might want to consider” tagging their contest with the company name or #contest, this is not required.

Facebook’s Advertising Policies specifically prohibit the use of deceptive, false, or misleading content, including deceptive claims, offers or business practices. Facebook allows businesses to administer promotions using Pages or within apps, but prohibits companies from calling on users to participate in a promotion via their personal timelines or friend connections (“share on your Timeline to enter,” “share on your friend’s Timeline to get additional entries,” and “tag your friends in this post to enter” are not permitted).

Like Facebook, Pinterest prohibits users from posting content that is fraudulent or deceptive. Pinterest also has a specific rule against incentivizing users to take actions on Pinterest, such as Pinning or following. Its terms explain: “We want people to Pin authentically. We believe that compensating people for doing specific things on Pinterest — like paying them to Pin — can promote inauthentic behavior and create spam-like outcomes.” While Pinterest allows a business to pay a guest blogger to curate a board, it prohibits the business from hiring people to artificially inflate the popularity of the business’s content. Pinterest also requires anyone using the site for commercial purposes to create a business account and agree to Pinterest’s Business Terms of Service.

YouTube requires users to identify any video that contains sponsored content or product placement by checking a box at the time of posting the video. Videos that contain any endorsement or product placement are subject to YouTube’s Ads Policy. YouTube specifically prohibits certain kinds of advertisements, such as embedding a commercial into the beginning of a video post.


Given the risk of FTC enforcement or expensive litigation, retailers should ensure that their marketing teams and external marketing companies are aware of and comply with the Guides. At a minimum, advertisers should require endorsers to disclose whether an endorsement was sponsored or otherwise incentivized. Companies considering a social media marketing campaign should consult counsel with expertise in this area to ensure that the campaign will not put them at risk.

TCPA Application Broadened by Third Circuit to Include a “Zone of Interest” of Protected Individuals

So, you answer your roommate’s telephone and it is a prerecorded marketing call from a bank, alarm company, energy provider, phone company, credit card company, charity, etc. Does that give you the right to initiate a suit against the caller under the Telephone Consumer Protection Act (TCPA)? After all, it is not even your phone that was called. That was the issue presented to U.S. District Court Judge Susan Wigenton in Leyse v. Bank of America National Association, filed in the United States District Court for the District of New Jersey. The District Court granted Bank of America’s motion to dismiss the complaint for lack of standing because the bank had not intended to call the plaintiff; it had intended to call the plaintiff’s roommate, who owned the phone. The Third Circuit reversed, finding that the plaintiff fell within the protection of the TCPA’s “zone of interests.” Specifically, the Court stated that “[i]t is the actual recipient, intended or not, who suffers the nuisance and invasion of privacy.” Does that mean that the TCPA applies to a visitor, house guest, or even a stranger who asks to use your phone? The line is hazy but, for now, the Third Circuit has stated that house guests and visitors who answer the owner’s phone fall outside the “zone of interests” of the TCPA. The focus is on the subjective connection between the individual who answers the phone and the phone’s owner, such that non-transient occupants of the residence are protected. The Third Circuit’s decision joins the Seventh and Eleventh Circuits in applying the “zone of interests” test, in contrast to decisions in other Circuits where direct standing as the owner of the phone is required.

A copy of the opinion can be found here.

EU Working Party Speaks on EU/US Safe Harbor Ruling

The Article 29 Working Party is an EU committee comprising representatives from the data protection regulators of each of the EU Member States. Its purpose is to advise on the protection of individuals’ data whilst giving effect to the harmonization of data protection regimes so as to encourage the free movement of data in the EU.

The Working Party met on 16 October to discuss the landmark ruling of the European Court of Justice in Maximillian Schrems v. Data Protection Commissioner (C-362/14) and issued a statement with its views, which can be found here. The committee endorsed the concerns of the ECJ that mass surveillance of EU citizens by the US authorities, through their jurisdiction over US data controllers and processors, is incompatible with the EU’s position on data privacy, with the result that the US was not a safe destination for data transfers.

The committee seemed to defer any decisions on its next steps until the resolution of the Schrems case by the Irish courts, where the matter will now return for further hearing. It also signaled that it would wait while negotiations take place between the EU and US on a new Safe Harbour agreement, which may provide the necessary judicial oversight, transparency and other measures to give effect to the EU’s required level of respect for individuals’ data privacy. However, the Working Party set the end of January 2016 as a deadline. If a solution is not found by then, the clear indication is that the data protection regulators in each of the EU Member States will begin taking stronger enforcement measures against EU data controllers that transfer data to the US without adequate safeguards.

For those who wish to continue to transfer data from the EU to the US in the meantime, the Working Party encouraged the use of Standard Contractual Clauses and Binding Corporate Rules as methods by which data controllers and processors can remain compliant with the EU’s and EU Member States’ regulations. By entering into such agreements with their US-based counterparts now (assuming they have not already done so), EU-based data controllers and processors can hope to avoid any enforcement measures which may come in January 2016.

California Restricts Warrantless Access of Electronic Data by Law Enforcement

An overwhelming majority of Californians (82%) have spoken up loud and clear that they want change – they want the police to “get a warrant” for digital information. Now, Californians can rest assured that law enforcement cannot poke around in their digital records without first obtaining a warrant. On October 8, 2015, Governor Jerry Brown signed S.B. 178, the California Electronic Communications Privacy Act (CalECPA).

California State Senator Mark Leno introduced the legislation to update privacy laws governing electronic communications. CalECPA requires state law enforcement to get a warrant before accessing electronic data.

EFF, along with the ACLU of Northern California and the California Newspaper Publishers Association, sponsored the bill, recognizing that the right to be free from unreasonable search and seizure is inherently tied to freedom of speech. A wide variety of other rights groups and technology companies also threw their support behind the bill, including Silicon Valley’s major players: Adobe, Apple, Facebook, LinkedIn, Dropbox, Google, and Twitter.

CalECPA protects Californians by requiring a warrant for digital records, including emails and texts, as well as a user’s geographical location. These protections apply not only to your devices, but to online services that store your data. Only two other states have so far offered these protections: Maine and Utah.

Many Californians have long been concerned by law enforcement claims that investigators do not need a search warrant under federal electronic privacy statutes to obtain sensitive information, such as emails that have been stored on a server for more than 180 days, detailed location information generated by a phone, and metadata about whom a person communicates with. While there is an ongoing effort in Congress to update parts of these statutes (which were originally passed in 1986), states are not waiting for Congress to slowly pass reforms; they are taking action in the interim to protect individuals’ private emails, location information, and online activity. CalECPA brings that reform effort to California.

Here’s what the bill’s authors had to say about the victory:

Sen. Mark Leno (D-San Francisco):

“For too long, California’s digital privacy laws have been stuck in the Dark Ages, leaving our personal emails, text messages, photos and smartphones increasingly vulnerable to warrantless searches. That ends today with the Governor’s signature of CalECPA, a carefully crafted law that protects personal information of all Californians. The bill also ensures that law enforcement officials have the tools they need to continue to fight crime in the digital age.”

Sen. Joel Anderson (R-Alpine):

“Senator Leno and I helped bridge the gap between progressives and conservatives to make the privacy of Californians a top priority this year. This bipartisan bill protects Californians’ basic civil liberties as the Fourth Amendment and the California Constitution intended.”


The bill’s notice, reporting, and enforcement provisions provide transparency, oversight, and mechanisms to ensure that the law is followed. The bill also includes appropriate exceptions to ensure that the police can continue to protect public safety effectively and efficiently.

Californians should not have to choose between using new technology and keeping their personal lives private, and California’s technology companies should not be burdened with privacy laws stuck in the digital dark ages.


For more information, a copy of the bill can be viewed here.



US/EU Safe Harbour Invalidated – The View from London

On 6 October, the European Court of Justice struck down the “Safe Harbour” agreement created by EU Commission Decision 2000/520 for the transfer of personal data between entities located in the EU and the US, or even between a single company’s servers located in the US and the EU. The agreement was a cornerstone of transatlantic data transfers, and its necessity arose out of Directive 95/46/EC, commonly referred to as the Data Protection Directive.

The Data Protection Directive outlines the measures that each EU member state must enact for the safeguarding of personal data. One of the requirements is that each member state must prohibit the transfer of personal data outside of the EU to “third countries” that do not have adequate safeguards. On the basis that the data protection legislation in the US is less stringent than the Data Protection Directive, the EU and US entered into the “Safe Harbour” agreement as a means of allowing US firms to self-certify that they would take adequate measures (over and above those required by US law) to protect personal data. This allowed US firms to receive personal data from the EU without lengthy contractual provisions or changes to US law.

Consideration of the “Safe Harbour” agreement by the European Court of Justice was brought about by Maximillian Schrems, an Austrian national and data privacy activist, who challenged the decision of the Irish Data Protection Commissioner not to investigate whether the US government was using its powers over Facebook’s US-based entities as a “backdoor” to access personal data of EU citizens held on Facebook Ireland’s servers in the EU and/or the US.

Mr. Schrems’ complaint was based upon the allegations of Edward Snowden that the US intelligence services, particularly the NSA, were conducting mass surveillance on EU citizens through their jurisdiction over the US branches of various social media firms. The Snowden allegations had already prompted a joint US/EU working group on data protection and two EU Commission communications that effectively condemned the mass surveillance of EU citizens by the US. Relying on these developments, Mr. Schrems contended that US law did not provide adequate protection within the meaning of the Data Protection Directive.

The European Court of Justice found that the Irish Data Protection Commissioner was wrong not to investigate the complaint, as there was no prohibition in EU law which prevented him from doing so. The court determined that the standard which must be met for a Safe Harbour to exist is a strict one and is not left to the discretion of the EU Commission. The safeguards must be “adequate,” which means that the protection must be essentially equivalent to EU law. Furthermore, the EU Commission is under an obligation to periodically review its decisions as to the adequacy of a third country’s data protection laws to ensure any Safe Harbour remains factually and legally justified.

In considering whether the Safe Harbour was still valid, the court considered the derogations which allowed “national security, public interest or law enforcement requirements” to take primacy over the right to privacy. These derogations, coupled with a lack of judicial remedies for the individuals whose privacy would potentially be violated, meant that the levels of protection required to be considered “adequate” within the meaning of the Data Protection Directive were not present. The ECJ therefore ruled the EU Commission decision that granted the US “Safe Harbour” status invalid.

In light of this decision, it appears that EU firms can no longer transfer personal data to the US solely on the basis of the Safe Harbour. Instead, a contractual undertaking between the EU and US entities which outlines the level of data protection safeguards to be used by the US entity is required. This includes entities within the same group of companies. In the UK, the Information Commissioner’s Office has published model clauses which can be used by parties seeking to transfer data out of the EU.

Whilst Facebook is at the centre of the judgment, the abolition of the Safe Harbour means that all businesses with US and EU operations that are likely to share personal data – in any industry or endeavour – are potentially impacted. Any business which sends any data from the EU to the US should revisit whether that data is “personal data” and whether it has adequate contractual safeguards.

Returning to Mr. Schrems’ original complaint, it would seem that Facebook and other multinational companies may be subject to serious business disruption, as they will no longer be able to freely transfer data between their global group companies. However, many businesses that might be expected to be adversely affected have already released statements that they have long had in place the relevant inter-company agreements to allow legal data transfers. The companies more likely to bear the brunt of this decision are smaller businesses that outsource certain data processing functions to the US; employee payroll management or customer order processing, for example. They will need to revisit each of their contracts to ensure they have the right contractual protection. Even then, the potential issue remains that the US government may sequester personal data from US firms, in the name of national security or otherwise, and certain US firms may have no means to resist handing over the personal data of EU citizens.

Sedgwick Cybersecurity and Privacy Chair John Stephens Published on Cyberextortion and Ransomware

John Stephens, Sedgwick partner and head of the firm’s Cybersecurity and Privacy Practice Group, has published an article in Corporate Counsel entitled “The Rise of Cyber-Extortion, and How to Fight Back.”  9-23-15 – Corporate Counsel – Stephens

He was also recently published in the National Law Journal for his article “When Hackers Take Your Digital Data Hostage.” 10-5-15 – The National Law Journal – Stephens

Congratulations, John, and thank you for your contributions to the field of cybersecurity!

California AG Sends Strong Message to Live Up to Privacy Promises or Face Multi-Million Dollar Consequences

A California state court on September 17, 2015 approved a $33 million settlement between the California Attorney General (“AG”), the California Public Utilities Commission (“PUC”), and Comcast over Comcast’s failure to live up to its promise not to disclose customers’ information when those customers paid for a non-published telephone number. The settlement illustrates the AG office’s efforts to come down hard on businesses that do not live up to their privacy promises.

The AG’s office filed a complaint against Comcast charging it with false advertising and unfair business practices, as well as violation of the Public Utilities Code. Comcast charged its customers $1.50 per month to obtain a non-published, non-listed telephone number which would not be posted in online directories, phone books, or directory assistance. The AG claimed Comcast permitted the non-published, non-listed numbers of its California customers to be made available for publishing when Comcast made a system-wide account number change in late 2009. According to the AG, from July 2010 through December 2012, 75,000 Comcast customers who had paid the monthly fee for a non-published, non-listed number had their information published with a vendor, in a phone book, or made available by a directory assistance provider.

Before the AG’s complaint was filed, Comcast deleted the non-published numbers from directory listings it controlled, attempted to notify all 75,000 affected customers, refunded the amount charged to current customers, and notified former customers how to obtain a refund. Nevertheless, the AG’s office found Comcast’s conduct objectionable enough that it sought and obtained an agreement from Comcast to pay $33 million, most of which goes to the California PUC and the AG’s office, with only around $8 million for direct payments to affected customers. The settlement also requires Comcast to implement compliance programs with regular audits and reporting.

The settlement with Comcast confirms the AG’s office will continue to use the false advertising and unfair and deceptive act sections of California’s Business and Professions Code to bring tough enforcement actions against businesses, even when it appears a business made significant efforts to remediate failures or mistakes relating to disclosure of the private information of its customers. This case is also notable in that the consumer information disclosed—name, telephone number, and address—is not deemed personally identifiable information under California’s breach notice law (Civ. Code § 1798.82), which subjects a business to notice obligations if such information is accessed or disclosed without authorization. These elements alone—name, telephone number, and address—are similarly not included in the definition of personal information in California’s law requiring businesses to implement and maintain reasonable security procedures and practices to protect personal information (Civ. Code § 1798.81.5). However, the AG may have taken such a tough position with Comcast because Comcast charged a fee to its customers to keep this information private, even though the disclosed data elements in this case are not normally treated specially under California law.

This case is helpful in that it provides examples of disclosures to customers, contractual provisions with vendors, and notice letters to customers that the AG approved. Modeling similar documents after those approved by the AG’s office in this case may help to reduce the sting of an enforcement action. On the other hand, this case suggests that a mistake or failure relating to the disclosure of consumers’ private information will not be tolerated; it also suggests businesses should take the utmost care to ensure the private information of their customers is never disclosed, accessed, or breached, given the incredibly expensive penalties that might ensue. The AG is certainly incentivizing businesses either to implement protections of consumer information that are as close to fool-proof as possible or, conversely, not to promise strong privacy protections at all, so that a business does not face stiff fines if it fails to live up to promises relating to consumers’ information.

Sign o’ the Times: DMCA Battle Over Good Faith vs. Fair Use of Prince Song – The Ninth Circuit Rules

The interplay of fair use and the takedown provisions of the Digital Millennium Copyright Act (DMCA) has been controversial since the DMCA was signed into law in 1998. For example, citing fair use, the John McCain/Sarah Palin campaign protested CBS’ takedown notice of a campaign ad posted on YouTube that used part of a Katie Couric interview. Groups such as the Electronic Frontier Foundation (EFF) argued that political ads incorporating copyrighted material were “paradigmatic examples of fair use” in an open letter to CBS, Christian Broadcasting Network, Fox Networks and NBC Universal asking these networks to desist from issuing takedown notices of political ads. Yet, last year, Gannett sent a takedown notice over the posting of a video of an interview in which Alison Lundergan Grimes tried to avoid admitting that she voted for President Obama. In a less political context, but with wide implications, this tension between fair use and copyrights is central to the ongoing battle in Lenz v. Universal Music Corp. In that case, brought on Stephanie Lenz’s behalf by the EFF, Lenz obtained a ruling in 2008 that the owner of a copyright must consider the fair use doctrine in formulating a good faith belief in connection with a takedown notice. In what it called a “case of first impression,” the court denied Universal Music’s motion to dismiss Lenz’s claim for misrepresentation under 17 U.S.C. § 512(f). Judge Jeremy Fogel refused to allow immediate appeal of this order. The Ninth Circuit’s opinion, largely affirming the trial court’s later denial of Universal’s motion for summary judgment, underscores these issues and illustrates the discovery and evidentiary issues parties face in litigating a DMCA misrepresentation claim with fair use implications.


If De-Elevator Tries to Bring U Down (or Ur Uploaded Video)

Lenz videotaped her young children dancing in her family’s kitchen to the song “Let’s Go Crazy” by Prince. Lenz titled the video “Let’s Go Crazy #1” and uploaded it to YouTube. The audible portion of the song includes the lyrics “C’mon baby let’s get nuts” and the song’s distinctive guitar solo. At one point in the video, Lenz asks her toddler, “Do you like the song?” Universal sent a takedown notice demanding that YouTube remove Lenz’s video from the site. YouTube removed the video the following day and sent Lenz an email notifying her that it had done so in response to Universal’s accusation of copyright infringement. Lenz responded by sending YouTube a DMCA counternotification asserting that her video constituted fair use of “Let’s Go Crazy” and thus did not infringe Universal’s copyrights. YouTube reposted the video on its website about six weeks later. (“Let’s Go Crazy #1” has now been viewed over 1 million times.) Lenz then filed suit against Universal alleging misrepresentation pursuant to 17 U.S.C. § 512(f) on the ground that Universal failed to consider “fair use” before it sent its takedown notice. Lenz alleged that Universal issued the removal notice not in “good faith,” but only to appease Prince, because Prince “is notorious for his efforts to control all uses of his material on and off the Internet.”


Look 4 the Purple Banana

As it had in its motion to dismiss, Universal contended in its motion for summary judgment and on appeal that copyright owners cannot be required to evaluate the question of fair use prior to sending a takedown notice because fair use is merely an excused infringement of a copyright rather than a use authorized by the copyright owner or by law. Universal further contended that even if a copyright owner were required by the DMCA to evaluate fair use with respect to allegedly infringing material, any such duty would arise only after the copyright owner receives a counternotice and considers filing suit. Universal argued that this construction of the law is compelled because fair use is a defense to an action and Congress had not incorporated defenses into the “good faith” certification required of copyright owners. Further, Universal argued that its reading of the statute finds substantial support in Rossi v. Motion Picture Ass’n of America, Inc., 391 F.3d 1000, 1004-06 (9th Cir. 2004), cert. denied, 544 U.S. 1018 (2005), which held that Congress’ use of the term “knowingly” in § 512(f) made the test for liability for misrepresentation under the DMCA subjective, not objective. As Universal put it in its motion to certify the denial of its motion to dismiss for interlocutory appeal:

An inquiry into the propriety of a party’s consideration of fair use inevitably will lead to calls (as Plaintiff makes in this case) for a post hoc assessment of the reasonableness of the copyright owner’s evaluation of whether the material makes fair use of the copyright. (emphasis in original)

The district court’s order denying immediate appeal, however, stated that:

The Court did not hold that every takedown notice must be preceded by a full fair use investigation. . . . Rather, it recognized, as it has previously, that in a given case fair use may be so obvious that a copyright owner could not reasonably believe that actionable infringement was taking place. See Online Policy Group v. Diebold, Inc., 337 F. Supp. 2d 1195, 1204 (N.D. Cal. 2004) (emphasis added).

The district court’s use of the term “reasonable” and its citation to Diebold (an opinion Judge Fogel also wrote) are telling. Diebold had sent takedown notices aimed at removing posts of its internal emails discussing problems with its electronic voting machines. Diebold’s claims of copyright infringement were therefore directed at stifling the very discussion that made posting the emails fair use to begin with. Diebold, however, also included an objective standard as part of its test, holding that “‘knowingly’ means that a party actually knew, should have known if it acted with reasonable care or diligence, or would have had no substantial doubt had it been acting in good faith, that it was making misrepresentations.” This appears to conflict with Rossi, which holds: “A copyright owner cannot be liable simply because an unknowing mistake is made, even if the copyright owner acted unreasonably in making the mistake.” The proper test for a “bad faith” standard under the DMCA may be the legal equivalent of the “purple banana” of Prince rock-n-roll lore (Sed quid in infernos dicet? – roughly, “but what in hell does it say?”).


Dr. Everything’ll Be Alright (or Not)

Ultimately, following discovery, the parties filed cross-motions for summary judgment on Lenz’s § 512(f) claim. The trial court again ruled that a copyright holder must consider the fair use doctrine prior to sending a takedown notice, held that Lenz could proceed to trial under both the “actual knowledge” theory and the “willful blindness” doctrine, and certified its order for interlocutory appeal. The Ninth Circuit rejected Universal’s legal position that, because fair use is classified as an “affirmative defense,” it merely “excuses otherwise infringing conduct.” Rather, the court held that 17 U.S.C. § 107 created a type of noninfringing use and that fair use is therefore “authorized by law.”

The Ninth Circuit’s opinion also affirmed that Lenz had presented evidence that Universal did not form any subjective belief about the video’s fair use – one way or another – because it failed to consider fair use at all, and knew that it had failed to do so. A jury must therefore decide whether Universal’s actions were sufficient to form a subjective good faith belief about the video’s fair use or lack thereof. In so holding, the opinion expressly followed the holding in Rossi to reject Lenz’s argument to impose “a subjective standard only with respect to factual beliefs and an objective standard with respect to legal determinations.” However, the opinion also cites Diebold as an example of where a “copyright holder who pays lip service to the consideration of fair use by claiming it formed a good faith belief when there is evidence to the contrary is still subject to § 512(f) liability.”

However, though ruling as a general matter that the willful blindness doctrine may be asserted in support of a § 512(f) claim, the Ninth Circuit reversed the district court’s denial of Universal’s motion for summary judgment on that theory: the district court’s holding that “Universal has not shown that it lacked a subjective belief” that there was a high probability the video constituted fair use improperly reversed the burden on the issue. So while willful blindness legally remains a ground for misrepresentation under § 512(f), the quantum of evidence required to overcome a defendant’s motion for summary judgment under this holding will be difficult to muster.

Under these holdings on the legal standards for DMCA misrepresentation, discovery will likely resemble that in actual malice libel cases: often very extensive, driven by the need to build circumstantial evidence of the defendant’s state of mind and to controvert and impeach witnesses’ proclamations of good faith. It also seems likely that at least some cases will see the invocation of the “advice of counsel” defense, which may lead to the always disturbing image of counsel on the witness stand. Given the ongoing interplay of fair use and copyright, copyright owners, content providers, and Internet service providers may well conclude that Lenz had it mostly right: throw a kitchen dance party – just don’t invite the videographer, unless your attorney is also on the guest list.

10 Million Patients Affected by Fifth Major Healthcare Provider Data Breach of 2015

On August 5, 2015, Excellus BlueCross BlueShield discovered that information belonging to as many as 10 million of its clients may have been exposed in a sophisticated data breach campaign dating back to December 2013. The potentially compromised data included:

  • Credit card numbers
  • Social Security numbers
  • Dates of birth
  • Mailing addresses
  • Telephone numbers
  • Member identification numbers
  • Financial account information
  • Claims information

While Excellus had encrypted some of this information, the attackers gained administrative access to the company’s network and were thereby able to circumvent the encryption by using the decryption keys available to administrators. The company has hired Mandiant Incident Response Services of FireEye, Inc. to investigate the breach and counsel Excellus on remediation. Although Excellus claims it has not yet uncovered evidence that any of the exposed data was exfiltrated, it will be mailing a letter to affected parties with information about the breach and ways they can protect themselves from identity theft, and will also provide two years of free credit monitoring to any individuals exposed by the breach.
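
The failure mode described above – strong encryption undermined because decryption keys were reachable from compromised administrator accounts – is a key-management problem, and the standard mitigation is to separate key custody from data custody. The Python sketch below is a minimal illustration of envelope encryption using the widely available cryptography package; the in-memory KeyManagementService class and the record contents are hypothetical stand-ins, not a description of Excellus’s actual systems. Per-record data keys are wrapped by a master key that never leaves the (here simulated) key service, so a database administrator’s access alone yields only ciphertext.

    # Envelope encryption sketch (illustrative only): data keys are wrapped
    # by a master key held outside the application database.
    from cryptography.fernet import Fernet

    class KeyManagementService:
        """Stand-in for an external KMS or hardware security module."""
        def __init__(self):
            self._master = Fernet(Fernet.generate_key())  # never exported

        def wrap(self, data_key: bytes) -> bytes:
            return self._master.encrypt(data_key)

        def unwrap(self, wrapped_key: bytes) -> bytes:
            return self._master.decrypt(wrapped_key)

    kms = KeyManagementService()

    # Encrypt a record with a fresh data key; persist only the ciphertext
    # and the wrapped key, never the plaintext key.
    data_key = Fernet.generate_key()
    record_ciphertext = Fernet(data_key).encrypt(b"SSN=123-45-6789")
    wrapped_key = kms.wrap(data_key)

    # An administrator who dumps the database sees only ciphertext and a
    # wrapped key; recovering plaintext requires a separate, auditable
    # unwrap call to the key service.
    plaintext = Fernet(kms.unwrap(wrapped_key)).decrypt(record_ciphertext)
    assert plaintext == b"SSN=123-45-6789"

The design point is separation of duties: compromising the database tier produces wrapped keys rather than usable ones, and every unwrap request can be logged, rate-limited, and alarmed.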

With this attack, Excellus becomes the fifth major health care provider to disclose a breach since the beginning of 2015. The largest was the Anthem breach, which affected 80 million patients, followed by Premera (11 million), Excellus (10 million), UCLA Health Systems (4.5 million), and CareFirst (1.1 million). Each of these companies has since become the target of class action lawsuits by affected individuals. For a breached company, the damage extends well beyond the cost and potential liability of litigation: it also includes substantial harm to brand and reputation, credit monitoring costs for exposed individuals, and potential fines from the Department of Health & Human Services for violations of HIPAA’s Security Rule.

It’s no mystery why health care providers are the targets of attacks – each of these entities is a repository of correlated personal data for millions of people. Whether for medical record-keeping or billing for services, providers are encouraged to document information thoroughly and to retain records for a substantial (and often indefinite) period of time. To draw an analogy, if information were currency, healthcare providers would be virtual “banks” – and often poorly secured “banks” at that.

Ask a layman about HIPAA requirements and they will typically point to the disclosures mandated by the Privacy Rule (which governs the safeguarding of Protected Health Information by covered entities). In April 2003, however, the Department of Health & Human Services enacted the Security Rule, which requires covered entities possessing Electronic Protected Health Information (EPHI) to implement three categories of security safeguards: administrative, physical, and technical. For each category, the Security Rule sets forth security standards, each with its own set of “required” and “addressable” implementation specifications. While “required” specifications must be adopted and implemented strictly according to the Security Rule, “addressable” specifications are generally left to individual covered entities to implement as they deem appropriate (subject to administrative agency review). Enforcement of the Security Rule is left to the Department of Health & Human Services, which conducts investigations and hearings on HIPAA violations and has the authority to levy civil penalties of up to $50,000 per violation, with an annual cap of $1.5 million.
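
The arithmetic of those penalty figures is worth pausing on, since a breach is rarely a single “violation.” The short Python sketch below is a back-of-the-envelope illustration using the statutory maximums cited above; the violation counts are hypothetical, and actual penalty tiers vary with culpability:

    # Hypothetical HIPAA civil penalty exposure at the statutory maximums.
    PER_VIOLATION_MAX = 50_000   # dollars per violation
    ANNUAL_CAP = 1_500_000       # dollars per year

    def penalty_exposure(violations: int) -> int:
        """Exposure at the maximum rate, limited by the annual cap."""
        return min(violations * PER_VIOLATION_MAX, ANNUAL_CAP)

    print(penalty_exposure(10))   # 500000
    print(penalty_exposure(30))   # 1500000 -- the cap is already reached
    print(penalty_exposure(500))  # 1500000 -- still capped

At the maximum rate, thirty violations in a year are enough to hit the cap – a threshold that a breach exposing millions of records passes almost trivially.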

While HIPAA’s Security Rule gives healthcare providers some guidance on how to adequately secure their networks, the task is daunting, to say the least. In the face of ever-mounting malpractice claims and complicated billing procedures, providers are driven to create and retain records like no other business (with the possible exception of the finance industry). Yet the focus of healthcare providers is on delivering high-quality medical care as efficiently and cost-effectively as possible. Doctors and patients care more about the quality of the medical care provided than about the information security safeguards deployed by the hospital’s IT department. Accordingly, in too many instances, security efforts are aimed at necessary regulatory compliance rather than at implementing security best practices. In addition, technologies that can streamline the provision of care or provide enhanced functionality are embraced before their security implications can be adequately assessed.

Take, for example, the vulnerability of connected medical devices. In June 2015, security firm TrapX released a report claiming that attackers were using unprotected medical devices to maintain a foothold in healthcare networks. Based on investigations at client providers and company-sponsored analysis of common medical devices, TrapX reported that attackers were able to infect an unmonitored PACS (Picture Archive and Communications System) radiologic imaging system with malware, which spread to a key nurse’s workstation, from which confidential hospital data was exfiltrated to China. In another instance, attackers infected a blood gas analyzer in a hospital laboratory and installed a “backdoor” into the network, through which they were able to harvest credentials from other network systems. Such devices are typically not scanned by security monitoring systems and provide a stable platform from which to launch attacks across a provider’s network; connected to the hospital’s network but inadequately secured, they pose a substantial risk to the provider.
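
The monitoring gap TrapX describes is easy to state in code: the set of devices on the network, minus the set of devices any security tooling actually watches. The Python sketch below is a hypothetical illustration – the inventories, addresses, and device names are invented for the example – but it captures the basic coverage check a provider’s security team might run:

    # Hypothetical coverage check: which networked assets are never scanned?
    asset_inventory = {
        "10.0.1.10": "nurse workstation",
        "10.0.1.11": "billing server",
        "10.0.2.20": "PACS imaging system",
        "10.0.2.21": "blood gas analyzer",
    }

    # Hosts enrolled in endpoint protection / vulnerability scanning.
    monitored_hosts = {"10.0.1.10", "10.0.1.11"}

    unmonitored = {
        ip: name
        for ip, name in asset_inventory.items()
        if ip not in monitored_hosts
    }
    for ip, name in sorted(unmonitored.items()):
        print(f"UNMONITORED: {ip} ({name})")
    # UNMONITORED: 10.0.2.20 (PACS imaging system)
    # UNMONITORED: 10.0.2.21 (blood gas analyzer)

In practice the inventory side of that comparison is the hard part – embedded medical devices often cannot run endpoint agents at all, which is why network segmentation is the usual compensating control.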

While it is not yet known whether vulnerable medical devices played a part in this year’s five major healthcare provider attacks, the Food and Drug Administration issued a safety notice in August 2015 warning that an infusion pump used by hospitals throughout the country was vulnerable to cyber attack. While this was the first FDA warning of its kind, it was not the first reported incident of medical record exposure resulting from a vulnerable medical device. In March 2015, Cleveland-based MetroHealth System discovered malware and an installed “backdoor” on three computers in its cardiac catheterization lab, which had apparently infiltrated the network in July 2014. The breach affected nearly 1,000 patients, whose names, birth dates, dates of service, height, weight, and other medical record data had potentially been compromised. While the MetroHealth breach was minuscule in comparison to the five breaches listed above, each incident reveals that vulnerabilities exist, that attackers are actively seeking to breach healthcare provider networks, and that the larger the provider, the greater the risk posed to both the provider and its patients.