Is it reckless for a bank to leave its vault unlocked? If you accept the reasoning of Federal Trade Commission (FTC) Chief Administrative Law Judge D. Michael Chappell – only if someone actually breaks in and steals something. On this premise, the FTC’s unfair data security practices case against LabMD, a Georgia-based clinical testing laboratory, was dismissed because the agency failed to meet its burden of proving that the healthcare provider’s allegedly deficient security practices caused, or were likely to cause, substantial consumer injury.
LabMD was a privately held Georgia corporation formed by Michael J. Daugherty in 1996. Its primary business consisted of providing tissue sample analysis by pathologists specializing in prostate or bladder cancer. Urologists throughout the country would send LabMD patient specimens for analysis, through which LabMD came into possession of protected health information (PHI) belonging to thousands of patients.
In February 2008, Tiversa, a security firm based in Pittsburgh, Pennsylvania, discovered that a LabMD insurance report was being shared openly by a LabMD billing computer on the LimeWire peer-to-peer network. The report (referred to in the matter as the “1718 File”) was found to contain PHI and personally identifiable information (PII) on approximately 9300 patients, including their names, dates of birth, Social Security numbers, CPT codes for laboratory tests conducted, and, in some cases, health insurance company names, addresses, and policy numbers. After discovering that the 1718 File contained patient PHI, Tiversa used the “browse host” function of LimeWire to obtain a list of all other files being shared by the LabMD billing computer. The 1718 File was one of approximately 950 files being shared from the “My Documents” directory on the LabMD computer, most of which were music and video files. However, eighteen documents were also being shared at the same time, three of which also contained patient PHI.
Tiversa contacted LabMD in May 2008, disclosed its download of the 1718 File, and offered its remediation services. In July 2008, LabMD rejected Tiversa’s proposal, removed the file-sharing software, and reassessed its network security (although the FTC later claimed that LabMD’s remediation efforts were also insufficient). Meanwhile, the 1718 File sat dormant until 2009, when the FTC served a Civil Investigative Demand (CID) on Tiversa’s affiliate, The Privacy Institute. Tiversa responded to the CID by producing a spreadsheet of companies that Tiversa claimed had exposed the personal information of 100 or more individuals. Among the names provided was LabMD’s, along with a copy of the 1718 File. This disclosure led the FTC to open an investigation of LabMD, which ultimately resulted in the action against the company for failing to implement reasonable security, an alleged “unfair” practice.
It is at this point in the narrative that the parties’ allegations (and consequently Judge Chappell’s Initial Decision) become mired in conspiracy theories. After the FTC began its action against LabMD, Richard Wallace, a forensics analyst hired by Tiversa in July 2007 who originally found the 1718 File, alleged that Tiversa had adopted a business practice of exaggerating how widely erroneously shared files had spread across peer-to-peer networks and, in some cases, of intentionally misrepresenting that files had been discovered at IP addresses associated with known or suspected identity thieves. Tiversa countered that Wallace’s claims were false, motivated by his termination for cause during the pendency of the case against LabMD. Nevertheless, Wallace’s allegations resulted in an investigation by the United States House Committee on Oversight and Government Reform into Tiversa and its involvement with governmental entities. Judge Chappell’s Initial Decision goes into great detail about the allegations of unethical practices by Tiversa, ultimately concluding that Wallace (a witness for LabMD) was more credible than Robert Boback (CEO of Tiversa and a witness for the FTC). This finding had a profound effect on the outcome of the case, with Judge Chappell wholly discounting the testimony of one of the FTC’s consumer injury experts to the extent his conclusions were based in part on the testimony of Tiversa’s CEO. Judge Chappell also challenged the expert opinions of the FTC’s other consumer injury expert, stating that although he “did not expressly rely on the discredited and unreliable testimony from Tiversa’s CEO as to the ‘spread’ of the 1718 File for his opinions on the likelihood of medical identity theft, this evidence was clearly considered … and it cannot be assumed that [the] opinions were not influenced by his review of [the CEO’s] testimony.” Initial Decision, p. 67, footnote 31.
A potential red herring was also injected into the case: 40 LabMD paper “day sheets”, 9 patient checks, and 1 money order discovered in the possession of identity thieves in Sacramento, California in 2012. Their discovery resulted in a dispute over how the records had traveled from Georgia to California, with the FTC claiming that they must have been downloaded from LabMD’s insecure network but lacking evidence to prove this theory. Lost in this swirl of accusations was the crux of the case: that LabMD had openly shared a file containing PHI of approximately 9300 patients on an open peer-to-peer network, which the FTC alleged was an “unfair” practice.
The FTC’s authority relating to data security derives from Section 5(n) of the Federal Trade Commission Act (“FTC Act”), which states that the Commission may declare any act or practice “unfair” that “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” The FTC’s complaint alleged that LabMD failed to provide reasonable security because the healthcare provider:
- did not develop, implement, or maintain a comprehensive information security program to protect consumers’ personal information;
- did not use readily available measures to identify commonly known or reasonably foreseeable security risks and vulnerabilities on its networks;
- did not use adequate measures to prevent employees from accessing personal information not needed to perform their jobs;
- did not adequately train employees to safeguard personal information;
- did not require employees, or other users with remote access to the networks, to use common authentication-related security measures;
- did not maintain and update operating systems of computers and other devices on its networks; and
- did not employ readily available measures to prevent or detect unauthorized access to personal information on its computer networks.
Judge Chappell began his analysis by citing Congressional reports for the proposition that Section 5(n) of the FTC Act was intended to limit the scope of the FTC’s authority. However, rather than evaluating whether LabMD’s security was unreasonable as alleged, the Initial Decision focused solely on the issue of whether “substantial consumer injury” was at stake. The decision went to great lengths to attack the credibility of the FTC’s claims and evidence (largely by attacking Tiversa and its CEO as the FTC’s proxy), and discounted the potential harm of disclosing patient CPT (current procedural terminology) codes by noting that identity thieves would need to look them up on Google or the American Medical Association’s website in order to learn what tests had been performed on specific patients. Although the FTC had presented consumer injury expert witness testimony as well as survey data to demonstrate that the disclosure of consumer PHI/PII could result in various forms of identity fraud and other harms to consumers, the Initial Decision remarked that “the absence of any evidence that any consumer has suffered harm as a result of [LabMD]’s alleged unreasonable data security, even after the passage of many years, undermines the persuasiveness of [the FTC]’s claim that such harm is nevertheless ‘likely’ to occur.” Initial Decision, p. 52. Ultimately, the Initial Decision concluded that because actual harm had not yet resulted from the allegedly unreasonable security practices, the practices were not “likely” to cause substantial consumer harm. Endorsing a narrow view that the “substantial consumer injury” required by Section 5(n) could not be satisfied by “hypothetical” or “theoretical” harm or “where the claim is predicated on expert opinion that essentially only theorizes how consumer harm could occur,” Judge Chappell opined that “[f]airness dictates that reality must trump speculation based on mere opinion.” Initial Decision, pp. 52, 64.
THE UNLOCKED VAULT
There is no dispute that a LabMD employee had placed a file containing PHI of approximately 9300 of its patients in a publicly shared folder on a billing computer. Anyone with LimeWire or any other Gnutella-based peer-to-peer file-sharing software (which was freely available in 2008) could have downloaded any of the 950 files being shared by the LabMD billing computer, including the four containing PHI. From a credential authentication perspective, this is the equivalent of making these confidential files available for download on a public website, with no requirement for a username or password to obtain access. It is widely accepted, both in state and federal law, that the types of PHI/PII contained in the 1718 File should not be made publicly available in such a manner, particularly by a healthcare provider subject to the HIPAA/HITECH Security Rule.
The Initial Decision’s analysis focused solely on whether there was an actual or probable injury after the fact based on this specific incident (i.e. the 1718 File being downloaded by Tiversa), instead of whether the practice itself (i.e. openly sharing a file containing PHI on 9300 patients on an open peer-to-peer network, from which it could have been downloaded by anyone) caused or was likely to cause substantial consumer injury. Actual or imminent injury is a requirement for standing in civil litigation, but the likelihood of substantial consumer harm is the proper standard for evaluating the FTC’s regulatory authority. In LabMD’s case, two windfall events saved the company from a much more disastrous result: 1) the 1718 File was found (so far as is known) only by Tiversa and not by identity thieves, and 2) Tiversa notified LabMD of the exposure shortly after its discovery, and the exposure was quickly corrected. Consider what would have happened if the 1718 File had instead been discovered by an identity thief rather than Tiversa – the outcome would have been different (and likely much worse) for reasons totally unrelated to the security practice itself (i.e. LabMD’s practice of openly sharing PHI had little or no effect on who actually discovered the file). Evaluating the reasonableness of LabMD’s practices based on subsequent circumstances over which it had no control (i.e. the identity of the discoverer) judges the wrongfulness of the act solely by its accidental consequences – effectively, “no harm, no foul.”
The Initial Decision also contends that “to base unfair conduct liability upon proof of unreasonable data security alone would, on the evidence presented in this case, effectively expand liability to cases involving generalized or theoretical ‘risks’ of future injury, in clear contravention of Congress’ intent, in enacting Section 5(n), to limit liability for unfair conduct to cases of actual or ‘likely’ substantial consumer injury.” Initial Decision, p. 89. Here the Initial Decision attempts to graft the actual-or-imminent-harm requirement of civil litigation onto the scope of the FTC’s authority, in contravention of the language of the FTC Act itself. This claim disregards the plain meaning of the terms “likely” and “risk,” which both relate to the possibility or probability that an event may occur – if the event actually occurs, it ceases to be a “risk” or “likelihood”; it becomes a “fact” or “certainty.” According to the United States Court of Appeals for the Third Circuit, “[a]lthough unfairness claims ‘usually involve actual and completed harms,’ … ‘they may also be brought on the basis of likely rather than actual injury.’ And the FTC Act expressly contemplates the possibility that conduct can be unfair before actual injury occurs.” FTC v. Wyndham Worldwide Corp., 799 F.3d 236, 246 (3d Cir. 2015) (quoting Int’l Harvester Co., 104 F.T.C. 949, 1061 (1984)). By extending the FTC’s authority to practices that are “likely” to cause substantial consumer injury, and not merely to those that have already caused it, Congress granted the FTC authority to pre-emptively address unfair trade practices before innocent consumers are harmed.
To return to the opening of the article, LabMD’s storage of the 1718 File in a shared folder on a peer-to-peer network can be analogized to leaving the doors and vault of a bank unlocked when no one was inside – the critical question is whether such an act (or practice) is likely to cause substantial consumer injury. The answer does not depend upon whether any money was actually stolen during the months the bank was left unlocked; the fault lies in leaving the protected assets vulnerable, so that the only factor separating a potential theft from an actual one is whether the wrong person checks if the doors are locked. The mere fact that a thief did not test the doors during that period does not absolve the bank of otherwise reckless behavior. Accordingly, a better analysis may be to focus on the conditions existing during the period that the bank was left unlocked (or the practice existed) and, based on those conditions, evaluate whether the practice was reasonable. In the case of data security, the analysis should consider the type of information that was exposed (i.e. PHI/PII v. public information), how that type of information could be used to harm consumers (i.e. susceptibility to abuse by identity thieves, extortionists, or others), what measures were taken to safeguard the information from exposure (i.e. was it a complex “hack” of a computer network involving exploitation of zero-day vulnerabilities v. downloading a file from a publicly available website with no authentication requirements), and what security measures are reasonable under the circumstances (in terms of time, cost, manpower, and other factors). These are among the factors identified by the FTC’s expert Kim, but the Initial Decision declined to consider them when it declined to engage in a reasonableness analysis.
The FTC has not yet announced whether it will appeal the Initial Decision, although commentators have speculated that it will. While the Third Circuit’s opinion in FTC v. Wyndham Worldwide Corp. previously recognized the FTC’s authority to actively challenge deficient cybersecurity practices without first announcing the standards to be implemented, that case involved multiple breaches and actual consequential harm to the customers whose personal information was exposed. At stake now, for those on both sides of the issues of the scope of FTC authority and the impact of the LabMD harm analysis, is the extent to which federal courts may accept Judge Chappell’s analysis and similarly treat the absence of actual harm in the specific instance as determinative. That would substantially impact the FTC’s goal of proactively policing deficient cybersecurity practices and limit the FTC in this area to intervening only after consumers are demonstrably injured or such injury is deemed imminent. Unless appealed and reversed, the LabMD Initial Decision could also create a perception of the FTC’s vulnerability on the issue of its authority and lead other companies threatened with FTC action for deficient security practices to challenge the regulatory agency. What remains to be seen is whether federal courts will reassert the distinction between the actual or imminent harm required for civil standing and the FTC’s regulatory authority to prevent likely consumer injuries before they occur.