FTC Settles Ashley Madison Data Breach Complaint

The operators of Ashley Madison, the dating website for married people that became famous following its massive data breach in 2015, settled claims brought by the Federal Trade Commission (“FTC”) regarding that breach and their security practices and representations. Ruby Corp., Ruby Life Inc., and ADL Media Inc. (collectively, “Ruby”), named as defendants, were responsible for the operation of ashleymadison.com.

Hackers breached Ashley Madison—a site with over 18 million users in the United States alone—in 2014 and 2015, with intruders reportedly gaining access to Ruby’s networks multiple times. Ruby did not detect the breach until July 2015, when an employee noticed large data transfers.

Ruby has agreed to pay $1.6 million to the FTC and state regulators to resolve charges relating to the hack, with several million more of the judgment suspended in light of the company's financial limitations. The FTC, 13 states, and the District of Columbia entered into a settlement to resolve the complaint.  Earlier this year, Ruby also entered into a compliance agreement with the Office of the Privacy Commissioner of Canada and an enforceable undertaking with the Office of the Australian Information Commissioner.  Additionally, multidistrict class action litigation brought by numerous former Ashley Madison customers continues against Ruby.

The Complaint

The FTC filed a complaint in the U.S. District Court for the District of Columbia. Among the allegations raised in the complaint were that Ruby:

  • Failed to have a written organizational information security policy
  • Failed to secure remote access, regularly monitor unsuccessful login attempts, revoke ex-employees' passwords, restrict access to systems based on employee job functions, or implement controls over the retention of passwords and encryption keys, and permitted employees to reuse passwords
  • Failed to properly train employees to perform the data-security measures related to their jobs
  • Failed to ensure that third-party providers utilized reasonable security measures
  • Failed to monitor their systems at random intervals to identify security breaches and to ensure the effectiveness of their protective measures

The complaint also alleged that Ruby:

  • Falsely assured users that their information was private and protected
  • Created fake profiles to attract new users, which consumers had no way to distinguish from real profiles
  • Falsely claimed it had received a “Trusted Security Award,” and stated that the site was “100% secure,” “risk free,” and “completely anonymous”
  • Required consumers to purchase the right to fully delete their profiles, telling them only after payment that their information would be retained for 6 to 12 months thereafter; Ruby then either retained the information for up to 12 months or failed to remove it entirely

Settlement Agreement

In addition to enjoining Ruby from misrepresentations as to its security practices and its utilization of fake profiles, the Settlement Agreement sets forth a series of data security practices that Ruby is required to implement, along with initial and biennial assessments of compliance. The Settlement Agreement requires Ruby to obtain these assessments from an objective third-party professional that will monitor Ruby and the execution of its new security program. The Settlement Agreement also prohibits Ruby from using personal information collected through its online dating sites prior to the entry of the Settlement Agreement unless it complies with the requirements discussed above regarding the cessation of its misrepresentations to consumers. Ruby must also submit a compliance report to the FTC.

Some Takeaways

The complaint and subsequent Settlement Agreement are only the latest exercise of the FTC's asserted power to investigate and prosecute companies for inadequate data security. The mandated security program outlined in the Settlement, for example, provides a useful roadmap that proactive businesses may utilize to show preemptively that their compliance is in line with FTC expectations. The Settlement Agreement also provides a warning to those who freely throw around statements and self-awarded seals regarding the security of their platforms. Finally, the complaint and settlement reinforce the importance of restricting access to systems based on service providers' and employees' job functions, and of internal employee and vendor controls regarding password usage and retention.

  • The FTC noted that 36 million individuals worldwide were affected, making it one of the largest data breaches it has investigated.
  • Finally, Ruby was ordered to pay $8,750,000 in satisfaction of the judgment, but this amount was suspended. Instead, Ruby will pay $828,500 to the FTC, and $828,500 to the 13 states and DC, for a total of approximately $1.6 million. Should Ruby be found to have misrepresented its financial condition, Ruby will immediately owe the full amount of the judgment.
  • The Settlement Agreement outlines a comprehensive data security program for the personal information collected. In doing so, it states that the program is to be “appropriate to Defendants’ size and complexity, the nature and scope of Defendants’ activities, and the sensitivity of the personal information collected from or about consumers,” signaling that the FTC is not promoting a one-size-fits-all approach to data security. The safeguards ordered, however, are of the type likely to be expected of most companies. For example, the FTC requires that Ruby designate an employee to take responsibility for the program, create protocols to identify and resolve internal and external risks, and conduct a risk assessment to assess the sufficiency of, necessity for, and implementation of various safeguards. The Settlement also requires the creation of a process to select and retain third-party service providers capable of safeguarding any information they receive from Ruby.
  • The parties quickly settled the matter following the filing of the Complaint. The Settlement Agreement—a stipulated order for permanent injunctive and other relief—was entered into by the FTC, 13 states, and the District of Columbia against Ruby.
  • The FTC brought charges alleging unfair security practices, and misrepresentations regarding network security, user profiles, terms and conditions for deleting profiles, and data security seals.

As New York Attorney General Schneiderman stated: “This settlement should send a clear message to all companies doing business online that reckless disregard for data security will not be tolerated.”  (New York will receive $81,330.94 of the payment being made, since up to 652,627 New York residents were members of Ashley Madison at the time of the security breach).  Businesses that want to take an active approach to data security compliance can glean much from the FTC’s complaint and settlement here.

One Good Deal After Another – Navy Data Breach, Damages and Sovereign Immunity

“One good deal after another” – This old expression from my time of service in the USN popped into my head as I read news of the latest breach of information regarding Navy personnel. In sum, the Navy reported on November 23 that the laptop of a government contractor supporting a naval contract was “compromised” and that “unknown individuals” accessed sensitive information, including Social Security numbers, on more than 130,000 current and former sailors.  At last report, there is no evidence the leaked data has been misused.

The facts so far, as reported, are facially similar to those at issue in In re Science Applications Int’l Corp. Litigation, 45 F. Supp. 3d 14 (D.D.C. 2014) (“SAIC”), where an employee of SAIC, an information-technology company that handles data for the federal government, had her car broken into and back-up tapes containing health care information regarding millions of members of the armed services and their families were stolen.  The SAIC court rejected plaintiffs’ claims for increased risk of identity theft and monitoring costs on the grounds set out in Clapper v. Amnesty International USA, 133 S. Ct. 1138 (2013), holding that, in addition to a substantially increased risk of harm resulting from the occurrence, there also had to be “a substantial probability of harm with that increase taken into account.” SAIC, 45 F. Supp. 3d at 16 (emphasis in original).  Because there was little likelihood that the thief involved even knew what information he or she had come into possession of, much less possessed the technology to access it, the SAIC court found no injury-in-fact for the bulk of the plaintiffs.  However, the court allowed two claims to go forward, including a claim under the Privacy Act for previously unreceived unsolicited calls to an unlisted number pitching medical products and services targeted at a specific medical condition listed in the stolen medical records. Id. at 33.

We might note that news of this latest “compromise” came out the same month as the long-awaited ruling in Welborn v. IRS, —- F. Supp. 3d —–, 2016 WL 6495399 (D.D.C. 2016), brought, as you will recall, as a result of 330,000 tax-related documents stolen during a cyberattack that extended from mid-February to mid-May 2015 and targeted the IRS’s “Get Transcript” program.  Among other causes of action, the plaintiffs brought suit under the Privacy Act and the Internal Revenue Code.  The Welborn court also rejected plaintiffs’ claims for an increased threat of future identity theft and fraud as a result of the IRS security breach as entirely speculative and dependent on the decisions and actions of one or more independent, and unidentified, actors. Id. at *8 (quoting Clapper, 133 S. Ct. at 1150).  However, the Welborn court found that three of the plaintiffs had alleged sufficient injury-in-fact to maintain standing: two alleged that they had suffered actual identity theft when someone filed false tax returns and claimed fraudulent refunds in their names, and one alleged that she had “been the victim of at least two occasions of fraudulent activity in her financial accounts, one of which resulted in the removal of funds from a personal financial account, which occurred after the IRS data breach.”  The latter plaintiff’s claim was nonetheless dismissed for failure to plead causation, as occurrence after the breach is not, by itself, sufficient to show causation. Id. at *9 – *10.  The court then held that (1) the remaining two plaintiffs’ claims for unauthorized disclosure under the Privacy Act were preempted by the tax code, and (2) plaintiffs’ Privacy Act claims for “failure to safeguard” must be dismissed for failure to allege actual damages (as opposed to injury-in-fact). Id. at *12 (to plead any Privacy Act claim adequately, a plaintiff must plead “actual—that is, pecuniary or material—harm”).  Ultimately, the court also dismissed the plaintiffs’ claims for unauthorized disclosure under the Tax Code on grounds of sovereign immunity.  To allege improper disclosure under the Code, a plaintiff must allege (1) knowing or negligent, (2) disclosure, (3) of a return or return information.  The IRS argued, and the court held, that plaintiffs were attempting to present a “failure to protect” claim couched as an “improper disclosure” claim, and the Code does not authorize suit against the IRS based on a failure to protect.  That is, the plaintiffs’ attempt to expand liability would expand the government’s waiver of sovereign immunity to include a claim not contemplated by the Code.

A very similar argument could also have been made with regard to the Privacy Act claims, but the court did not reach it given its finding of no actual damages, and the holding has broad implications. Finally, the involvement of a government contractor in the scenario of the Navy breach could also implicate the recent Supreme Court decision in Campbell-Ewald Co. v. Gomez, No. 14-857, 2016 WL 228345 (2016), regarding “derivative sovereign immunity.”  The Navy had contracted with Campbell-Ewald to develop a recruiting campaign that included sending text messages to young adults, but the contract stated that messages could be sent only to individuals who had “opted in” to receive marketing solicitations.  Campbell-Ewald developed a list of cellular phone numbers for contacting users, and then transmitted the Navy’s message to more than 100,000 people.  Gomez, who had not opted in by consenting to receive messages, received one anyway and filed a nationwide class action seeking damages and alleging that Campbell-Ewald had violated the Telephone Consumer Protection Act (“TCPA”).  Campbell-Ewald argued that, as a contractor acting on the Navy’s behalf, it had acquired (i.e., had “derived”) immunity from suit under the TCPA from the Navy’s sovereign immunity.  However, the Supreme Court held that Campbell-Ewald violated both federal law (the TCPA) and the Government’s explicit contractual instructions that messages be sent only to individuals who had “opted in.”  The Court held that when a contractor violates both federal law and the Government’s explicit instructions, there is no “derivative immunity” and the contractor is not shielded from suit.

Proper Handling of Biometric Data — Lessons Learned from a $1.5 Million Illinois Class Action Settlement

In 2008, Illinois passed the Biometric Information Privacy Act, 740 ILCS 14/1 (the Act or BIPA), which requires companies to obtain a person’s consent before collecting that person’s biometric data. Illinois, unlike other states such as Texas, provides a private right of action for individuals whose data was collected without proper notification and consent. Under Section 15 of the Act (Retention; collection; disclosure; destruction), a private entity in possession of biometric identifiers or biometric information must develop a written policy establishing a retention schedule and guidelines for destruction of the data.

In what is being reported as the first settlement under the Illinois statute, on December 1, 2016, an Illinois state court approved a $1.5 million class action settlement between L.A. Tan Enterprises Inc. (L.A. Tan) and a class of its customers. Sekura v. L.A. Tan, Ill. Cir. Ct. 2015-CH-16694. The class plaintiffs alleged that L.A. Tan, which used fingerprint scanning technology rather than a key fob for membership purposes, failed to obtain written consent from its customers to use the data. The complaint also alleged that the company failed to provide information about how it would store the biometric data and the circumstances under which it would destroy the data, i.e., when the customer dropped his or her membership or the franchise closed.

What makes this settlement interesting is the fact that the complaint did not allege that the biometrics data was lost, stolen or sold. Instead, the class plaintiffs alleged that the company did not treat the data as carefully as the law requires. Similar to settlements with the OCR over HIPAA violations, the L.A. Tan settlement also requires the company to take corrective action to ensure compliance with the Illinois statute and to destroy all biometric data it still holds.

The sensitivity of biometric data requires companies that conduct business in Illinois not only to properly collect the data, but also to store and dispose of it as required by law. Failure to do so could expose those companies to unnecessary liability even if the data is not lost, stolen or misused.

Two federal courts, for example, have denied defense motions to dismiss actions brought under BIPA. See In re Facebook Biometric Information Privacy Litigation, Case No. 15-cv-03747, 2016 WL 259385 (N.D. Cal. 5/5/16) (social networking website users brought a putative class action against a website operator under BIPA, alleging that the operator unlawfully collected and stored biometric data derived from their faces; the court denied the defense motions to dismiss and for summary judgment, finding that the users stated a cause of action under BIPA) and Norberg v. Shutterfly, Inc., 152 F. Supp. 3d 1103 (N.D. Ill. 2015) (consumers brought an action against the operator of several photo sharing websites, seeking statutory damages for alleged violations of BIPA; the case was dismissed with prejudice on April 15, 2016, pursuant to a confidential settlement agreement). More recently, however, another federal court in Illinois granted the defense motion to dismiss a BIPA complaint for lack of jurisdiction under Spokeo. See McCollough v. Smart Carte, Inc., Case No. 16 C 0377, 2016 WL 4077108 (N.D. Ill. 8/1/16).

Strike Three – You’re Out – Data Breach Shareholder Derivative Lawsuit Against Home Depot Dismissed

On November 30, 2016, Judge Thomas W. Thrash dismissed a shareholder derivative action brought against Home Depot as a result of the breach of its security systems and theft of its customers’ personal financial data (“the Breach”) in 2014. In re The Home Depot, Inc. Shareholder Derivative Litigation, Civ. No. 1:15-CV-2999, 2016 WL 6995676 (N.D. Ga. 2016). In the derivative action, Plaintiffs asserted that Home Depot was harmed as a result of the company’s alleged delay in responding to significant security threats, and thus sought to recover under three primary claims against Home Depot’s current and former directors and officers (“Ds&Os”). These included the following alleged claims: (1) breach of the duty of loyalty by failing to institute internal controls sufficient to oversee the risks in the event of a breach, and by disbanding a Board of Directors committee that was responsible for overseeing those risks; (2) waste of corporate assets; and (3) violation of Section 14(a) of the Securities Exchange Act in connection with Home Depot’s 2014 and 2015 proxy filings. According to Judge Thrash, all of the claims against the Ds&Os “ultimately” related to what they “knew before the Breach and what they did about that knowledge.” Defendants filed a motion to dismiss, which Judge Thrash ultimately granted, applying Delaware law. It was undisputed that no demand was made on the Home Depot Board of Directors. Thus, Plaintiffs had the burden of demonstrating that the demand requirement was excused because it would have been futile.

Judge Thrash analyzed each of the three claims against the Ds&Os. As for the primary claim that the Directors allegedly breached their duty of loyalty and that they failed to provide oversight, Plaintiffs were required to show that the Directors either “knew they were not discharging their fiduciary obligations or that the Directors demonstrated a conscious disregard for their responsibilities[.]” When combined with the general demand futility standard, Plaintiffs essentially needed to show that a majority of the Board faced substantial liability because it consciously failed to act in the face of a known duty to act. Judge Thrash stated that this is “an incredibly high hurdle for the Plaintiffs to overcome[.]”

In finding that Plaintiffs failed to overcome this hurdle, Judge Thrash rejected Plaintiffs’ arguments about the significance of disbanding the Infrastructure Committee charged with oversight of the risks Home Depot faced in the event of a data breach. Plaintiffs alleged that the Board failed to amend the Audit Committee’s charter to reflect the new responsibilities for data security that had been transferred from the Infrastructure Committee, as required by the Company’s Corporate Governance Guidelines. As a result, Plaintiffs alleged that the Board failed to designate anyone with the responsibility to oversee data security, thereby leaving the company without a reporting system. Judge Thrash concluded that “[t]his argument is much too formal.” Regardless of whether the Audit Committee had “technical authority,” both the Committee and the Board believed it did. Given the factual allegations that the Audit Committee received regular reports from management on the state of Home Depot’s data security, and the fact that the Board in turn received briefings from both management and the Audit Committee, the court concluded that “there can be no question that the Board was fulfilling its duty of loyalty to ensure that a reasonable system of reporting existed.”

The court also rejected Plaintiffs’ argument that the Board’s failure “to ensure that a plan was in place to ‘immediately’ remedy the deficiency in [Home Depot’s data security],” supported the breach of the duty of loyalty claim. Plaintiffs acknowledged in the complaint that the Board acted before the Breach occurred, that it had approved a plan that would have fixed many of Home Depot’s security weaknesses, and that it would be fully implemented by February 2015. Under Delaware law, the court held that directors violate their duty of loyalty only if “they knowingly and completely failed to undertake their responsibilities.” Judge Thrash concluded that “as long as the Outside Directors pursued any course of action that was reasonable, they would not have violated their duty of loyalty.”

In addition, Plaintiffs alleged that there was “a plan,” but that “it moved too slowly.” The court held that this was not the standard under which to evaluate demand futility on a duty of loyalty claim. The court noted that with the benefit of hindsight, “one can safely say that the implementation of the plan was probably too slow, and that the plan probably would not have fixed all of the problems Home Depot had with its security.” However, the court also found that “simply alleging that a board incorrectly exercised its business judgment and made a ‘wrong’ decision in response to red flags…is not enough to plead bad faith.”

Based on the foregoing analysis of the demand futility issue, the court had little difficulty discounting the claim of corporate waste. Plaintiffs alleged that the Board’s insufficient reaction to the threats posed by alleged deficiencies in compliance with contractual requirements for data security caused significant losses to the company, which constituted a waste of Home Depot’s assets. Here, the court concluded that the Plaintiffs’ claim was basically a challenge to the Directors’ exercise of their business judgment, and although, with hindsight, it “was easy to see that the Board’s decision to upgrade Home Depot’s security at a leisurely pace was an unfortunate one,” the decision nevertheless fell squarely within the discretion of the Board and was protected under the business judgment rule.

Finally, Plaintiffs’ Section 14(a) claims, which were also subject to a demand requirement, alleged that Defendants omitted important information from their 2014 and 2015 Proxy Statements by not disclosing that Home Depot had known of specific threats to its data security, and that the Audit Committee’s charter was not amended to reflect that the responsibility for IT and data security had been transferred to it. The court rejected these arguments, noting that regardless of whether the charter was amended, “everyone believed and acted as if the Committee did have oversight over data security during the relative time period.” Further, the court found that Plaintiffs failed to specifically identify which statements in the Proxy Statements were false or misleading and also failed to plead with particularity how the omission caused the alleged loss. Thus, the court held that the claim did not demonstrate the necessary duty to disclose required under Section 14(a). Moreover, “because [Plaintiffs] had not demonstrated a substantial likelihood that the Defendants would have been liable for a Section 14(a) violation,” the court found that demand was neither futile nor excused for the Section 14(a) claims.

This decision is in step with two other recent decisions dismissing shareholder derivative actions against companies arising out of high-profile data breaches. See Palkon v. Holmes, et al., 2014 WL 5341880 (D.N.J. Oct. 20, 2014) (court, applying Delaware law, dismissed a derivative action against Wyndham Hotels brought after that company suffered a large data breach, relying in part on the protections afforded the Ds&Os under the business judgment rule); Davis et al. v. Steinhafel et al., No. 14-cv-203 (D. Minn. July 7, 2016) (court dismissed derivative action against Target because a claim could not be stated in connection with a corporation’s special litigation committee’s decision not to pursue derivative claims against the company’s officers or directors, particularly where it demonstrated that the decision was based on a thorough and impartial investigation).

With the prevalence of security breaches taking place against various corporations, including large retailers, Home Depot is yet another reminder of the potential exposure presented by cyber-liability for the boardroom, including costly litigation even if the corporation prevails. Judge Thrash’s opinion provides guidance on how the business judgment rule can protect Ds&Os for their decision-making with respect to the demands of cybersecurity. Given the numerous references to the “benefits of hindsight,” however, corporate boards should be vigilant in assessing their cybersecurity plans. There may come a time when a court will not so readily apply the “business judgment rule” to a Board’s decision making process in addressing cybersecurity concerns.

Governmental Updates You Need to Know About

In the past few weeks, the government issued alerts and guidance on two noteworthy data security topics, phishing and ransomware, each discussed below:

  • Don’t Get Phished: OCR Warns of Phishing Scheme Targeting HIPAA Covered Entities & Business Associates

As previously reported in the March 21, 2016 and July 12, 2016 Blog Posts, the 2016 HIPAA Audit Season has been underway for the better part of this past year. As stated on its website, “OCR uses the audit program to assess the HIPAA compliance efforts of a range of entities covered by HIPAA regulations.” The OCR intends to use the audits as a proactive measure, in conjunction with its ongoing complaint investigations and compliance reviews, to identify problems before they result in breaches. On July 12, 2016, the OCR sent emails to 167 Covered Entities, including health plans, healthcare providers, and healthcare clearinghouses, advising that they would be subject to desk audits.

On November 28, 2016, the U.S. Department of Health and Human Services (“HHS”) issued an Alert advising that a phishing email is being circulated on what appears to be HHS Departmental letterhead under the signature of OCR’s Director, Jocelyn Samuels. According to the Alert, this email appears to be an official government communication, and targets employees of HIPAA covered entities and their business associates.

The email prompts recipients to click a link regarding possible inclusion in the HIPAA Privacy, Security, and Breach Rules Audit Program. The link directs individuals to a non-governmental website marketing a firm’s cybersecurity services; that site is not associated with HHS or the OCR.

As in the case of any possible phishing email, HHS reminds the public that if you or your organization have any questions about whether the communication about a HIPAA audit is legitimate, you should contact the agency directly via email at OSOCRAudit@hhs.gov.

This advice applies to any suspicious email communication you or your organization may receive. It also serves as a reminder to review your policies and procedures and training materials to ensure that your employees do not fall for the phishing bait and expose your organization to intrusions.

  • FTC Joins the Chorus on Responding to Ransomware

In August 2016, the Office for Civil Rights (“OCR”) issued a Fact Sheet: Ransomware and HIPAA, which was followed by a U.S. Government Interagency Report entitled “How to Protect Your Organizations from Ransomware”. These materials provided “best practices and mitigation strategies focused on the prevention and response to ransomware incidents.”

In early September 2016, the Federal Trade Commission (“FTC”) announced that it too would offer guidance on how to protect against ransomware and would take action against those that failed to protect consumers’ personal data.

Fulfilling its promises, on November 10, 2016, the FTC issued advice on how to defend against ransomware. This followed the FTC’s session on ransomware held as part of its Fall Technology Series. The FTC noted an uptick in ransomware attacks and that 91% of these attacks originate with phishing emails. The FTC also addressed the question that everyone has: do you pay the ransom? Following the advice of law enforcement, the FTC advises not to pay the ransom, but notes that the decision to pay is ultimately a business decision. It does caution that paying the ransom may signal to the hackers that the business does not have a back-up or other access to the hacked data, and that they may therefore increase their ransom demand.

If you are concerned that your business may become victim of a ransomware attack or you need assistance with developing a plan to respond to one, the Sedgwick Cybersecurity team can assist you in responding to such an attack or preparing a response plan. Contact us at SedgwickResponder@sedgwicklaw.com or contact Cinthia Motley at 312-849-1972.

New Jersey TCCWNA Developments Affecting Online Retailers

The New Jersey Truth-in-Consumer Contract, Warranty and Notice Act, N.J.S.A. 56:12-14, et seq. (“TCCWNA”) is a unique consumer protection statute that prohibits sellers and other commercial entities from providing consumer contracts or notices containing unenforceable terms. As stated by the sponsor of the Act, the inclusion of unenforceable provisions “deceives a consumer into thinking that they are enforceable and for this reason the consumer often fails to enforce his rights.” Sponsor Statement to Assembly Bill cited in Shelton v. Restaurant.com, Inc., 70 A.3d 544, 551 (N.J. 2013). The statute raises the stakes in drafting consumer contracts because sellers not only have to consider the enforceability of every provision of their contracts under traditional criteria such as contract formation and unconscionability; they must also consider that the very inclusion of an unenforceable term in a contract may violate the Act.

The Act has spawned increasing numbers of class actions and threatened class actions against retailers whose websites contain allegedly unenforceable provisions in their Terms of Use (TOU) and Terms of Sale (TOS). Those complaints center primarily on exculpatory and indemnity provisions in the TOU and TOS. This paper reviews the most recent developments that may affect such claims and considers pending legislation to revise the Act.

Section 15 of the TCCWNA

Section 15 of the Act provides that no seller or other commercial entity shall offer a consumer a contract or notice “which includes a provision that violates any clearly established legal right of a consumer or responsibility of a seller … as established by State or Federal Law.” So, for example, exculpatory and indemnification clauses in consumer contracts may violate the statute if they impose on consumers all risks of using the site or purchasing products from the site and fail to specify that the defendant could still be held liable for its own conduct under certain circumstances. See, e.g., Walters v. YMCA, 437 N.J. Super. 111, 118-119 (2014) (premises liability cannot be disclaimed) and Castro v. Sovran Self Storage, 2015 WL 4380775 (D.N.J. 2015) (self-storage operator cannot sell contents at private sale without notice).

The statute was enacted in 1981, long before the rise of e-commerce, which helps explain why we found only two documented opinions referencing the Act before 2005, but well over a hundred opinions since then. The pace of recent opinions is accelerating as plaintiffs push for broader interpretations of the Act against online retailers. In many such cases, retailers’ website TOU or TOS were drafted based on the law of the jurisdiction specified in their standard terms and conditions, without scrutinizing the enforceability of each provision under New Jersey law. Nevertheless, such online retailers could find themselves in violation of the TCCWNA if any of those terms are deemed contrary to any New Jersey or federal law.

Even in cases where potentially unenforceable provisions have not been enforced against or directly harmed the consumer-plaintiffs, courts have found potential violations of the TCCWNA if overbroad terms “discourage suits, whether or not the provisions are enforceable, and therefore fall directly within the TCCWNA’s ambit.”  Castro, Id. at 9. However, in Sauro v. L.A. Fitness International, 2013 WL 978807, *9 (D. N.J. 2013), the District Court found that an allegedly overbroad exculpatory and indemnification provision did not violate the TCCWNA because the agreement included a savings clause specifying that the limitations were only applicable “to the fullest extent permitted by law” and “as broad and inclusive as is permitted by law in the State of New Jersey.” Accordingly, the District Court interpreted the provision as making the consumer contract co-extensive with New Jersey law, providing the seller with the maximum legal protection offered by New Jersey law without explicitly stating what limitations applied for the consumer.

A more recent case, Kendall v. Cubesmart L.P., 2016 WL 1597245 (D.N.J., Apr. 21, 2016), cast doubt on the effectiveness of a savings clause to prevent violations of the Act. In that case, a leak in plaintiff’s rented storage unit caused water damage to his stored goods. The defendant storage company refused to pay for the lost goods, citing various contract provisions limiting its liability, then it sold the unit contents. Plaintiff sued, alleging that his storage facility agreement violated the TCCWNA and the New Jersey Self Service Storage Facility Act (“SSFA”) because a provision stated that in the event of a default by the renter, the owner may sell personal property at a public or private sale without notice to the renter “in the manner permitted by applicable law.”

The court held that the provision violated Section 15 of the Act because a private sale without notice violates the SSFA. The court distinguished Sauro, which relied on the savings clause to dismiss the complaint, because the provision in Kendall’s agreement “does not merely state that a sale may occur [without notice], as permitted by law, leaving it to the consumer to discover that only public sales are permitted under New Jersey law. Instead, [it] unequivocally states that a private sale may occur,” which is impermissible under New Jersey law in those circumstances. Id. at *7 (emphasis by the court). The court added:

TCCWNA permits sellers to expand valid terms of a consumer contract so that they extend to the fullest degree allowed by law. But sellers cannot include invalid terms, discouraging consumers from exercising their clearly established rights and, at the same time, avoid liability under TCCWNA by including general assurances that those terms of the consumer contract would only be exercised in compliance with applicable law.

Id. at *7 (emphasis by the court). Accordingly, even savings clauses may not prevent liability under the Act if a provision in the agreement is clearly unenforceable as a matter of law.

Recent case law based on the Supreme Court’s ruling in Spokeo, Inc. v. Robins, 136 S. Ct. 1540, 1548 (2016) suggests a new line of defense against TCCWNA claims. In Spokeo, plaintiff filed a class action alleging violations of the Fair Credit Reporting Act because Spokeo created an inaccurate personal profile of him. The Supreme Court held that Article III standing requires an injury to be both concrete, i.e., one that actually exists, and particularized to the plaintiff. It found that plaintiff failed to allege any injury caused by the inaccurate information, and held that a bare procedural violation of a statute without concrete harm does not satisfy the injury-in-fact requirement of Article III. The Court added that a plaintiff does not automatically satisfy the injury-in-fact requirement “whenever a statute grants a person a statutory right and purports to authorize that person to sue to vindicate that right.” Id. at 1549. The case was remanded to the Ninth Circuit, which had found that plaintiff had standing.

In Candelario v. Rip Curl, Case No. 16-00963 (C.D. Cal., September 7, 2016), the court applied Spokeo to a Section 15 TCCWNA claim alleging unenforceable terms on defendant’s website. Plaintiff alleged that she purchased a tank top through the Rip Curl website that “was not the cut or quality depicted on Defendant’s website.” She then reviewed defendant’s website and found provisions that purportedly barred her from seeking remedies to which she was legally entitled and shielded defendant from liabilities for which it was legally responsible. Citing Spokeo, the court held that plaintiff’s mere allegation that the product she ordered was different from the one depicted on the website did not allege a concrete injury-in-fact. The case is on appeal in the Ninth Circuit.

It should be noted that Article III standing would not bar a claim brought in state court. In New Jersey, “standing … is an element of justiciability rather than an element of jurisdiction.” N.J. Citizen Action v. Riviera Motel Corp., 296 N.J. Super. 402, 411 (App. Div. 1997), appeal dismissed, 152 N.J. 361, 704 A.2d 1297 (1998). However, the statutory penalties under Section 17 of the Act, discussed below, are payable to the “aggrieved consumer,” an undefined term in the statute. The courts have not yet ruled on whether an “aggrieved consumer” means a person who has sustained an actual injury. If the terms are synonymous, non-injured plaintiffs would lack standing for failure to assert a justiciable claim.

Section 16 of the TCCWNA

Section 16 of the Act prohibits consumer contracts, notices, or signs from stating that “any of its provisions is or may be void, unenforceable or inapplicable in some jurisdictions without specifying which provisions are or are not void, unenforceable or inapplicable within the State of New Jersey…”  In essence, this section requires retailers to note—in any provision stating that enforceability may vary based on state law—the precise extent to which the given section would be enforceable in the State of New Jersey.

This is even more complicated than it sounds, because New Jersey courts have found liability waivers to be unenforceable to the extent they violate public policy. See, e.g., Marcinczyk v. State of N.J. Police Training Commission, 203 N.J. 586 (2010). Accordingly, could a consumer contract provision that is unenforceable as against public policy constitute a violation of the TCCWNA? The answer would depend upon whether the provision is found to violate a “clearly established right,” which would be fact specific. These uncertainties make it difficult for a retailer to know whether and how enforceable a given term may be.  If the retailer over-estimates the enforceability of a given provision, it may run afoul of the TCCWNA, but if it under-estimates the enforceability of a provision, it may risk subjecting itself to claims that the customer otherwise could have waived.

A recent case in New Jersey relied on Spokeo to reject a claim under Section 16 of the TCCWNA. In Hecht v. The Hertz Corporation, Case No. 2:16-cv-01485 (D.N.J., October 20, 2016), plaintiff rented a car through defendant’s website. The rental agreement provision entitled “Void Where Prohibited” recited that all services may not be available in all locations and that restrictions may apply to the use of services in some jurisdictions. Plaintiff alleged that this provision violated Section 16 of the Act because it failed to describe which restrictions applied or were void in New Jersey. The court rejected the claim because plaintiff sustained no real injury. It added that even if the statute gave plaintiff standing to bring the claim under state law, that legislation did not confer Article III standing, which is a separate requirement in federal court. Id., Slip Op. at *4.

Penalties Under the TCCWNA

Section 17 of the TCCWNA imposes substantial penalties for violations in the context of a class action.  A seller who violates the TCCWNA is liable “for a civil penalty of not less than $100.00 or for actual damages, or both at the election of the consumer, together with reasonable attorney’s fees and court costs.” In the context of a class action, damages can add up dramatically because putative class action plaintiffs typically seek to represent all visitors to the site, which can number in the tens of thousands over a given period of time. Moreover, plaintiffs argue that this penalty applies for each violation in a website’s terms and conditions. One court held that plaintiff stated a claim under both Section 15 and Section 16 of the Act, Martinez-Santiago v. Public Storage, 38 F.Supp.3d 500, 511-512 (D.N.J. 2014), but no court has expressly ruled on whether a separate statutory penalty must be imposed for each provision that violates the Act. Under the argument advanced by plaintiffs, a retailer could be liable for several hundred dollars per unenforceable provision for each visitor to the website, plus attorneys’ fees.
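To see how plaintiffs' reading of Section 17 compounds across a putative class, consider the back-of-the-envelope arithmetic sketched below. The visitor count and the number of challenged provisions are purely hypothetical assumptions used for illustration; only the $100 minimum penalty comes from the statute, and no court has endorsed this per-provision, per-visitor aggregation.

    # Hypothetical illustration of the class-wide exposure theory advanced by
    # TCCWNA plaintiffs: a minimum $100 civil penalty per violation, counted
    # per allegedly unenforceable provision and per website visitor.
    # All inputs below are assumptions chosen for illustration only.
    PENALTY_PER_VIOLATION = 100        # statutory minimum under Section 17
    visitors = 50_000                  # assumed site visitors in the class period
    unenforceable_provisions = 3       # assumed number of challenged terms

    exposure = PENALTY_PER_VIOLATION * visitors * unenforceable_provisions
    print(f"Claimed exposure (before attorneys' fees): ${exposure:,}")
    # -> Claimed exposure (before attorneys' fees): $15,000,000

Even with modest assumed inputs, the aggregation theory quickly produces eight-figure exposure, which helps explain the settlement pressure described below.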

Given this potential liability, and the broad remedial interpretations of the Act by the courts, most retailers targeted by TCCWNA claims opt to settle them quickly.

Proposed Legislation to Expand the TCCWNA

There have been several recent attempts to revise the TCCWNA on both sides of the issue. In January 2016, New Jersey Assembly Bill 759 (and its counterpart, Senate Bill 755) were pre-filed for introduction in the 2016 legislative session. The proposed bills sought to prohibit any provision whereby a consumer waives or limits any rights under the TCCWNA “or any other federal or State consumer protection law,” to prohibit reduction of the time to bring a TCCWNA claim within the otherwise applicable statute of limitations, and to render void and unenforceable any provision which inhibits the ability to bring a class action TCCWNA claim. The bills also sought to prohibit consumer contracts from requiring consumers to consent to venue and jurisdiction outside of New Jersey or to waive the right to a jury trial. The sponsors of these bills had proposed the same set of amendments in the 216th legislative session in Assembly Bill 4079, which was not passed.

In September 2016, Assembly Bill 4121 was introduced, which sought to prohibit class certification of TCCWNA claims “in the absence of an ascertainable economic loss resulting from the alleged violation.” The legislation would also require allegedly aggrieved consumers whose economic loss was $250 or less to first request reimbursement from the seller at least 35 days before filing a TCCWNA suit.

All of these bills are still under consideration by various New Jersey legislative committees, and depending upon what is passed, there could be significant consequences for online retailers selling to New Jersey consumers.

Conclusion

The case law under the TCCWNA remains in flux on several important issues besides those discussed above, and the statute may be expanded by the New Jersey legislature. Accordingly, online retailers will continue to be subject to claims under the TCCWNA.  Given the increasing popularity of these suits, retailers should act quickly to make sure that their terms comply with the Act in its present form and as it may be amended.  At a minimum, any terms should be revised to state how a provision would be enforced in New Jersey.  As noted, the enforceability of a given provision may be difficult to ascertain.  Retailers should therefore consult counsel with expertise in this area, to make sure that their terms are compliant.

FCC Announces New Rules to Protect Online Privacy

On October 27, the Federal Communications Commission (FCC), by a 3-2 vote, approved new rules regarding how Internet Service Providers (ISPs) handle their customers’ browsing history, mobile location data and other sensitive information generated by virtue of their customers’ use of the Internet.

The agency is looking to restrict ISPs’ ability to share with advertisers and other third parties the information they collect about what their customers do online. This has been a large source of revenue for ISPs, and one mostly unknown to the public. The proposal sets forth new rules that would require ISPs to obtain affirmative “opt-in” consent for the use and sharing of data that has not been specifically collected for the purpose of providing communications-related services. ISPs must also take reasonable steps to protect that information and notify affected customers within 10 days of discovering a data breach.

The FCC’s new rules effectively create some of the strongest privacy regulations for any segment of the technology and telecommunications industries and could have a significant impact on how ISPs compete and do business. Such a significant change in their business model will in turn effectuate changes throughout the business of the Internet. Profits will no doubt shrink for ISPs, thereby putting pressure on providers to increase fees for users. Because not all Internet businesses will be treated as ISPs, there will be an inequity in the treatment of different Internet business entities, resulting in a power shift away from ISPs, which will likely compel service providers to pursue legal action seeking to overturn the new rules.

The ISP rules stem directly from the FCC’s new so-called “net neutrality,” or Open Internet, rules, which expanded the agency’s authority over Internet service providers. After issuing the Open Internet rules, the FCC began drafting broadband-specific privacy rules. The FCC has taken the position that after it reclassified broadband providers as utilities, it was compelled by law to create privacy rules.

The FCC ISP proposal creates new rules for ISPs regarding collection, disclosure, consent and use of user data in the Internet context. It is very likely that a business that relies on or uses tracking software to gather data on consumer Internet traffic or behavior in any way (e.g., customized ad buys, cookies, big data algorithms, mobile payments processing) will be affected by the proposal, either directly as an ISP or as an entity that has a business relationship with an ISP.

The inclusion of an opt-in standard for certain data uses is significant. Traditionally in the U.S., privacy guidelines require only that users be able to opt out of data uses such as ad targeting based on behavioral data. The new FCC rules for ISPs will require that users opt in for most uses of their data, including but not limited to providing this information to marketing and advertising companies.

The proposal will likely have a significant impact on ISP business models and on companies that have formed relationships with ISPs to source user data. These companies will need to look elsewhere for consumer data, which will reduce ISPs’ revenue as well as the power they derive from growth and innovation. Non-regulated entities will become more powerful as sources of data become scarcer. This drop in revenue will likely lead to price increases for users: the bottom line must ultimately be maintained, and the cost-shifting effect of the new FCC rules will therefore bear directly on end users. It remains to be seen, however, how many users will opt in, which makes it difficult to predict the impact of the FCC proposal on the cost of doing business on the Internet.

Also, not all Internet entities are covered by the new FCC rules. The rules affect only companies that connect users to the Internet, including Comcast, Verizon and Sprint. The new rules do not apply to Internet companies that have huge advertising businesses based on customer data, such as Facebook or Google. Those companies are regulated by the Federal Trade Commission (FTC). The result of the FCC’s new rules will be a revenue and power shift away from ISPs toward these already-dominant Internet behemoths.

Historically, the FCC has had to litigate to defend most of its new rules and regulations. This instance is no different, and may in fact prove even more litigious. After passing the Open Internet Order in 2015, the FCC found itself defending its regulations in almost a dozen lawsuits by ISP and telecom companies. Currently, the remaining cases are before the U.S. Court of Appeals for the District of Columbia Circuit.

Some view the reclassification of broadband providers as utilities, coupled with the new ISP regulatory rules, as a power grab by the FCC to ensure its relevance in an ever-expanding cyber world. Regardless of the motivation, the FCC’s regulation of ISPs is likely to take hold as long as the reclassification is upheld in the courts. The question then becomes how ISPs and consumers will adjust to these new rules of the information highway.

In the immediate future, what consumers see and experience on the Web is unlikely to change as a result of the rules; targeted advertising has become ubiquitous on the Internet and will stay. But the regulations may lead to new ways in which consumers can control their ISP’s business practices. That could mean dialogue boxes, new websites with updated privacy policies or other means of interaction with companies.

OCR: Businesses Sharing Consumer Health Information Must Also Comply With FTC Act

In October 2016, the OCR issued a bulletin clarifying that businesses collecting and sharing consumer health information must comply with the FTC Act. The OCR specifically called out disclosure statements, declaring “You must also make sure your disclosure statements are not deceptive under the FTC Act.”

Businesses dealing with health information are likely already familiar with HIPAA’s requirements for use of a valid HIPAA authorization for disclosure, release, or sharing of patient health information. However, the OCR explained that businesses are also prohibited from misleading consumers about how their health information is handled, which could constitute a violation of Section 5 of the FTC Act, which prohibits businesses from engaging in deceptive or unfair acts or practices in or affecting commerce. It is important to note that the OCR’s warning against misleading statements applies more broadly to consumers, not only patients.

As a whole, the OCR’s bulletin instructs businesses to consider all of their consumer-facing statements to make sure that together they do not create a deceptive impression.
In connection with the HIPAA authorization, the OCR explained that even if the authorization itself meets HIPAA requirements, if the information “surrounding the authorization” is deceptive or misleading, that could still violate the FTC Act. Some pointers the OCR provided to comply with the FTC Act include:

  • Do not bury key facts by making them accessible only in links to a privacy policy, terms of use, or HIPAA authorization.
  • Don’t make a consumer go to various locations to obtain a comprehensive understanding of how the consumer’s information will be used. For example, if a business claims that a consumer’s information will only go to a doctor, don’t require the consumer to click on a different link to learn that the consumer’s information will also be viewable by the public.
  • Do use graphics for disclosures that make the terms clear and conspicuous. Do not make favorable promises in prominent type but then request authorization to share PHI in hard-to-see font and size.
  • Do assess how consumers’ devices will impact how they view the business’s disclosures. Don’t require scrolling by users to find out if their information will be shared in an unexpected way.
  • Do give consumers “the full story” before asking them to make a significant decision such as deciding to send or post information that may be shared publicly.
  • Don’t have contradictory statements or promises in your user interface.

Additional helpful resources provided by the OCR include links to the FTC’s Disclosure report, tools for mobile health apps, and the FTC’s best practices guidance for mobile health app developers and the OCR developer portal.

While the OCR cautions businesses to comply with the FTC Act in their disclosures whenever they share consumer health information, businesses should also be vigilant in how they implement security practices surrounding consumer health information and describe those practices to consumers. HIPAA has its own regulatory scheme, referred to as the “Security Rule,” for securing protected health information as defined by HIPAA. Ongoing litigation in the FTC case against LabMD (and the prior case against Wyndham hotels) focuses on whether lax data security practices around consumer health information may very well be “unfair” practices under Section 5 of the FTC Act, subjecting a business with lax data security practices to sanctions and potentially long-standing consent decrees.
