Breach Notification Update: New Mexico becomes the 48th State Requiring Breach Notification and Tennessee Adds a Safe Harbor for Encryption

As data breaches continue to occur with increasing frequency, so do legislative developments on the notification requirements that must be met in the event of a breach of Personally Identifiable Information (PII). Even now, not every state has enacted such legislation.  Until April, there were three holdouts.  Now, however, we are down to two:  Alabama and South Dakota are the only remaining states without data breach notification legislation.  New Mexico, previously the third holdout, has recently joined the majority and enacted its own statute.  On April 6, 2017, the Governor of New Mexico signed into law the Data Breach Notification Act (the “Act”).  The Act will become effective on June 16, 2017, and includes several exemptions and carve-outs, as well as data protection requirements, discussed below.

In another recent development, Tennessee has clarified that the new breach notification statute it enacted last year will include a safe harbor for encrypted PII after all, at least in most situations.

New Mexico

While broad in the scope of PII to which it applies and specific in the number of days in which notification of a breach must be provided (45 days), the New Mexico statute has several exemptions and allows for a “risk of harm” analysis in the decision of whether notification is necessary. It also includes data protection requirements, and thus goes beyond just breach notification.  It is also noteworthy in its deference to federal reporting requirements, providing an exemption for persons subject to certain federal statutes.

The statute exempts the State of New Mexico and its political subdivisions from its provisions (Section 12). The provisions of the Act do not apply to a person subject to the federal Gramm-Leach-Bliley Act or the federal Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) (Section 8).  This provision of the Act reflects deference to the federal reporting requirements under those two statutes.

While the Act contains similar provisions to those of other states, the following Sections are noteworthy:

*  The Definition of PII:  The statute defines PII to include “biometric data.”  This is consistent with the growing trend among states to include biometric data, e.g., the Illinois Personal Information Protection Act, which took effect on January 1, 2017. (Section 2)

*     Notification of a Security Breach:  Section 6 requires that notification be made in the “most expedient time possible,” but not later than forty-five (45) calendar days following discovery of the security breach, except as provided by Section 9. Section 9, entitled “Delayed Notification,” is typical of breach notification statutes in providing that notification may be delayed if a law enforcement agency determines that notification will impede a criminal investigation, or as necessary to determine the scope of the security breach and restore the integrity, security and confidentiality of the data system.  Significantly, Section 6 also includes a risk-of-harm provision: notwithstanding the provisions of that Section, notification to affected New Mexico residents is not required if, after an appropriate investigation, the person determines that the security breach does not give rise to a significant risk of identity theft or fraud.   However, the Act does not define what constitutes an “appropriate investigation” or “a significant risk of identity theft or fraud.”  When notification is required, it must also be provided to the office of the attorney general (with additional information required, including the number of affected residents) and to major consumer reporting agencies (Section 10).

*     Disposal of PII:  As part of its security provisions, the statute requires that persons who own or license records containing the PII of a New Mexico resident arrange for “proper disposal” of those records when they are no longer reasonably needed for business purposes.  “Proper disposal” is in turn defined as shredding, erasing or otherwise modifying the personally identifying information to make it unreadable or indecipherable. (Section 3)

*     Security Measures for Storage of Personal Identifying Information:  The statute requires that a person that owns or licenses the PII of a New Mexico resident “implement and maintain security procedures and practices appropriate to the nature of the information to protect the personally identifying information from unauthorized access, destruction, use, modification or disclosure.”  (Section 4)  While this affords some discretion as to what is appropriate, it remains to be seen what the regulator will ultimately consider appropriate.

*     Service Provider Use of PII – Implementation of Security Measures:  The statute mandates that a person that discloses the PII of a New Mexico resident pursuant to a contract with a service provider require “by contract” that the service provider also implement and maintain reasonable security procedures. (Section 5)

*     Attorney General Enforcement – Civil Penalty:  Section 11 allows the Attorney General to bring an action on behalf of individuals, and in the name of the state of New Mexico, for alleged violations of the Act and to seek injunctive relief, as well as damages for actual costs or losses, including consequential financial losses. In addition, for knowing or reckless violations of the Act, the court may impose civil penalties up to a maximum of $150,000.

Tennessee

The Tennessee legislature recently amended its data breach notification statute to restore the encryption safe harbor to the definition of “personal information.” When Tennessee amended its data breach notification statute last year, it eliminated the encryption safe harbor from the existing statute.  Without this recent amendment, Tennessee would have required data breach notification even when the personally identifiable information lost was encrypted.  While that change apparently grew out of reports that hackers were at times able to decrypt files, it raised a counterbalancing concern that it would disincentivize companies from encrypting data.  A reasonable level of encryption is still considered a good safeguard against most hacking, and the safe harbor was therefore restored, at least where the encryption key is not also taken.

Keep an Eye on State Legislative Developments

As data breaches of Personally Identifiable Information continue to expand the types of information targeted and the security measures circumvented, state legislatures, in an effort to protect their residents, are increasingly reviewing their data security and breach notification statutes to see whether they are keeping up with those developments. Many states are expanding their definitions of protected personal information, and many are also adding data security requirements, either by way of safe harbors from breach notification or by express directives as to minimum data security procedures.  Entities that own or hold Personally Identifiable Information need to monitor legislative developments that may impact data breach security and notification requirements and take them into account in their breach preparedness and response plans.  This ongoing monitoring will help ensure compliance with statutory requirements and minimize the regulatory and legal liability issues that may arise in the event of a data breach when requirements are not satisfied.

 


Sedgwick’s Cybersecurity Team Nominated for Advisen’s 2017 Cyber Risk Awards — Votes Welcomed!

Advisen, a leading provider of technology solutions for insurance companies, has short-listed Sedgwick’s Cybersecurity & Data Privacy group for its Fourth Annual Cyber Risk Awards. Specifically, Sedgwick is a finalist for the Cyber Law Firm of the Year award. The award recognizes the property and casualty insurance industry’s most influential cyber risk professionals.

The Sedgwick Cybersecurity & Privacy Group is a multi-disciplinary group of attorneys with extensive experience working closely with clients to address and reduce their cybersecurity and privacy risks and exposures. These risks affect organizations in every industry we represent, including healthcare, financial institutions, retailers, utilities and manufacturers, among others.

We serve clients throughout the U.S. and the U.K., and our Incident Response Team is an approved provider for many of the major insurers. Sedgwick’s Cyber team litigates consumer class actions alleging violations of rights to privacy, consumer protection rights and unfair trade practices, as well as B2B litigation involving breached entities and their service providers or other third parties involved in the incident.

This is a huge honor, but we need your help and competitive spirit! Please help us by sharing the link below and casting a vote for Sedgwick.

Select Sedgwick LLP in the Cyber Law Firm of the Year dropdown.

Voting ends on Friday, May 19, 2017. Thank you for your support!

“W-2 Phishing Attacks Targeting Businesses to Cash in on Busy Tax Season: 10 Tips to Protect Your Business”

Cyber criminals are taking advantage of tax season to lure valuable W-2 information from vulnerable businesses. A common phishing scheme starts with a scammer posing as a legitimate employee of a company, sending an email that looks like it is coming from an internal email address, often the Human Resources department or the Finance department, or even from the CEO of the company. A cyber criminal may even impersonate an employee using stolen personal data from that employee. The email from the scammer attempts to trick the recipient into sending the scammer W-2s, often creating a sense of urgency for a quick response. As we all know, a W-2 contains valuable information such as an individual’s name, address, social security number, salary and withheld taxes. Cyber criminals can use this information to file fake tax returns and pocket tax refunds.

As recently as February 17, 2017, the IRS warned of a new phishing scam where tax professionals and state tax agencies are sent an email impersonating a software provider with the subject line “Access Locked.” The email tells the recipient that access to the software was suspended due to errors in the recipient’s security details. Then, the email requires the recipient to “unlock” the software by clicking on a link that directs the recipient to a fake web page, prompting the recipient to provide his/her user name and password, which is used by the scammer to steal client information. https://www.irs.gov/uac/newsroom/security-summit-alert-tax-professionals-warned-of-new-scam-to-unlock-their-tax-software-accounts

Other common ways phishing attacks occur are by: (1) embedding a link in an email that redirects the recipient to an unsecured website that asks for sensitive personal information, (2) including with an email a malicious attachment or ad that allows an intruder to use loopholes in security to obtain personal information, or (3) impersonating a known vendor or an employee over the telephone to obtain company information.

We offer some tips to help prevent succumbing to W-2 phishing attacks that are already plaguing this tax season:

  1. Pick up the phone: If you receive an email asking for a W-2 or hear of someone in your company receiving such an email, verify the authenticity of the request. A simple solution is to pick up the telephone and call the apparent author of the email to ask if he/she indeed asked for a W-2. The same rule of thumb should apply if you receive an email asking for a money transfer or other sensitive information.
  2. Check the sender’s email address for discrepancies: Often an email address from a scammer will look almost like a company’s internal email address, but there might be a spelling error with one letter off, or a period added or taken away. Scrutinize the email address from a sender asking for W-2s to see if there are any discrepancies that might provide a clue that the email is fake.
  3. Don’t just reply, forward instead: Instead of automatically hitting reply to an email from what appears to be a known colleague asking for W-2s or credentials that could be used to obtain W-2s, forward the email to the legitimate email address you have for the person the email appears to be from, and ask him or her to verify that he or she sent the forwarded email.
  4. Redact W-2s: If your business is not required to provide or maintain unredacted W-2s, then redact (black out) all but the last 4 digits of social security numbers on W-2s you generate or maintain. This reduces the sensitive personal information available on the W-2 and makes the W-2s much less valuable if a scammer ever was able to obtain them.
  5. Encrypt W-2s (and all sensitive company information): Even if you cannot redact W-2s, all W-2s and sensitive company information should be encrypted, both at rest and when being transmitted (including in the mobile and “own device” environments).
  6. Train your workforce: Regularly educate and train your workforce on phishing attacks. Test your workforce on the training provided. Phishing attacks work because of human error. Training and testing of your workforce to recognize phishing attacks can greatly reduce the risk of success of a phishing attack.
  7. Implement and maintain strong information security: Ensure that your company maintains robust, regularly updated spam filters, blocks malicious websites, enables browser add-ons that prevent users from clicking on malicious links, uses antivirus software, and keeps all security systems current with updates and patches. Apply all of these security programs to mobile environments and “own devices” to prevent exploitation of vulnerabilities in the mobile environment or from “bring your own device” practices.
  8. Restrict access to W-2 information: Ensure that only key personnel have authority to access personally identifiable information, in this case W-2 tax information. Such access should be restricted to only those who require it to perform their job duties.
  9. Restrict outflow of W-2 information: Restrict internal staff’s ability to copy sensitive data into unapproved methods of transmission, such as email and web browsers, including controlling the ability to copy, paste and print sections of documents. Loss prevention endpoint technology and application controls are available in this area.
  10. Implement, practice and update a Data Loss Prevention (DLP) Program: Cyber risks present a fast-evolving landscape. Data loss through cybercrime and internal risks represents an increasing business exposure. Prevention is key to mitigation in this area and a better option than facing a breach unprepared. An entity that knows these risks and controls the data that flows within and outside of its walls can best remain competitive in the marketplace. Using this knowledge, a company can most efficiently protect sensitive data and quickly respond to security incidents.
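Several of the tips above lend themselves to simple automation. As one illustration of tip 2, the Python sketch below flags sender domains that are close to, but not exactly, a trusted company domain. The domain names, threshold and helper names are hypothetical examples chosen for illustration, not a vetted anti-phishing implementation:

```python
# Illustrative sketch of a lookalike-domain check (tip 2 above).
# Assumes a single trusted domain and a small edit-distance threshold;
# both are hypothetical choices, not production guidance.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def sender_domain(address: str) -> str:
    """Extract the domain portion of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def looks_suspicious(address: str, trusted_domain: str) -> bool:
    """Flag addresses whose domain is close to, but not exactly,
    the trusted domain -- e.g. 'examp1e.com' vs 'example.com'."""
    domain = sender_domain(address)
    if domain == trusted_domain:
        return False
    return levenshtein(domain, trusted_domain) <= 2

print(looks_suspicious("hr@example.com", "example.com"))   # → False
print(looks_suspicious("hr@examp1e.com", "example.com"))   # → True
```

Real mail-filtering products rely on many more signals (SPF/DKIM authentication results, display-name mismatches, homoglyph tables), but even a crude edit-distance check of this kind catches the one-letter-off lookalike addresses described in tip 2.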

If you would like more information or assistance in this area, please contact a member of our Cybersecurity & Privacy Team at SedgwickResponder@sedgwicklaw.com. You can also reach Cinthia Motley at cinthia.motley@sedgwicklaw.com or 312-849-1972, and Nora Wetzel at nora.wetzel@sedgwicklaw.com or 415-627-3478.

Recent Trends in Bankruptcy Sales of Customer Data

Introduction
In 2005, Congress amended the Bankruptcy Code to address privacy concerns in connection with sales of customer data in bankruptcy cases. The Code was specifically amended to restrict or prohibit the sale of customers’ personally identifiable information – as defined by the Bankruptcy Code – when in violation of a debtor company’s existing privacy policy.
In practice, the statute has mostly operated to facilitate these sales pursuant to a bankruptcy court approval process, which is conditioned upon satisfaction of certain procedural safeguards.  After briefly reviewing the basic statutory framework, we discuss some recent cases involving bankruptcy sales of customer data.  We then provide our summary of lessons learned and key takeaways.

Statutory Framework
Section 101(41A) of the Bankruptcy Code enumerates the specific items of personal information that constitute Personally Identifiable Information within the meaning of the Bankruptcy Code, if provided by an individual in connection with obtaining a product or service from the debtor primarily for personal, family or household purposes.  They are as follows: first and last name, residence, email address, telephone number, social security number, or credit card account numbers.  In addition, Section 101(41A) provides that Personally Identifiable Information can include a birth date, place of birth or any other item of information concerning an identified individual that, if disclosed, would result in identifying such individual physically or electronically, if such information is identified with one or more of the above enumerated items of personal information.

Section 363(b)(1) of the Bankruptcy Code provides that if the debtor has a privacy policy in effect at the time of the bankruptcy filing, which prohibits the transfer of Personally Identifiable Information (“PII”), the Information cannot be sold in bankruptcy unless additional requirements are satisfied.  If triggered, section 363(b)(1) prohibits the sale of PII unless the bankruptcy court finds that the sale is consistent with the debtor’s privacy policy or the court approves the sale at a hearing after (a) appointing a consumer privacy ombudsman to assist the court in reviewing the facts and circumstances of the sale and (b) finding that the sale of the information would not violate applicable nonbankruptcy law.

The bankruptcy court orders the appointment of the consumer privacy ombudsman pursuant to section 332 of the Bankruptcy Code, who may appear and be heard at the sale hearing.  Section 332 provides a non-exclusive list of the information and topics to be included in the ombudsman’s report and recommendations to the court. They include the potential losses or gains of privacy to consumers if the sale is approved, the potential costs or benefits to consumers if the sale is approved, and the potential alternatives that would mitigate privacy losses or potential costs to consumers.

Recent Bankruptcy Sales of Customer Data

  1. BPS Holdings (2017):  The debtor companies manufactured, distributed and sold sports equipment, accessories and apparel under a number of brand names.  Products were sold in the U.S. and Canada, and the companies operated a number of websites that collected a variety of PII from their customers, in some cases from minors.  After filing bankruptcy, the debtors requested bankruptcy court approval to complete two sales of their businesses:  (1) the sale of their soccer apparel and equipment business (“Soccer Business”) to their co-founder and (2) the sale of their hockey, lacrosse and baseball businesses (“Other Businesses”) to a newly formed company.

    The bankruptcy court appointed a privacy ombudsman, who examined the debtors’ privacy policies and data collection practices among the various businesses.  The ombudsman recommended court approval of both sales under certain terms and conditions, and both sales were recently approved by the bankruptcy court.

    Sale of Soccer Business:  The ombudsman found that the debtors operated two websites for the Soccer Business, pursuant to which they collected customer names, addresses, phone numbers, email addresses and order histories.  They did not collect any other categories of PII, nor track customer activity via cookies or other tracking technologies.  At the time of the bankruptcy filing, a privacy policy was posted on one of the websites, which promised customers that their PII would not be sold or transferred to any other company for any reason whatsoever.

    The privacy ombudsman recommended that the court approve the sale subject to the following conditions: (1) the buyer must engage in substantially the same line of business, (2) the buyer must adhere to all material terms of the existing privacy policy, (3) the buyer must agree to obtain the customers’ affirmative consent before making any material changes to the privacy practices applicable to the PII collected under the existing privacy policy, and (4) the buyer must agree to comply with applicable privacy and data protection laws.

    The privacy ombudsman did not recommend, and the buyer did not agree, that notice be given to the customers of the proposed sale with an ability to opt out of the sale of their PII to the buyer.  The sale was approved without any required opt-out notice.

    Sale of the Other Businesses:  The ombudsman found that the debtors operated several websites and Instagram pages among the different sports businesses, collecting customer names, mailing addresses, phone numbers, email addresses, birth dates, ages, genders, zip codes, and payment information, in different combinations.  The debtors also collected anonymized customer usage and demographic data from Google and Amazon.  Certain of the websites also collected personally identifiable information from minors.

    The ombudsman reported that some websites for the various businesses posted privacy policies, while others did not.  Most of the privacy policies promised customers that their PII would not be sold without prior notice; one of the websites posted a policy that PII might be shared with affiliated companies or third party service providers for the purpose of conducting business, and promised that PII would not be provided to any third parties for their own marketing purposes. In certain instances, the ombudsman indicated that he had requested, but had not received, any prior or currently applicable privacy policies.

    The ombudsman recommended that the sale be approved on a number of conditions.  As to websites which notified customers that their PII would not be sold without prior notice, the ombudsman recommended (1) email notice of the sale to customers, (2) if the buyer did not agree to be bound by the existing privacy policy, an opt-out opportunity, and (3) the buyer’s agreement to comply with applicable privacy and data protection laws.  As to the website which promised customers that their PII would not be shared, the ombudsman recommended that the buyer obtain the customers’ affirmative consent to the sale of the PII, or a showing by the buyer that it would (1) engage in substantially the same line of business, (2) adhere in all material respects to the existing privacy policy, (3) obtain customer affirmative consent before making any material changes to privacy practices, and (4) agree to comply with applicable privacy and data protection laws.

    For websites with no privacy policies, the ombudsman did not recommend any conditions other than the buyer’s agreement to comply with applicable privacy and data protection laws.  For websites for which the ombudsman was unable to confirm the existence or absence of any privacy policy, the ombudsman recommended that the debtors obtain consent from the customers before the sale of their PII to the buyer.  Lastly, the ombudsman objected to the debtors’ transfer of any PII of children under the age of 13, consistent with the Children’s Online Privacy Protection Act.

    The court approved the sale without requiring opt-out notices to consumers, but required affirmative customer consent with respect to the sale of PII collected prior to existing privacy policies for certain of the websites.  The court also required the debtors to delete all PII of children prior to the sale.

  2. Aeropostale (2016):  The debtor companies sold clothing in the U.S. and Canada in retail outlets and through two websites under a variety of brands. The websites collected customers’ names and addresses (mailing and email).  Phone numbers also could be collected, for shipping purposes only. Similar PII was collected in the retail stores.  The websites also tracked and collected historical usage and transaction data, as well as the customers’ IP addresses, browser information and referring site domain names.

    The company also conducted certain contests and sweepstakes, which, in certain instances, required customers to provide their social security numbers, in addition to their names and addresses.  The company did not collect credit card numbers or other payment information.

    At the time of the bankruptcy filing, the posted privacy policy on one of the websites stated that the PII would not be shared with others “except with your consent or as described in this Privacy Policy.” The policy described a number of circumstances for the companies’ sharing of PII with affiliates or marketing or service partners, or where required by law, but the policy did not provide for the sharing of the PII in the event of a bankruptcy or sale of the company or its assets.  On the second website, the posted privacy policy explicitly promised customers that their PII would “never” be sold, rented or given away.

    After filing bankruptcy, the debtors conducted an auction of their operating assets, including the customer PII, and thereafter moved for approval of the sale to the winning bidder.  The court-appointed ombudsman recommended approval of the sale of the customer PII after reporting that, under the terms of the sale, the proposed transfer of PII was subject to a 60-day opt-out notice to customers after the closing of the sale as to any future use of their PII by the buyer. The ombudsman noted that this opt-out provision was not a specific recommendation of the ombudsman; rather, it was agreed to between the debtors and the buyer.

    The ombudsman specifically recommended that the sale be further conditioned upon the buyer’s agreement to (1) employ appropriate security controls and procedures for PII, (2) abide by all applicable laws and regulations with respect to PII, (3) abide by the debtor companies’ existing privacy policies and related promises, and (4) respect all prior opt-out requests by customers.  In addition, the ombudsman recommended that, absent prior express consent from customers, the buyer’s future use of PII should be limited to the purposes of continuing the business operations that were purchased and providing goods and services to customers.

    Thereafter, the bankruptcy court approved the sale after adopting the ombudsman’s recommended conditions to the sale of the PII.

  3. Golfsmith (2016):  The affiliated debtors were the largest specialty golf retailer in the world, offering customers an extensive selection of golf equipment and related services.  The debtors operated their business as an integrated multi-channel retailer, with retail stores, catalog sales and e-commerce through a website. After filing bankruptcy, the debtors moved to sell their assets pursuant to a court-supervised auction.  The winning bidder, a large sporting goods retailer, sought to purchase the business as a going concern.

    Included in the purchased assets were all of the Debtors’ customer information including contact information (name, email, mailing address, and phone number), birthday and gender, and transaction history, with the exception of any credit card information or social security number information that might be in the debtors’ possession.  At the time of the bankruptcy filing, the debtors’ privacy policy disclosed that certain PII would be shared with trusted third party service providers, but phone numbers would not be made available to other companies or organizations and email addresses would not be shared or distributed and would remain in the sole possession of the debtors.  An earlier privacy policy also promised customers that their email addresses would not be sold.

    The privacy ombudsman’s report recommended approval of the sale subject to a number of conditions, including the buyer’s agreement to (1) be bound by and succeed to the debtors’ existing privacy policy, (2) be responsible for any violation of the privacy policy after the closing of the sale, (3) notify the customers of the sale and provide them with an opt-out opportunity for the transfer of any customer PII to the buyer, with such notice to be posted both on the debtors’ website and in retail stores, (4) provide further opt-out notice to customers of any attempt to convert the customers to the buyer’s privacy policy, and (5) safeguard all customer PII in a manner consistent with industry-standard data protections and applicable information security laws and best practices.

    In addition, the ombudsman recommended that the buyer agree to destroy all PII for which it determined that there was no reasonable business need and that the debtors destroy all customer PII not transferred to the buyer within 90 days after the closing of the sale.

    The court approved the sale as conditioned by the ombudsman’s recommendations.

  4. RadioShack (2015):  After filing bankruptcy, the debtor proposed a sale of its customer records database along with certain IP on a standalone basis. The data was not part of a sale of the debtor’s business to the buyer as a going concern. The database included customer names, email and mailing addresses, phone numbers, and extensive transaction data, including credit and debit card numbers and social security numbers.  The debtor carved out the credit and debit card numbers and social security numbers from the proposed sale.

    The debtor’s pre-bankruptcy privacy policies advised customers that, among other things, the company’s mailing list would not be sold, customer PII would not be used for any purpose other than carrying out services requested from the company, and the company would not “sell or rent customer PII to anyone at any time.”

    The proposed sale drew objections from the Federal Trade Commission and the Attorneys General of 38 states.  In addition, the court appointed a consumer privacy ombudsman to review the proposed sale. Thereafter, the FTC, the state Attorneys General, the debtor and the successful bidder mediated the dispute and reached a consensual resolution, which was subsequently endorsed by the ombudsman.

    As part of the settlement, the buyer agreed to purchase only a very limited subset of the customer PII, namely (1) email addresses of customers that were active within two years prior to the bankruptcy filing, along with certain limited transaction data collected in the five years prior to the bankruptcy filing, and (2) customer names and mailing addresses, with certain limited associated transaction data from the five-year period prior to bankruptcy.  No customer phone numbers were sold.

    In addition, the buyer agreed to a number of other protections in the mediated settlement, including the buyer’s agreement to (1) become a successor in interest under the debtor’s existing privacy policies, adhering to all material terms and assuming liability for any violations thereof, (2) effectuate an extensive notice and opt-out procedure for affected customers, (3) not make further material changes to the privacy policies without further notice and opt-out opportunity to affected customers, (4) safeguard all PII in a manner consistent with industry data security protections, applicable information security laws and best practices, and (5) destroy all PII for which it had no reasonable business need. In addition, the debtor agreed to destroy any PII not conveyed to the buyer.

    The court approved the sale as modified by this mediated settlement.

Lessons Learned and Key Takeaways

  • Sales of customer PII on a standalone basis, or which are not part of a sale of the debtor’s business in which the buyer will continue to provide the same or similar products or services, will continue to draw greater judicial scrutiny and likely require more limitations and protections, as a condition to their approval by the bankruptcy court.
  • Absent objections by affected consumers, the bankruptcy courts likely will continue to approve sales of customer PII in bankruptcy cases in accordance with the recommendations of the consumer privacy ombudsmen who are appointed by the courts, in many instances with no opportunity for customer opt-out.
  • Although a number of bankruptcy sales of PII have included some form of opt-out notice to the affected customers, it remains to be seen whether buyers in future cases will continue to agree, or be required, to provide such notices. Much may depend upon the particular factual circumstances, but consumer privacy ombudsmen have not consistently recommended such notices as a condition of approval of these sales.
  • Some bankruptcy sales of PII have been conditioned upon the buyer assuming certain liability for breaches of the debtor’s privacy policy and/or obligations to safeguard PII in accordance with applicable law or industry standards.  At the same time, the debtor’s assets are often sold to the buyer free and clear of any liens, claims, or interests, including potential successor liability. It remains to be seen whether significant disputes or litigation will arise after the closing of these bankruptcy sales of customer PII in the event of a subsequent discovery of a data security breach or other breach of the debtor’s prior privacy policies.

Chicago Attorneys Cinthia Granados Motley and Ashley Jackson Discuss Ways to Avoid Wrongful Collection of Data Claims

Chicago-based attorneys Cinthia Granados Motley and Ashley Jackson were published on Law360 on February 7, 2017. Their article, “10 Ways To Avoid Wrongful Collection Of Data Claims,” offers tips framed around the who, what, where, when, and why of consumer data collection to help answer the most frequently asked questions.

FTC Report Highlights Privacy Concerns and Best Practices for Cross-Device Tracking

On January 23, 2017, the FTC released a Staff Report (the Report) on cross-device tracking, a commonly used practice that allows companies to associate multiple internet-based devices with the same consumer in order to track behavior across devices.

The Report follows the FTC’s Workshop on cross-device tracking, and alerts companies engaged in cross-device tracking of certain best practices for avoiding potential violations of applicable law and regulations.

Specifically, the Report recommends that companies engaged in cross-device tracking: (1) be transparent about their data collection and use practices; (2) provide choice mechanisms that give consumers control over their data; (3) provide heightened protections for sensitive information, including health, financial, and children’s information; and (4) maintain reasonable security of collected data.

Overview of Cross-Device Tracking

Cross-device tracking allows companies to follow a consumer’s activity across smartphones, tablets, desktop computers, and other connected devices. This provides advertisers with a much richer understanding of the consumer, which has valuable implications for advertising. For example, a retailer that uses tracking technology can see that a customer made a purchase on her smartphone after seeing an ad on her work computer. It can also help advertisers tailor ads to consumers, for example, by advertising a belt to match a pair of shoes she previously bought from the retailer.

To engage in cross-device tracking, companies use both “deterministic” and “probabilistic” techniques. Deterministic techniques are used to track consumer behavior based on the affirmative use of a consumer-identifying characteristic, such as the consumer’s login credentials. For example, when a consumer logs in to an online platform on a number of devices, the consumer’s behavior on one device can be used to inform targeted advertising through the same platform on the consumer’s other devices.

Probabilistic approaches, by contrast, involve inferring which consumer is using a device, even when a consumer has not logged in to a service. A common example of this is IP address matching, whereby devices using the same IP address — e.g., a cell phone, laptop, and smart television on the same local network — are presumed to belong to the same consumer. Similarly, if a consumer’s smartphone uses the same IP address as her work computer during business hours, and then uses the same IP address as her home computer during non-business hours, an ad platform might infer that the work computer, smartphone, and home computer belong to the same person. Or if several devices visit the same unusual combination of websites, a platform might infer that the devices belong to the same user.

Often, companies that collect and use deterministic data — e.g., email providers, social networks, or shopping sites — will work with entities engaged in probabilistic tracking in order to learn even more about the consumer’s behavior.
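The deterministic and probabilistic linking techniques described above can be illustrated with a minimal, purely hypothetical sketch. The device IDs, user names, and IP addresses below are invented for illustration and do not come from any real tracking platform; real systems use far more signals than a single login or IP address.

```python
from collections import defaultdict

# Hypothetical device-event records: (device_id, logged_in_user, ip_address).
# A value of None for the user means the session was anonymous.
EVENTS = [
    ("phone-1",  "alice", "203.0.113.7"),   # logged-in session
    ("laptop-1", "alice", "203.0.113.7"),   # logged-in session
    ("tv-1",     None,    "203.0.113.7"),   # anonymous, same home IP
    ("phone-2",  None,    "198.51.100.9"),  # anonymous, different IP
]

def deterministic_links(events):
    """Deterministic linking: group devices by an affirmative identifier
    the consumer supplied, such as login credentials."""
    by_user = defaultdict(set)
    for device, user, _ip in events:
        if user is not None:
            by_user[user].add(device)
    return dict(by_user)

def probabilistic_links(events):
    """Probabilistic linking: infer that devices sharing a signal
    (here, the same IP address) likely belong to the same consumer."""
    by_ip = defaultdict(set)
    for device, _user, ip in events:
        by_ip[ip].add(device)
    # Only an IP shared by multiple devices suggests a link.
    return {ip: devs for ip, devs in by_ip.items() if len(devs) > 1}

print(deterministic_links(EVENTS))
print(probabilistic_links(EVENTS))
```

In this toy example, the login-based approach links only the two devices on which "alice" signed in, while the IP-based approach also sweeps in the anonymous smart TV on the same network, which is precisely why the Report treats probabilistic tracking as the less transparent of the two.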

The FTC’s Report

The FTC Report is based, in part, on FTC research relating to cross-device tracking, which involved testing 100 popular websites on two separate devices. The study found, among other things, that third-party tracking technology was embedded in at least 87 of the 100 websites, and that 861 third parties were observed connecting to both devices. The study also found that 96 of the 100 websites allowed consumers to submit a username or email address, and 16 of the websites shared the username or email with third parties.

The FTC Report recognized that tracking has several benefits, such as giving consumers a more seamless user experience across their devices, providing increased fraud detection and security, and allowing marketers to provide a better experience for consumers by delivering more relevant ads.

The Report focused more heavily on privacy challenges tied to cross-device tracking. For example, many consumers do not realize that they are being tracked across devices, especially by probabilistic approaches. Consumers may also not realize that cross-device tracking is often not limited to cell phones, tablets, and laptops, but that their information may also be tracked from smart televisions, wearable devices, and even in-person purchases made in brick-and-mortar stores. The number and variety of entities with access to consumer information, including third-party advertising networks that have no relationship to the consumer, creates an additional privacy concern. Additionally, data collected through cross-device tracking may include highly private personal information which, if exposed through a security breach, could result in considerable consumer harm. For example, by connecting searches made from a smartphone about baby monitors to a laptop search for maternity clothes, a company could infer that the user is pregnant; an additional search for “preeclampsia” could lead the data aggregator to infer that the user may have a high-risk pregnancy, a medical condition that the user may not have intended to share.

The Report makes a number of recommendations to companies engaged in cross-device tracking, namely:

  • That companies engaged in cross-device tracking fully disclose to consumers their use of cross-device tracking practices and the extent of those practices, including the nature of any data collected.
  • That such companies provide opt-out tools or other ways for consumers to limit cross-device tracking.
  • That companies refrain from engaging in cross-device tracking of sensitive information, including financial, health, children’s information, or precise geolocation data, without first obtaining the express consent of the consumers to whom the information belongs.
  • That companies take necessary security steps to protect the data they collect in the process of tracking consumers’ activity across devices.

The Report recognized that the Network Advertising Initiative (NAI) and Digital Advertising Alliance (DAA) have already taken steps to self-regulate with regard to non-cookie tracking (and for the DAA, cross-device tracking more specifically), but advises that both organizations could strengthen their efforts to address cross-device tracking.

In a concurring statement on the Report, FTC Commissioner Maureen K. Ohlhausen said, “[T]oday’s Report does not alter the FTC’s longstanding privacy principles but simply discusses their application in the context of a new technology.” The Commission voted 3-0 to issue the Report.

Considerations for Companies Engaged in Cross-Device Tracking

In light of the FTC Report, companies engaged in cross-device tracking should review their current practices, and ensure that their privacy policies and other relevant consumer-facing policies adequately describe any cross-device tracking activities and provide a way for customers to opt out of being tracked. Companies that fail to fully, conspicuously, or accurately disclose the extent of tracking activities may face liability. (See my previous post, here.)

FTC Settles Ashley Madison Data Breach Complaint

The operators of Ashley Madison, the dating website for married people that became famous following its massive data breach in 2015, settled claims brought by the Federal Trade Commission (“FTC”) regarding that breach and their security practices and representations. Ruby Corp., Ruby Life Inc., and ADL Media Inc. (collectively, “Ruby”), named as defendants, were responsible for the operation of ashleymadison.com.

Hackers breached Ashley Madison—a site with over 18 million users in the United States alone—in 2014 and 2015, with intruders reportedly gaining access to Ruby’s networks multiple times. Ruby did not detect the breach until July 2015, when an employee noticed large data transfers.

Ruby has agreed to pay $1.6 million to the FTC and state regulators to resolve charges relating to the hack, with several million more of the judgment suspended in light of the company’s financial limitations. The FTC, 13 states, and the District of Columbia entered into a settlement to resolve the complaint. Earlier this year, Ruby also entered into a compliance agreement with the Office of the Privacy Commissioner of Canada and an enforceable undertaking with the Office of the Australian Information Commissioner. Additionally, multidistrict class action litigation brought by numerous former Ashley Madison customers continues against Ruby.

The Complaint

The FTC filed a complaint in the United States District Court for the District of Columbia. Among the allegations raised in the complaint were that Ruby:

  • Failed to have a written organizational information security policy
  • Failed to secure remote access, regularly monitor unsuccessful login attempts, revoke passwords of ex-employees, restrict access to systems based on employee job functions, implement controls to protect against the retention of passwords and encryption keys, and prevent employees from reusing passwords
  • Failed to properly train employees to perform the data-security measures related to their jobs
  • Failed to ensure that third-party providers utilized reasonable security measures
  • Failed to monitor their system at random intervals to identify security breaches and to ensure the effectiveness of their protective measures.

The complaint also alleged that Ruby falsely:

  • Assured users that their information was private and protected
  • Created fake profiles to attract new users, which consumers had no way to distinguish from profiles of real users
  • Claimed it had received a “Trusted Security Award,” as well as stating that it was “100% secure,” “risk free,” and “completely anonymous”
  • Required consumers to purchase the right to fully delete their profiles, telling them only after payment that their information would be retained for 6 to 12 months thereafter. Ruby then either retained the information for up to 12 months or failed to remove it entirely.

Settlement Agreement

In addition to enjoining Ruby from misrepresentations as to its security practices and its utilization of fake profiles, the Settlement Agreement set forth a series of data security practices that Ruby is required to implement, with initial and biennial assessments of compliance required. The Settlement Agreement requires Ruby to obtain those assessments from an objective third-party professional who will monitor Ruby and the execution of its new security program. The Settlement Agreement also prevents Ruby from using personal information obtained from the online dating sites prior to the entry of the Settlement Agreement, unless it complies with the requirements discussed above regarding the cessation of its misrepresentations to consumers. Ruby must also submit a compliance report to the FTC.

Some Takeaways

The complaint and subsequent Settlement Agreement are only the latest exercise of the FTC’s asserted power to investigate and prosecute companies for inadequate data security. The mandated security program outlined in the Settlement, for example, provides a useful roadmap that proactive businesses may use to show preemptively that their compliance is in line with FTC expectations. The Settlement Agreement also serves as a warning to those who freely make claims, or display self-awarded seals, regarding the security of their platforms. Finally, the complaint and settlement reinforce the importance of restricting service providers’ and employees’ access to systems based on their job functions, and of internal employee and vendor controls regarding password usage and retention.

  • The FTC noted that 36 million individuals worldwide were affected, making it one of the largest data breaches it has investigated.
  • Finally, Ruby was ordered to pay $8,750,000 in satisfaction of the judgment, but this amount was suspended. Instead, Ruby will pay $828,500 to the FTC, and $828,500 to the 13 states and DC, for a total of approximately $1.6 million. Should Ruby be found to have misrepresented its financial condition, Ruby will immediately owe the full amount of the judgment.
  • The Settlement Agreement outlines a comprehensive data security program for the personal information collected. In doing so, it stated that the program was to be “appropriate to Defendants’ size and complexity, the nature and scope of Defendants’ activities, and the sensitivity of the personal information collected from or about consumers,” signaling that the FTC is not promoting a one-size-fits-all approach to data security. The safeguards ordered, however, are of the type likely to be expected of most companies. For example, the FTC requires that Ruby designate an employee to take responsibility for the program, create protocols to identify and resolve internal and external risks, and conduct a risk assessment to evaluate the sufficiency of, necessity for, and implementation of various safeguards. The Settlement also requires the creation of a process to select and retain third-party service providers capable of safeguarding any information they receive from Ruby.
  • The parties quickly settled the matter following the filing of the Complaint. The Settlement Agreement—a stipulated order for permanent injunctive and other relief—was entered into by the FTC, 13 states, and the District of Columbia against Ruby.
  • The FTC brought charges alleging unfair security practices, and misrepresentations regarding network security, user profiles, terms and conditions for deleting profiles, and data security seals.

As New York Attorney General Schneiderman stated: “This settlement should send a clear message to all companies doing business online that reckless disregard for data security will not be tolerated.” (New York will receive $81,330.94 of the payment being made, since up to 652,627 New York residents were members of Ashley Madison at the time of the security breach.) Businesses that want to take an active approach to data security compliance can glean much from the FTC’s complaint and settlement here.

One Good Deal After Another – Navy Data Breach, Damages and Sovereign Immunity

“One good deal after another” – this old expression from my time of service in the USN popped into my head as I read news of the latest breach of information regarding Navy personnel. In sum, the Navy reported on November 23 that the laptop of a government contractor supporting a naval contract was “compromised” and that “unknown individuals” accessed sensitive information on over 130,000 sailors and former sailors, including Social Security numbers. At last report, there is no evidence the leaked data has been misused.

The facts so far, as reported, are facially similar to those at issue in In re Science Applications Int’l Corp. Litigation, 45 F. Supp. 3d 14 (D.D.C. 2014) (“SAIC”), where an employee of SAIC, an information-technology company that handles data for the federal government, had her car broken into and back-up tapes containing health care information regarding millions of members of the armed services and their families were stolen. The SAIC court rejected plaintiffs’ claims for increased risk of identity theft and monitoring costs on the grounds set out in Clapper v. Amnesty International USA, 133 S. Ct. 1138 (2013), holding that, in addition to a substantially increased risk of harm resulting from the occurrence, there also had to be “a substantial probability of harm with that increase taken into account.” SAIC, 45 F. Supp. 3d at 16 (emphasis in original). Because there was little likelihood that the thief involved even knew what information he or she had come into possession of, much less possessed the technology to access it, the SAIC court found no injury-in-fact for the bulk of the plaintiffs. However, the court allowed two claims to go forward, including a claim under the Privacy Act for previously unreceived unsolicited calls to an unlisted number pitching medical products and services targeted at a specific medical condition listed in the stolen medical records. Id. at 33.

We might note that news of this latest “compromise” came out the same month as the long-awaited ruling in Welborn v. IRS, —- F. Supp. 3d —–, 2016 WL 6495399 (D.D.C. 2016), brought, as you will recall, as a result of 330,000 tax-related documents stolen during a cyberattack that extended from mid-February to mid-May 2015 and targeted the IRS’s “Get Transcript” program. Among other causes of action, the plaintiffs brought suit under the Privacy Act and the Internal Revenue Code. The Welborn court also rejected plaintiffs’ claims for an increased threat of future identity theft and fraud as a result of the IRS security breach as entirely speculative and dependent on the decisions and actions of one or more independent, and unidentified, actors. Id. at *8 (quoting Clapper, 133 S. Ct. at 1150). However, the Welborn court found that three of the plaintiffs had alleged sufficient injury-in-fact to maintain standing: two alleged that they had suffered actual identity theft when someone filed false tax returns and claimed fraudulent refunds in their names, and one alleged she had “been the victim of at least two occasions of fraudulent activity in her financial accounts, one of which resulted in the removal of funds from a personal financial account, which occurred after the IRS data breach.” The claim of the last of these three plaintiffs was dismissed for failure to plead causation, as mere sequence in time is not sufficient to show causation. Id. at *9 – *10. The court then held that (1) the remaining two plaintiffs’ claims for unauthorized disclosure under the Privacy Act were preempted by the tax code, and (2) plaintiffs’ Privacy Act claims for “failure to safeguard” must be dismissed for failure to allege actual damages (as opposed to injury-in-fact). Id. at *12 (to plead any Privacy Act claim adequately, a plaintiff must plead “actual—that is, pecuniary or material—harm”).
Ultimately, the court also dismissed the plaintiffs’ claims for unauthorized disclosure under the Tax Code on grounds of sovereign immunity. To allege improper disclosure under the Code, a plaintiff must allege (1) knowing or negligent, (2) disclosure, (3) of a return or return information. The IRS argued, and the court held, that the plaintiffs were attempting to present a “failure to protect” claim couched as an “improper disclosure” claim, and that the Code does not authorize suit against the IRS based on a failure to protect. That is, the plaintiffs’ attempted expansion of liability would expand the government’s waiver of sovereign immunity to include a claim not contemplated by the Code.

A very similar argument could have been made with regard to the Privacy Act claims, but the court did not reach it in light of its finding of no actual damages; the holding nevertheless has broad implications.

Finally, the involvement of a government contractor in the scenario of the Navy breach could also implicate the recent Supreme Court decision in Campbell-Ewald Co. v. Gomez, No. 14-857, 2016 WL 228345 (2016), regarding “derivative sovereign immunity.” The Navy had contracted with Campbell-Ewald to develop a recruiting campaign that included sending text messages to young adults, but the contract stated that messages could be sent only to individuals who had “opted in” to receive marketing solicitations. Campbell-Ewald developed a list of cellular phone numbers for contacting users, and then transmitted the Navy’s message to more than 100,000 people. Gomez, who had not opted in by consenting to receive messages, received one anyway and filed a nationwide class action seeking damages and alleging that Campbell-Ewald had violated the Telephone Consumer Protection Act (“TCPA”). Campbell-Ewald argued that, as a contractor acting on the Navy’s behalf, it had acquired (i.e., had “derived”) immunity from the Navy’s sovereign immunity from suit under the TCPA. However, the Supreme Court held that Campbell-Ewald violated both federal law (the TCPA) and the Government’s explicit contractual instructions that messages were to be sent only to individuals who had “opted in.” The Court held that when a contractor violates both federal law and the Government’s explicit instructions, there is no “derivative immunity” and the contractor is not shielded from suit.

Proper Handling of Biometric Data — Lessons Learned from a $1.5 Million Illinois Class Action Settlement

In 2008, Illinois passed the Biometric Information Privacy Act, 740 ILCS 14/1 (the Act or BIPA), which requires companies to obtain a person’s consent before collecting that person’s biometric data. Illinois, unlike other states such as Texas, provides a private right of action for individuals whose data was collected without proper notification and consent. Under Section 15 of the Act (Retention; collection; disclosure; destruction), a private entity in possession of biometric identifiers or biometric information must develop a written policy establishing a retention schedule and guidelines for destruction of the data.

In what is being reported as the first settlement under the Illinois statute, on December 1, 2016, an Illinois state court approved a $1.5 million class action settlement between L.A. Tan Enterprises Inc. (L.A. Tan) and a class of its customers. Sekura v. L.A. Tan, Ill. Cir. Ct. 2015-CH-16694. The class plaintiffs alleged that L.A. Tan, which used fingerprint scanning technology rather than a key fob for membership purposes, failed to obtain written consent from its customers to use the data. The complaint also alleged that the company failed to provide information about how it would store the biometric data and the circumstances under which it would destroy the data, i.e., when the customer dropped his or her membership or the franchise closed.

What makes this settlement interesting is the fact that the complaint did not allege that the biometrics data was lost, stolen or sold. Instead, the class plaintiffs alleged that the company did not treat the data as carefully as the law requires. Similar to settlements with the OCR over HIPAA violations, the L.A. Tan settlement also requires the company to take corrective action to ensure compliance with the Illinois statute and to destroy all biometric data it still holds.

The sensitivity of biometric data requires companies that conduct business in Illinois to not only properly collect the data, but also store and dispose of the data as required by law. Failure to do so, could expose those companies to unnecessary liability even if the data is not lost, stolen or misused.

Two federal courts, for example, have denied defense motions to dismiss actions brought under BIPA. See In re Facebook Biometric Information Privacy Litigation, Case No. 15-cv-03747, 2016 WL 259385 (N.D. Cal. 5/5/16) (social networking website users brought a putative class action against a website operator under BIPA, alleging that the operator unlawfully collected and stored biometric data derived from their faces; the court denied the defense motions to dismiss and for summary judgment, finding that the users stated a cause of action under BIPA), and Norberg v. Shutterfly, Inc., 152 F. Supp. 3d 1103 (N.D. Ill. 2015) (consumers brought an action against the operator of several photo-sharing websites, seeking statutory damages for alleged violations of BIPA; the case was dismissed with prejudice on April 15, 2016, pursuant to a confidential settlement agreement). More recently, however, another federal court in Illinois granted a defense motion to dismiss a BIPA complaint for lack of jurisdiction under Spokeo. See McCollough v. Smart Carte, Inc., Case No. 16 C 0377, 2016 WL 4077108 (N.D. Ill. 8/1/16).

Strike Three – You’re Out – Data Breach Shareholder Derivative Lawsuit Against Home Depot Dismissed

On November 30, 2016, Judge Thomas W. Thrash dismissed a shareholder derivative action brought against Home Depot as a result of the breach of its security systems and theft of its customers’ personal financial data (“the Breach”) in 2014. In Re The Home Depot, Inc. Shareholder Derivative Litigation, Civ. No. 1:15-CV-2999, 2016 WL 6995676 (N.D. Ga. 2016). In the derivative action, Plaintiffs asserted that Home Depot was harmed as a result of the company’s alleged delay in responding to significant security threats, and thus sought to recover under three primary claims against Home Depot’s current and former directors and officers (“Ds&Os”). These included the following alleged claims: (1) breach of the duty of loyalty by failing to institute internal controls sufficient to oversee the risks in the event of a breach, and for disbanding a Board of Directors committee that was responsible for overseeing those risks; (2) waste of corporate assets; and (3) violation of Section 14(a) of the Securities Exchange Act in connection with Home Depot’s 2014 and 2015 proxy filings. According to Judge Thrash, all of the claims against the Ds&Os “ultimately” related to what they “knew before the Breach and what they did about that knowledge.” Defendants filed a motion to dismiss, which Judge Thrash ultimately granted applying Delaware law. It was undisputed that no demand was made on the Home Depot Board of Directors. Thus, Plaintiffs had the burden of demonstrating that the demand requirement was excused because it would have been futile.

Judge Thrash analyzed each of the three claims against the Ds&Os. As for the primary claim that the Directors allegedly breached their duty of loyalty and that they failed to provide oversight, Plaintiffs were required to show that the Directors either “knew they were not discharging their fiduciary obligations or that the Directors demonstrated a conscious disregard for their responsibilities[.]” When combined with the general demand futility standard, Plaintiffs essentially needed to show that a majority of the Board faced substantial liability because it consciously failed to act in the face of a known duty to act. Judge Thrash stated that this is “an incredibly high hurdle for the Plaintiffs to overcome[.]”

In finding that Plaintiffs failed to overcome this hurdle, Judge Thrash rejected Plaintiffs’ arguments about the significance of disbanding the Infrastructure Committee charged with oversight of the risks Home Depot faced in the event of a data breach. Plaintiffs alleged that the Board failed to amend the Audit Committee’s charter to reflect the new responsibilities for data security that had been transferred from the Infrastructure Committee, as required by the Company’s Corporate Governance Guidelines. As a result, Plaintiffs alleged that the Board failed to designate anyone with the responsibility to oversee data security, thereby leaving the company without a reporting system. Judge Thrash concluded that “[t]his argument is much too formal.” Regardless of whether the Audit Committee had “technical authority,” both the Committee and the Board believed it did. Given the factual allegations that the Audit Committee received regular reports from management on the state of Home Depot’s data security, and the fact that the Board in turn received briefings from both management and the Audit Committee, the court concluded that “there can be no question that the Board was fulfilling its duty of loyalty to ensure that a reasonable system of reporting existed.”

The court also rejected Plaintiffs’ argument that the Board’s failure “to ensure that a plan was in place to ‘immediately’ remedy the deficiency in [Home Depot’s data security],” supported the breach of the duty of loyalty claim. Plaintiffs acknowledged in the complaint that the Board acted before the Breach occurred, that it had approved a plan that would have fixed many of Home Depot’s security weaknesses, and that it would be fully implemented by February 2015. Under Delaware law, the court held that directors violate their duty of loyalty only if “they knowingly and completely failed to undertake their responsibilities.” Judge Thrash concluded that “as long as the Outside Directors pursued any course of action that was reasonable, they would not have violated their duty of loyalty.”

In addition, Plaintiffs alleged that there was “a plan,” but that “it moved too slowly.” The court held that this was not the standard under which to evaluate demand futility on a duty of loyalty claim. The court noted that with the benefit of hindsight, “one can safely say that the implementation of the plan was probably too slow, and that the plan probably would not have fixed all of the problems Home Depot had with its security.” However, the court also found that “simply alleging that a board incorrectly exercised its business judgment and made a ‘wrong’ decision in response to red flags…is not enough to plead bad faith.”

Based on the foregoing analysis of the demand futility issue, the court had little difficulty discounting the claim of corporate waste. Plaintiffs alleged that the Board’s insufficient reaction to the threats posed by alleged deficiencies in compliance with contractual requirements for data security caused significant losses to the company, which constituted a waste of Home Depot’s assets. Here, the court concluded that the Plaintiffs’ claim was basically a challenge to the Directors’ exercise of their business judgment, and although with hindsight it “was easy to see that the Board’s decision to upgrade Home Depot’s security at a leisurely pace was an unfortunate one,” the decision nevertheless fell squarely within the discretion of the Board and was protected under the business judgment rule.

Finally, Plaintiffs’ Section 14(a) claims, which were also subject to a demand requirement, alleged that Defendants omitted important information from their 2014 and 2015 Proxy Statements by not disclosing that Home Depot had known of specific threats to its data security, and that the Audit Committee’s charter was not amended to reflect that the responsibility for IT and data security had been transferred to it. The court rejected these arguments, noting that regardless of whether the charter was amended, “everyone believed and acted as if the Committee did have oversight over data security during the relevant time period.” Further, the court found that Plaintiffs failed to specifically identify which statements in the Proxy Statements were false or misleading and also failed to plead with particularity how the omission caused the alleged loss. Thus, the court held that the claim did not demonstrate the necessary duty to disclose required under Section 14(a). Moreover, “because [Plaintiffs] had not demonstrated a substantial likelihood that the Defendants would have been liable for a Section 14(a) violation,” the court found that demand was neither futile for the Section 14(a) claims, nor excused.

This decision is in step with two other recent decisions dismissing shareholder derivative actions against companies arising out of high-profile data breaches. See Palkon v. Holmes, et al., 2014 WL 5341880 (D.N.J. Oct. 20, 2014) (court, applying Delaware law, dismissed a derivative action against Wyndham Hotels brought after that company suffered a large data breach, relying in part on the protections afforded the Ds&Os under the business judgment rule); Davis et al. v. Steinhafel et al., No. 14-cv-203 (D. Minn. July 7, 2016) (court dismissed derivative action against Target because a claim could not be stated in connection with a corporation’s special litigation committee’s decision not to pursue derivative claims against the company’s officers or directors, particularly where it demonstrated that the decision was based on a thorough and impartial investigation).

With the prevalence of security breaches taking place against various corporations, including large retailers, Home Depot is yet another reminder of the potential exposure presented by cyber-liability for the boardroom, including costly litigation even if the corporation prevails. Judge Thrash’s opinion provides guidance on how the business judgment rule can protect Ds&Os for their decision-making with respect to the demands of cybersecurity. Given the numerous references to the “benefits of hindsight,” however, corporate boards should be vigilant in assessing their cybersecurity plans. There may come a time when a court will not so readily apply the “business judgment rule” to a Board’s decision making process in addressing cybersecurity concerns.
