Working from Home? Here are 12 Steps to Reduce Data Privacy and Security Risk

Businesses are instituting widespread remote work policies and procedures to facilitate social distancing and “flatten the curve.” Enterprises simultaneously need to be mindful of increased data privacy and security risks. The risks can range from pandemic-related phishing emails to increased pressure on network architecture to well-intentioned employee shortcuts. Hackers will try to take advantage of uncertain and sometimes chaotic circumstances.

Below is a checklist of fundamental measures businesses and employees should implement to mitigate the data privacy and security risks associated with working remotely. Most of these measures require an investment of time, not money, through adoption of sound policies and behavior adjustment. You can maintain good privacy and security as we respond to COVID-19. Here’s how:

  1. Turn on Multi-Factor Authentication Immediately. Implement multi-factor authentication (MFA) so that a stolen password alone is not enough for an unauthorized party to remotely access the company’s networks or user accounts. The popular Microsoft Office 365 service includes MFA at no additional cost.
  2. Develop and Follow Sound Data and Funds Transfer Procedures. Implement two-step verification for wire transfers and other transfers of funds or data. For example, if you receive an email with an invoice, verify the request by placing a phone call to a known individual at a known number (not just a number provided in the email) to confirm and obtain authorization. Work with business partners to send and receive test transfer payments of small amounts (a few cents or a dollar) before transferring substantial sums.
  3. Maintain Confidentiality. Instruct employees to hold confidential conversations in private, relatively secluded areas and out of range of virtual assistants and other IoT listening devices. Keep in mind that relaxed rules are not relaxed standards: for example, the Office for Civil Rights at the U.S. Department of Health and Human Services has eased certain rules to make it easier to use technology to facilitate remote services, but privacy and security requirements must still be met.
  4. Use Secure Workspaces. Employees should have a secure workspace with reliable connectivity, protected from eavesdropping. They should not leave work computers unattended, to reduce the risk of theft, and should avoid unsecured (“public”) wireless networks such as free Wi-Fi at coffee shops. If there is no other option, ensure employees are trained to connect through the company VPN client before doing anything else.
  5. Distribute Tech Support Contact Info. Employees should have ready access to company IT policies, procedures, and the contact information of critical IT personnel who can assist with technical issues and to whom security incidents can be reported.
  6. Avoid Storing Data Locally. Employees should avoid saving data locally on their computers and should instead use company-approved network and cloud storage locations – the ones your company backs up regularly. For convenience and perceived efficiency, employees may be tempted to save data locally or on machines that are not business-issued devices. Remote workers and businesses should resist this temptation and adjust expectations and deliverable timing to promote sound practices as we adjust to new realities.
  7. Make Sure You Have Appropriate Insurance. Companies should review the scope of their insurance policies and coverage limitations to ensure their policies cover incidents stemming from employees working remotely.
  8. Stay Connected While Staying Distant. Companies should adopt and implement policies for supervising remote employees, such as instituting frequent team calls to facilitate transparent communication, encouraging employees to report security incidents and risks, learning from experience, and providing tips and training for secure work from home.
  9. Don’t Get Hooked – Beware of Phishing Attacks. Employees should be trained to identify potential phishing emails and other electronic communications. Specifically, employees should be educated and reminded to (1) verify that the sender’s email address matches the address of a known contact (especially on mobile devices, select the sender to see the real address); (2) hover over any link before clicking it to identify the destination (a simple link-checking sketch appears after this list); (3) be wary of emails that are unusually brief, unexpected, or out of character; and (4) refrain from opening suspicious attachments. If a seemingly normal email or communication comes from an unverifiable or suspicious sender, employees should report it to the company. Taking these precautions can reduce the effectiveness of phishing attacks.
  10. Minimize Printing Confidential Information. Employees should not print confidential information, including protected health information, at home. If such information must be printed, the paper copies should be secured until they are properly disposed of – for example, by using a level P-4 or better cross-cut shredder.
  11. Use Appropriate Encryption. Employees should not share protected health information or other information requiring elevated protection via email or other unsecured modes of electronic transmission. Such information should be shared only using transmission technology that provides end-to-end encryption (a simple file-encryption sketch appears after this list).
  12. Share These Tips and Other Useful Insight. Share this announcement and other resources discussing data privacy and security measures with all employees, team members, business partners, clients, customers, suppliers, vendors, etc. We are all in this together!
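
To make item 9’s “hover before you click” advice concrete, below is a minimal Python sketch – an illustration only, not a vetted security tool – that parses an HTML email body and flags links whose visible text shows one domain while the underlying href points to another, a common phishing tell. The sample email body and domain names are hypothetical.

```python
# Minimal sketch: flag links whose visible text shows one domain but whose
# href points somewhere else (a common phishing tell). Illustration only;
# the sample HTML body and domains below are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href")

    def handle_data(self, data):
        text = data.strip()
        # Only compare when the visible link text itself looks like a domain.
        if self.current_href and text and "." in text and " " not in text:
            shown = text.removeprefix("https://").removeprefix("http://").split("/")[0].lower()
            actual = urlparse(self.current_href).netloc.lower()
            if shown and actual and shown != actual:
                self.findings.append((text, self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None


body = '<p>Please review your invoice at <a href="http://bad-example.net/login">portal.yourbank.com</a></p>'
checker = LinkChecker()
checker.feed(body)
for shown_text, real_target in checker.findings:
    print(f"Suspicious link: text shows '{shown_text}' but it points to '{real_target}'")
```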
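
And for item 11, when a secure transmission channel is not available, one stopgap is to encrypt the file itself before attaching it and to share the key over a separate channel (for example, by phone). The sketch below is a minimal illustration using the third-party cryptography package (pip install cryptography); the file name is hypothetical, and this protects the attachment rather than the email channel itself.

```python
# Minimal sketch: encrypt a file before attaching it, and share the key out of
# band (e.g., read it over the phone). Requires: pip install cryptography
# The file name below is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # never send this in the same email as the file
fernet = Fernet(key)

with open("lab_results.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("lab_results.pdf.enc", "wb") as f:
    f.write(ciphertext)

print("Share this key over a separate channel:", key.decode())

# Recipient side:
#   plaintext = Fernet(key).decrypt(open("lab_results.pdf.enc", "rb").read())
```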

Stoel Rives’ Privacy and Data Security team is prepared to address your COVID-19 related or other data privacy and security questions and concerns. If you have any questions, please ask your primary Stoel Rives contact or reach out to Hunter Ferguson at 206.386.7514 or hunter.ferguson@stoel.com for more information.

Soon, all ransomware attacks may be data breaches

As this recent article illustrates, many ransomware operators are now collecting information from victims before encrypting their data, and then threatening to release what they’ve collected – or actually releasing some of it – to increase the chance they’ll get paid. There have been many cases already where at least a portion of data has been released when the victim doesn’t pay up. If this becomes the norm – and it looks like it will – victims will need to consider all ransomware attacks as possible data breaches.

Ever since the Maze ransomware operators realized they could increase the odds of collecting the ransom by leaking data, many other ransomware groups have followed suit. In the latest variant seen using this tactic, the attackers essentially guarantee they can decrypt the files if you pay (proof provided on two random files). But at that point, the data is already stolen.

While the attackers may steal only a segment of the data they encrypt – a few GB, random emails, etc. – the victim will likely have no idea which portion of the encrypted files was stolen and will have to treat all data that was accessed as “breached,” unless it can establish that there is a reasonably low probability that particular data was actually extracted.

As security professionals, we strive to prevent attackers from compromising our organizations in the first place. But in the event they are successful, the following is a sample of additional controls that can be implemented to better detect data exfiltration:

  • Content filters: filters on outgoing traffic can be configured with white-listing/black-listing rules to block traffic to known bad (by reputation or by content) sites and IP addresses. They can watch common exfiltration channels such as DNS tunneling, FTP, and HTTP, and can be configured to alert on and/or automatically stop unusual patterns of data transfer (a simple DNS-tunneling heuristic appears after this list). Content filtering is offered as a standalone service, but it is also a feature included with many secure gateway solutions.
  • SIEM: Security Information and Event Management solutions act as centralized collectors of logs from multiple sources. Consider deploying a SIEM inside your organization and feeding it as many logs as are useful. To get value out of a log collection/analysis solution, it must be monitored 24/7/365 by qualified personnel. Unless your organization is large enough to employ its own security team, consider a managed solution from a reputable service provider.
  • Endpoint Detection and Response (EDR) solutions: These solutions are designed to stop attackers in the first place, but they also alert on potentially malicious activity through continuous monitoring. For example, if your EDR solution lights up because it sees a number of nodes being hit with Emotet – a malware precursor to a ransomware attack that generally steals credentials but can also steal email – you could be under attack and should check all endpoints to confirm you don’t have one that might be leaking data (like the “road warrior” salesperson whose laptop is rarely on the network and always seems to be a little behind on updates…)
  • Deep Packet Inspection (DPI) and Watermarking: For the more advanced organizations out there, you can embed a watermark or ‘digital signature’ that alerts a packet-inspection solution when certain files are being sent out of the organization. For this to have value, you’d want to be selective and/or use several different watermark labels (for example, “internal confidential,” “PII,” etc.) and ensure your watermarks are “permanent.”
  • Honeytokens: a honeytoken applies the honeypot concept to a piece of data – for example, a unique URL, credential, or record that no legitimate user should ever touch, so any access to it signals an intruder. You can implement honeytokens for free at https://canarytokens.org; some cool tricks for using them in a honeyfile (a file that appears to be highly valuable but is in fact deceptive bait), databases, links, and other traps can be found here. While honeypots/files/tokens are primarily an intrusion detection tool, if the target can be accessed, then it – and anything else at that access level or in that container – can likely also be exfiltrated. A minimal self-hosted example appears after this list.
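
To illustrate the kind of “unusual pattern” a content filter looks for on the DNS channel (first bullet above), here is a simplified Python heuristic that flags query names with very long or high-entropy subdomains, a common sign of DNS tunneling. The thresholds and sample query names are hypothetical, and a real filter does far more.

```python
# Simplified heuristic for spotting possible DNS tunneling in query logs:
# very long labels or high-entropy subdomains are common tells.
# Thresholds and sample query names are hypothetical.
import math
from collections import Counter


def shannon_entropy(s: str) -> float:
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())


def looks_like_tunneling(qname: str, max_label_len: int = 40, entropy_threshold: float = 3.8) -> bool:
    labels = qname.rstrip(".").split(".")
    sub_labels = labels[:-2]          # rough cut: everything left of the registered domain
    if not sub_labels:
        return False
    longest = max(len(label) for label in sub_labels)
    entropy = shannon_entropy(".".join(sub_labels))
    return longest > max_label_len or entropy > entropy_threshold


sample_queries = [
    "www.example.com.",
    "mail.google.com.",
    "dGhpcyBpcyBzdG9sZW4gZGF0YQ.aGVyZSBpcyBzb21lIG1vcmU.exfil-example.net.",
]

for q in sample_queries:
    if looks_like_tunneling(q):
        print("Possible DNS tunneling:", q)
```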
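
And to make the honeytoken idea in the last bullet concrete, below is a minimal self-hosted sketch (an alternative to the hosted canarytokens.org service, not a description of it): a tiny web listener that raises an alert whenever its unique URL – embedded in a honeyfile or decoy record – is requested. The token value, port, and alert mechanism are hypothetical placeholders.

```python
# Minimal sketch of a self-hosted honeytoken listener: embed the unique URL
# below in a decoy document or record; nothing legitimate should ever request
# it, so any hit is worth an alert. Token, port, and alerting are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

TOKEN_PATH = "/beacon/7f3a9c2e-placeholder-token"   # hypothetical unique token


class HoneytokenHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == TOKEN_PATH:
            # In practice, send an email/Slack/SIEM alert here instead of printing.
            print(f"[ALERT] honeytoken hit at {datetime.now(timezone.utc).isoformat()} "
                  f"from {self.client_address[0]} (User-Agent: {self.headers.get('User-Agent')})")
        self.send_response(204)       # respond blandly either way
        self.end_headers()

    def log_message(self, fmt, *args):
        pass                          # keep default request logging quiet


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HoneytokenHandler).serve_forever()
```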

In addition to these controls, as noted in this blog post last month, organizations that fall victim to ransomware should engage experienced outside counsel to commence an internal investigation and to:

  • Retain technical consultants to engage with the threat actors as necessary, determine what data was exfiltrated, manage the decryption process, recover and remediate impacted systems, and eliminate the risk of reinfection.
  • Leverage relationships with law enforcement to cross-reference elements of the ransomware with databases and obtain helpful information.
  • Work with insurers to determine whether and how coverage applies (i.e., cyber risk, kidnap and ransom, cyber extortion, or various other cybercrime policies).
  • Establish separate lines of communication for key personnel in case normal lines of communication are compromised during negotiation, decryption and/or recovery phases.
  • Provide advice relating to what, if any, legal obligations have been triggered by the exfiltration of data and the deployment of ransomware.

Family Educational Rights and Privacy Act in the Age of COVID-19

The U.S. Department of Education released a set of FAQs related to the Family Educational Rights and Privacy Act (FERPA) and the coronavirus. The Department’s Student Privacy Policy Office prepared the FAQs to assist officials in educational agencies and institutions such as school districts, schools, colleges, and universities in managing public health issues related to COVID-19 while protecting the privacy of students. Below is a summary:

  • A parent or eligible student must provide written consent before an educational agency or institution discloses personally identifiable information (PII) from a student’s education records, unless one of the exceptions to FERPA’s general consent rule applies.
  • FERPA permits educational agencies and institutions to disclose, without prior consent, PII from student education records to appropriate parties in connection with an emergency, if knowledge of that information is necessary to protect the health and safety of a student (or other individuals).
  • If an educational agency or institution, taking into account the totality of the circumstances, determines that a significant threat exists to the health or safety of a student in attendance at the agency or institution (or another individual at the agency or institution) as a result of the virus that causes COVID-19, it may disclose, without prior written consent, PII from student education records to appropriate officials at a public health department who need the information to protect the health or safety of the student (or another individual).
  • If an educational agency or institution learns that students in attendance at the school are out sick due to COVID-19, it may disclose information about the students’ illness under FERPA to other students and their parents in the school community without prior written parental or eligible student consent, provided that it makes a reasonable determination that the students’ identities are not personally identifiable, whether through single or multiple releases, and taking into account other reasonably available information.
  • FERPA permits educational agencies and institutions to non-consensually disclose PII from education records in the form of contact information of absent students to the public health department in specific circumstances, such as in connection with a health or safety emergency.
  • If an educational agency or institution determines that a health or safety emergency exists, it may NOT disclose, without consent, PII from student education records to the media.
  • In most cases, it is sufficient to report the fact that an individual in the school has been determined to have COVID-19, rather than specifically identifying the student who is infected.
  • Nothing in FERPA prevents schools from telling parents and students that a specific teacher or other school official has COVID-19 because FERPA applies to students’ education records, not records on school officials. However, there may be State laws that apply in these situations.
  • An educational agency or institution may disclose PII from an eligible student’s education records to the student’s parents if the eligible student has been determined to have COVID-19, generally without obtaining written consent.
  • FERPA permits educational agencies and institutions to release information from education records without consent after the removal of all PII, provided that the agency or institution has made a reasonable determination that a student’s identity is not personally identifiable, whether through single or multiple releases, and while taking into account other reasonably available information.
  • FERPA generally requires educational agencies and institutions to maintain a record of each request for access to, and each disclosure of PII from, the education records of each student.

For specific questions on how FERPA applies to your institution in relation to the COVID-19 pandemic, please contact Hunter Ferguson or Alisha Kormondy.

Your Security Program Must Think Beyond Malware Protection

According to CrowdStrike’s most recent Global Threat Report, in 2019 malware-free attacks – attacks where malicious files are not written to disk – outpaced malware attacks by 51% to 49%. In malware-free attacks, the attackers leverage Tactics, Techniques and Procedures (TTPs) that are less likely to be detected by traditional anti-malware solutions. For example, attackers can use stolen administrator credentials to roam virtually unchecked if the victim has no way to detect that the person behind the activity is not the authorized user the account is really assigned to. More complex techniques, such as macro-based scripting attacks or attacks where code executes only in memory, may go undetected as well, as they require a much more sophisticated approach to identify and contain than traditional attacks that can be intercepted by detecting the execution of malicious software or the use of malicious files.
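
Because these attacks ride on valid credentials rather than malicious files, detection has to focus on behavior. As a simplified illustration – not CrowdStrike’s methodology or a production detection rule – the Python sketch below flags logins that deviate from an account’s established baseline, such as a never-before-seen source IP or an unusual hour for a privileged account. The account names, baseline, and login records are hypothetical.

```python
# Simplified behavioral check for credential misuse: flag logins from a source
# IP the account has never used, or at an hour far outside its normal pattern.
# The baseline and login records below are hypothetical.
from datetime import datetime

# Baseline built from historical logs: known source IPs and typical working hours.
baseline = {
    "svc-admin": {"known_ips": {"10.0.5.21", "10.0.5.22"}, "usual_hours": range(8, 19)},
}

new_logins = [
    {"account": "svc-admin", "ip": "10.0.5.21", "time": "2020-03-10T14:12:00"},
    {"account": "svc-admin", "ip": "203.0.113.77", "time": "2020-03-11T03:41:00"},
]

for login in new_logins:
    profile = baseline.get(login["account"])
    if profile is None:
        continue
    reasons = []
    if login["ip"] not in profile["known_ips"]:
        reasons.append(f"new source IP {login['ip']}")
    hour = datetime.fromisoformat(login["time"]).hour
    if hour not in profile["usual_hours"]:
        reasons.append(f"unusual hour ({hour:02d}:00)")
    if reasons:
        print(f"Review login for {login['account']}: " + ", ".join(reasons))
```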

While organizations with sufficient resources may pivot fairly quickly to combat malware-free attacks, it’s not easy for organizations with limited budgets and no security staff.  Even if you’ve invested in a reputable anti-malware solution and have restricted users from installing software – even if you’ve employed white-listing – you could still be at risk.

Below are three recommendations organizations should consider adding to their arsenal to help combat this rise in malware-free attacks:

  • Subscribe to robust “next-generation” endpoint cybersecurity solutions that are capable of analyzing behavior patterns in real time. Good solutions are anchored in the cloud, run 24/7, and are rapidly scalable.  They are usually managed by the solution provider, are relatively affordable, and are designed to be easily deployed in any environment – even without skilled IT staff.
  • Collect and monitor as much network and application traffic as possible. Unless you’re a large organization that can afford to maintain a Security Operations Center (SOC) to analyze the data in real time, you will likely be looking at outsourcing this workload to a Managed Security Service Provider (“MSSP”) that collects it in a SIEM. Collecting and monitoring log data in as close to real time as possible is critical to detecting adversarial behavior on your network. While 24/7 continuous monitoring is not an inexpensive service, outsourcing it will likely be less expensive than managing it yourself. The value of having this level of visibility into your environment cannot be overstated.
  • Control the use of scripting, and secure the scripts you need to use as much as possible. For example, sophisticated attackers may send you documents embedded with malicious macros that, when enabled, leverage allowed command-line shells such as PowerShell to execute the attack using only scripts. PowerShell can be made more secure, but these measures alone will not be enough to prevent a script-based attack; this is merely another layer in the onion. And while PowerShell is primarily associated with Windows systems, that doesn’t mean Mac users aren’t at risk; AppleScript and Python can be used in malware-free attacks as well.

As the cyber threat landscape evolves, so must our security plans. The risks we remediate today may end up being risks again tomorrow!

Utah Considers a Cybersecurity Safe Harbor as Ransomware Runs Riot

Last year the FTC mandated what an organization’s written cybersecurity program should include to avoid being deemed “unfair and deceptive” to consumers,[1] and this year California consumers whose personal information is compromised may file lawsuits against organizations that failed to implement “reasonable security.”[2]

But several states provide legal safe harbors to organizations with written cybersecurity programs. Now, Utah is considering joining them. Under House Bill 158, referred to as the Cybersecurity Affirmative Defense Act (the “Proposed Act”),[3] a covered entity that has created, maintained, and complied with a written cybersecurity program at the time of a data breach has an affirmative defense to a civil tort claim.

NIST Releases a Standard for Privacy

As states fill the legal void for consumer privacy rights,[1] a new federal standard has emerged to assist companies with their compliance efforts. The National Institute of Standards and Technology (“NIST”) Privacy Framework (“PF”) was released last month to help organizations manage the risks associated with their data processing activities.

What the PF Does

The PF purports to improve risk management through mitigation, transfer, avoidance, and acceptance principles. It is designed for all types of organizations, regardless of size or sector and whatever their objectives and needs, and can be “adaptable to any organization’s role(s) in the data processing ecosystem.” It can be found here: https://www.nist.gov/privacy-framework/privacy-framework.

How to Use the PF

Like NIST’s Cybersecurity Framework (“CSF”), the PF is divided into five functions, though their purposes differ slightly from the CSF’s:

  1. Identify – to understand the risks to individuals from data processing
  2. Govern – to understand an organization’s risk management priorities
  3. Control – to manage data with sufficient granularity to manage privacy risks
  4. Communicate – to have dialogue about data processing and risks
  5. Protect – to prevent cybersecurity-related privacy events

While the PF itself is only 39 pages, it is a living document that will evolve based on lessons learned by various stakeholders, with NIST serving as convener and coordinator. The PF already includes additional resources that help make concrete what is otherwise quite abstract. For example, it includes a vast resource repository that provides detailed directions on implementation strategies. In the repository’s Guidance and Tools section, each of the five functions is examined.[2]

In relation to legal risks, if your organization maintains personally identifiable information (“PII”) of Colorado residents, you must have “a written policy for the destruction or proper disposal” of that PII; the PF provides guidance on disposal strategies. If you are a covered entity under the California Consumer Privacy Act and are required to provide training to those who handle consumer inquiries, the PF provides guidance. If you collect information about Massachusetts residents, you must limit the amount of PII that is collected. The PF provides guidance here also.

The PF is also designed to help organizations keep up with technology advancements and new uses for data and is likely to have regulatory support. The FTC already supports the CSF[3] and last October it submitted suggestions to NIST on how the PF should be structured, some of which were adopted. Given the heightened emphasis the FTC placed on privacy enforcement in 2018-2019, this is meaningful. Organizations that use the CSF for their cybersecurity programs will have a leg up if they decide to implement the PF for their privacy programs as they are designed to work together.

If you have questions about the application of the PF to your privacy risk management program and its legal risks, please contact Romaine Marshall at romaine.marshall@stoel.com or (801) 578-6905.


[1] See, e.g., here, here, and here for brief articles about important aspects of the California Consumer Privacy Act.

[2] https://www.nist.gov/privacy-framework/resource-repository/browse/guidance-and-tools

[3] https://www.stoelprivacyblog.com/2019/10/articles/privacy/achieving-industry-standards/

CCPA Is Here – Is Your Security “Reasonable”?

Under the California Consumer Privacy Act, any California consumer whose personal information is compromised “as a result of the business’ violation of the duty to implement and maintain reasonable security procedures and practices … may institute a civil action.”[1]

Consumers can initiate this private right of action right now, whereas other consumer rights can only be enforced by the Attorney General beginning in July.[2]

Why This Matters

Most data breach civil actions filed against companies during the last decade were dismissed. Why? Consumers were unable to demonstrate a legally cognizable harm. Sure, cybersecurity incidents are a hassle for consumers to deal with, but that alone was not enough. Recently, however, courts have said “the hassle” is enough, at least for cases to proceed past their initial stages. This has led to a steady rise in both the number of cases that are settled and their dollar amounts.

Complicating things further, proving harm doesn’t necessarily matter under the CCPA. If personal information is compromised because of a failure to implement and maintain reasonable security, the CCPA sets statutory damages at “not less than one hundred dollars ($100) and not greater than seven hundred and fifty ($750) per consumer per incident,” or a higher amount if proven.[3] What matters is whether your security is reasonable.
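
To put those statutory figures in perspective, the short calculation below uses a purely hypothetical breach affecting 10,000 consumers; actual exposure depends on the facts and on what can be proven.

```python
# Hypothetical illustration of CCPA statutory damages exposure
# ($100 to $750 per consumer per incident) for an invented breach size.
affected_consumers = 10_000

low_exposure = affected_consumers * 100      # $1,000,000
high_exposure = affected_consumers * 750     # $7,500,000

print(f"Statutory exposure: ${low_exposure:,} to ${high_exposure:,}")
```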

Google’s search engine defines reasonable as “as much as is appropriate or fair.” For those who reminisce about how they spent three years in law school learning the many ways “reasonable” can be interpreted, the CCPA may trigger déjà vu; neither the CCPA nor its proposed regulations defines “reasonable security.” But reliable guidance is available.

CCPA is Here – Are Your Agreements Ready?

On January 1, 2020, if your company sells goods or services to California consumers and meets certain criteria,[1] the agreements you have with companies that handle personal information on your behalf should be analyzed and, if necessary, updated, just as your privacy notices should be.[2]

Examples of companies that handle personal information on a company’s behalf include marketing companies, managed security service providers (MSSP), and software-as-a-service (SaaS) providers such as payment processing, document and email management, and customer analytics companies.

Why this Matters

Under the California Consumer Privacy Act (“CCPA”), companies that handle consumer information on behalf of a company are “service providers.”[3] The CCPA requires that a company enter into an agreement with a service provider that

prohibits the entity receiving the information from retaining, using, or disclosing the personal information for any purpose other than for the specific purpose of performing the services specified in the contract for the business … [4]

This is important because the CCPA exempts a company from liability for a violation of the CCPA if its service providers have executed such an agreement and they, not the company providing the personal information, violate the rights given to California consumers under the CCPA.[5]

CCPA is Here – Is Your Privacy Notice Ready?

Last year, towards the end of May, a barrage of emails and pop-ups informed online users about how companies use cookies – small data files that track website activity – in accordance with a requirement under the European Union’s General Data Protection Regulation.

On January 1, 2020, many companies will inform consumers about updates to their privacy notices – disclosures to consumers about how their personal information is processed – in accordance with a requirement under the California Consumer Privacy Act (“CCPA”).

Why this Matters

A privacy notice (aka privacy policy or privacy statement) is typically the first place a company explains its practices for handling the personal information it collects. Privacy notices have received considerable attention this year, not all of it positive. You do not have to read all of the New York Times article, “We Read 150 Privacy Policies. They Were an Incomprehensible Disaster,” to know what it concluded.[1] Similarly, an article titled “Are Organizations Ready for New Privacy Regulations?” summarizes the Online Trust Alliance’s analysis of 1,200 privacy statements and its view that many of these privacy notices could result in penalties for failing to follow new laws such as the CCPA.[2] In addition, privacy notices have been the subject of litigation in cases asserting that the sale of customer information to non-affiliated entities for marketing purposes,[3] and the transfer of customer data in a merger, asset sale, or sale of customer information, violated companies’ privacy notices.[4]

Trickbot and Emotet Financial Malware Now Attacking the Healthcare Industry

In a recent Cybercrime Tactics and Techniques Report focusing on the health care industry, cybersecurity company Malwarebytes discovered a significant 82% spike in Trojan malware attacks on health care organizations in Q3 2019. Emotet and TrickBot, two especially sophisticated and dangerous forms of malware, were mostly responsible for this surge.

Used primarily as “banking Trojans” to steal credentials and financial information, these intrusive, fast-replicating Trojans spread quickly. Emotet is polymorphic, which makes it difficult for traditional antivirus solutions to detect. It worms its way through a network, generally using phishing emails sent from compromised systems to spread as quickly as possible. Once it has infected enough computers, it will “drop” (install) other malicious programs, especially TrickBot, which has all sorts of modular, built-in tools to discover system information, compromise that system, and steal data.

The presence of either of these Trojans on a network is a serious threat. The two are closely related; where you see one, you often see the other. To help visualize how they work, think of them as a team of professional robbers:

  • Emotet is the ‘strike team’ hired to get TrickBot through as many doors as possible, by exploiting vulnerabilities or by stealing keys
  • TrickBot is the professional ‘safe-cracking team’ that the Emotet strike team gets in the door
  • TrickBot might install ransomware to collect a ransom, or simply to cover its tracks when it’s done. When it installs ransomware, it’s often Ryuk.

Healthcare continues to be a prime target of scammers, as:

  • The industry has known weaknesses, primarily due to the proliferation of connected but vulnerable devices. For example, it’s not practical to throw away a multi-million dollar MRI machine that still works just because it runs an outdated operating system
  • Healthcare organizations hold a significant amount of valuable Personally Identifiable Information (PII) such as SSNs, dates of birth, driver’s license numbers, etc. Of course, they also possess Protected Health Information (PHI) such as blood test results, genetic history, diagnoses, etc. – data that is difficult to come by elsewhere and can be used to file fake medical claims and purchase controlled substances
  • If malicious actors can cripple a healthcare organization with ransomware, the victim may not be able to provide care. Creating a crisis that threatens lives can be a strong motivator to pay a ransom

Criminals are likely re-purposing Emotet and TrickBot in response to improved cybersecurity controls and awareness programs that are more successfully blocking and repelling their attacks. To keep pace with their attackers, healthcare organizations should:

  • Ensure budgeting for a strong cybersecurity program is a priority, not an afterthought
  • Conduct regular training to help avoid phishing and social engineering attacks
  • In a Windows environment, use Microsoft User Account Control to require that all personnel log in as “users” and not “administrators” of their workstations, and that applications run in the “user context” as often as possible (a quick self-check sketch appears after this list)
  • Consider adding application white-listing to their arsenal of cybersecurity defenses
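
As a quick spot check for the User Account Control point above (third bullet), the Python sketch below asks Windows whether the current session holds administrator rights and warns if it does. It uses the standard ctypes call to the Windows shell API; it is an illustrative audit aid under those assumptions, not a substitute for centrally enforced policy.

```python
# Quick spot check (Windows only): warn if the current session is running
# with administrator rights, which day-to-day accounts should not have.
import ctypes
import sys


def running_as_admin() -> bool:
    if not sys.platform.startswith("win"):
        raise RuntimeError("This check only applies to Windows workstations.")
    # Nonzero if the calling process's token is in the Administrators group.
    return bool(ctypes.windll.shell32.IsUserAnAdmin())


if __name__ == "__main__":
    if running_as_admin():
        print("WARNING: this session has administrator rights; use a standard user account for daily work.")
    else:
        print("OK: running as a standard (non-administrator) user.")
```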

The privacy team and  health care lawyers at Stoel Rives are prepared to help you minimize risks and mitigate losses posed by internal and external threats. Give us a call to learn more about how we can help you protect your business.
