We get it. You feel like the flavor has gone out of this gum. You’ve chewed on the security stick long enough. But chew on this for just a minute longer.
In early March 2023, the password manager LastPass revealed its second data breach at the hands of the same bad actor. The attack point wasn't malware. It was unpatched software on an employee's home computer. The data thieves scored a goldmine of unencrypted customer data from LastPass' 25 million users, a jackpot that included login credentials for many companies' accounts.
There’s no magic solution to protect your personal, company, customer, and employee data. Staying ahead of cyber risks is a game of Whack-a-Mole: there’s always that next grinning head popping up.
The LastPass breach checks so many common risk exposure boxes that we can’t help but share some practical safeguards with you.
Work from home blurs the lines between company-owned assets and personal ones. Clearly, a company of LastPass’ size and resources has the ability to implement policies governing equipment use and access to company data.
But when personal equipment is used for company business without tight controls, it’s easy to see how attackers gain access. If you or your employees are using any personal equipment for company work, then it’s time to implement strict controls.
Unpatched software on any equipment is an open invitation for intrusion. This brings us back to the need for centralized software management: IT should push updates to all devices quickly, without waiting on user intervention.
If your organization uses custom or third-party applications, they should be monitored regularly for security holes. Applications often rely on third-party APIs to perform functions, and these APIs are black boxes whose security depends on the original developer maintaining them.
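If your stack includes Python, one low-effort starting point is asking the package manager itself which dependencies have fallen behind. Here's a minimal sketch, assuming pip-managed packages (dedicated scanners such as pip-audit or GitHub's Dependabot go further by checking known-vulnerability databases):

```python
import json
import subprocess
import sys

# Ask pip which installed packages have newer releases available.
result = subprocess.run(
    [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
    capture_output=True, text=True, check=True,
)
stale = json.loads(result.stdout)

if not stale:
    print("All packages are current.")
for pkg in stale:
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```

Stale doesn't automatically mean vulnerable, but packages that lag behind their upstream releases are exactly where unpatched holes accumulate.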
Create and maintain your software inventory. Your IT team will use this as a reference to ensure these tools are actively managed.
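As one illustrative slice of that inventory, here's a short Python sketch that records every package visible in a Python environment (the output file name is our own, and a real inventory would also cover operating systems, desktop applications, browser extensions, and SaaS subscriptions):

```python
import csv
from importlib import metadata

# Snapshot every Python distribution visible in this environment.
rows = sorted(
    (dist.metadata["Name"] or "unknown", dist.version)
    for dist in metadata.distributions()
)

with open("software_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "version"])
    writer.writerows(rows)

print(f"Recorded {len(rows)} packages in software_inventory.csv")
```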
And now we come to one of our favorite hot buttons: shadow IT.
Shadow IT is the practice of users acquiring apps, tools, and software that have not been reviewed and approved by IT. For small companies without an experienced IT team, shadow IT is especially risky.
Employees are often looking for the most effective way to get a particular job done, and their current toolkit doesn’t do it for them. They aren’t aware of the potential risks these one-off tools create for the company. For this reason, every company should adopt and enforce a shadow IT policy that’s appropriate for its situation.
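Enforcement starts with visibility. Building on the inventory sketch above, here's a minimal example (the file names are hypothetical) that flags anything installed but not on IT's approved list:

```python
import csv

# Hypothetical approved list maintained by IT, one name per line.
with open("approved_software.txt") as f:
    approved = {line.strip().lower() for line in f if line.strip()}

# Inventory produced by the earlier sketch.
with open("software_inventory.csv", newline="") as f:
    installed = [(row["name"], row["version"]) for row in csv.DictReader(f)]

unapproved = [(n, v) for n, v in installed if n.lower() not in approved]

for name, version in unapproved:
    print(f"Shadow IT candidate: {name} {version}")
print(f"{len(unapproved)} of {len(installed)} packages are unapproved.")
```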
Acquiring login credentials is easier, and less likely to be detected, than planting malware. Data thefts like the one at LastPass often include massive numbers of login credentials, which multiplies the value of the stolen data. One way to reduce the risk of data loss is to implement zero trust. Instead of the usual one-time verified login, users are required to reconfirm their identity before accessing a particular function.
While this can be annoying for your users, it introduces a more granular verify-then-trust approach to data security. This isn’t a simple practice to implement, but it’s worth exploring with your experienced technology team.
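To make the verify-then-trust pattern concrete, here's a minimal Python sketch (the names and the five-minute window are illustrative; a production system would tie the identity check to a real MFA or passkey challenge):

```python
import time
from functools import wraps

# Hypothetical session store: user id -> timestamp of last identity check.
LAST_VERIFIED: dict[str, float] = {}
FRESHNESS_WINDOW = 300  # seconds; re-verify identity every 5 minutes

def verify_identity(user_id: str) -> bool:
    """Placeholder for a real challenge (MFA prompt, passkey, etc.)."""
    print(f"Re-verifying identity for {user_id}...")
    return True  # assume the challenge succeeds in this sketch

def requires_fresh_auth(func):
    """Gate a sensitive function behind a recent identity check."""
    @wraps(func)
    def wrapper(user_id: str, *args, **kwargs):
        last = LAST_VERIFIED.get(user_id, 0.0)
        if time.time() - last > FRESHNESS_WINDOW:
            if not verify_identity(user_id):
                raise PermissionError(f"{user_id} failed re-verification")
            LAST_VERIFIED[user_id] = time.time()
        return func(user_id, *args, **kwargs)
    return wrapper

@requires_fresh_auth
def export_customer_records(user_id: str) -> None:
    print(f"{user_id} exported customer records")

export_customer_records("jsmith")  # triggers a fresh identity check
export_customer_records("jsmith")  # within the window, no extra prompt
```

The key design choice is that trust expires: a stolen session cookie or a walked-away-from laptop only buys an attacker access until the next freshness check.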
And then there is ChatGPT, Bing, and whatever new AI flavor of the day appears. Without realizing it, users are handing over company data that should never become public. It’s important to remember that each piece of data you feed to one of these tools becomes part of the ever-growing shared data pool.
AI uses each data point to learn and build its knowledge library. Consider, for example, the reported case of a physician who fed ChatGPT a patient’s medical information, including a Social Security number and other personally identifying information. There’s no undo button for that kind of exposure.
Some companies, such as Wal-Mart, have already implemented a strict ban on ChatGPT. That’s one approach, and it’s easy to enforce with DNS web content filtering. Whether you take a hard line against AI tools or a more nuanced approach, you do need to think carefully about user education and responsible guardrails.
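For a sense of what DNS filtering is doing under the hood, here's a toy Python sketch of the core idea: before a lookup is answered, the requested domain is checked against a blocklist. (The blocklist entries are illustrative; in practice you'd configure a filtering resolver such as Pi-hole, NextDNS, or OpenDNS rather than write your own.)

```python
# Domains to refuse to resolve; subdomains are blocked too.
BLOCKED_DOMAINS = {"chat.openai.com", "openai.com"}

def is_blocked(hostname: str) -> bool:
    """Block the domain itself and any of its subdomains."""
    parts = hostname.lower().rstrip(".").split(".")
    # Check every parent domain: a.b.c -> a.b.c, b.c, c
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

for name in ("chat.openai.com", "api.openai.com", "example.com"):
    verdict = "BLOCK" if is_blocked(name) else "allow"
    print(f"{verdict}: {name}")
```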
ChatGPT says it’s learning to listen for prompts that might expose confidential information. While that is a positive step forward, every user and organization still has to take responsibility for the data being fed into the AI learning pool.
"We're in the cloud so they take care of it for us" isn't a plan. In fact, it's a familiar but flawed assumption. Remember, the cloud is just someone else's server somewhere else. It's your data so guard it carefully.