Do you support ancillary cloud services?

Last week I was helping an MSP recover from a cloud server account shutdown.

This was a client that was NOT using the MSP's standard cloud solution. The MSP was mostly geared to sell Azure solutions, but this client was on Linode.

What was wrong?

2-Factor Authentication (2FA) was turned off and no one had noticed. The server was also wide open to all SSH requests, which meant hackers could easily reach it in a variety of ways. SSH provides remote access into systems, making it critical to track and control. Since many organizations don't have centralized oversight and control of SSH, the risk of unauthorized access keeps growing. These open doors made the MSP's client's Linode servers an easy target for attackers.

What happened in this attack?

The hacker broke into this 'non-standard' Linode server and changed the password. The company had been without access to these servers for nearly two weeks by the time I was called.

The MSP had completely lost access to and control of a server farm. The hacker had shut down services for the handful of clients with data on the server. One of them was a large account that relied entirely on the Linode servers hosting its reporting system, a system everyone knew was critical to their operations.

SSH was wide open—on top of 2FA not being configured, I was able to SSH directly into the server. Even after we restored access to the server, which took some maneuvering, I could still SSH straight into the Linode servers.
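To make the exposure concrete, here is a minimal sketch of an audit that flags the kind of risky sshd settings involved here. The sample config below is invented for illustration; on a real server you would point it at /etc/ssh/sshd_config.

```shell
# Sketch: flag risky sshd_config settings. The sample file below is
# invented; replace "$cfg" with /etc/ssh/sshd_config on a real server.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
PermitRootLogin yes
PasswordAuthentication yes
MaxAuthTries 20
EOF

risk_count=0
for risky in "PermitRootLogin yes" "PasswordAuthentication yes"; do
  if grep -qx "$risky" "$cfg"; then
    echo "RISK: $risky"
    risk_count=$((risk_count + 1))
  fi
done
rm -f "$cfg"
```

Locking these down (key-only authentication, no root login) and restricting source IPs in the cloud provider's firewall would have closed the door this attacker walked through.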

No lockout policy was set—as we evaluated how the hacker got in, it was evident there were no lockout procedures enabled on the servers. The hacker hammered the server with a brute-force attack and was ultimately able to break in, with nothing in place to stop such an attack.
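The missing behavior can be illustrated with a toy sketch: after a few failed attempts from one source, further attempts are refused before the password is even checked. In production you would use a tool such as fail2ban or the provider's own controls rather than hand-rolled logic; the function name and IP below are invented.

```shell
# Toy lockout: refuse further attempts from a source after 3 failures.
max_failures=3
declare -A failures   # failed-attempt count per source IP (bash 4+)

attempt_login() {   # usage: attempt_login <ip> <password_ok: yes|no>
  local ip=$1 ok=$2
  if [ "${failures[$ip]:-0}" -ge "$max_failures" ]; then
    echo "locked out: $ip"
    return 1
  fi
  if [ "$ok" = yes ]; then
    echo "login ok: $ip"
  else
    failures[$ip]=$(( ${failures[$ip]:-0} + 1 ))
    echo "failed (${failures[$ip]}/$max_failures): $ip"
    return 1
  fi
}

attempt_login 203.0.113.9 no || true
attempt_login 203.0.113.9 no || true
attempt_login 203.0.113.9 no || true
attempt_login 203.0.113.9 no || true   # refused before the password is checked
```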

Bottom line: don’t forget to shore up all of your cloud accounts—especially those services that might fall off the radar.

What could an attack like this do to you?

In this case, the client left. Maybe this wasn't a good-fit client, you might say. What the owner of this MSP told me is that the client represented nearly 20% of his monthly recurring revenue. They weren't on the standard stack simply because they ran platforms that would have been hard to migrate completely over to a specific cloud solution.

If you're not looking under the hood at all of the different cloud services your clients are using, especially the ones your client sees as critical to their business continuity, are you doing enough?

What can you do to prevent a situation like the one described above?

Inventory your cloud service platforms—keep regular tabs on what platforms your organization is supporting. Make it a point to evaluate this list every time you add a new client and at minimum check that all services are accounted for on a monthly basis.
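As a sketch, comparing what a client actually runs against your standard stack takes only a few lines. The platform lists below are made up for illustration; in practice they would come from your documentation system or billing exports.

```shell
# Invented lists: your supported stack vs. what one client actually uses.
supported=$(mktemp); in_use=$(mktemp)
printf '%s\n' azure microsoft365 | sort > "$supported"
printf '%s\n' azure linode backblaze | sort > "$in_use"

# Platforms the client uses that fall outside the standard stack:
off_radar=$(comm -13 "$supported" "$in_use")
echo "$off_radar"
rm -f "$supported" "$in_use"
```

Anything this prints is exactly the kind of service that tends to fall off the radar.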

Double check that minimum security is deployed—with oddball cloud services, there is a higher chance that your team misses a couple of things simply because they're not accustomed to working with the platform. Make sure to double check at regular intervals—at least monthly—that all systems are getting updated AND that your minimum access requirements are in place and tested. More often than I'd like to admit, I see rogue cloud components leaving an MSP vulnerable.
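One way to keep that monthly check from being forgotten is to schedule it. The script path and address in this crontab line are hypothetical; the point is that the audit runs on a calendar, not on memory.

```
0 6 1 * * /usr/local/bin/baseline-audit.sh 2>&1 | mail -s "monthly cloud baseline audit" ops@example.com
```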

Explicitly check that 2FA is enabled—this point is probably obvious, but I've come across so many accounts with 2FA disabled or misconfigured that I want to bring complete visibility to the problem. Most people think of 2FA as a safety net, so weak or reused passwords—even ones my team can crack outright—get tolerated because 2FA is "in place." Is it really?
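Here is a sketch of the kind of explicit check I mean, run against a made-up account export. Real providers expose this data through their admin consoles, APIs, or CLIs; the CSV format and account names below are invented.

```shell
# Invented account export: flag admin accounts without 2FA enabled.
accounts=$(mktemp)
cat > "$accounts" <<'EOF'
alice,admin,2fa=on
bob,admin,2fa=off
carol,user,2fa=on
EOF

no_2fa=$(awk -F, '$2 == "admin" && $3 != "2fa=on" { print $1 }' "$accounts")
rm -f "$accounts"
echo "admins without 2FA: ${no_2fa:-none}"
```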

Managed services providers are getting hacked mainly because of things that get overlooked or forgotten. That's not to say you don't have the tools or personnel to do the work. The problem (like most issues) is that things inherently get missed. That's why I created a service to help MSPs make sure missed items don't turn into major ransomware disasters.