Ransomware Recovery Guide for MSPs

Data, and the value it represents to a business, is key not only to profitability but to the business’s very existence. Bad actors (hackers) want to exploit that value for their own financial gain. When your client gets hit with ransomware, they lose access to their data – the lifeblood of their business. When it happens, the inevitable questions you’ll ask yourself as an MSP are: What could I have done differently? Can I get my client’s data back? Will both of our organizations survive, and if so, what can I do differently next time?


Ransomware /noun/:
a type of malware that encrypts your client’s data files, making them unusable by your client’s applications. It generally comes in two forms: Locker, which encrypts the computer’s entire hard drive, and Crypto, which encrypts only specific, yet extremely important, files. Once the files are encrypted, the hacker demands a fee (usually paid in bitcoin) to decrypt your data.

Ransomware is the most malicious malware derivative to which one can be exposed – so malicious, in fact, that it can leave your clients struggling to operate. According to a 2016 State of Ransomware report conducted by Osterman Research, 20% of ransomware cases leave the organization unable to recover and ultimately lead to the closure of the business.

Read the rest here: https://axcient.com/ransomware-recovery-guide-for-msps/


Data Migration – The Role of Data in a Cloud World

As companies embark on their journey to the cloud, one often overlooked component is the migration of their data. Neglecting the strategic impact of moving your data to the cloud undermines the value you place on that data and ultimately drives up your operational expense. Data, and the insight it provides, is core to your company’s ability to generate business insight, maintain competitive leadership, differentiate its products, and strengthen the relationship you have with your customers.

Application migration strategies for the cloud abound from Gartner, IDC, and others. I tend to like Gartner’s 5R model, detailed below:

Rehost – move the entire application to the cloud in an IaaS framework

Refactor – move the entire application to the cloud in a PaaS framework

Revise – modify or rewrite the application to work in a cloud provider’s environment

Rebuild – completely rewrite the application’s functionality in a PaaS environment

Replace – purchase a SaaS solution that mirrors the original application’s functionality

As important as these models are, the role of your data is often minimized or overlooked within them. When I consult with customers on their cloud strategies, their focus is most often purely on the applications. Companies that have migrated quickly learn that, beyond the unexpected increase in the cost of data in the cloud, sometimes 3x to 5x greater per GB, there are unexpected consequences for the access, control, and governance of their data.

Access to your data, and the benefits you can derive from that data, depends directly on where that data lives. Where once all enterprise data lived within the confines of a corporate data center, it is now highly distributed across multiple locations, with varying access protocols that create barriers to achieving heterogeneous data insight.

As you build out your cloud migration strategy, elevate the role of your data and consider how data factors into that migration. A hybrid approach to where the data resides is a critical component of your strategy. Options abound for placing your data in the cloud, near the cloud, or even segmenting its location based on use, metadata, localization, and data sovereignty.
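
To make segmentation concrete, here is a minimal sketch of what such a placement policy might look like in code. The data classes, target locations, and reasons are hypothetical and purely illustrative – not a real API or a product recommendation.

```python
# Hypothetical segmented data-placement policy: each data class maps to a
# target location and the rationale for putting it there.
PLACEMENT_POLICY = {
    "customer_pii_eu":   {"location": "on_prem_eu", "reason": "data sovereignty"},
    "analytics_working": {"location": "near_cloud", "reason": "low-latency access from cloud compute"},
    "app_transactional": {"location": "cloud",      "reason": "co-located with the migrated application"},
    "archive_cold":      {"location": "on_prem",    "reason": "lowest cost per GB"},
}

def placement_for(data_class: str) -> str:
    """Return the target location for a data class, defaulting to on-premises."""
    return PLACEMENT_POLICY.get(data_class, {"location": "on_prem"})["location"]

if __name__ == "__main__":
    for data_class, rule in PLACEMENT_POLICY.items():
        print(f"{data_class:18s} -> {rule['location']:10s} ({rule['reason']})")
```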

Where your data lives matters.

Building Applications with NFS at DeveloperWeek San Francisco


Most people don’t spend a normal weekend pulling an all-nighter writing code, especially code written in response to a challenge to create an application that incorporates technology from hackathon sponsors. Yet that’s exactly what over 800 developers did at the DeveloperWeek Hackathon San Francisco. At the event, billed as the nation’s largest challenge-driven hackathon, over 100 teams competed to win prizes and be recognized by their peers as the top application developers.

As a featured sponsor, NetApp once again led the contest, with well over 20 teams accepting our challenge to build creative and innovative applications using NetApp’s Cloud Volumes for AWS, a new cloud service that delivers more predictable performance than traditional cloud NFS options. Since consuming Cloud Volumes is straightforward, many of the teams’ questions revolved around how they could take advantage of all the cloud volume features to distinguish their applications from the competition.
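
To give a sense of how low the barrier to entry was, here is a minimal sketch of getting from zero to a writable volume on Linux. The export address and mount point are placeholders, not values from an actual Cloud Volumes deployment, and the mount step requires root privileges.

```python
# Minimal sketch: mount a Cloud Volumes NFS export and write a file to it.
import subprocess
from pathlib import Path

EXPORT = "10.0.0.10:/demo-volume"       # placeholder NFS export from the Cloud Volumes console
MOUNT_POINT = Path("/mnt/cloudvolume")  # placeholder local mount point

MOUNT_POINT.mkdir(parents=True, exist_ok=True)
subprocess.run(
    ["mount", "-t", "nfs", "-o", "rw,hard,vers=3", EXPORT, str(MOUNT_POINT)],
    check=True,
)

# Once mounted, the volume behaves like any local filesystem.
(MOUNT_POINT / "hello.txt").write_text("written from the hackathon demo app\n")
```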

Once again, NetApp was thrilled with the submissions. “The quality and ingenuity of the teams were the strongest we’ve ever seen,” remarked Saleem Mohamed, Senior Cloud Evangelist for NetApp. “Ranking the winners came down to single-point differences, with each of the winners really impressing the judging panel.”

Taking first place in this year’s DeveloperWeek NetApp challenge was team MedSync. Focused on improving doctor/patient information and communication, MedSync automated the transcription and processing of medical records into a fully integrated, patient-accessible system. Taking advantage of the Express framework for Node.js, MedSync was able to incorporate AWS Lambda, Docker containers, and NetApp Cloud Volumes into their design and demonstrated an impressive working prototype. Ken Lee, NetApp Data Protection & Migration Services manager, remarked, “MedSync’s command and incorporation of the technology distinguished them as the challenge winner.”

Tackling the challenge of on-demand smart trash pickup earned team Garbage Collection (GC) a strong second-place prize. Utilizing the features of NetApp Cloud Volumes to mount, clone, and connect to an SCM build process, team GC was able to validate data migrations, perform data algorithm tests including validation of processing and execution times, and build a machine learning model for training and evaluating their garbage collection application. Nikash Narendra of team GC remarked that “this capability extends GitLab’s build and test tooling into the full-fledged CI that is required for today’s data-driven systems.”

Team Captain Flint took third place in the hackathon by combining Google connectivity with Alexa voice integration to simplify Kubernetes deployment, monitoring, and scaling. One of the key components of their solution addressed the challenge of persistent volumes for containers, a need met by the release of the NetApp Docker Volume Plugin (nDVP). “Integration of these key components distinguished team Captain Flint above their peers,” remarked Ram Kodialbail, storage administrator for NetApp. “Their application marks another evolution in the deployment of containers in the data center.”


This year marked the 4th year of NetApp’s participation in the DeveloperWeek hackathon series. Each year brings stronger competition, and this year was our most difficult to judge. In addition to the winners, we gave out nine honorable mention prizes to teams that created truly impressive applications utilizing NetApp Cloud Volumes as part of their solution.

Turn your technology stack into rocket fuel for innovation: NetApp® Cloud Volumes for Amazon Web Services (AWS) delivers enterprise-class performance and capabilities to your cloud applications. Our native cloud service enables analytics, DevOps, enterprise applications, backup, and disaster recovery. Discover more at the NetApp Cloud Volumes webpage.


Hackathon Winners Impress at DeveloperWeek New York

For most teams, building a working and demonstrable application from the ground up in 24 hours is an impossible task. To achieve the impossible, you need to trust your teammates, be proficient in your coding skills, and have a development environment that lets you focus on the application without worrying about the infrastructure. Sleep is clearly optional. Three teams at the DevWeek NY Hackathon achieved exactly that, building prize-winning applications.

Read the Entire Blog Here:

Rationalizing the Move Toward the Software-Defined Data Center to Achieve IT as a Service

Repost from my article on Data Center Knowledge: Software Defined Data Center

With the shifts in the economic purchase cycle, organizations are embarking on digital transformation: the development of new competencies built around the capability to be more agile, consumer-oriented, innovative, connected, aligned, and efficient. IT is expected to be agile, responding to pressure to reduce time to market, all while managing cost and reducing complexity. Follow the link to understand where the SDDC fits into your data center strategy.

Building in the Cloud: How Teams Design and Architect Cloud-based Companies

Interested in learning how born-in-the-cloud companies get started? Join me on BrightTalk as I lead a discussion with two organizations: one a potential startup born over a weekend to address the sell-by-owner real estate market, and another that is now thriving in the cloud, focused on building and implementing solutions to address high-velocity technology change. In this webinar, we’ll cover the many intricacies of building in the cloud and scaling operations to address real business challenges. We’ll then open it up to your questions on how to start up and operate in the cloud. Join here: Born in the Cloud Webinar

Exploring the Software Defined Data Center – Live September 21st

Webinar: Tips for Recognizing Key Data Center Trends and When to Incorporate SDDC into Your Data Center Architecture 


Abstract: In the evolutionary journey of the enterprise data center, key inflection points drive the adoption of new technologies such as virtualization, converged infrastructure, cloud, and containers. As business demands drive faster IT acceleration and innovation, new architectures have emerged, giving rise to the Software Defined Data Center. Join this webinar to gain an understanding of the key trends affecting today’s data center, the components of an SDDC, and what you need to know to align your data center plans with the future of your business.

Join me live September 21st as I explore the Software Defined Data Center

The Power of Pi: And the Open Source Future

Sometimes, you just have to get into the game. Such was the case for me in gaining deeper knowledge of how developers code. And how the cloud frees everything.

It’s been a very long time since I did meaningful coding (COBOL), and I figured it was time to gain hands-on knowledge of how today’s coders work. I had played with UNIX a bit 20 years ago while with KPMG, but I wanted to get my hands dirty and better understand how to actually code in today’s environment, including writing APIs to AWS. I didn’t have a spare computer around to reimage, so I did the next best thing.

I picked up a Raspberry Pi through Amazon and went all in: learning how to SSH into the Pi, code in Python, and write API calls to move files to S3 (a sketch of that upload script appears below, after my observations). And I feel good. Here are a couple of quick observations:

The Open Source community is amazing. Everything you need to develop applications is readily available, and essentially free. I’ve been speaking and presenting on the rise of Linux and OpenStack for years; however, I now have an even greater appreciation. The new breed of developers is built on Open Source.

Developers have unprecedented power. They can write incredibly powerful programs on very small and affordable computers. Amplify that power with the power of cloud computing, and they could rewrite any application. Can you imagine the banking industry, with its time-honored dependency on mainframe computing, having its code rewritten for the cloud? It would be the final dagger that ends the mainframe for good.

The future belongs to 20-somethings (and younger). They get it. The world has changed. Dependencies have been broken down. My generation needs to adapt, or just get out of the way.
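
As promised above, here is a minimal sketch of the kind of S3 upload script I ended up with on the Pi. It assumes boto3 is installed (pip install boto3) and AWS credentials are configured locally; the bucket name and file path are placeholders.

```python
import boto3

# Create an S3 client using the locally configured AWS credentials.
s3 = boto3.client("s3")

def upload(local_path: str, bucket: str, key: str) -> None:
    """Upload a single file from the Pi to an S3 bucket."""
    s3.upload_file(local_path, bucket, key)
    print(f"uploaded {local_path} to s3://{bucket}/{key}")

if __name__ == "__main__":
    # Placeholder file path and bucket name; substitute your own.
    upload("/home/pi/data/sensor_log.csv", "my-pi-demo-bucket", "logs/sensor_log.csv")
```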

