I was reading an article the other day about the large number of people who traveled during the pandemic by car or RV. It indicated that RV and trailer sales and rentals were up significantly over past years. It reminded me of how, when I was younger, my family traveled by car from Texas to Boston, Massachusetts, every summer for about ten years to visit my grandparents. I remember the effort my dad put into every detail of making sure the car was ready: changing the oil and spark plugs, tuning the carburetor, checking the tires and air pressure, and testing all the lights, headlights and brake lights alike, until he knew every inch of that car and that it was all in good working order. He knew the journey we were taking was going to have challenges, it always did, but he felt he significantly reduced the opportunity for problems through the analysis he did of our transportation before we left. He would also go through the Rand McNally Atlas, latest edition of course, to review the highways we had taken before and any new highways that might make our trip easier and faster. We always started with a plan! We knew where we wanted to go, and we had a strong comfort level with how we were going to get there.
Experiences like these remind me of cloud migrations and the need to prepare properly, because preparation affects essentially every aspect of your cloud journey, from the time of arrival to the quality and cost of the trip.
I have had the opportunity to speak with technology leaders, CTOs, cloud architects, and engineers from many industries about their journey of migrating to and managing their environment in the cloud. I have also spoken with leaders at several national and regional managed service providers (MSPs) about their experiences and issues in supporting clients on that journey. In both sets of conversations, several common points or issues always seem to come up.
I am going to cover those things today. They all fall under the category of “know your environment.” It should not be surprising that, with the complexity of today’s IT environments, made up of on-premises data centers, private clouds, and public clouds, IT leadership faces a significant challenge in fully understanding its current environment across all of those areas. The complexity grows further when the business is in acquisition mode. An acquisition introduces a totally new environment that often must be integrated into a common operations model, and many acquisitions depend on the synergies that can be developed in IT. In those cases, you not only need to know your own environment, you must also learn the environment of the acquired company.
Unfortunately, most Generation 1 Discovery tools employed to assess and model a current IT environment cannot fully evaluate it, leaving IT teams with significant manual effort to truly understand what they have. The concepts I will discuss are valid whether you are a company with nothing in the cloud today and just evaluating what you could put there, a company committed to on-premises only with no aspirations of migrating to the cloud, or a company that has deployed applications in one or several clouds and wants to continually evaluate how to bring new efficiencies to its environment and identify new opportunities to leverage the clouds.
The first key point in a “know your environment” project should be to deploy a Discovery tool that not only analyzes the infrastructure but can also provide a deep-dive analysis of all your applications. This multidimensional visibility across your data centers and multi-cloud environment is absolutely critical to many of the decisions your IT team will need to make in support of the environment. The tool should profile your complete environment across network components, services, APIs, databases, applications, and clusters. It should provide details of individual dependencies and processes, along with application names, descriptions, and release and patch levels, all of which are critical to ensuring a seamless migration and mitigating security issues. This information matters even if there is no consideration of migrating to the cloud at this point. To mitigate issues that can come down the road, much like my father did with his deep assessment of our vehicle before the long trip from Texas to Boston, you must have access to this information. Most tools in the marketplace stop short and require the IT team to gather and input much of it manually. Because that information is hard to reach, the assessment in most cases gets done only occasionally. With a fully automated tool, the assessment can run as often as required, providing significant insight into potential security vulnerabilities and other key information needed to run an efficient and secure shop. This deep analysis also helps your team find dormant applications that are no longer being accessed; these can often be retired, eliminating their costs. It is also critical for identifying “end-of-life” applications that must be refreshed, rebuilt, or replaced, whether they are on-premises or in the cloud.
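To make the idea concrete, here is a minimal sketch of what one discovered application record might look like, with a simple check for the dormant and end-of-life cases described above. The field names and thresholds are my own assumptions for illustration; a real Discovery tool defines its own schema and policies.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical shape of one discovered application record
# (illustrative only; not the schema of any actual product).
@dataclass
class AppRecord:
    name: str
    description: str
    release: str
    patch_level: str
    dependencies: list = field(default_factory=list)  # linked services, DBs, APIs
    last_accessed: date = None
    end_of_life: date = None

def flag_for_review(apps, today, dormant_days=180):
    """Return (name, reason) pairs for apps that look dormant or past end-of-life."""
    flagged = []
    for app in apps:
        dormant = (app.last_accessed is not None
                   and (today - app.last_accessed).days > dormant_days)
        eol = app.end_of_life is not None and app.end_of_life <= today
        if dormant or eol:
            flagged.append((app.name, "dormant" if dormant else "end-of-life"))
    return flagged
```

Running a check like this on every automated assessment, rather than occasionally by hand, is what surfaces the retirement candidates and refresh work mentioned above.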
The second key point also relates to the Discovery tool: its ability to support your migration to the cloud, whether from an on-premises environment or from cloud to cloud, once the tool has identified all the information about the environment. Second-generation tools can triage applications into categories such as lift-and-shift, lift-and-refresh, and lift-and-transform, and can create prioritized groupings of applications based on that triage to feed your migration plan. The Discovery tool should then be able to take this information and help build a blueprint of the “to be” environment in any of the major cloud providers. That blueprint should include costing information for each provider, allowing you to evaluate and build a budget for the “to be” environment. Again, with most Gen 1 tools, these steps and analyses are manual. Gen 2 tools gather and present this information in a way that lets you and your team use the tool to model scenarios, giving you the critical ability to reduce surprises post-migration.
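A toy sketch of that triage-and-grouping step might look like the following. The rules here (cloud OS support, dependency count) are placeholder assumptions purely to show the shape of the logic; a Gen 2 tool would derive its categories from the full discovery data set.

```python
# Hypothetical triage rules, for illustration only.
def triage(app):
    """Classify an app into a migration category using two placeholder criteria."""
    if app["os_supported_in_cloud"] and app["dependency_count"] <= 3:
        return "lift-and-shift"       # simple apps move as-is
    if app["os_supported_in_cloud"]:
        return "lift-and-refresh"     # movable, but needs updating
    return "lift-and-transform"       # requires rework before it can move

def migration_waves(apps):
    """Group apps by category, simplest first, to seed a prioritized migration plan."""
    order = ["lift-and-shift", "lift-and-refresh", "lift-and-transform"]
    waves = {category: [] for category in order}
    for app in apps:
        waves[triage(app)].append(app["name"])
    return [waves[category] for category in order]
```

Feeding each wave, in order, into the blueprint and costing step is what turns raw discovery data into a budget-ready migration plan.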
The third area came up consistently from both companies and managed service providers: the handoff between Discovery tools and migration tools, which is typically manual. That manual handoff creates additional work for the IT team to ensure that the wealth of data gathered about the current environment is integrated accurately and seamlessly into the tool your team will use to migrate to the cloud. This Discovery-to-Migrate handoff is critical to a smooth migration that reduces or eliminates surprises. Coordinating these tools also gives the IT team the ability to evaluate and define consistent governance strategies across all environments. Governance in particular was mentioned many times as a challenge managed service providers encounter when trying to help clients improve consistency and reduce complexity while moving from on-premises into a single-cloud or multi-cloud environment.
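In spirit, automating that handoff means exporting the discovery output in a form the migration tool can ingest directly, with governance tags carried along, so nothing is re-keyed by hand. The payload format below is entirely hypothetical, just to illustrate the idea; any real pair of tools would define its own interchange schema.

```python
import json

def export_for_migration(apps, governance_tags):
    """Serialize discovery output into a (hypothetical) handoff payload
    that a migration tool could ingest without manual re-entry."""
    payload = {
        "schema_version": "1.0",
        "governance": governance_tags,   # applied uniformly across environments
        "applications": [
            {
                "name": app["name"],
                "category": app["category"],          # from the triage step
                "dependencies": app["dependencies"],  # from discovery
            }
            for app in apps
        ],
    }
    return json.dumps(payload, indent=2)
```

Because the governance tags travel with every exported application, the same policies can be enforced consistently whether the destination is one cloud or several.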
Today’s IT environments are significantly more complex to assess and understand than the old cars my dad would go over before our cross-country trips. That said, doing that deep evaluation, “know your environment,” whether you are migrating applications or not, is the most important step an IT department or managed service provider (MSP) can take to be sure that, whatever your destination, you get there safely, securely, efficiently, and on time.