
Profectus Associates

Management and Business Performance Consulting


Ralph Burns

Continuous Enterprise Optimization (CEO)

October 20, 2023 By Ralph Burns

Throughout my lifetime there have been so many technological game changers. Each one has had a real impact on our business world, and then another is announced. Over the last few years, blockchain, cloud and edge computing, 5G, and Artificial Intelligence have dominated the headlines. It’s hard to read any business or tech article that does not mention Artificial Intelligence (AI) at some point. I have had the great fortune of working with a number of technology-driven companies whose leadership understood the value and the opportunity these new technologies could provide.

While purging some old files, I came across an article I’d written back around 2000 that got me thinking about whether it still had relevance today. At the time I wrote it, the world had just survived the Y2K transition. Offshore development and outsourcing were just beginning to take root, driven by a shortage of U.S.-based tech personnel (especially COBOL programmers). This led many companies to move offshore, primarily to India, for their code remediation needs. At that time, the IT outsourcing world leaders were all U.S. based, focused on data center (and desktop) outsourcing and management, systems development, and off-the-shelf enterprise software package implementation, and beginning to move heavily into all things internet. Those of us who had been in this industry for some time understood that technical resource labor arbitrage would soon change the game. I was then responsible for marketing and portfolio management globally for one of the largest systems integrators. The company’s high-value consulting arm, focused on management consulting, developed a series of programs that identified the critical issues most companies face and defined them as the CEO Agenda.

Our unit’s focus was much more on solutions consulting and IT. If you remember that timeframe, the “.com bubble” was still bubbling. Companies were faced with how to leverage their investments to keep up in a changing marketplace while continuing to evolve and improve internal processes and operations. We delivered many solutions aimed at helping companies stay ahead, get ahead, or chart new territory with IT investments.

The concept I envisioned was based on what I called Continuous Enterprise Optimization (CEO) or the CEO Operations Agenda. Reading through my notes and rekindling my thoughts relating to what’s going on today, I felt very strongly that even 20 years later the concept still holds.

Today I work with several small companies, some of them startups, and they all have one thing in common: the need to fully optimize the use of all of their resources, especially capital. The very survival of these small, young companies depends on it.

While businesses have been optimizing their operations for centuries, I believe the pace of change accelerated in the years 2000-2010, thanks to the rise of new technologies and the increasing globalization of the economy.

Here are several ways that businesses “optimized” in the years 2000-2010:

  • Successful businesses embraced technology: Back then we helped companies install, migrate to, and upgrade new technologies, such as enterprise resource planning (ERP) systems, customer relationship management (CRM) systems, and e-commerce platforms, to improve their efficiency, productivity, and ultimately revenue. These systems helped businesses manage the ever-increasing amount of data being collected.
  • Companies started to outsource non-core functions: This was the golden age of outsourcing.  Businesses began to focus more on their core competencies and outsource non-core functions, such as manufacturing, IT, and customer service, to specialized providers. This allowed them to reduce costs and focus on core strengths.
  • Businesses continued to move toward improvement methodologies: The programs of W. Edwards Deming and Philip Crosby, the gurus of the day, became the center of many companies’ learning investments. Their books and classroom materials were a staple for me and most professionals in our industry. Businesses continued to adopt lean manufacturing principles (which began in the late 1970s and accelerated in the ’80s and ’90s) to cut waste and improve efficiency in their production processes. Six Sigma (improving processes by reducing defects and variability), Kaizen (the Japanese philosophy of continuous improvement), and Business Process Management (BPM) were standard approaches to improving the performance of business processes.
  • Globalizing operations: Improved telecommunications and the rise of the internet allowed businesses to expand their operations into new markets, reach new customers, and take advantage of labor arbitrage to reduce costs.
  • The shift to self-service driven by the internet: Self-service channels on the internet pushed companies to focus on the customer experience and on improving it to build loyalty and drive sales.

Some of the better-known success stories of companies that I believe embraced the concepts of CEO in the 2000 – 2010 period:

  • Walmart: Walmart used EDI and just-in-time inventory technology to improve its supply chain and inventory management, allowing it to lower prices for customers.
  • Amazon: Amazon used e-commerce to disrupt the traditional retail industry and become the world’s largest online retailer.
  • Dell: Dell used outsourcing to reduce its manufacturing costs and become the world’s leading computer manufacturer.
  • Toyota: Toyota used lean manufacturing to become one of the most efficient automakers in the world.
  • Starbucks: Starbucks focused on improving the customer experience by creating a welcoming atmosphere and offering a wide variety of coffee and food options.

These are just a few examples of how businesses optimized in the years 2000-2010. The pace of change continues to accelerate, and businesses today are still looking for new ways to improve their operations.

Now let’s shift focus to what has happened since 2010 and into today.  Here are some ways that businesses are optimizing in the years 2010 – the present:

  • Successful businesses wholeheartedly embraced digital transformation: Many businesses have undergone a digital transformation, which involves adopting digital technologies to improve various aspects of their operations. This has included the implementation of cloud computing, artificial intelligence, and data analytics to improve operations and customer service.
  • The growth of remote work: The COVID-19 pandemic in 2020 accelerated the adoption of remote work, with many businesses optimizing their operations to support remote teams. This included the implementation of virtual collaboration tools, remote access technologies, and flexible work policies.
  • Increasing focus on customer experience: Businesses have increased their focus on customer experience, and even moved toward anticipating customer needs. This involves using data and analytics to better understand customers and to personalize their experiences. Businesses are also increasingly turning to automation to improve efficiency and reduce costs, including the implementation of robotic process automation (RPA), chatbots, and automated customer service systems.
  • Supply Chain Optimization: Many businesses focused on optimizing their supply chain operations, which included the adoption of just-in-time (JIT) inventory management, predictive analytics, and blockchain technologies. Walmart’s success was a clear signal that these changes worked well.
  • Addressing the need for Cybersecurity Measures: With the increasing reliance on digital technologies, businesses also optimized their operations by implementing critical cybersecurity measures. This included the adoption of multi-factor authentication, encryption technologies, and threat detection and response systems.
  • Applying Sustainability Practices: Many businesses optimized their operations by adopting sustainable practices, including the implementation of energy-efficient technologies, waste reduction strategies, and sustainable sourcing practices.

Some specific examples of continuous optimization in the years 2010 – 2023:

  • Netflix: Netflix morphed from a video-rental-by-mail company into a streaming powerhouse, leveraging data analytics and machine learning to provide personalized content recommendations and tailor the viewing experience for each customer. This has helped Netflix become one of the most popular streaming services in the world. Netflix shut down its mail-order DVD business just last month.
  • Amazon: Amazon has been a leader in optimizing its business through technology. Amazon has used cloud computing to scale its business and to offer a wide range of products and services to customers. The company has implemented robotics in its fulfillment centers to improve efficiency and reduce costs. Amazon’s use of artificial intelligence and machine learning algorithms has also optimized its inventory management, pricing strategies, and customer recommendations. Amazon has been able to leverage the incredible infrastructure it built for itself to become the market-leading provider of cloud infrastructure services to commercial clients.
  • Tesla: Tesla has optimized its supply chain by using a vertically integrated model, which involves controlling all aspects of the production process, from raw material sourcing to manufacturing to distribution. This has allowed Tesla to reduce costs and improve the quality of its products. Tesla has used artificial intelligence to develop self-driving cars. Tesla has also used big data analytics to improve its manufacturing processes and to reduce costs.
  • Google: Google has used cloud computing to power its search engine and other products and services. Google has also used artificial intelligence to develop new products, such as Google Assistant, Google Translate, and Bard (its AI chat platform).

These are just a few examples of how businesses have been optimizing in the last few years. Companies that embraced digital transformation, took advantage of new technologies, and focused on customer experience have been the most successful in recent years. I read recently that over 50% of the Fortune 500 companies of 2000 had gone bankrupt, been acquired, or ceased to exist by 2023. I believe failure to adapt to changing technologies and to optimize their businesses is a significant contributing factor.

Continuous Enterprise Optimization (CEO) refers to the ongoing process of improving and refining a business’s operations, processes, and strategies in order to maximize efficiency, productivity, and profitability. This involves regularly analyzing performance data, identifying areas for improvement, and implementing changes that can enhance the overall effectiveness of the organization.

CEO is based on the fact that there is always room for improvement, and that even small improvements can have a significant impact over time. It is also based on the principle that improvement should be a continuous process, not just a one-time event.

The goal of continuous enterprise optimization is to ensure that a business is always operating at its highest potential and adapting to changes in the market or industry. This may involve the use of various tools and methodologies, such as lean management, process improvement, and data analysis to identify and eliminate waste, streamline workflows, and optimize resource allocation.

CEO can be applied to all aspects of an enterprise, including operations, customer service, product development, and marketing. It can be used to improve efficiency, reduce costs, increase quality, and improve customer satisfaction.

Overall, continuous enterprise optimization is a proactive approach to business management that requires creating a culture of continuous improvement and drives long-term success.

So here are my recommendations for any business to adopt a continuous enterprise optimization strategy.

  1. Embrace change and establish a Culture of Continuous Improvement: The business world is constantly changing. Businesses need to be able to adapt to change in order to stay ahead of the competition. This means being open to new ideas and new ways of doing things. Regularly review and optimize your business processes to eliminate inefficiencies, reduce costs, and improve productivity. This might involve automating repetitive tasks, reorganizing workflows, or implementing new technologies.
  2. Foster a company culture which encourages innovation, learning, and a willingness to adapt and change: Encourage employees to seek out opportunities for improvement and provide them with the tools and resources they need to succeed. Employees are a company’s most valuable asset. Businesses need to empower their employees to make decisions and to be creative. This means creating a culture of innovation and collaboration. Provide opportunities for your employees to develop new skills and knowledge. An informed and skilled workforce can contribute significantly to your optimization efforts.
  3. Businesses must continue to Invest in Innovation: Innovation is essential for businesses that want to stay ahead of the curve. Businesses need to invest in research and development for their products and services. They must stay up to date with the latest technologies and tools which can help improve operations, enhance products or services, and better meet customers’ needs. Artificial Intelligence (AI) is a great example; it is evolving fast. I highly recommend two books that helped me put it all in context: AI 2041 by Kai-Fu Lee and Chen Qiufan, which uses short stories to describe a possible 2041 future with AI, and The Coming Wave by Mustafa Suleyman with Michael Bhaskar, which presents what AI does today and can do tomorrow, and highlights the potential dangers.
  4. Have a relentless focus on Customer Experience: Customer experience is more important than ever before. Businesses need to focus on creating a positive customer experience at every touchpoint. This means providing excellent customer service, offering convenient and easy-to-use products and services, and resolving customer issues quickly and efficiently. Listen to your customers’ feedback and use it to improve your products, services, and customer service. Understanding your customers’ needs and expectations can help you stay competitive and relevant in the market.
  5. Learn to master the use of Data and Analytics: Data and analytics can be used to improve every aspect of a business. Businesses can use data to understand their customers better, to improve their operations, and to make better decisions. Collect and analyze data from various aspects of your business, including sales, operations, customer service, and more. Use this data to identify trends, patterns, and areas for improvement.
  6. Set Clear Goals and Objectives: Define specific, measurable, achievable, relevant, and time-bound goals that align with your business’s overall vision and mission. These goals will guide your optimization efforts and help measure your success. Establish and monitor KPIs that are aligned with your business objectives. These will help track progress and make data-driven decisions.
  7. Build Strong Relationships with Partners and Suppliers: Optimize your supply chain and logistics by developing strong relationships with your partners and suppliers. This can help you reduce costs, improve quality, and ensure timely delivery of goods and services.
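The goal-setting and KPI step above (step 6) can be reduced to a simple, repeatable check of actuals against targets. Here is a minimal sketch; the metric names and numbers are invented purely for illustration:

```python
def kpi_status(targets: dict, actuals: dict, higher_is_better: set) -> dict:
    """Label each KPI 'on track' or 'needs attention' against its target.

    higher_is_better lists the KPIs where exceeding the target is good
    (e.g. satisfaction scores); for the rest, staying at or below the
    target is good (e.g. cost per order).
    """
    status = {}
    for kpi, target in targets.items():
        actual = actuals[kpi]
        on_track = actual >= target if kpi in higher_is_better else actual <= target
        status[kpi] = "on track" if on_track else "needs attention"
    return status

# Hypothetical quarterly review: one customer metric, one cost metric.
targets = {"customer_satisfaction": 4.5, "cost_per_order": 12.0}
actuals = {"customer_satisfaction": 4.6, "cost_per_order": 13.2}
print(kpi_status(targets, actuals, higher_is_better={"customer_satisfaction"}))
```

Run regularly (weekly, monthly, quarterly), even a check this simple makes the "measure, then improve" loop concrete and data-driven rather than anecdotal.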

Continuous optimization is an ongoing effort that requires dedication, collaboration, and a willingness to change and adapt. By following these steps, businesses can improve performance and stay competitive in the future.

Filed Under: Uncategorized

Cloud Readiness Starts with Knowing Your Environment

July 14, 2021 By Ralph Burns

I was reading an article the other day about the large number of people who traveled during the pandemic by car or by RV. I believe it indicated that RV and trailer sales and rentals were up significantly over past years. I thought about the many times when I was younger that my family traveled from Texas to Boston, Massachusetts, by car, every summer for about 10 years to visit my grandparents. I remember the significant effort my dad put into every detail of making sure the car was ready: everything from changing the oil, changing the spark plugs, tuning the carburetor, checking the tires and air pressure, and certainly all the lights, headlights and brake lights, to be sure that he knew every inch of that car and it was all in good working order. He knew the journey we were taking was going to have challenges, it always did, but I know he felt he significantly reduced the chance of issues or errors by the analysis he did of our transportation before we left. In addition, he would go through the Rand McNally Atlas, latest edition of course, to review the highways that we had taken before and the potential new highways that could make our trip easier and faster. We always started with a plan! We knew where we wanted to go and we had a strong comfort level of how we were going to get there.

Experiences like these remind me of cloud migrations and the need to properly prepare as it can affect essentially every aspect of your cloud journey from the time of arrival to the quality and cost of the trip.

I have had the opportunity to speak to technology leaders, CTOs, cloud architects, and engineers from many industries about their journey of migrating to and managing their environment in the cloud. In addition, I have spoken to leaders at several national and regional managed service providers (MSPs) about their experiences and issues with supporting clients on that journey. In both sets of conversations, several common points or issues always seemed to be present.

I am going to cover those things today. They all fall under the category of “know your environment”. It should not be surprising, given the complexity of today’s IT environments made up of on-premise data centers, private clouds, and public clouds, that IT leadership has a significant challenge in fully understanding its current environment across all of those areas. This complexity grows further if the business is in acquisition mode. An acquisition introduces a totally new environment which many times must be integrated into a common operations model, and many acquisitions depend upon the synergies that can be developed in IT. So, in those cases, you not only need to know your environment, but you must also learn the environment of the acquired company.

Unfortunately, most Generation 1 Discovery tools employed to assess and model a current IT environment are not able to fully evaluate the environment and therefore create lots of manual effort for IT teams to truly understand what they have.  The concepts I will be talking about are valid whether you are a company with nothing in the cloud today and just evaluating what you could put in the cloud, a company totally committed to on-premise only with no aspirations of migrating to the cloud, or a company who has deployed applications in one or several clouds and wants to constantly evaluate how to bring new efficiencies to their environment as well as identify new opportunities to leverage the clouds.

The first key point in a “know your environment” project should be to deploy a Discovery tool which not only analyzes the infrastructure but also provides a deep-dive analysis of all your applications. This multidimensional visualization across the data centers and your multi-cloud environment is absolutely critical to so many decisions your IT team will need to make in support of the environment. So, the tool should provide a profile of your complete environment across network components, services, APIs, databases, applications, and clusters. It should provide details of individual dependencies and processes, application names, descriptions, and release and patch levels, which are critical to ensure a seamless migration and mitigate security issues. This information is critical even if there is no consideration of migration to the cloud at this point. To be able to mitigate issues which can come down the road, much like my father did with his deep assessment of our vehicle before the long trip from Texas to Boston, you must have access to this information.

Most tools in the marketplace stop short and require the IT team to gather and input much of this information manually. The challenge of getting to that information, in most instances, means that this assessment will only be done occasionally. With a fully automated tool, the assessment can run as often as required and provide significant insight into potential security vulnerabilities and other key information needed to run an efficient and secure shop. This deep analysis also aids your team in finding dormant applications which are no longer being accessed. These dormant applications can often be eliminated, and their costs with them. It can also be critical to identify “end-of-life” applications which need to be refreshed, rebuilt, or replaced, whether they are on premise or in the cloud.
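To make the dormant and end-of-life analysis concrete, here is a minimal sketch of the kind of rule an automated discovery tool applies over its inventory. The data model, field names, and thresholds are my own invention for illustration, not any specific vendor’s API:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

@dataclass
class AppRecord:
    """One discovered application, as a discovery tool might profile it."""
    name: str
    version: str
    last_accessed: date               # most recent observed request or login
    end_of_life: Optional[date] = None  # vendor EOL date, if known
    dependencies: list = field(default_factory=list)  # downstream services

def flag_candidates(inventory, today, dormant_after_days=180):
    """Partition the inventory into dormant and end-of-life candidates."""
    cutoff = today - timedelta(days=dormant_after_days)
    report = {"dormant": [], "end_of_life": []}
    for app in inventory:
        if app.last_accessed < cutoff:
            report["dormant"].append(app.name)        # candidate for retirement
        if app.end_of_life and app.end_of_life <= today:
            report["end_of_life"].append(app.name)    # refresh, rebuild, or replace
    return report
```

Run against a continuously refreshed inventory, a check like this turns “we should find dormant apps” into a report that lands on someone’s desk every week.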

The second key point also relates to the Discovery tool and its ability to support your migration to the cloud, either from an on-premise environment or from cloud to cloud, once the tool has identified all information about the environment. Second-generation (Gen 2) tools have the ability to triage applications into categories such as lift and shift, lift and refresh, and lift and transform, and to create prioritized groupings of applications based on that triage to feed your migration plan. The Discovery tool should now be able to take this information and help build a blueprint of the “to be” environment in any of the major clouds. This blueprint should provide costing information for each of the major cloud providers, allowing you to evaluate and build a budget for the “to be” environment. Again, with most Gen 1 tools, these various steps and analyses are manual. In Gen 2 tools, this information is gathered and presented in a way which gives you and your team the ability to use the tool for modeling scenarios. This provides you the critical ability to reduce surprises post-migration.
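As an illustration of that triage step, applications can be bucketed by a few attributes the discovery phase has already collected. The rule set below is deliberately simplified and the attribute names are hypothetical; a real Gen 2 tool weighs far more signals:

```python
def triage(app: dict) -> str:
    """Assign a migration strategy from simple application attributes.

    Expected keys (all assumed for illustration):
      os_supported_in_cloud - bool, target cloud offers the current OS/runtime
      end_of_life           - bool, app or its stack is past vendor support
      tightly_coupled       - bool, hard dependencies on on-premise hardware
    """
    if app["end_of_life"]:
        return "lift and transform"   # re-architect or replace before moving
    if app["tightly_coupled"] or not app["os_supported_in_cloud"]:
        return "lift and refresh"     # upgrade the stack, then migrate
    return "lift and shift"           # rehost as-is

def migration_waves(apps: dict) -> dict:
    """Group applications by strategy to seed a prioritized migration plan."""
    waves = {}
    for name, attrs in apps.items():
        waves.setdefault(triage(attrs), []).append(name)
    return waves
```

The output of `migration_waves` is exactly the kind of prioritized grouping that then feeds the “to be” blueprint and its per-provider cost model.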

The third area came up very consistently from both companies and managed service providers. The focus is the handoff between discovery tools and migration tools, which is typically manual. This creates additional work for the IT team to ensure that the wealth of data now gathered about the current environment can be integrated accurately and seamlessly into the tool your IT team will use to migrate you to the cloud. This Discovery-to-Migrate handoff is critical to a smooth migration which reduces or eliminates surprises. The coordination of these tools also provides the IT team the ability to evaluate and define consistent governance strategies across all environments. It is this area of governance that was mentioned many times as a challenge encountered by managed service providers when trying to help clients improve consistency and reduce complexity when moving from on-premise into a single cloud or multi-cloud environment.

Today’s IT environments are significantly more complex to assess and understand than the old cars my dad would inspect before our cross-country trips. That being said, doing that deep evaluation, “know your environment”, whether you are migrating applications or not, is the most important step an IT department or managed service provider (MSP) can take to be sure that, whatever your destination, you get there safely, securely, efficiently, and on time.

Filed Under: Uncategorized

Let the Buyer ‘Be Aware’

March 11, 2020 By Ralph Burns

My last blog post highlighted the Matilda Cloud Solutions assessment module called Discovery. Discovery provides agentless discovery and multi-dimensional visualization across data center and multi-vendor cloud environments in real time to profile your complete environment across network components, services, APIs, databases, applications, and clusters. It provides a tremendous amount of information about the client’s current IT environment and workloads for security, regulatory, and financial compliance. The Discovery module looks at every workload, application, and service in the IT environment and creates a complete inventory, which includes hardware inventories (if attached to the network), software inventories and release levels, licensing information, and patch levels of all applications. It provides information about clusters, storage allocations, and traffic throughout the entire environment. It helps identify security vulnerabilities, application performance bottlenecks, asset outages, and much more. This ability to discover the entire application environment and display the compute and network topology, including relationships, usage statistics, and service details, is certainly critical to any company, whether contemplating a cloud migration or just getting a handle on its current IT assets. That capability can be useful in a variety of circumstances. I’d like to discuss a couple of those with you.

Over my career I have been responsible for corporate development in several companies. In addition, my operating roles have always included a focus on mergers and acquisitions. I therefore have a keen understanding of the M&A process and see tremendous value in having a tool like Matilda’s Discovery module during due diligence and post merger integration.

As discussed in my last blog post, the increasing importance of the CIO’s role in any organization is without question. CIOs are also critical members of any M&A deal team. It is their job to perform, or contribute to the performance of, due diligence activities in the IT environment of an acquisition target. They should also be highly involved in the planning and execution of all post merger integration activity within the IT areas.

Technology, its use and management, can be a competitive differentiator within most companies and, in some, may be the most critical element of the company’s ability to do what it does and grow. Therefore, the level of analysis applied to the IT environment could be as critical as the company’s contracts and relationships with its customers.

As companies consider an acquisition, the CIO should be involved to help answer the following questions: How critical is technology, delivered by the IT department, to the target company’s ability to deliver goods and services? Is the company’s IT environment a strategic asset which is efficiently managed (and invested in), or could it be a significant problem waiting to show itself in the future? Are IT systems adequately secured against intrusion or known vulnerabilities? Is there a disaster recovery plan? Are backup/recovery procedures implemented and tested? I will expand on these issues later in this post. These are items which should be part of the initial due diligence efforts.

In the past, technical due diligence was treated as part of the general activities performed by the accounting team. That is no longer a good practice unless the professionals involved have in-depth knowledge of IT and routine experience conducting IT-specific due diligence. Again, the CIO should be heavily involved in post merger integration planning as well. This plan, which covers all activities once the acquisition is closed, is critical to achieving the positive synergies that an acquiring company expects to gain through the acquisition of the target company. Questions here include: Will the target company run as a totally autonomous entity, or will it be merged into the current company? In either scenario, understanding the status and inventory of all IT assets is critically important.

Today’s IT environments are complex. The number of network-attached devices can be mind-boggling in a large organization with multiple locations, multiple data centers, and one or many cloud platforms in use. The CIO must be able to ascertain whether the target company’s IT department has applied the appropriate diligence to its environment to ensure that all software and firmware is fully compliant with both licensing and required upgrades and patches. In my research I came across a few facts which are, or should be, a wake-up call for any company contemplating or executing an acquisition without deep thought about how complete its IT due diligence plan is.

Did you know that Equifax was hacked because it didn’t install a patch for its Apache web server that had been available two months previously? (https://www.zdnet.com/article/equifax-confirms-apache-struts-flaw-it-failed-to-patch-was-to-blame-for-data-breach/).  Here are some additional interesting and scary statistics:

  • 80% of companies who had a data breach or a failed audit could have prevented it by patching on time or doing configuration updates – Voke Media survey, 2016.
  • 20% of all vulnerabilities caused by unpatched software are classified as High Risk or Critical – Edgescan Stats Report, 2018.
  • 18% of all network-level vulnerabilities are caused by unpatched applications – Apache, Cisco, Microsoft, WordPress, BSD, PHP, etc. – Edgescan Stats Report, 2018.
  • Microsoft reports that most of its customers are breached via vulnerabilities that had patches released years ago – Microsoft’s Security Intelligence Report, 2015.
  • Durham, N.C.-based Burt’s Bees paid a $110,000 fine to Washington-based BSA, a software industry watchdog group, after a software audit found unlicensed copies of applications from Adobe Systems, Apple Computer, and Microsoft on company computers.  https://www.pcworld.com/article/124377/article.html.
  • BSA announced that Emeryville, Calif.-based Wham-O paid a $70,894 fine to settle claims that company employees had used unlicensed copies of Adobe and Microsoft software on office computers.
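Statistics like these argue for automating the patch-level comparison itself rather than sampling it by hand during diligence. A minimal sketch of that check follows; the package names and version numbers are illustrative only, and a real tool would pull required levels from vendor security advisories:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.3.31' into (2, 3, 31)."""
    return tuple(int(part) for part in v.split("."))

def patch_gaps(discovered: dict, required: dict) -> list:
    """List assets whose installed version is below the required patch level.

    discovered: asset name -> version found on the network
    required:   asset name -> minimum acceptable patched version
    """
    gaps = []
    for asset, installed in discovered.items():
        minimum = required.get(asset)
        if minimum and parse_version(installed) < parse_version(minimum):
            gaps.append(f"{asset}: {installed} < required {minimum}")
    return gaps

# Hypothetical diligence snapshot of a target company's environment.
discovered = {"apache-struts": "2.3.31", "openssl": "3.0.8"}
required = {"apache-struts": "2.3.32", "openssl": "3.0.7"}
print(patch_gaps(discovered, required))
```

A report like this, produced automatically from a complete asset inventory, is exactly the kind of finding that becomes negotiating leverage before close instead of a breach headline after it.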

Any C-level executive or member of the Board of Directors of a company considering acquisitions should make sure their diligence includes a detailed evaluation of the IT department’s assets before close. Otherwise, the gains expected from the acquisition could turn into headaches and fines and severely impact organization/shareholder value. These discoveries, made while evaluating the IT department, can be used as negotiating leverage on the final price. All too often these issues are not surfaced until much farther down the road, when something bad happens.

To avoid these issues, any due diligence process on a medium to large company should include a detailed assessment of the IT and network environment, as provided by the Matilda Discovery tool.

Having that information documented and available to the people who will develop the post merger integration plan for the IT assets is critical. Putting two large IT environments together is a significant challenge even when you have all the information. Absent some of the critical information, the process will take more time and cost more money than it needs to. No one wants to pay a high price later in unexpected conversion costs, out-of-date licenses, or expensive hardware upgrades driven by capacity and environmental requirements.

Another area where having this information is critical to success is in Disaster Recovery and Business Continuity Planning and execution. I plan to cover that topic in a future post.

I took my hypothesis about the ability of Matilda's Discovery module to provide the critical analysis and reporting needed to evaluate an IT environment, for both due diligence and post-merger integration, to Matilda's executive management team. They indicated that several clients were already using the Discovery tool to support their acquisitions, and that those clients were excited about the insight it provided into the target company's IT environment, both as an overall asset inventory and as a risk mitigation measure.

I hope my views on the critical need to incorporate sophisticated analysis tools into IT due diligence and post-merger integration can be of benefit to you. This area is all too often minimized in importance during the fever that overtakes an organization in the acquisition process. The growing importance of IT to the success of any organization, coupled with the challenges most organizations face just managing their own environments through software and hardware upgrades and patches, drives home the term 'let the buyer beware.' I think it's time, at least in IT due diligence, that we change that saying to let the buyer 'Be Aware'!

©2020 Ralph Burns

Filed Under: Cloud Computing

The Great Cloud Quandary

October 10, 2019 By Ralph Burns

I was talking to a friend of mine, Steve Montague, the other day about the changes we've witnessed in the role of IT organizations and environments over our careers. While there certainly have been unbelievable changes in security, compliance, and network and hardware capacity and capability over the last 20 to 30 years, our discussion centered on the role of the CIO and the shift of IT from the back office of the company clearly to the front office. It's hard to imagine today that a little over 30 years ago the CIO position, if it existed at all, existed in only a relatively small number of companies. Back then, IT for most companies primarily provided record-keeping: the creation of financials, human resource records, the order-to-cash process, and inventory management. IT is now woven into the very fabric of every organization and is core to the company's identity. IT's pace of innovation is growing exponentially, and I believe successful companies will evolve all aspects of their business to enable better customer experience and operational efficiency in a secure and seamless way.

CIOs are at the forefront of headline-grabbing technology adoption: Artificial Intelligence (AI), Machine Learning (ML), the Internet of Things (IoT), Digital Transformation, Cloud Computing, and Cyber Security. In today's age of hyper-personalization and digital transformation, all companies have to evolve to deliver new applications and features to market faster, scale them smartly, and operate efficiently to deliver delightful customer experiences. My friend and I agreed that the elevation of the CIO to the company's executive management team is one of the most important changes in our technology environment over the last 30 years.

Steve is currently working with a company that is contemplating moving its applications to the cloud. He was lamenting the fact that while there are innumerable companies offering migration services or tools, including the major cloud providers themselves, he could not find a platform that provided full life-cycle management of business-critical applications before, during, and after the move to the cloud. The consulting methodology that I grew up using starts with an assessment, moves to planning, then shifts to implementation and full operation, followed by ongoing optimization. Researching on my friend's behalf, I too found it difficult to find a partner or platform that solved all cloud life-cycle challenges. I was introduced to Rajesh Reddy, CTO and founder of a company called Matilda Cloud Solutions, to understand their approach to cloud life-cycle management. They've developed a full-stack solution that covers enterprise workloads before, during, and after cloud transformation, as well as ongoing operations, in public, private, or hybrid cloud environments.

Their platform is proven at scale with several clients including one of the largest wireless and telecom providers in the country.

Rajesh provided me a blueprint of their platform, and I learned that their tool suite automates enterprise analysis, transformation design, migration, deployment, and optimized management of hybrid infrastructure with built-in security, enabling institutions to realize a business-aligned cloud strategy in a matter of days, with guaranteed business results, across all major cloud providers: AWS, Azure, Google Cloud, and Oracle Cloud.

The assessment module, called Discovery, provides the customer with a tremendous amount of information about their current IT environment and its workloads for security, regulatory, and financial compliance. Once the environment is discovered, the tool can also model and cost, across every major cloud provider, the most efficient approach to safely and securely migrate applications to the cloud. The Discovery module looks at every workload, application, and service in the IT environment and creates a complete inventory, including release levels, licensing information, and patch levels of all applications. It provides information about clusters, storage allocations, and traffic throughout the entire environment, and it helps identify security vulnerabilities, application performance bottlenecks, asset outages, and much more. The ability to discover the entire application environment and display the compute and network topology, including relationships, usage statistics, and service details, would be critical to any company. In addition, the ability to cost the migration across all of the major cloud providers, including AWS, Azure, Google Cloud, and Oracle Cloud, gives the company a tremendous opportunity to view all options before deciding to undertake such a massive project. This Discovery module seems to be a very powerful starting point for any company thinking about, or already undertaking, a move to the cloud.
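The cross-provider costing step can be pictured with a small sketch. This is my own illustration of the idea, not Matilda's algorithm or real pricing: given a discovered set of workloads, apply each provider's per-unit rates (the numbers below are invented for demonstration) and pick the cheapest.

```python
def cheapest_provider(workloads, rates):
    """Estimate monthly cost of a workload set under each provider's
    per-unit rates and return (cheapest_provider, all_totals).

    workloads: list of dicts with vcpus, ram_gb, storage_gb.
    rates: provider -> per-unit monthly prices (illustrative only).
    """
    totals = {}
    for provider, r in rates.items():
        totals[provider] = sum(
            w["vcpus"] * r["vcpu"]
            + w["ram_gb"] * r["ram_gb"]
            + w["storage_gb"] * r["storage_gb"]
            for w in workloads
        )
    return min(totals, key=totals.get), totals

# Hypothetical discovered workloads and made-up rates.
workloads = [
    {"vcpus": 4, "ram_gb": 16, "storage_gb": 200},
    {"vcpus": 8, "ram_gb": 32, "storage_gb": 500},
]
rates = {
    "AWS": {"vcpu": 20, "ram_gb": 4.0, "storage_gb": 0.10},
    "Azure": {"vcpu": 19, "ram_gb": 4.5, "storage_gb": 0.09},
    "OCI": {"vcpu": 15, "ram_gb": 3.5, "storage_gb": 0.08},
}

best, totals = cheapest_provider(workloads, rates)
print(best, round(totals[best], 2))  # OCI 404.0
```

Real provider pricing is far more granular (instance families, reserved capacity, egress), which is exactly why an automated discovery and modeling tool is valuable here.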

Although this is the first module of their platform that I have investigated, I couldn't help but feel that the Matilda Cloud Solutions platform offers a very pragmatic, cost-effective approach for companies that are either thinking about cloud migration or already in the cloud: discover the critical IT environment, workloads, and applications, and set the stage for a better-informed plan for whatever comes next.

I was intrigued enough to ask Rajesh to help me understand the other modules (Post-Cloud Management, DevOps, and Migration) in more detail. I'll cover them in another post.

Filed Under: Cloud Computing


Copyright © 2025 Profectus Associates LLC