Every company on the planet has technology assets to manage. Whether they're traditional businesses or start-ups, they'll have a combination of physical, virtual, on-premises, outsourced, SaaS and cloud assets. The evolution of this landscape often reflects the structure of an organisation with multiple sub-organisations or teams, each with their own slightly different agenda and, in many cases, approaches and technology. Each business process brings more services, applications, APIs, data and infrastructure, compounding the problem by orders of magnitude.
Simple, complicated and complex (IT) problems
A discussion paper, “Complicated and Complex Systems” by Glouberman and Zimmerman, describes simple, complicated and complex problems through analogy. Simple problems are like following a recipe: follow the recipe and you get the right, good-quality cake as an outcome. Complicated problems are like sending a rocket to the moon: it is very difficult and requires high levels of expertise in a variety of fields, but if you put all the bits together, it can be done. Complex problems are like raising a child: every child is unique, raising one is no guarantee of success with the next, and the goalposts keep moving. Perhaps more worrying, someone tackling any of these problem types will tend to be optimistic about the outcome.
In my experience, most organisations address the problem of managing their technology assets in a counterintuitive way: they treat it as a complex problem. In Snowden and Boone's paper, “A Leader's Framework for Decision Making,” they set out ordered contexts (Simple and Complicated) and unordered contexts (Complex and Chaotic). In terms of known knowns, known unknowns and unknown unknowns, the world has moved on. It's no longer the case that a technology estate is complex. We have learnt in retrospect why architectures or patterns succeed or fail. We can also link technology to the business outcomes it addresses and measure its business impact. It's not easy, but it is a known, complicated problem.
The Three Ways
This transformation from the complex to the complicated is illustrated by the recent DevOps movement. The seminal book “The Phoenix Project,” by Gene Kim, Kevin Behr and George Spafford, set out The Three Ways. These are: the First Way, ensure work always flows in one direction, downstream; the Second Way, create, shorten and amplify feedback loops; and the Third Way, continual experimentation in order to learn from mistakes and achieve mastery. The impact of this methodology, and the resulting move towards automating the provisioning of infrastructure, networks, security and policies, often through deployed code, has revolutionised IT departments. This change from the unknown to the known is very complicated, but it has helped deliver predictability at scale and improve business outcomes as organisations move towards an integrated deployment pipeline with continuous delivery. On the downside, however, technical debt is amplified, and when set against the existing technology estate the issue is compounded: providing complete visibility across the entire build-or-buy technology landscape is impossible with manual processes.
To address the problem of figuring out what technology they have, what it does and how they use it, organisations typically employ a range of manual processes, automated discovery and disparate tools or analytics, collecting millions and millions of rows of data as they go. The issue is what you actually have to do with all this data before it can inform decision making across your technology estate.
What do we have and what does it do?
The traditional approach has resulted in siloed processes and datasets, with haphazard planning, sprawling estates and increased technical debt. It's not a lack of data that's the problem; it's a lack of quality data and a data ecosystem. As with any data project, preparation is critical. Depending on which of the countless surveys you read, anywhere between 60 and 85 percent of big data or AI/ML projects are reported to fail. In many cases this is due to the difficulty of preparing the data to the required quality: accurate, clean and consistent. It's clear that organisations have technology data lakes, CMDBs, EA tools, SAM tools, HAM tools, SaaS tools, cloud tools, logging tools, security tools and many others focused on managing their technology assets. Some have developed sophisticated analytics or intelligence to give the appearance of visibility across their estate, but just because an asset is there doesn't mean it can be seen or understood for what it is.
We have all this data. How do we make it make sense?
Little aggregation and transformation of siloed technology data happens within IT departments, because doing it properly is too difficult or time-consuming. Many approaches result in partial, incomplete or inadequate datasets that cannot connect the dots or improve decision making.
Psychological studies into perception by Gibson in 1966 and Gregory in 1970 led to the introduction of bottom-up and top-down processing theory. The theory holds that bottom-up processing begins with our senses taking in data via sight, hearing, smell, touch and taste. This data stream occurs in real time and feedback is instant, as with pain. Top-down processing is the cognition or interpretation of that data based on prior knowledge and expectations. This interpretation, based on learning, is critical to perception: it helps us interpret our sensations and improves decision making. For example, next time be more careful with that sharp knife, or it will hurt.
I believe organisations struggling with too much technology data need processes to interpret their sensory data so they can recognise what they actually have. What is required first is a data standard to provide a common language and understanding of the assets to be managed, based on prior knowledge. Once this is established, the dataset can be synchronised with point solutions, so that a single source of truth is distributed into any operational tool that requires it.
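To make that idea concrete, here is a minimal sketch in Python of what a first normalise-then-sync step could look like. The catalogue entries, field names and sync targets are all hypothetical, invented purely for illustration; a real data standard would be far richer.

```python
# A minimal, illustrative sketch: normalise raw discovery records against a
# simple data standard, then fan the canonical records out to point solutions.
# The catalogue, record fields and sync targets are hypothetical examples.

from dataclasses import dataclass

# A toy "data standard": maps raw discovered names to canonical asset entries.
CATALOGUE = {
    "msft office 365": {"manufacturer": "Microsoft", "product": "Office 365", "category": "SaaS"},
    "oracle db 19c":   {"manufacturer": "Oracle",    "product": "Database",   "category": "Server software"},
}

@dataclass
class CanonicalAsset:
    manufacturer: str
    product: str
    category: str
    source: str  # which discovery tool reported it

def normalise(raw_name: str, source: str) -> CanonicalAsset | None:
    """Look up a raw discovery string in the catalogue; None means 'unrecognised'."""
    entry = CATALOGUE.get(raw_name.strip().lower())
    if entry is None:
        return None  # park for research rather than guessing
    return CanonicalAsset(source=source, **entry)

def sync_to_tools(asset: CanonicalAsset, targets: list) -> None:
    """Distribute the single source of truth into each operational tool."""
    for target in targets:
        target.append(asset)  # stand-in for a real API push

cmdb, sam_tool = [], []
asset = normalise("MSFT Office 365", source="network-scan")
if asset:
    sync_to_tools(asset, [cmdb, sam_tool])
print(cmdb)
```

The key design point is that unrecognised records are parked for research rather than guessed at; the common language has to be authoritative before it is distributed.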
How can I contextualise my data to make it actionable?
Technopedia is a technology catalogue that can provide this common language and understanding. It contains more than 4 million assets from over 100,000 manufacturers, and 100 million definitive rules to identify technology assets. It's been around for more than 15 years, and many lessons have been learnt about managing technology data at this scale. The result is that organisations get a clean, categorised and enriched view of the assets in use across desktop, server, software, SaaS, IaaS and PaaS.
The current reality is that technology data includes multiple names for the same manufacturers, publishers and products. For some assets it isn't even certain who the current legal owning entity is, because the tech industry constantly evolves through M&A activity and the buying and selling of products. It's usually not possible to automatically align all assets with entitlement, cost or risk intelligence, or to filter the data to remove irrelevant noise. Some organisations take a more modern approach, but in most situations much of the discovery and enrichment is based on predictions via heuristics, machine learning and crowd-sourced information rather than a researched, accurate and definitive data engine. Wikipedia is great for general knowledge, but it is not authoritative, and I wouldn't want to train my learning algorithm on a similar dataset. Sometimes it's better to do the research using deep industry experience in partnership with technology.
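The manufacturer-naming problem alone is worth a small example. Below is an illustrative Python sketch of resolving aliases and acquired brands to a current owning entity; the alias table is a tiny hand-curated sample invented for this post, not Technopedia's actual data.

```python
# Illustrative only: resolving manufacturer aliases to a current legal entity.
# The alias table below is a toy example, not real catalogue content.

import re

# Curated mapping of cleaned aliases and acquired brands to the owning entity.
MANUFACTURER_ALIASES = {
    "sun microsystems": "Oracle",    # Sun was acquired by Oracle in 2010
    "oracle": "Oracle",
    "vmware": "Broadcom",            # VMware was acquired by Broadcom in 2023
}

def clean(name: str) -> str:
    """Lower-case, drop punctuation and legal suffixes before lookup."""
    name = re.sub(r"[.,]", " ", name.lower())
    name = re.sub(r"\b(inc|ltd|llc|corp|corporation)\b", "", name)
    return " ".join(name.split())

def resolve_manufacturer(raw: str) -> str:
    return MANUFACTURER_ALIASES.get(clean(raw), raw)  # fall back to raw name

for raw in ["Sun Microsystems", "Oracle Corp.", "VMware, Inc."]:
    print(raw, "->", resolve_manufacturer(raw))
```

Heuristics like the suffix-stripping above only get you so far; the alias table itself is the researched, curated part, and that is where deep industry experience matters.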
What pulls all of this data together in a way that’s tailored to my business?
Flexera One is that solution. It combines tried and trusted automation methods, developed for properly managing very complicated problems like IT asset management, with market-leading applications for modern problems like SaaS and cloud. It provides an IT asset data ecosystem built on a cloud-based, API-first architecture that is scalable, responsive and available. This means organisations can establish a single source of truth that doesn't live in just one system or repository, but in multiple systems and repositories as an open data standard.
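As a rough sketch of what "API-first, one truth in many repositories" means in practice, consider the pattern below: pull the canonical asset list once, then replicate it into every subscribing system. The endpoint paths, host and payloads here are invented placeholders for illustration; they are not Flexera One's actual API.

```python
# A sketch of the API-first pattern: one canonical store, many subscribers.
# All URLs and payload shapes are hypothetical, not a real product API.

import json
import urllib.request

BASE = "https://example.invalid/api/v1"  # placeholder host, will not resolve

def fetch_assets() -> list[dict]:
    """Pull the canonical asset list from the (hypothetical) source of truth."""
    with urllib.request.urlopen(f"{BASE}/assets") as resp:
        return json.load(resp)

def push_to_subscriber(url: str, assets: list[dict]) -> None:
    """Replicate the canonical data into a downstream repository."""
    body = json.dumps(assets).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

# The same truth lands in every system that needs it (illustrative flow):
# assets = fetch_assets()
# for target in [f"{BASE}/cmdb/import", f"{BASE}/ea-tool/import"]:
#     push_to_subscriber(target, assets)
```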
The world will continue to see an increase in the speed of change. Organisations will continue to implement agile approaches for operating this hybrid environment, and they will need tools that can provide insight across it, quickly adopt approaches to manage new technologies, and integrate these with existing techniques as the landscape evolves. As organisations start to see the world through Gartner's composable-business methodology, we need to bring these different business and technology puzzle pieces together and bridge the gap to value optimisation.
IT Visibility
The business promise of your IT is huge. But it takes a complete, up-to-date view of your hybrid environment to make the most of it.
Optimise your value by knowing how your ecosystem is affected
Implementing a business-service-oriented view of the estate is critical to delivering context. This concept provides the glue for understanding what the business is trying to achieve and how supporting technology is used to achieve it. IT Visibility, running on Flexera One, automates the collection, cleansing and enrichment of technology data as a foundational capability. It also groups technology by business service or technology stack so that organisations can understand the business impact of technology across the organisation. Because data is automatically grouped, it is simple to train the system to identify your CRM, procurement or billing systems. And because of the data ecosystem Flexera One provides, it's possible to link this business usage with important IT management viewpoints such as licensability, obsolescence, security risk and cost, not only to see how optimised the estate is but also to drive automation to act on and manage IT assets proactively.
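A simple way to picture this grouping is the Python sketch below: normalised assets tagged with a business service are rolled up into per-service summaries of obsolescence, vulnerability and cost. The field names, records and thresholds are assumptions made for the example.

```python
# Illustrative sketch: group normalised assets by business service, then roll
# up simple IT-management viewpoints. All records and fields are invented.

from collections import defaultdict

inventory = [
    {"product": "CRM suite",   "service": "CRM",     "end_of_life": False, "cves": 0, "annual_cost": 120_000},
    {"product": "Old DB",      "service": "Billing", "end_of_life": True,  "cves": 3, "annual_cost": 40_000},
    {"product": "Billing app", "service": "Billing", "end_of_life": False, "cves": 1, "annual_cost": 80_000},
]

by_service = defaultdict(list)
for asset in inventory:
    by_service[asset["service"]].append(asset)

for service, assets in by_service.items():
    summary = {
        "assets": len(assets),
        "obsolete": sum(a["end_of_life"] for a in assets),
        "vulnerable": sum(a["cves"] > 0 for a in assets),
        "annual_cost": sum(a["annual_cost"] for a in assets),
    }
    print(service, summary)
```

Once the estate is expressed this way, "how exposed is Billing?" becomes a query rather than a research project.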
Feed your business ecosystem with reliable information and make sound decisions
This actionable data can also be shared across the organisation's data ecosystems. For example, Flexera enhances solutions such as ITSM platforms by populating the CMDB with a clean and enriched inventory of assets, covering 100 percent of the configuration items. This unrivalled breadth and depth is only possible because of the reliable technology data within Technopedia, curated by veteran researchers over many years. It's also possible to investigate data quality within the CMDB and, in turn, improve the business outcomes that depend on accurate and up-to-date data: for example, identifying where the CMDB has not been updated with known configuration items in the estate, or where systems are running software that is unknown, unsupported or vulnerable.
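That kind of data-quality check is, at heart, a reconciliation between what discovery sees and what the CMDB believes. Here is a toy Python sketch of the idea; both datasets and the "supported" list are invented for the example.

```python
# A toy reconciliation between discovered inventory and the CMDB.
# All hostnames, software strings and lists below are invented examples.

discovered = {"web-01", "web-02", "db-01", "legacy-01"}
cmdb_cis   = {"web-01", "db-01", "db-02"}

software_seen = {"legacy-01": "UnknownApp 1.0", "web-01": "nginx 1.24"}
supported = {"nginx 1.24"}  # stand-in for catalogue-verified software

# CIs discovered on the network but missing from the CMDB.
missing_from_cmdb = discovered - cmdb_cis
# CMDB entries no discovery source can corroborate (stale records).
stale_in_cmdb = cmdb_cis - discovered
# Hosts running software the catalogue cannot vouch for.
unknown_software = {h: s for h, s in software_seen.items() if s not in supported}

print("Missing from CMDB:", sorted(missing_from_cmdb))
print("Stale CMDB records:", sorted(stale_in_cmdb))
print("Unknown/unsupported software:", unknown_software)
```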
Flexera One IT Visibility provides a single source of truth across multiple systems in an organisation. Customers have integrated clean and enriched data across service management, enterprise architecture, procurement and operational intelligence teams. Success is often measured by how well the organisation focuses attention on the ownership and delivery of the chosen outcome, direction or decision. In my experience, organisations struggle with this delivery because of foundational issues in addressing data standards. Flexera supports the delivery of tangible business outcomes with a solution that automates execution on these well-known, now merely complicated, problems.