Imagine discovering that every piece of sensitive data your organization holds has been quietly harvested. Worse, the attacker is a state actor, and they're using the data to blackmail your employees. Some of those employees have already submitted to the actor's demands, and now your organization is caught in the middle of a legal fiasco as well as a public relations nightmare.
Your operations team stays up to date on patches and minimizes the attack surface. The infrastructure has passed every security scan, employee access to production servers and data is properly limited, and all changes in production are automated. How could this happen?
Quantum computers…
Today, it would take hundreds or thousands of years to decrypt data protected with current public/private key cryptographic algorithms. But with the dawn of quantum computing, this data becomes vulnerable to a type of mathematical attack that allows unauthorized decryption in minutes, or even seconds. The attackers' strategy is simple: harvest encrypted data today and decrypt it later with quantum computers. The Year to Quantum (Y2Q) is rapidly approaching.
And this is just one macro trend facing contemporary IT decision-makers globally. There are other security challenges with patching CVEs and meeting security compliance certifications such as Common Criteria, SOC 2, or CIS. There's also the problem of attracting and retaining talent, managing drift amid the immense growth of the cloud, and planning new workload deployments, upgrades to current workloads, and migrations of non-standard workloads inherited from mergers and acquisitions!
CIOs face a plethora of asymmetric challenges, and they’re speeding up…
Security
But it’s not just the bad guys innovating. Researchers have already developed quantum-resistant algorithms, built on new mathematical problems that are difficult to solve on both traditional and quantum computers. When data is encrypted with these new algorithms, it stays protected even if hackers harvest it today and get access to a quantum computer later. Innovative Linux platforms are already deploying this cryptography, but it’s a journey that will take years for the entire industry to tackle.
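To make the idea concrete, here is a minimal sketch of quantum-resistant key encapsulation using the open source liboqs-python bindings. The choice of library and the ML-KEM-768 algorithm name are assumptions (older liboqs builds call it Kyber768), and your platform may expose the same algorithms through OpenSSL or its own crypto stack instead.

```python
# Hedged sketch: a post-quantum key encapsulation round trip, assuming the
# liboqs-python bindings (the `oqs` package) are installed.
import oqs

ALG = "ML-KEM-768"  # NIST-standardized quantum-resistant KEM; name may vary by liboqs version

with oqs.KeyEncapsulation(ALG) as receiver:
    # The receiver publishes a quantum-resistant public key; the secret key stays inside the object.
    public_key = receiver.generate_keypair()

    with oqs.KeyEncapsulation(ALG) as sender:
        # The sender derives a shared secret and a ciphertext from the public key alone.
        ciphertext, sender_secret = sender.encap_secret(public_key)

    # The receiver recovers the same shared secret from the ciphertext.
    receiver_secret = receiver.decap_secret(ciphertext)

assert sender_secret == receiver_secret  # both sides now hold a symmetric key for bulk encryption
```

Data encrypted under a key exchanged this way stays opaque even to a future quantum adversary, which is what defeats the harvest-now, decrypt-later strategy.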
IT decision-makers also face the challenge of rapidly expanding software vulnerabilities. As software has grown more open, so too has its complexity. Almost every developer relies on open source libraries to deploy new applications more quickly and reduce support costs (writing from scratch is expensive).
With the advent of AI, organizations can potentially go it alone, but it still makes sense to find trusted partners for common software dependencies, especially a partner that can contribute to these upstream dependencies and truly influence what does and doesn’t get patched. This allows organizations to focus their precious resources on their core competencies rather than maintaining a distribution and patching CVEs.
Finally, security hardening standards such as FIPS, CIS, and SOC 2 are crucial as laws expand to enforce higher standards on companies that want to do business globally. While LLMs will make it easier to review these standards, organizations still need expertise to analyze how they apply to their specific software systems. Partners can be relied upon for parts of the software stack, such as the Linux platform, but organizations still need internal expertise to integrate large software portfolios.
Management
Agentic AI is the hottest topic since the Internet. Every organization has access to foundation models and can unleash them, as agents, to act on behalf of developers, architects, and systems administrators. But agents need tools and data to act in coherent and rational ways.
Can agents be wired into your automation and patching infrastructure? The short answer is yes, and you need to invest in these capabilities. The longer answer is that it will require better planning capabilities and data.
Since the advent of DevOps, organizations globally have invested heavily in automation and patching infrastructure. These investments have enabled teams to patch large numbers of systems in hours rather than weeks. But challenges remain with the planning process. While the updates themselves can take less than an hour (more on that later), the planning process can still take weeks for a minor update at 2 am, or even months for a major upgrade.
It’s not as simple as standing up an MCP server in front of your management platform and turning an agent loose on it. Agents need access to the tooling, meaning the management platform itself, but they also need access to information. Vendors need to provide your organization with specific data on how patches will affect software systems, how configuration files may change, which features and capabilities are deprecated, what new features are coming, and so on.
With this information, agents will make much better decisions, humans will be able to audit what they’ve done in your environment, and rolling back changes will be easier should a problem arise.
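As a rough illustration rather than a reference implementation, the sketch below uses the MCP Python SDK's FastMCP class to expose a patch-impact lookup as a tool an agent could call. The tool name, the data model, and the placeholder record are all hypothetical; a real deployment would feed this from your vendor's errata and your management platform.

```python
# Hedged sketch: exposing patch-impact data to agents through an MCP tool.
# Assumes the MCP Python SDK (`pip install mcp`); the data below is illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("patch-planner")

# In practice, populate this from vendor errata feeds and your configuration database.
PATCH_IMPACT = {
    "example-package": {
        "config_changes": ["default config file gains a new section"],
        "deprecated_features": ["legacy plugin interface"],
        "reboot_required": False,
    },
}


@mcp.tool()
def patch_impact(package: str) -> dict:
    """Return recorded impact data for a pending update to the named package."""
    return PATCH_IMPACT.get(package, {"note": "no impact data recorded for this package"})


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so a connected agent can discover and call it
```

Because every tool call and the data it returns can be logged, this is also where the audit trail for agent decisions comes from.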
Performance
As mentioned earlier, it might take under an hour to patch a fleet of servers today, but we can do better.
Today, when a server is updated or upgraded, each individual piece of software installed is analyzed and upgraded. Then, recursively, every library on which the software depends is analyzed and upgraded. This can result in hundreds or thousands of transactions happening locally on each server in your fleet. In a word, complexity. And complexity costs time and money, especially when things inevitably go wrong.
Open source technologies like bootc are making it faster, easier, and less complex to update, upgrade, and downgrade fleets of servers. An update to a fleet of servers that might have taken just under an hour before can now be reduced to five minutes. This reduces risk, lowers cognitive load for your administrators, and simplifies the process of rolling back should a problem occur.
It also reduces the drift between production systems and the core builds they are based on. This gives your operations teams the ability to safely roll back with a single command if an operating system upgrade breaks your applications. These capabilities are useful at the edge, at scale in a data center, and even on workstations.
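To keep the examples in one language, here is a hedged Python sketch of what that single-command update-and-rollback flow might look like when driven from your automation. It shells out to the bootc CLI's status, upgrade, and rollback subcommands, and the health check is a placeholder for your own smoke tests.

```python
# Hedged sketch: driving an image-based update with the bootc CLI from automation.
# The status/upgrade/rollback subcommands come from the containers/bootc project;
# run as root on a bootc-based host. The health check is a hypothetical placeholder.
import subprocess
from typing import Callable


def bootc(*args: str) -> subprocess.CompletedProcess:
    """Run a bootc subcommand, raising if it fails."""
    return subprocess.run(["bootc", *args], check=True, capture_output=True, text=True)


def update_fleet_member(healthy: Callable[[], bool]) -> None:
    """Stage the new image as one transaction, then roll back if checks fail."""
    print(bootc("status").stdout)   # show the booted and staged images before changing anything
    bootc("upgrade")                # pull and stage the new image in a single step
    # ... reboot via your orchestration tooling, then run smoke tests ...
    if not healthy():
        bootc("rollback")           # one command returns the host to the previous image


if __name__ == "__main__":
    update_fleet_member(healthy=lambda: True)  # replace with real application checks
```

Because the host changes as one image rather than hundreds of per-package transactions, the rollback path stays just as simple as the update.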
Conclusion
We’ve talked to customers globally, and the one thing we constantly hear is that CIOs are thinking about how this faster world is changing how they invest in their organizations and what they demand of their partners.
With the advent of faster-moving threats and mitigations, security becomes more dynamic. With agentic AI, planning becomes more volatile, and as software stacks grow in complexity, we must leverage new technologies to simplify system deployment. We have to design our organizations and technologies to deal with this volatility while mitigating costs (tokens and talent are expensive).
Much of our management stack will become bionic, enabled by retrieval-augmented generation (RAG) and agentic capabilities. Your team no longer has to research and remember cryptic commands. They will interact with the infrastructure in natural language, with the system recommending which commands to use or, better yet, asking for permission to make the changes itself.
Our strategy must bring innovation to the global community, address massive scale, and enable organizations to keep pace with rapidly accelerating innovation cycles without leaving anyone behind. That is the open source way.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.