Srikanth Gorle’s research embeds policy, trust, and observability into modern computing systems, making them more transparent and reliable.

Software delivery pipelines are expected to operate at near real-time speed, yet compliance teams demand proof of rigor. Artificial intelligence models are opening new opportunities in fields such as medicine, but concerns about reliability and factual correctness slow their adoption. And real-time data pipelines are increasingly used for fraud detection and analytics, yet their limited observability makes faults hard to diagnose. The balance between speed and assurance has become a defining tension in computing today.

Research on governance of continuous delivery

Srikanth’s work offers an instructive look at how the research community is responding to these questions. One article of particular significance is Consent-Driven Continuous Delivery with Open Policy Agent and Spinnaker, published in the Journal of Knowledge Learning and Science Technology (Vol. 4, No. 2, May 2025). A continuous delivery pipeline typically either relies on human approvals or encodes rigid rules; the former lengthens lead times, and the latter rarely accounts for context. Srikanth and his co-authors proposed a framework that externalizes policy into Open Policy Agent so that Spinnaker pipelines can request consent decisions on the fly. Deployment decisions are thus made against auditable rules without requiring human intervention. Their case study demonstrated a reduction in deployment lead time of almost two-thirds while preserving compliance. What is novel from a governance standpoint is that the policy becomes a living codebase that can adjust to risk in real time.
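To make the idea concrete, here is a minimal, hypothetical Python sketch of consent-driven gating. In the paper’s framework the policy lives in Open Policy Agent and is queried by Spinnaker; here the externalized policy is modeled as a plain function over a rules dictionary, purely for illustration (the field names and thresholds are invented, not the authors’ schema):

```python
# Hypothetical sketch: a deployment request is evaluated against
# externalized policy rules instead of hard-coded pipeline logic.
# In the actual framework, this evaluation happens inside Open Policy Agent.

def consent_decision(request: dict, policy: dict) -> dict:
    """Return an allow/deny decision with auditable reasons."""
    reasons = []
    if request["environment"] == "production":
        if request["change_risk"] > policy["max_production_risk"]:
            reasons.append("risk score exceeds production threshold")
        if not request["tests_passed"]:
            reasons.append("test suite did not pass")
    return {"allow": not reasons, "reasons": reasons}

policy = {"max_production_risk": 0.3}
ok = consent_decision(
    {"environment": "production", "change_risk": 0.1, "tests_passed": True},
    policy,
)
print(ok)  # {'allow': True, 'reasons': []}
```

Because the rules live outside the pipeline, they can be versioned, audited, and updated without touching the delivery tooling, which is the essence of the consent-driven approach described above.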

Research on AI safety in healthcare

Srikanth’s interest in trust extends to healthcare applications of artificial intelligence. A few months later, in August 2025, the Journal of AI-Powered Medical Innovations (Vol. 1, No. 1) published his article Detecting and Mitigating Hallucinations in Large Language Models (LLMs) Using Reinforcement Learning in Healthcare. The paper studies the well-known problem of AI hallucinations: outputs that sound plausible but may not be factually correct. Previous research addressed this with prompt engineering or retrieval augmentation; this work instead used reinforcement learning grounded in clinical knowledge bases and domain-expert feedback. The LLM was trained with rewards that penalize unsupported claims and favor accurate, guideline-consistent ones. The findings were striking: hallucination rates in medical outputs dropped from about 28% to under 7%, with meaningful improvement in adherence to clinical guidelines. By anchoring model behavior in explicit medical evidence, the study offers a systematic approach to bringing AI systems closer to the dependability that clinical practice demands.
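The reward-shaping idea can be sketched in a few lines. The following is a deliberately simplified, hypothetical illustration (not the authors’ reward model): each claim in a model output is scored against a small clinical knowledge base, with a bonus for supported claims and a penalty for unsupported ones:

```python
# Illustrative reward function in the spirit of the paper's RL setup.
# The knowledge base and claims here are toy examples, not clinical data.

def hallucination_reward(claims, knowledge_base, penalty=-1.0, bonus=1.0):
    """Reward claims grounded in the knowledge base; penalize
    unsupported (potentially hallucinated) ones."""
    return sum(bonus if c in knowledge_base else penalty for c in claims)

kb = {
    "aspirin inhibits platelet aggregation",
    "metformin lowers blood glucose",
}
output = ["metformin lowers blood glucose", "metformin cures diabetes"]
print(hallucination_reward(output, kb))  # 0.0: one supported, one penalized
```

In a real system the membership test would be replaced by evidence retrieval and expert feedback, but the training signal has the same shape: unsupported assertions reduce the reward, steering the model away from them.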

Research on streaming observability

Another aspect of his work concerns observability in real-time data pipelines. In July 2024, he published eBPF-Enhanced Streaming Observability for Flink Pipelines in Kubernetes in the Journal of Artificial Intelligence General Science (JAIGS), Vol. 4, Issue 1. The paper addresses a primary gap in distributed stream processing: current tools cannot trace application anomalies to their root causes in the kernel. Using eBPF, Srikanth and his co-authors designed a framework that captures telemetry in-kernel and maps it to Flink operator behavior. This established associations between network retransmissions, CPU scheduler delays, and application backpressure that conventional JMX- or Prometheus-based monitoring could not detect.

Evaluations showed a 46x improvement in root-cause detection, with system overhead low enough for production workloads such as fraud detection pipelines. The work moves observability from monitoring surface metrics to revealing the underlying structure of distributed computation.
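The correlation step can be illustrated with a small, hypothetical sketch: kernel-level events (of the kind eBPF probes might emit, such as TCP retransmissions or scheduler delays) are joined by timestamp with backpressure intervals observed at Flink operators. The event shapes and field names below are invented for illustration, not the paper’s actual telemetry format:

```python
# Hypothetical sketch of correlating kernel events with operator
# backpressure windows by time proximity.

def correlate(kernel_events, backpressure_windows, slack=0.5):
    """Attribute each backpressure window (operator, start, end) to
    kernel events occurring within `slack` seconds of it."""
    findings = []
    for op, start, end in backpressure_windows:
        causes = [e["kind"] for e in kernel_events
                  if start - slack <= e["ts"] <= end + slack]
        findings.append((op, causes))
    return findings

events = [{"ts": 10.2, "kind": "tcp_retransmit"},
          {"ts": 11.0, "kind": "sched_delay"},
          {"ts": 30.0, "kind": "tcp_retransmit"}]
windows = [("keyed-aggregation", 10.0, 12.0)]
print(correlate(events, windows))
# [('keyed-aggregation', ['tcp_retransmit', 'sched_delay'])]
```

The value of the in-kernel vantage point is precisely that events like retransmissions and scheduler delays exist to be joined at all; user-space metrics alone would leave the backpressure window without a candidate cause.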

A common theme across all three contributions

Despite their varied themes, the common thread across Srikanth’s work is the creation of mechanisms that bring accountability to fast-moving systems: policy codified as rules in continuous delivery; reinforcement learning grounded in knowledge bases in healthcare AI; and kernel-level telemetry correlated with application semantics in distributed pipelines. Together, these papers demonstrate a framework for embedding governance, safety, and transparency into system architecture rather than retrofitting them later. The contributions align with broader industry trends: the wide uptake of policy-as-code frameworks in the cloud; regulatory scrutiny of AI safety by design (e.g., from the US Food and Drug Administration); and the growing adoption of eBPF as a basis for cloud-native observability.

Implications for future systems

The implications are clear. As systems gain complexity and autonomy, bolting governance on afterward will not establish trust; it must be built into their design. Srikanth’s work points to design opportunities such as self-governing pipelines, self-correcting AI models, and observability systems that reveal what was previously hidden.

About Srikanth Gorle

Srikanth Gorle is a researcher and technologist with more than eighteen years of experience in data engineering, platform systems, and applied artificial intelligence. He has written peer-reviewed articles on the governance of continuous delivery, AI safety in healthcare, and cloud-native streaming observability. He has also authored books and translated research advances into best practices for data platforms, automation, and DevOps. A central focus of his career is embedding compliance, transparency, and resilience into technical systems, combining a pursuit of innovation with a professional commitment to trustworthiness.