New Delhi, India – February 2026 – Researcher Rahul Vadisetty has been honored with the Best Paper Award at the 9th International Conference on Innovative Computing and Communication (ICICC-2026), held on February 6–7, 2026, at Shaheed Sukhdev College of Business Studies (SSCBS), University of Delhi.
ICICC-2026 is an internationally recognized, peer-reviewed academic conference jointly organized by SSCBS, University of Delhi, in association with the National Institute of Technology (NIT) Patna, India, and the University of Valladolid, Spain. Since its launch in 2018, ICICC has built a strong reputation for academic rigor, global participation, and high-quality scholarly output across computing and communication disciplines.

This year’s conference received approximately 2,500 research paper submissions from around the world, maintaining a competitive 15% acceptance rate. All accepted papers were published in Springer’s Lecture Notes in Networks and Systems (LNNS) Series, an internationally indexed academic publication platform recognized for its editorial and peer-review standards.
Leadership and Steering Oversight
ICICC-2026 was guided by an internationally distinguished Steering Committee and Conference Leadership Team, comprising senior academic leaders and globally recognized scholars.
The conference was held under the patronage of Dr. Poonam Verma, Principal of SSCBS, University of Delhi, and Prof. Pradip Kumar Jain, Director of NIT Patna. The General Chairs included Prof. Dr. Bal Virdee of London Metropolitan University (UK) and Dr. Prabhat Kumar, Professor at NIT Patna.
Honorary Chairs were Prof. Janusz Kacprzyk, Head of the Intelligent Systems Laboratory at the Polish Academy of Sciences, Warsaw, and Prof. Vaclav Snasel, Rector of VSB-Technical University of Ostrava, Czech Republic.
The conference was chaired by Prof. Punam Bedi (University of Delhi) and Prof. R. K. Agrawal (Jawaharlal Nehru University, India). The Technical Program was led by Prof. A. K. Singh (NIT Kurukshetra) and Dr. Kumar Bijoy (SSCBS, DU), while Editorial Chairs included Prof. Aboul Ella Hassanien (Cairo University, Egypt) and Prof. Abhishek Swaroop (St. Andrews Institute of Technology and Management, India).
The organizing structure was further strengthened by Conveners Dr. Ajay Jaiswal and Dr. Sameer Anand, Co-Conveners Dr. Moolchand Sharma, Dr. Jameel Ahmed, and Dr. Amrina Kausar, Publicity Chairs Dr. Jafar A. Alzubi and Dr. Hamid Reza Boveiri, and Organizing Secretary Dr. Ashish Khanna.
This leadership structure ensured international academic oversight, transparency in evaluation standards, and adherence to established peer-review protocols.
Only five papers received the Best Paper Award at ICICC-2026, reflecting significant selectivity among accepted submissions.
Rahul Vadisetty’s paper emerged as one of the top research contributions of the conference, earning unanimous approval during the final evaluation stage.
Breakthrough Innovation in Responsible Generative AI
Rahul Vadisetty received the Best Paper Award for his research titled “Human-in-the-Loop AI Agents for Cloud-Based Generative AI Systems: Ensuring Ethical Oversight and Adaptive Learning,” which addresses one of the most pressing global challenges in artificial intelligence: how to responsibly scale generative AI systems within cloud environments while ensuring ethical oversight, transparency, regulatory compliance, and sustained human accountability. As generative AI technologies are increasingly deployed across enterprise platforms, healthcare systems, financial services, and public-sector infrastructure, the risks associated with automated decision-making—such as bias amplification, opaque reasoning, regulatory exposure, and systemic vulnerabilities—have intensified.
Vadisetty’s research introduces a governance-integrated AI architecture that embeds structured human oversight directly within the operational layers of cloud-based generative AI pipelines, fundamentally rethinking how accountability is engineered into AI systems. Instead of treating ethics and compliance as external monitoring functions applied after deployment, the framework integrates human-in-the-loop validation checkpoints into model training, retraining, prompt-response evaluation, and high-risk decision pathways, ensuring real-time expert supervision. The architecture further incorporates continuous compliance verification mechanisms that evaluate outputs against policy constraints, ethical guardrails, and domain-specific governance requirements during both training and inference.
Through adaptive feedback-driven learning models, the system enables structured recalibration of AI agents, mitigating bias and aligning outputs with governance policies without compromising scalability or performance. Designed using a cloud-native microservices architecture, the framework supports distributed deployment across enterprise-scale environments while maintaining modular compliance enforcement, secure orchestration, and API-driven governance controls. Additionally, transparent audit logging and traceability mechanisms are embedded within the infrastructure, allowing organizations to document decision pathways, demonstrate regulatory adherence, and maintain explainability across jurisdictions. Unlike traditional monitoring systems that intervene only after deployment or after failures occur, Vadisetty’s approach embeds governance directly into AI infrastructure, shifting oversight from reactive correction to proactive accountability. This integrated model establishes a scalable and technically robust pathway for responsible generative AI deployment, contributing significantly to the advancement of trustworthy artificial intelligence systems in mission-critical global environments.
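To illustrate the general pattern described above—risk-aware evaluation of generated outputs, escalation of high-risk responses to a human reviewer, and embedded audit logging—the following minimal Python sketch is offered as an assumption-based illustration only. The names used here (GovernanceCheckpoint, risk_scorer, human_review, and the sample thresholds) are hypothetical and do not reproduce Vadisetty’s actual implementation or code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Callable, List, Optional


class Decision(Enum):
    AUTO_APPROVED = "auto_approved"    # released without human intervention
    HUMAN_APPROVED = "human_approved"  # escalated and approved by a reviewer
    REJECTED = "rejected"              # escalated and blocked by a reviewer


@dataclass
class AuditRecord:
    """Traceability entry documenting how a single output was handled."""
    timestamp: str
    prompt: str
    output: str
    risk_score: float
    decision: Decision
    reviewer: Optional[str] = None


@dataclass
class GovernanceCheckpoint:
    """Hypothetical human-in-the-loop checkpoint for a generative AI pipeline.

    Outputs whose risk score meets or exceeds `escalation_threshold` are routed
    to a human reviewer instead of being released automatically; every decision
    is appended to an audit log for later traceability.
    """
    risk_scorer: Callable[[str, str], float]   # stand-in policy/compliance scorer
    human_review: Callable[[str, str], bool]   # True if the expert approves the output
    escalation_threshold: float = 0.7
    audit_log: List[AuditRecord] = field(default_factory=list)

    def validate(self, prompt: str, output: str) -> Decision:
        risk = self.risk_scorer(prompt, output)
        if risk < self.escalation_threshold:
            decision, reviewer = Decision.AUTO_APPROVED, None
        elif self.human_review(prompt, output):
            decision, reviewer = Decision.HUMAN_APPROVED, "human-reviewer"
        else:
            decision, reviewer = Decision.REJECTED, "human-reviewer"
        # Record the decision pathway so it can be audited later.
        self.audit_log.append(AuditRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            prompt=prompt,
            output=output,
            risk_score=risk,
            decision=decision,
            reviewer=reviewer,
        ))
        return decision


# Illustrative usage with stand-in scorer and reviewer callbacks.
if __name__ == "__main__":
    checkpoint = GovernanceCheckpoint(
        risk_scorer=lambda prompt, output: 0.9 if "diagnosis" in prompt else 0.1,
        human_review=lambda prompt, output: True,  # stand-in for an expert review step
    )
    print(checkpoint.validate("Summarize this meeting.", "Summary text."))
    print(checkpoint.validate("Suggest a diagnosis.", "Clinical suggestion."))
    for record in checkpoint.audit_log:
        print(record.decision.value, f"risk={record.risk_score:.2f}")
```

In this toy flow, low-risk outputs pass through automatically while high-risk ones require explicit human approval, and every decision is logged—one simplified way of reading the paper’s stated goal of shifting oversight from reactive correction to proactive accountability.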
National and International Significance
As generative artificial intelligence systems are rapidly integrated into healthcare platforms, financial services, enterprise cloud infrastructures, and government technology ecosystems, the question of responsible governance has evolved from a technical concern to a matter of strategic national and international importance. These systems increasingly influence diagnostic recommendations, financial risk assessments, automated customer interactions, cybersecurity analytics, and public-sector decision support tools. In such high-impact environments, failures in oversight can result in regulatory violations, systemic bias, data privacy breaches, or operational instability. Ensuring that AI systems operate transparently, ethically, and accountably is therefore critical not only for technological advancement but also for economic security, institutional trust, and public safety.
Vadisetty’s research directly contributes to strengthening secure and accountable AI ecosystems by providing an architectural framework that aligns with national technology modernization initiatives and cybersecurity priorities focused on protecting digital infrastructure and sensitive data assets. His work supports emerging international AI governance and regulatory frameworks that seek harmonized standards for ethical deployment, cross-border compliance, and transparent algorithmic decision-making. For enterprise cloud providers managing large-scale distributed AI workloads, the integration of governance mechanisms into infrastructure layers offers a practical pathway to balance scalability with regulatory adherence. Additionally, industries operating under strict compliance requirements—such as healthcare, finance, and public administration—benefit from enhanced transparency, traceability, and structured human oversight within automated decision systems.
By embedding governance mechanisms directly into scalable cloud-native AI architectures, the research advances global efforts aimed at ensuring safe, ethical, and trustworthy artificial intelligence deployment. Rather than treating compliance as an external auditing function, Vadisetty’s approach integrates accountability as a foundational design principle, enabling proactive risk mitigation and sustained system integrity across jurisdictions. His recognition at ICICC-2026 underscores not only the competitiveness and academic rigor of the conference but also the growing global emphasis on responsible AI innovation in an increasingly complex technological landscape. All award details are supported by official ICICC conference records, Springer publication listings, and archival documentation.
Conclusion
Rahul Vadisetty’s recognition as a Best Paper Award recipient at ICICC-2026 reflects both the exceptional selectivity of the conference and the growing global importance of responsible artificial intelligence innovation. In an era where generative AI systems are increasingly embedded into critical national infrastructure, enterprise cloud ecosystems, and high-compliance industries, research that integrates ethical governance directly into scalable AI architectures is of substantial strategic value. Vadisetty’s work demonstrates technical rigor, originality, and practical relevance, contributing meaningfully to the advancement of trustworthy and accountable AI deployment at scale.
ICICC-2026 continues to serve as a respected international platform for high-impact research in computing and communication technologies. Additional details regarding the conference, its organizing institutions, leadership, and published proceedings can be accessed through the official conference website:
Conference Website: https://icicc-conf.com/


