# AI/Quality Sentinel
AI/Quality Sentinel is a proof-of-concept repository that uses AI to analyze software quality by detecting misalignment between ticket requirements and the implemented code.
## 🎯 Hackathon Theme

**LLM-as-a-judge** — Uses AI to evaluate code quality against business requirements and architectural patterns, acting as an intelligent quality auditor between project management and code repositories.
## 💡 Approach: LLM-as-a-Judge for Software Quality Governance

Our solution leverages Large Language Models as intelligent judges to:

- **Evaluate Alignment**: Compare what tickets specify vs. what the code implements
- **Assess Quality**: Judge code against Thoughtworks best practices and architectural patterns
- **Reduce Manual Review**: Delegate pattern analysis to AI, freeing tech leads for strategic decisions
- **Catch Gaps Early**: Identify requirement mismatches before code merge
- **Enable Context-Aware Analysis**: Understand business intent, not just syntax

## 🚀 Technology Stack

### Phase 1 (MVP)
- **Backend**: .NET (C#), REST/GraphQL APIs, Azure Cloud
- **Frontend**: React or Angular, TypeScript
- **LLM**: GPT-4 / Claude (via AI/works™ secure infrastructure)
- **Integrations**: Jira API, GitHub API
### Future Phases
- **Technology-Agnostic**: Scalable to Python, Node.js, Go, Java, Rust, Vue, Svelte, and multi-cloud platforms
- **Focus**: Language- and framework-independent quality assessment principles
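To make the judge concrete, here is a minimal sketch of how a Jira ticket and a GitHub diff could be combined into a judging prompt. The function name, type shapes, and prompt wording are illustrative assumptions only; the actual MVP would fetch these payloads through the Jira and GitHub APIs and send the prompt to GPT-4/Claude via the AI/works™ infrastructure.

```typescript
// Hypothetical inputs: a Jira ticket summary and a GitHub pull-request diff.
interface JudgeInput {
  ticketKey: string;  // e.g. a Jira issue key
  ticketText: string; // the requirement ("Intention")
  diffText: string;   // the change under review ("Execution")
}

// Build the prompt sent to the LLM judge. The wording is an assumption,
// not the project's actual prompt.
function buildJudgePrompt(input: JudgeInput): string {
  return [
    "You are a software-quality judge. Compare the requirement with the code change.",
    `Ticket ${input.ticketKey}: ${input.ticketText}`,
    "Code diff:",
    input.diffText,
    "For each of Intention, Execution, and Pattern, report a 0-1 alignment score with a one-line justification.",
  ].join("\n\n");
}

const prompt = buildJudgePrompt({
  ticketKey: "QS-42",
  ticketText: "Add rate limiting to the login endpoint",
  diffText: "+ app.use('/login', rateLimit({ windowMs: 60000, max: 5 }));",
});
console.log(prompt.includes("QS-42")); // true
```

Keeping prompt construction as a pure function like this makes it straightforward to swap in other trackers or code hosts in later, technology-agnostic phases.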
## 📋 Core Concept
The “Quality Triangle”:

- **Intention** — What the Jira ticket specifies
- **Execution** — What the code implements
- **Pattern** — Thoughtworks best practices
The LLM judges alignment across all three dimensions, reducing manual code review overhead and catching requirement gaps before merge.
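As a sketch of how a verdict across these three dimensions might be represented and aggregated, consider the following; the type name, score range, threshold, and min-score gating rule are illustrative assumptions, not part of the implemented system.

```typescript
// Hypothetical shape of one LLM judgment across the Quality Triangle.
// Scores are assumed to be normalized to the 0-1 range by the judge.
interface TriangleVerdict {
  intention: number; // alignment with what the Jira ticket specifies
  execution: number; // quality of what the code actually implements
  pattern: number;   // adherence to Thoughtworks best practices
}

// Illustrative gating rule: a change passes only if every dimension
// clears the threshold — a weak corner fails the whole triangle.
function passesQualityGate(v: TriangleVerdict, threshold = 0.7): boolean {
  return Math.min(v.intention, v.execution, v.pattern) >= threshold;
}

const verdict: TriangleVerdict = { intention: 0.9, execution: 0.8, pattern: 0.6 };
console.log(passesQualityGate(verdict)); // false: pattern score is below 0.7
```

Gating on the minimum rather than an average reflects the idea above: strong execution cannot compensate for code that ignores the ticket's intent or the team's patterns.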
## 👥 Team Roles Required
This project requires a multidisciplinary team. See the team structure and roles needed:
## 📚 Documentation