Agentic AI Governance — Control Plane for AI Agents
Challenge
Autonomous AI agents are executing tool calls — database queries, API requests, file operations — with minimal human oversight. Enterprises deploying agents face regulatory requirements (EU AI Act Art. 12–14, Singapore's Model AI Governance Framework) for human oversight, audit trails, and authorization controls. Existing agent frameworks (LangChain, AutoGPT, CrewAI) have no built-in governance layer.
Build a control-plane overlay that intercepts, authorizes, logs, and audits every tool call made by an AI agent — without modifying the agent or tool code.
AI Governance · EU AI Act · Agent Security · OWASP LLM · Human-in-the-Loop · Audit Trail
EU AI Act Doc Generator — Automated Compliance Artifacts
Challenge
The EU AI Act (Regulation 2024/1689) requires AI system providers to produce extensive documentation: risk classification per Annex III, technical documentation per Annex IV (model cards, risk assessments, data governance records, human oversight procedures), and conformity assessment evidence. Manual documentation is time-consuming, inconsistent, and difficult to maintain over the 10-year retention period (Art. 18).
Build a platform that automates the generation of EU AI Act-compliant documentation artifacts.
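A minimal sketch of one such artifact generator, rendering a model-card stub from system metadata. The field names and Markdown layout are illustrative placeholders, not the Act's official schema:

```python
from datetime import date

def render_model_card(meta: dict) -> str:
    """Render an Annex IV-style documentation stub as Markdown.

    The `meta` keys here are illustrative, not the Act's official field names.
    """
    lines = [
        f"# Model Card: {meta['system_name']}",
        f"- Provider: {meta['provider']}",
        f"- Risk class (Annex III): {meta['risk_class']}",
        f"- Intended purpose: {meta['intended_purpose']}",
        f"- Human oversight measures: {meta['oversight']}",
        f"- Generated: {date.today().isoformat()}",
    ]
    return "\n".join(lines)

# Hypothetical high-risk system (employment use case, Annex III point 4)
card = render_model_card({
    "system_name": "CV Screening Assistant",
    "provider": "Acme GmbH",
    "risk_class": "high (Annex III, point 4 - employment)",
    "intended_purpose": "Rank applications for recruiter review",
    "oversight": "Recruiter approves every shortlist",
})
```

Generating from a single metadata source keeps artifacts consistent and makes regeneration over the retention period a re-run rather than a rewrite.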
EU AI Act · AI Documentation · Model Cards · Risk Assessment · Conformity Assessment · Regulatory Compliance
Job searching across multiple platforms is time-consuming. Telegram channels post hundreds of vacancies daily, LinkedIn requires manual browsing, and HH.ru (Russia’s Indeed) needs separate attention. Manually reviewing all postings and assessing fit is inefficient.
Build a pipeline that:
Aggregates jobs from Telegram, LinkedIn, and HH.ru
Ranks every posting against your CV using AI
Runs entirely locally (no cloud API costs, no data leakage)
Supports multiple job search profiles (DevSecOps, Data Analyst, PM)
# Using a profile
python -m jobs_finder pipeline --profile devsecops -v
# Skip specific sources
python -m jobs_finder pipeline --skip-linkedin --skip-hh -v
# Force re-rank everything
python -m jobs_finder pipeline --no-state
Output
outputs/2026-05-15/
├── telegram.jsonl # Raw scraped posts
├── linkedin.jsonl # Raw scraped jobs
├── hh.jsonl # Raw vacancies
└── ranked/
├── ranked.jsonl # All jobs with scores
└── report.md # Top-N markdown report
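The ranking step can be sketched with a plain bag-of-words cosine score. The real pipeline ranks with a local LLM; this stdlib-only stand-in only shows the score-and-sort shape:

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term frequencies (Latin and Cyrillic tokens)."""
    return Counter(re.findall(r"[a-zа-яё]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_jobs(cv: str, jobs: list[dict]) -> list[dict]:
    """Score each posting against the CV and return highest-first."""
    cv_vec = vectorize(cv)
    for job in jobs:
        job["score"] = round(cosine(cv_vec, vectorize(job["text"])), 3)
    return sorted(jobs, key=lambda j: j["score"], reverse=True)

# Hypothetical CV and postings
cv = "DevSecOps engineer: Kubernetes, Docker, CI/CD, Python, security scanning"
jobs = [
    {"title": "DevSecOps Engineer", "text": "Kubernetes Docker CI/CD security"},
    {"title": "Accountant", "text": "bookkeeping invoices payroll"},
]
ranked = rank_jobs(cv, jobs)
```

Swapping `cosine` for an LLM judgment call keeps the same interface: each job gains a `score`, and `ranked.jsonl` / `report.md` are just serializations of the sorted list.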
DevSecOps-5090 — GPU Training Pipeline on Kubernetes
Challenge
Fine-tuning Large Language Models typically requires expensive cloud GPU instances or a complex local setup. Build a production-ready, self-hosted training pipeline that:
Universal Knowledge Extractor — LLM Training Data Pipeline
Challenge
Fine-tuning LLMs on domain-specific knowledge requires structured, high-quality instruction-response pairs. Manual curation doesn’t scale, and raw content from code repos, docs, and social media needs significant preprocessing before it’s usable for training.
Build a pipeline that:
Extracts knowledge from diverse sources (code, docs, Telegram, LinkedIn)
Automatically discovers content taxonomy
Produces ChatML-formatted JSONL ready for Axolotl/LLaMA-Factory
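One plausible shape for the final serialization step, assuming the `messages` chat layout accepted by Axolotl and LLaMA-Factory chat templates. The field names and the `train.jsonl` filename are a sketch, not the project's exact schema:

```python
import json

def to_chatml_record(instruction: str, response: str,
                     system: str = "You are a helpful assistant.") -> str:
    """Serialize one instruction-response pair as a single JSONL line."""
    record = {"messages": [
        {"role": "system", "content": system},
        {"role": "user", "content": instruction},
        {"role": "assistant", "content": response},
    ]}
    return json.dumps(record, ensure_ascii=False)

# Hypothetical extracted pair, written as one line per record
pairs = [("What does `kubectl get pods` do?",
          "It lists pods in the current namespace.")]
with open("train.jsonl", "w", encoding="utf-8") as f:
    for instr, resp in pairs:
        f.write(to_chatml_record(instr, resp) + "\n")
```

Keeping one record per line means downstream trainers can stream the file without loading the full dataset into memory.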
Build a complete real estate platform for the Belarusian market that enables property search via natural language (Russian/Belarusian), provides automated property valuations, connects buyers with verified realtors, and runs entirely as a Telegram Mini-App — all without recurring cloud API costs.
Solution Architecture
Platform Overview
A modular monolith deployed via Docker Compose with 22 services covering the full real estate lifecycle: search, valuation, listings, realtor marketplace, and admin operations.
Real Estate · NLP · AVM · Telegram Mini-App · Local LLM · Zero API Costs
Large Language Models have limited context windows, and latency grows with prompt size. When analyzing large codebases, documents, or complex multi-part questions, single-shot prompts either exceed the context limit or produce slow, unfocused responses.
Build an agent that:
Intelligently splits large prompts into semantic sub-tasks
Executes sub-tasks in parallel for speed
Synthesizes results into coherent responses
Runs entirely on local Ollama (zero API costs)
Solution Architecture
Workflow
User Prompt (large)
↓
[Analyze] - Count tokens, identify boundaries
↓
[Decide] - Decompose or single call?
├─→ Small (<6K tokens) → Single Ollama call → Return
└─→ Large (>6K tokens) → Decomposition
↓
[Split] - Semantic decomposition (headings, lists, paragraphs)
↓
[Execute] - Parallel sub-task execution (3 concurrent)
↓
[Aggregate] - Result synthesis (auto-strategy selection)
↓
[Return] - Final coherent response
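The workflow above can be sketched in asyncio. Here `call_model` is a stub standing in for the Ollama HTTP call, the token count is a rough characters-per-token estimate, and the 6K threshold and 3-way concurrency mirror the diagram:

```python
import asyncio
import re

MAX_SINGLE_CALL_TOKENS = 6_000   # decomposition threshold from the workflow
CONCURRENCY = 3                  # parallel sub-tasks

def count_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token (no tokenizer dependency)."""
    return len(text) // 4

def split_semantic(prompt: str) -> list[str]:
    """Split on Markdown headings; fall back to paragraphs."""
    parts = re.split(r"\n(?=#{1,6} )", prompt)
    if len(parts) == 1:
        parts = [p for p in prompt.split("\n\n") if p.strip()]
    return parts

async def call_model(chunk: str) -> str:
    """Stub for a local Ollama call (e.g. POST /api/generate)."""
    await asyncio.sleep(0)       # stand-in for inference latency
    return f"[summary of {count_tokens(chunk)} tokens]"

async def run(prompt: str) -> str:
    if count_tokens(prompt) < MAX_SINGLE_CALL_TOKENS:
        return await call_model(prompt)          # small: single call
    sem = asyncio.Semaphore(CONCURRENCY)
    async def bounded(chunk: str) -> str:
        async with sem:
            return await call_model(chunk)
    results = await asyncio.gather(*(bounded(c) for c in split_semantic(prompt)))
    return "\n".join(results)                    # aggregate: concatenation strategy

# A prompt large enough to trigger decomposition into two heading-bounded chunks
answer = asyncio.run(run("# A\n" + "x" * 30_000 + "\n# B\nshort section"))
```

The semaphore bounds concurrency so a local GPU is never asked to serve more than three generations at once; real aggregation would pick a synthesis strategy instead of plain concatenation.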