user@aliuyar.dev : ~ $

// initialising …

// session ok

NAME
Ali Uyar
ROLE
AI Systems Engineer
LOCUS
Vienna, Austria · UTC+01
STATE
In a full-time role

objective

I build AI products that hold up in practice —
retrieval, agents, and model-reliant workflows where
reliability, evidence, and technical judgment matter
more than the demo.

§ 01 Principles

01/03

Built for pressure

Deterministic workflows, clearer failure modes, and decisions that do not collapse once the system has real users and real stakes.

02/03

Evidence before confidence

Evaluation, traceability, and auditability belong in the product itself, not as reassurance added later.

03/03

Judgment over theatre

Useful AI systems come from better sequencing, better constraints, and clearer choices — not just a more polished demo.

§ 02 Research
§ 03 Selected projects
№ 01

DecisionGraph.py

DecisionGraph is a local-first library for immutable AI decision traces. It is used for audit-ready records and deterministic replays in compliance and approval workflows.

Python source
30.01.26
★ 2
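
The core mechanism can be sketched in a few lines (a hypothetical API, not DecisionGraph's actual interface): an append-only trace where each entry carries the hash of its predecessor, so any mutation is detectable and replays are deterministic.

```python
import hashlib
import json

class DecisionTrace:
    """Append-only decision log; each entry hashes its predecessor."""

    def __init__(self):
        self._entries = []

    def append(self, decision: dict) -> str:
        prev = self._entries[-1]["hash"] if self._entries else "genesis"
        # Canonical JSON (sorted keys) keeps the hash deterministic.
        payload = json.dumps(decision, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self._entries.append({"decision": decision, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any mutation breaks every later link."""
        prev = "genesis"
        for e in self._entries:
            payload = json.dumps(e["decision"], sort_keys=True)
            digest = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

trace = DecisionTrace()
trace.append({"step": "retrieve", "doc": "policy.md"})
trace.append({"step": "answer", "approved": True})
print(trace.verify())  # True: chain intact
trace._entries[0]["decision"]["doc"] = "other.md"
print(trace.verify())  # False: tampering detected
```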
№ 02

SchemaPilot.py

SchemaPilot is a governance-first data platform for building AI-ready datasets. It is used to control schema quality, access policies, and deterministic pipelines.

Python source
19.02.26
★ 1
№ 03

PECR.rs

PECR is a policy-enforced runtime for AI context retrieval. It is used to produce deterministic responses with auditable evidence and clear permission outcomes.

Rust source
19.02.26
★ 1
№ 04

cost-watchdog.ts

Cost Watchdog is a self-hosted cost monitoring and anomaly detection platform. It is used to ingest cost data, detect anomalies automatically, and notify teams before overspend appears in monthly or yearly reporting.

TypeScript source
13.02.26
★ 1
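
A minimal sketch of the detection idea (illustrative only; names, window, and threshold are assumptions, not Cost Watchdog's implementation): flag a cost point that sits several standard deviations above a trailing window.

```python
from collections import deque
from statistics import mean, stdev

def make_cost_monitor(window: int = 7, threshold: float = 3.0):
    """Flag a cost as anomalous when it sits more than `threshold`
    standard deviations above the trailing window's mean."""
    history = deque(maxlen=window)

    def check(cost: float) -> bool:
        anomalous = False
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            # Guard against a perfectly flat history (sigma == 0).
            anomalous = sigma > 0 and (cost - mu) / sigma > threshold
        history.append(cost)
        return anomalous

    return check

check = make_cost_monitor()
for cost in [100, 102, 99, 101, 100, 98]:
    check(cost)        # baseline days: all return False
print(check(500))      # True: far above the trailing deviation
```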
№ 05

VeilPack.rs

VeilPack is an offline, fail-closed privacy gate for enterprise data pipelines. It is used to detect sensitive values, redact outputs, and ship verifiable data packs.

Rust source
10.02.26
★ 1
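
The fail-closed contract can be sketched like this (hypothetical patterns and function names; a real gate uses far richer detectors): redact everything detected, and if redaction itself errors, emit nothing at all.

```python
import re

# Hypothetical detector patterns; a production gate would use many more.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace every detected sensitive value with a labelled placeholder.
    Fail closed: on any detector error, refuse to emit the text at all."""
    try:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED:{label}]", text)
        return text
    except Exception:
        # Better to ship nothing than to leak a value.
        return "[BLOCKED: redaction failed]"

print(redact("Contact alice@example.com, IBAN AT611904300234573201"))
# Contact [REDACTED:email], IBAN [REDACTED:iban]
```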
§ 04 Outcomes

case_01

Policy-Enforced Context Runtime

problem
Plain RAG pipelines left governance gaps around policy, provenance, and deterministic failure handling.
build
Built a two-plane runtime with a non-privileged controller, policy-enforcing gateway, immutable evidence units, and deterministic terminal modes.
result
Delivered replayable, auditable AI retrieval runs with clear permission and evidence outcomes for safer production deployment.
→ PECR
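
The terminal-mode idea can be sketched as follows (hypothetical names, not PECR's API): every request collapses to exactly one of a fixed set of outcomes, with evidence attached on success and any error forced into a safe terminal, so runs stay replayable.

```python
from dataclasses import dataclass
from enum import Enum

class Terminal(Enum):
    ALLOW = "allow"          # request served, evidence attached
    DENY = "deny"            # policy refused the request
    FAIL_CLOSED = "closed"   # any error collapses to a safe terminal

@dataclass(frozen=True)
class Outcome:
    mode: Terminal
    evidence: tuple = ()     # immutable evidence units

def gate(user_roles: set, doc_acl: set, fetch) -> Outcome:
    """Policy-enforcing gateway: every code path ends in exactly
    one of the three terminal modes."""
    try:
        if not user_roles & doc_acl:
            return Outcome(Terminal.DENY)
        return Outcome(Terminal.ALLOW, evidence=tuple(fetch()))
    except Exception:
        return Outcome(Terminal.FAIL_CLOSED)

ok = gate({"analyst"}, {"analyst", "admin"}, lambda: ["doc-7 §2"])
print(ok.mode, ok.evidence)                          # an ALLOW with its evidence
print(gate({"intern"}, {"admin"}, lambda: []).mode)  # a DENY
print(gate({"analyst"}, {"analyst"}, lambda: 1 / 0).mode)  # a FAIL_CLOSED
```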

case_02

SchemaPilot

problem
Enterprise data lived across inconsistent sources, making AI usage hard to govern, trust, and operationalize.
build
Built a governance-first data platform with gateway-enforced access controls, deterministic bronze/silver/gold pipelines, and operator-grade tooling.
result
Created an AI-ready, queryable data foundation with stronger auditability, safer defaults, and more predictable delivery workflows.
→ SchemaPilot
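
A minimal sketch of a deterministic bronze/silver/gold flow (illustrative stage functions; the field names and source data are assumptions): raw records land as-is, get conformed to a schema, then aggregate into a query-ready table, with the same input always producing the same output.

```python
def bronze(raw: list[str]) -> list[dict]:
    """Land raw records unchanged, tagged with their source."""
    return [{"source": "crm", "raw": line} for line in raw]

def silver(rows: list[dict]) -> list[dict]:
    """Clean and conform: enforce the schema, drop malformed rows."""
    out = []
    for r in rows:
        name, _, value = r["raw"].partition(",")
        if name and value.isdigit():
            out.append({"name": name.strip(), "value": int(value)})
    return out

def gold(rows: list[dict]) -> dict[str, int]:
    """Aggregate into an AI-ready, queryable summary."""
    totals: dict[str, int] = {}
    for r in rows:
        totals[r["name"]] = totals.get(r["name"], 0) + r["value"]
    return totals

raw = ["acme,10", "acme,5", "broken-row", "globex,7"]
print(gold(silver(bronze(raw))))  # {'acme': 15, 'globex': 7}
```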

case_03

ContextGraph

problem
Teams lacked trustworthy process visibility across Slack, Jira, and GitHub without creating surveillance risk.
build
Built a self-hosted process intelligence platform with fail-closed permissions, private personal timelines, and k-anonymous analytics.
result
Improved process insight and next-step guidance while preserving privacy boundaries and policy-first controls.
→ ContextGraph
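
The k-anonymous analytics control can be sketched in its simplest form, minimum-group-size suppression (function name and threshold are assumptions): a metric is only reported when at least k people contribute to it, so no small group can be singled out.

```python
from collections import Counter

def k_anonymous_counts(events: list[str], k: int = 5) -> dict[str, int]:
    """Aggregate event counts, suppressing any bucket smaller than k."""
    counts = Counter(events)
    return {name: n for name, n in counts.items() if n >= k}

events = ["review"] * 8 + ["deploy"] * 6 + ["hotfix"] * 2
print(k_anonymous_counts(events))  # {'review': 8, 'deploy': 6}; 'hotfix' suppressed
```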
§ 05 Focus

// past the demo, into the weather

I'm most useful when a team is past the demo stage and trying to make something dependable: deciding what the system should be trusted to do, where evidence should exist, and which trade-offs are worth making.

That tends to sit between engineering, product, and delivery. I like small teams, clear questions, and work that leaves a system easier to understand than it was before.