Omair Saleh

Full-Stack Engineer · AI-Native Builder

I build entire production
systems end-to-end.

37 repositories across three organisations — donation platforms, data pipelines, AI agents, logistics portals, fintech systems. From messy business problems to working software.

Each built solo, end-to-end. The interesting part isn't the feature list — it's the mess I walked into and the decisions that shaped how I built my way out.

01

CharityRight — Donation Infrastructure Rebuild

Next.js 15 · Bun · Stripe · PostgreSQL · SQLite · Prisma · Docker · Live ↗

I joined a UK charity where the technology was in fragments. The website had been built by an external agency. Every donate button on the site still routed through their hosted checkout — a system the charity didn't own or control. The mobile donation experience took over 30 seconds to navigate, with double form fills and a clunky UI. Donations were scattered across three separate platforms — the agency's checkout, Enthuse, and LaunchGood — each with different payout cycles (2 days vs weekly vs sometimes two months), different fee structures, and inconsistent Gift Aid handling. The CRM integration was broken: donation data wasn't flowing into the system properly. Gift Aid reconciliation was largely manual.

Over 18 months as the sole developer, I systematically replaced every external dependency — 14 repositories in total. Built a new checkout from scratch — multi-step with progressive disclosure, Stripe PaymentIntents for one-off, SetupIntents with off-session charging for recurring (the system only allows amount increases on active plans — a deliberate constraint). Apple Pay and Google Pay. Three separate donation builder components with Zod validation. Built the P2P fundraising engine — individual fundraiser pages, team pages, leaderboards, URL-based attribution, all backed by a Prisma schema with 15 interconnected models. Built a config-driven campaign landing page system with a 30-field campaign interface — slug, amounts, impact messaging, widget type, social proof, custom HTML support — so the fundraising team could launch Ramadan appeals without waiting for a developer.
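
The "amount increases only" constraint on active recurring plans can be sketched as a small guard. This is a minimal illustration, not the production code; `Plan` and `update_plan_amount` are hypothetical names:

```python
# Sketch of the rule: an active recurring plan may only grow.
# Decreases require an explicit cancel-and-resubscribe, which keeps
# donor intent unambiguous. Names here are illustrative.
from dataclasses import dataclass


class PlanAmountError(ValueError):
    pass


@dataclass
class Plan:
    id: str
    amount_pence: int   # current recurring amount, in pence
    status: str         # "active" | "cancelled"


def update_plan_amount(plan: Plan, new_amount_pence: int) -> Plan:
    if plan.status != "active":
        raise PlanAmountError("only active plans can be changed")
    if new_amount_pence <= plan.amount_pence:
        raise PlanAmountError("active plans may only increase in amount")
    plan.amount_pence = new_amount_pence
    return plan
```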

Then I built the data layer. Three separate sync pipelines for each donation platform: Enthuse required Playwright browser automation with a four-layer auth chain — SSO login triggers a magic-link email, Gmail OAuth reads it, Playwright clicks it, then optional TOTP via speakeasy. LaunchGood required Playwright to navigate the export UI, check an acknowledgment checkbox, and capture the CSV download stream. Both use session persistence via storageState.json, safeClick() retries, and screenshot dumps on failure. A third pipeline syncs the charity's own MySQL database into PostgreSQL — five data streams (one-time donations, scheduled recurring, supporters, appeals, legacy donations), each joining 10+ source tables, landing in raw tables then transforming into a canonical model. That canonical model feeds N3O CRM queue payloads and a donor messaging system built on n8n + Chatwoot + WAHA with a database-backed send queue. Terminated the agency retainer. Migrated hosting.
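
The safeClick() pattern boils down to retry-with-backoff plus a failure artifact. A minimal sketch of that shape, with `action` and `on_failure` standing in for Playwright's click and screenshot calls (the real implementation is browser-bound):

```python
# Generic retry wrapper for a flaky UI action: retry with linear
# backoff, and dump a debugging artifact (e.g. a screenshot) only
# after the final attempt fails. Callables are illustrative stand-ins
# for page.click() and page.screenshot().
import time
from typing import Callable


def safe_click(action: Callable[[], None],
               on_failure: Callable[[], None],
               retries: int = 3,
               delay: float = 0.0) -> None:
    last_exc: Exception | None = None
    for attempt in range(retries):
        try:
            action()
            return
        except Exception as exc:
            last_exc = exc
            if attempt < retries - 1:
                time.sleep(delay * (attempt + 1))
    on_failure()          # e.g. page.screenshot(path="failure.png")
    raise last_exc
```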

The decision that mattered

Nobody told me to handle Zakat compliance. I just knew that if a Muslim donor selects Zakat, the system needs to give them a clear choice about admin fees — because Zakat is a religious obligation with specific rules about how the money is used, and that choice needs to be explicit, not buried. I built it as a per-donation and per-plan toggle so the donor controls what happens, not the system. Nobody told me to move Gift Aid capture to post-payment either. But asking a donor for their home address before they've committed to giving kills conversion. I moved it. HMRC still gets what they need. The charity gets more donations. These aren't in any spec. They're the kind of decisions you only make if you understand the domain, not just the technology.
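
The Zakat fee toggle reduces to one invariant: the gift itself stays whole, and any admin-fee contribution is a separate, opt-in line on top. A sketch of that split, with illustrative field names (not the production schema):

```python
# Hedged sketch of the per-donation Zakat fee choice: 100% of a Zakat
# gift is allocated to Zakat; an admin-fee contribution is charged on
# top only if the donor explicitly opted in. Field names are assumed.
from dataclasses import dataclass


@dataclass
class DonationIntent:
    amount_pence: int
    is_zakat: bool
    cover_admin_fee: bool   # the donor's explicit choice
    admin_fee_pence: int = 0


def build_charge(d: DonationIntent) -> dict:
    fee = d.admin_fee_pence if d.cover_admin_fee else 0
    return {
        "zakat_pence": d.amount_pence if d.is_zakat else 0,
        "general_pence": 0 if d.is_zakat else d.amount_pence,
        "fee_pence": fee,
        "total_pence": d.amount_pence + fee,
    }
```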


02

AI Outreach Engine

Python · OpenAI · PostgreSQL · Apify · In Production

The same charity needed to reach potential partner organisations across the UK charity sector — hundreds of thousands of registered charities in the Charity Commission dataset. The fundraising team had no tooling for this. They were manually searching, manually qualifying, manually writing emails. It didn't scale and the outreach was generic.

I built a pipeline that ingests raw Charity Commission data into PostgreSQL, then uses OpenAI to translate natural language queries ("large education charities with income over £1M operating nationally") into SQL filter logic via a custom segment engine. Once leads are qualified, OpenAI generates personalised outreach — emails, talking points — based on each charity's actual profile, income band, and operational focus. Not templated mail-merge. Actually personalised against real data. The system enriches contacts through Apify to find the CEO, Director, or Head of Fundraising. The whole thing runs from a CLI with deterministic Python scripts underneath.
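
The deterministic half of that segment engine is the part worth sketching: the model emits a structured filter object, and plain code compiles it into parameterised SQL, so no LLM output ever reaches the database as raw SQL. Column names below are illustrative, not the real Charity Commission schema:

```python
# Compile a structured filter dict (the LLM's output) into
# parameterised SQL. The model never writes SQL directly; it only
# fills a constrained filter shape. Column names are assumptions.
def compile_segment(filters: dict) -> tuple[str, list]:
    clauses, params = [], []
    if "min_income" in filters:
        clauses.append("income_latest >= %s")
        params.append(filters["min_income"])
    if "scope" in filters:
        clauses.append("operating_scope = %s")
        params.append(filters["scope"])
    if "sector" in filters:
        clauses.append("classification ILIKE %s")
        params.append(f"%{filters['sector']}%")
    where = " AND ".join(clauses) or "TRUE"
    return f"SELECT charity_number, name FROM charities WHERE {where}", params
```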

The decision that mattered

The AI makes decisions, but the infrastructure is boring and reliable. On purpose. Retry logic, structured outputs, cost ceilings, caching layers, human-in-the-loop gates. Anyone can call OpenAI. The hard part is building the deterministic scaffolding that makes it safe to let AI call the shots — and knowing where the human needs to stay in the loop.
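
One piece of that scaffolding, sketched: a per-run cost ceiling that accumulates estimated spend and refuses further model calls past a hard cap. The pricing figure and class shape are placeholders, not the production values:

```python
# Sketch of a hard per-run cost ceiling around LLM calls: track
# estimated spend and raise before a call would cross the cap.
# The price-per-1k-tokens figure is a placeholder.
class CostCeilingExceeded(RuntimeError):
    pass


class CostGuard:
    def __init__(self, ceiling_usd: float):
        self.ceiling_usd = ceiling_usd
        self.spent_usd = 0.0

    def charge(self, tokens: int, usd_per_1k: float = 0.002) -> None:
        cost = tokens / 1000 * usd_per_1k
        if self.spent_usd + cost > self.ceiling_usd:
            raise CostCeilingExceeded(
                f"spend {self.spent_usd + cost:.4f} would exceed "
                f"ceiling {self.ceiling_usd:.2f}")
        self.spent_usd += cost
```

Calling `guard.charge(tokens)` before each model request means a runaway loop hits the ceiling, not the card.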


03

PledgeNow

Next.js 14 · PostgreSQL · Stripe · OpenAI · Gemini · Live ↗

Charities using third-party pledge platforms were losing money to fees, waiting weeks for payouts, and ceding control of the donor experience. The existing options — Enthuse, LaunchGood — were slow to pay out, inconsistent with Gift Aid handling, and charged admin fees the charity couldn't avoid. I built an alternative from scratch.

Full pledge checkout with Stripe integration — commit now, pay later, with three payment rails (bank transfer, GoCardless direct debit, card). Pledge lifecycle tracks through five statuses: new → initiated → paid → overdue → cancelled. Built-in bank statement reconciliation with configurable column mapping. But the real story is the AI "Steward": an embedded assistant that helps fundraisers manage their campaigns. It uses OpenAI with Gemini fallback, structured function calling with 13 tools that return rich visual cards — stats grids, pledge tables, campaign detail, WhatsApp status, nudge drafts. Every response is grounded in real pledge data.
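
The five-status lifecycle is easiest to keep honest as an explicit transition table, so illegal jumps fail loudly. The status names are from the system above; the exact transition set here is my assumption for illustration:

```python
# Sketch of the pledge lifecycle as an explicit state machine.
# Statuses come from the description; which transitions are legal
# (e.g. overdue can still become paid) is an assumed detail.
ALLOWED = {
    "new":       {"initiated", "cancelled"},
    "initiated": {"paid", "overdue", "cancelled"},
    "overdue":   {"paid", "cancelled"},
    "paid":      set(),        # terminal
    "cancelled": set(),        # terminal
}


def transition(current: str, target: str) -> str:
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal pledge transition {current} -> {target}")
    return target
```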

I stress-tested it across 31 scenarios in 10 categories, using both bash and Python harnesses. Dashboard overview, pledge lookups, campaign queries, setup status, action requests, navigation, edge cases (gibberish input, dangerous requests like "delete all our data"), brand compliance (regex checks for forbidden terms: "AI assistant", "language model", "ChatGPT", "artificial intelligence"), persona scenarios (event lead, treasurer asking about Gift Aid, attribution queries, pledge amendments, trustee report prep), and tone validation (gives specific advice, responds with empathy not shame). 31 of 31 pass.
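
The brand-compliance check itself is deliberately dumb: grep every response for the forbidden terms and fail on any hit. A minimal sketch, using the term list above (case-insensitivity is my assumption):

```python
# Sketch of the brand-compliance gate: scan a Steward response for
# forbidden self-identification terms. Term list is from the test
# suite description; case-insensitive matching is assumed.
import re

FORBIDDEN = [
    r"\bAI assistant\b",
    r"\blanguage model\b",
    r"\bChatGPT\b",
    r"\bartificial intelligence\b",
]
_PATTERN = re.compile("|".join(FORBIDDEN), re.IGNORECASE)


def violates_brand(response: str) -> bool:
    return _PATTERN.search(response) is not None
```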

The decision that mattered

The Steward never identifies as AI. It's a concierge, not a chatbot. The brand compliance tests grep the response for forbidden terms and fail the build if any appear. Every response references real pledge data, real campaign numbers, real donor activity — not vibes. If the data doesn't exist, the Steward says so instead of making something up. I built this because I've seen what happens when AI hallucinates in a trust-sensitive context. It destroys credibility instantly.

Also shipped

AI Command Center

Node.js · OpenAI · Sentry · GitLab API · PostgreSQL · MySQL

Six-agent autonomous system running on scheduled cycles. Safety agent checks command policy first. Reliability agent pulls unresolved Sentry issues and opens GitLab issues with severity mapping (p0/p1/p2). Code-steward reviews open merge requests, posts automated notes, blocks failed pipelines — up to 3 MRs per cycle. Analytics agent runs cross-database metric queries and publishes reports. Auto-pause on excessive API spend, command allowlists, dry-run mode. Nothing executes without human approval.

Hub Platform

Chatwoot · OpenAI · Node.js · Salla API

AI conversation intelligence for B2B customer service. When a support agent opens a conversation, the system pulls the customer's real order history from Salla and suggests contextual responses with structured function calling — actual order numbers, actual product names, actual interaction history. Not vibes-based autocomplete.

QuikCue — Self-Hosted Infrastructure

Node.js · Express · PostgreSQL · Ghost · Docker Swarm · Traefik · Live ↗

Full company platform: static site with live ship log API (CRUD + bearer auth + member filtering), dual Ghost blogs with separate databases, self-hosted Gitea, Chatwoot, n8n automation, WAHA for WhatsApp. Everything on Docker Swarm behind Traefik with auto-provisioned SSL. One server, one operator, zero managed services.

CIEF Worldwide — Logistics & Fintech Platform

Laravel 5.6–9 · Vue 2 · MySQL · Python · Docker

Four years building the core technology for a Malaysian logistics and financial services company. Shipping portal, money exchange platform (customer onboarding, transfer booking, Billplz payments, supplier settlement, accounting reconciliation, loyalty system), CRM with role-based teams, customer portal, analytics dashboard with ML forecasting (Apriori, regression), and a Python ETL pipeline extracting 49 MySQL tables into Google Cloud Storage. Nine repositories, six Laravel monoliths.

Problem first, not spec first

Most real problems don't come with specs. They come with deadlines, complaints, and half-understood business rules. I figure out what needs to exist, design it, and build it.

AI-native, not AI-curious

I don't prototype with AI. I ship with it. Every AI system I build has cost controls, structured outputs, human approval gates, and deterministic infrastructure underneath. The API call is the easy part.

Ship, then iterate

A working system in production teaches you more than six months of planning. I build the smallest thing that solves the real problem, ship it, and improve based on what actually happens.

Reduce complexity, increase leverage

Most software problems are complexity problems disguised as feature requests. I look for the architectural move that makes ten future problems disappear instead of solving them one by one.

Let's talk.

I don't need onboarding. I need a problem and a git repo.

Location: Kuala Lumpur, Malaysia — overlaps London hours

Available immediately.