Hi, I'm Adam Shadi

I spent the last year building with AI. Here's what I shipped.

5 live products, built from scratch — not from tutorials. I understand what customers need because I've worked the queues. I build what they need because I taught myself how.

5 Products Shipped
95%+ Customer Satisfaction
100+ Tickets Handled Daily
46% Above Team Close Rate

I got tired of filing feature requests — so I started building instead.

My career started in customer success and sales. At CoinList, I handled 100+ tickets daily at 95%+ CSAT. At Haselwood Auto, I earned a 930 NPS with 97% perfect surveys. I wasn't just closing — I was finding the patterns behind what kept breaking and fixing them upstream.

That instinct led me to AI. I taught myself prompt engineering, multi-model orchestration, and workflow automation — then proved it by shipping 5 live products from scratch. No bootcamp. No certificate. Just working software that real people use.

Now I'm looking for a team where both sides matter: Customer Success, AI Operations, or AI Enablement — fully remote. I don't just flag the problem in a ticket. I build the fix. Let's talk.

I built my own AI development team. It ships autonomously.

I needed a faster way to ship, so I built one. Seven AI agents — research, specs, code, security, marketing, support, and an orchestrator — that run autonomously through a pipeline I designed. Say "go" and it ships. Every product below was built through this system.

Claude API · Multi-Agent · Prompt Engineering · QA Automation · Marketing Ops · Node.js · Vercel
7
Agents
0
Skills
0
Avg QA Score
0
Retries
01

Quality Gate System

Every pipeline output passes through weighted scoring rubrics. Agents don't just generate — they evaluate, score, and enforce thresholds before anything ships.

3x weight

Spec Compliance

Does the code implement what was specified?

- Every endpoint, page, and feature from spec exists
- Auth gates match spec requirements
- Environment variables documented
- Gap analysis: spec-to-code AND code-to-spec
2x weight

Copy Integration

Is marketing copy used verbatim — not paraphrased?

- All Hype agent copy appears word-for-word in UI
- No placeholder text ("Lorem ipsum", "Coming soon")
- CTAs match brand voice document
2x weight

Code Quality

Logic correctness, error handling, maintainability.

- No logic errors or unreachable code paths
- Error handling present at system boundaries
- No hardcoded values that should be config
1x weight

Security Basics

OWASP Top 10 coverage with attacker persona.

- No hardcoded secrets or API keys
- Input validation, parameterized SQL
- XSS prevention, CSRF tokens where needed
- Mandatory: "If I wanted free usage, what would I try?"
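The XSS prevention item above boils down to escaping user input before it reaches HTML. A minimal sketch of that kind of check — the helper name is illustrative, not code from the pipeline:

```javascript
// Minimal HTML-escaping helper: the kind of guard the Security Basics
// rubric looks for wherever user input reaches the UI. Illustrative only.
function escapeHtml(input) {
  const map = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return String(input).replace(/[&<>"']/g, (ch) => map[ch]);
}

console.log(escapeHtml('<script>alert("hi")</script>'));
// → &lt;script&gt;alert(&quot;hi&quot;)&lt;/script&gt;
```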

Simulated Score Breakdown

Spec Compliance: 92
Copy Integration: 88
Code Quality: 76
Security Basics: 85
Weighted Score: 86
PASS — threshold is 80%. Scores of 60-79% trigger a revision loop; below 60% is critical.
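The gate above can be sketched in a few lines of Node.js. Weights and sample scores come from this section; the function names are an illustration, not the pipeline's actual code:

```javascript
// Quality gate sketch: weighted rubric scores → pass / revise / critical.
// Weights (3x/2x/2x/1x) and sample scores match the breakdown above.
const rubric = [
  { name: 'Spec Compliance', weight: 3, score: 92 },
  { name: 'Copy Integration', weight: 2, score: 88 },
  { name: 'Code Quality', weight: 2, score: 76 },
  { name: 'Security Basics', weight: 1, score: 85 },
];

function weightedScore(items) {
  const totalWeight = items.reduce((sum, r) => sum + r.weight, 0);
  const weighted = items.reduce((sum, r) => sum + r.weight * r.score, 0);
  return Math.round(weighted / totalWeight);
}

function gate(score) {
  if (score >= 80) return 'PASS';
  if (score >= 60) return 'REVISE'; // triggers the revision loop
  return 'CRITICAL';
}

const score = weightedScore(rubric); // (3*92 + 2*88 + 2*76 + 1*85) / 8 = 86
console.log(score, gate(score));     // 86 PASS
```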
Why Async/await trace is mandatory. For every async call: is it awaited? If not, the caller holds a pending Promise: property access returns undefined, `!undefined` is `true`, and a guard built on that value takes the wrong branch. The review traces actual execution — not intended behavior.
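A minimal reproduction of the bug class this trace catches. The `checkQuota` helper is hypothetical, with the over-quota flag hardcoded for the demo:

```javascript
// Hypothetical quota check: the user IS over quota, so a correct
// handler must block the request.
function checkQuota() {
  return Promise.resolve({ exceeded: true });
}

// Buggy: the Promise is never awaited. `quota` is a pending Promise,
// so `quota.exceeded` is undefined, undefined is falsy, the guard
// never fires, and the over-quota user is served for free.
function buggyHandler() {
  const quota = checkQuota(); // missing await
  if (quota.exceeded) return 'blocked';
  return 'served';
}

// Fixed: awaited, so the guard sees the real flag.
async function fixedHandler() {
  const quota = await checkQuota();
  if (quota.exceeded) return 'blocked';
  return 'served';
}

console.log(buggyHandler());      // 'served' — the ACTUAL execution path
fixedHandler().then(console.log); // 'blocked' — the intended one
```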
02

Prompt Architecture

Three-layer progressive disclosure keeps context windows efficient. Skills load only what's needed, when it's needed.

Level 1: Metadata. Name and description only. Loaded into every conversation. The description determines when the skill triggers — it's optimized for recall, not readability.

name: reflection--code-review
description: Deduction-based code quality scoring with weighted rubrics, async/await trace analysis, and attacker persona security review. Triggers on: code review, quality check, "is this code safe", or any Forge output before pipeline continues.
Design Description is a trigger classifier, not documentation. It's tuned so the model invokes the skill at the right moment — like training a retrieval function.
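One way to picture "description as trigger classifier" is a dispatcher that scores each skill's description against the incoming request. Keyword overlap stands in for the model's own trigger judgment here; the second skill's name and description are invented for the example:

```javascript
// Toy dispatcher: picks the skill whose description best matches the
// request. Keyword overlap stands in for the model's own judgment;
// the hype skill entry is hypothetical.
const skills = [
  {
    name: 'reflection--code-review',
    description: 'code review quality check is this code safe forge output',
  },
  {
    name: 'hype--marketing-copy',
    description: 'landing page copy headlines ctas brand voice',
  },
];

function matchSkill(request, catalog) {
  const words = request.toLowerCase().split(/\W+/).filter(Boolean);
  let best = null;
  let bestHits = 0;
  for (const skill of catalog) {
    const hits = words.filter((w) => skill.description.includes(w)).length;
    if (hits > bestHits) { best = skill; bestHits = hits; }
  }
  return best ? best.name : null;
}

console.log(matchSkill('is this code safe to ship?', skills));
// → reflection--code-review
```

Tuning the real description is the same exercise: maximize the chance the right requests "hit" while unrelated ones don't.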

Level 2: Instructions. Full instructions injected when the skill is invoked. Structured as methodology, not rigid rules. Explains the why behind every instruction so the model can handle edge cases.

# From the actual Reflect-Code skill body:

## Async/Await Trace (Critical)
For every async function call:
1. Is the call `await`ed?
2. Is the return value used?
3. If missing `await`, trace exact runtime behavior:
   - What does the Promise object look like used synchronously?
   - What happens when you access `.allowed` on a pending Promise?
   - `!undefined` is `true` — does the conditional pass?
   - Write the ACTUAL execution path, not the intended path

## Attacker Persona (Mandatory Output)
After line-by-line review, answer:
1. "If I wanted unlimited free usage, what would I try?"
2. "If I wanted to use without paying?"
3. "Which identifiers can be rotated, spoofed, omitted?"
4. "Which safety checks exist but aren't called/awaited?"
Why Theory of mind over rigid MUSTs. LLMs are smart — if you explain why a check matters, the model generalizes to edge cases instead of pattern-matching to specific examples.

Level 3: Bundled Resources. Scripts, templates, reference docs. Only loaded when the skill explicitly reads them during execution. Keeps the context window clean.

# Skill directory structure
skill-name/
├── SKILL.md              # L2: instructions (required)
│   ├── YAML frontmatter  # L1: name + description
│   └── Markdown body     # L2: methodology
└── Bundled Resources     # L3: loaded on demand
    ├── scripts/          # Executable code
    ├── references/       # Docs read during execution
    └── assets/           # Templates, fonts, icons
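Level 1 loading amounts to reading only the YAML frontmatter out of SKILL.md and leaving the body untouched until invocation. A sketch of that split, using naive line-based parsing as a simplification (real YAML needs a proper parser):

```javascript
// Pull only the Level-1 metadata (name + description) out of a SKILL.md
// string. Naive line parsing — a simplification, not a YAML parser.
function readMetadata(skillMd) {
  const match = skillMd.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return null;
  const meta = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx > 0) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return meta;
}

const skillMd = [
  '---',
  'name: reflection--code-review',
  'description: Deduction-based code quality scoring.',
  '---',
  '# Methodology body, loaded only on invocation',
].join('\n');

console.log(readMetadata(skillMd).name); // reflection--code-review
```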

Spec Handoff: 16-Section Structured Output

Every product spec follows the same structure. The Blueprint agent fills all 16 sections, then hands off to Forge with: "Read SPEC.docx and CLAUDE.md. Begin Phase 1."

1. Cover + Summary Table
2. Architecture + Stack
3. Database Schema
4. API Endpoints
5. Core Business Logic
6. Frontend Key Pages
7. Environment Variables
8. Build Order (5-7 phases)
9. Pricing & Revenue
10. Key Decisions
11. Credentials Setup
12. File/Folder Structure
13. OAuth Callback URLs
14. Security Headers
15. Deployment Config
16. Error Response Envelope
Design Build order is phased so each phase is independently testable. Phase 1 is always: project init + models + database + auth. Last phase is always: deploy + README. This means if any phase fails, you know exactly where.
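The "fail and know exactly where" property falls out of running phases strictly in order. A sketch of such a runner — phase names follow the build-order convention above, but the runner itself and the simulated failure are illustrative:

```javascript
// Phased build runner sketch: phases execute in order, and the runner
// reports exactly which phase failed. Illustrative, not pipeline code.
function runPhases(phases) {
  for (let i = 0; i < phases.length; i++) {
    const ok = phases[i].run();
    if (!ok) return { status: 'failed', phase: i + 1, name: phases[i].name };
  }
  return { status: 'shipped', phase: phases.length };
}

const phases = [
  { name: 'init + models + database + auth', run: () => true },
  { name: 'core business logic', run: () => true },
  { name: 'frontend key pages', run: () => false }, // simulate a failure
  { name: 'deploy + README', run: () => true },
];

console.log(runPhases(phases));
// → { status: 'failed', phase: 3, name: 'frontend key pages' }
```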

Four products. No team. No bootcamp. All live.

Live

BrandNamer

AI product naming + domain checker

Founders describe their product, get AI-generated name ideas with instant domain availability. No more brainstorming for hours only to find the .com is taken. Live with real users and organic search traffic.

Next.js · Claude API · Domain API · Stripe · SEO
Live

PrecisionPrompts

AI prompt generator + marketplace

Custom AI prompt generator plus a marketplace of 15 curated prompt packs for marketing, business, and freelancing. Free web tool, downloadable PDFs, and a Chrome extension — with Stripe payments and Supabase auth.

Claude API · Stripe · Supabase · Chrome Extension · Vercel
Beta

MentionPulse

Reddit brand monitoring SaaS

Tracks Reddit for brand mentions, keyword triggers, and competitor chatter — then sends daily email digests with sentiment analysis. Built for founders who want to know what people say about them without doomscrolling.

Node.js · Express · Reddit API · Resend · Vercel
Beta

LearnFlow

AI tool discovery platform

Search 100+ AI tools by use case and get matched to the right ones for your workflow. Dynamic tool pages with reviews, affiliate integrations, and a course subscription platform for going deeper.

Next.js · TypeScript · Stripe · Affiliate · SEO

Two skill sets. One person.

Customer Success

Customer Onboarding · Retention Strategy · Health Monitoring · Escalation Management · Churn Prevention · Account Management · CRM Systems · Zendesk / Freshdesk

AI & Engineering

Prompt Engineering · Multi-LLM Orchestration · Agent Pipelines · Workflow Automation · Local LLM Deployment · Node.js / Next.js · Claude / ChatGPT / Ollama · Vercel / Supabase / Stripe · N8N

Support queues. Sales floors. Then shipping code.

Jul 2025 — Present

Independent AI Builder

Went all-in on AI. Taught myself prompt engineering, multi-model orchestration, and workflow automation by shipping — not studying. Five live products and a fully autonomous build pipeline. The work is above.

Jul 2024 — Jul 2025

RV Sales Associate

Camping World — Silverdale, WA

13% closing rate (46% above team average). Managed 200+ monthly CRM leads with data-driven follow-up. Built AI-powered internal tools for follow-up sequencing and inventory analysis. Cross-functional coordination across service, sales, and parts to prevent delivery issues.

Mar 2023 — Jul 2024

Sales Consultant

Haselwood Auto Group — Bremerton, WA

930 NPS, 97% perfect ratings across ~100 transactions. Highest yearly average gross profit per vehicle ($4,996). Systematic post-sale follow-up process and proactive issue resolution.

Feb 2022 — Jul 2022

Customer Support Analyst

CoinList — Remote

100+ tickets/day, 95%+ CSAT, <2hr first response time. Drove a 20% CSAT increase by identifying automation friction points. Cut repeat ticket volume 15% through self-service documentation. Turned customer feedback into 3 shipped features.

2014 — 2017

A.A. Business

Miami University

2022

Customer Success Foundations

Aspireship — Certificate

I'm looking for the right team.

Customer Success, AI Operations, or AI Enablement — fully remote. Most people in these roles either understand the customer or understand the tech. I do both, and I have the shipped products to prove it. If that's useful to you, let's talk.