Ahmed Taha, intrigued by the opportunity to apply technical skills to the legal domain, became a Patent Examiner at the USPTO, specializing in computer graphics and machine learning technologies. He brings deep technical expertise to patent examination, built on nearly five years of software engineering at District Hut and graduate study in cybersecurity, machine learning, and quantum computation at Johns Hopkins and Columbia. He is currently pursuing an M.S. in Computer Science at Columbia while examining patents, conducting ML research, and preparing for the Patent Bar.
Software
Feed Recommender
Two-tower neural recommendation system for news articles using PyTorch, FAISS, and LightGBM. Features sub-15ms retrieval across 1M+ items with production-ready FastAPI serving.
Architecture
+------------------------------------------------------------+
|                     TWO-STAGE PIPELINE                     |
+------------------------------------------------------------+
| Two-Tower Encoders -> InfoNCE -> 128d User/Item Embeds     |
|        |                                                   |
|        v                                                   |
| Retrieval: FAISS IVF Index -> Top-K Candidates (sub-15ms)  |
|        |                                                   |
|        v                                                   |
| Reranking: LightGBM (similarity, popularity, category)     |
|        |                                                   |
|        v                                                   |
| Serving: FastAPI <-> Redis Cache <-> Async Workers         |
+------------------------------------------------------------+
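The retrieval stage above can be sketched framework-agnostically. This NumPy toy is illustrative only, not the project's code: random unit vectors stand in for trained tower outputs, and a brute-force scan stands in for the FAISS IVF index, showing the inner-product top-k lookup the first stage performs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the trained tower outputs: 128-d unit embeddings.
D, N_ITEMS = 128, 1000
item_embeds = rng.normal(size=(N_ITEMS, D))
item_embeds /= np.linalg.norm(item_embeds, axis=1, keepdims=True)

def retrieve_top_k(user_embed: np.ndarray, k: int = 10) -> np.ndarray:
    """Stage 1: exhaustive inner-product retrieval. In production a
    FAISS IVF index replaces this O(N) scan to hit sub-15ms latency."""
    scores = item_embeds @ user_embed       # cosine similarity on unit vectors
    return np.argpartition(-scores, k)[:k]  # unordered top-k candidate indices

user = rng.normal(size=D)
user /= np.linalg.norm(user)
candidates = retrieve_top_k(user, k=10)
```

The reranker then scores only these k candidates with richer features, which is why the cheap-but-approximate retrieval stage can afford to scan 1M+ items.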
Neural Cryptanalyst
Machine learning models achieving 89-91% accuracy for network intrusion detection and malware classification using TensorFlow.
Architecture
+------------------------------------------------------------+
|                 SIDE-CHANNEL ANALYSIS FLOW                 |
+------------------------------------------------------------+
| ASCAD Traces -> Preprocessing (align/filter/POI)           |
| -> Model Family (CNN/LSTM/Transformer/GPAM)                |
| -> Profiled and Non-Profiled Attacks                       |
| -> Countermeasure Evaluation                               |
| -> Metrics (Guessing Entropy, SR, MI)                      |
+------------------------------------------------------------+
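Guessing entropy, the headline metric in the flow above, is the rank of the true key after accumulating model evidence across traces. A minimal sketch, with synthetic log-probabilities standing in for a trained model's softmax outputs (the bias term is an assumption, not measured behavior):

```python
import numpy as np

rng = np.random.default_rng(1)
N_KEYS, N_TRACES = 256, 50
true_key = 42

# Hypothetical per-trace log-probabilities over 256 key guesses,
# biased toward the true key to mimic a trained classifier.
logp = rng.normal(size=(N_TRACES, N_KEYS))
logp[:, true_key] += 2.0

def guessing_entropy(log_probs: np.ndarray, key: int) -> int:
    """Rank of the correct key after summing log-likelihoods across
    traces (0 means the key is recovered on the first guess)."""
    scores = log_probs.sum(axis=0)
    ranking = np.argsort(-scores)
    return int(np.where(ranking == key)[0][0])

rank = guessing_entropy(logp, true_key)
```

In a real evaluation the rank is averaged over many attack runs and plotted against the number of traces, which is how countermeasures are compared.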
LLM Counsel
FastAPI service that classifies query complexity, routes to single or multi-model panels, aggregates answers with dissent detection, and returns confidence plus cost/latency metrics with semantic caching.
Architecture
+------------------------------------------------------------+
|                   QUERY ROUTING PIPELINE                   |
+------------------------------------------------------------+
| User Query -> Complexity Classifier                        |
| -> Router (single model or multi-model panel)              |
| -> Dissent Detection                                       |
| -> Response Aggregation + Confidence                       |
| -> Semantic Cache + Cost/Latency Analytics                 |
+------------------------------------------------------------+
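The classify-then-route step can be sketched as follows. Everything here is hypothetical: the length-based complexity heuristic, the 0.5 threshold, and the model names are placeholders for whatever classifier and model pool the service actually uses.

```python
from dataclasses import dataclass

@dataclass
class Route:
    models: list   # which models will answer
    panel: bool    # True => multi-model panel with dissent detection

def classify_complexity(query: str) -> float:
    """Toy proxy: longer, multi-clause queries score as more complex."""
    tokens = len(query.split())
    clauses = query.count(",") + query.count("?")
    return min(1.0, 0.02 * tokens + 0.1 * clauses)

def route(query: str, threshold: float = 0.5) -> Route:
    # Simple queries go to one cheap model; complex ones fan out to a panel.
    if classify_complexity(query) >= threshold:
        return Route(models=["model-a", "model-b", "model-c"], panel=True)
    return Route(models=["model-a"], panel=False)

simple = route("What is RSA?")
hard = route("Compare the liability standards for, say, design patents versus "
             "utility patents, and explain how each applies to software UIs?")
```

The panel path is where dissent detection pays off: disagreement among the panel's answers lowers the reported confidence and can trigger escalation instead of silently averaging.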
Mixture-of-Recursions
PyTorch implementation of recursive transformers with dynamic routing from NeurIPS 2025. Combines parameter sharing with adaptive computation, where a router selects which tokens continue through shared recursive layers based on complexity. Achieves parameter efficiency (~70M params matching 360M vanilla) with specialized KV caching strategies.
Architecture
+------------------------------------------------------------+
|                MIXTURE-OF-RECURSIONS (MoR)                 |
+------------------------------------------------------------+
| Token Embeddings + RoPE                                    |
| -> First Unique Layer (L0)                                 |
| -> Recursive Block x Nr                                    |
|      [Shared Transformer + Router]                         |
| -> Last Unique Layer (L_last)                              |
| -> RMSNorm + LM Head                                       |
+------------------------------------------------------------+
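The core MoR idea, a router deciding per token whether to take another pass through a shared block, can be illustrated without PyTorch. This NumPy sketch is a stand-in, not the implementation: the residual `tanh` layer and sigmoid gate are toy substitutes for the shared transformer block and learned router.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_TOKENS, N_RECURSIONS = 16, 8, 3

# Shared weights: ONE block reused at every recursion depth.
W_shared = rng.normal(scale=0.1, size=(D, D))
w_router = rng.normal(size=D)

def shared_block(x):
    return x + np.tanh(x @ W_shared)   # toy residual "transformer" layer

x = rng.normal(size=(N_TOKENS, D))
active = np.ones(N_TOKENS, dtype=bool)     # which tokens keep recursing
depths = np.zeros(N_TOKENS, dtype=int)     # per-token compute depth

for _ in range(N_RECURSIONS):
    # Router: sigmoid gate; tokens whose gate falls below 0.5 exit early.
    gate = 1 / (1 + np.exp(-(x @ w_router)))
    active &= gate > 0.5
    x[active] = shared_block(x[active])
    depths[active] += 1
```

Because exited tokens stop updating, their KV entries can be frozen, which is what makes the specialized KV caching strategies possible: cache size tracks each token's depth rather than the maximum depth.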
RRT
JAX/Flax implementation of Relaxed Recursive Transformers (ICLR 2025), combining layer-wise LoRA with recursive parameter sharing for efficient transformer scaling.
Architecture
+------------------------------------------------------------+
|                    CONVERSION PIPELINE                     |
+------------------------------------------------------------+
| Vanilla Transformer Layers                                 |
| --avg init--> Shared Recursive Block x num_loops           |
|        |                                                   |
|        +--> SVD of residuals                               |
|        |                                                   |
|        v                                                   |
| Relaxed Recursive Transformer:                             |
|   Shared Block + LoRA(loop_0)                              |
|   Shared Block + LoRA(loop_1)                              |
|   Shared Block + LoRA(loop_2)                              |
+------------------------------------------------------------+
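The conversion pipeline above reduces to two linear-algebra steps per weight matrix: average the pretrained layers into a shared block, then recover each layer's individuality as a truncated-SVD low-rank correction. A minimal NumPy sketch on toy weight matrices (the dimensions and rank are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_LAYERS, RANK = 32, 3, 4

# Pretrained per-layer weights of a toy vanilla transformer.
layers = [rng.normal(size=(D, D)) for _ in range(N_LAYERS)]

# Step 1: initialize the shared recursive block as the layer average.
shared = np.mean(layers, axis=0)

# Step 2: per-loop LoRA factors from a truncated SVD of each residual.
loras = []
for W in layers:
    U, S, Vt = np.linalg.svd(W - shared)
    A = U[:, :RANK] * S[:RANK]    # (D, RANK) down-projection, scaled by S
    B = Vt[:RANK, :]              # (RANK, D) up-projection
    loras.append((A, B))

def loop_weight(i):
    """Effective weight at loop i: shared block + its low-rank correction."""
    A, B = loras[i]
    return shared + A @ B
```

Since the truncated SVD is the best rank-`RANK` approximation of each residual, `loop_weight(i)` is always at least as close to the original layer as the bare shared block, which is why this initialization gives the relaxed model a head start before fine-tuning.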
TRecViT
JAX/Flax implementation of DeepMind's TRecViT (TMLR 2025): a causal video transformer with GLRU temporal mixing and ViT spatial blocks for real-time streaming inference with constant memory per frame.
Architecture
+------------------------------------------------------------+
|                    TRECViT ARCHITECTURE                    |
+------------------------------------------------------------+
| Input Video [B,T,H,W,3]                                    |
| -> Patch Embedding (16x16) + Spatial Position              |
| -> TRecViT Blocks x L                                      |
|      [Gated LRU temporal mixing -> ViT spatial mixing]     |
| -> Output Tokens [B,T,N,D]                                 |
+------------------------------------------------------------+
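The constant-memory streaming property comes from the gated linear recurrence: each frame updates a fixed-size state, so nothing grows with video length. A toy NumPy recurrence for a single spatial token illustrates the shape of the computation; the sigmoid/tanh gating here is a simplified stand-in for the paper's GLRU parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
D, T = 8, 5   # channel dim, number of frames

# Toy gated linear recurrent unit mixing one token over time.
Wa = rng.normal(scale=0.3, size=(D, D))
Wx = rng.normal(scale=0.3, size=(D, D))

def glru_step(h, x):
    """One streaming step: the state h is all that carries across
    frames, so memory per frame is constant for any video length."""
    a = 1 / (1 + np.exp(-(x @ Wa)))          # per-channel forget gate in (0, 1)
    return a * h + (1 - a) * np.tanh(x @ Wx)  # convex blend: old state vs input

frames = rng.normal(size=(T, D))
h = np.zeros(D)
outputs = []
for x in frames:
    h = glru_step(h, x)
    outputs.append(h)
outputs = np.stack(outputs)
```

The recurrence is also causal by construction, frame t never sees frame t+1, which is what permits real-time inference: each incoming frame is processed once and discarded.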
Publications
Legal publications coming soon...
Contact
Education
Work Experience
M.S. Computer Science (Machine Learning)
Columbia University
Fu Foundation School of Engineering. Advanced coursework in machine learning, artificial intelligence, and computational systems.
Patent Examiner
U.S. Patent and Trademark Office
Promoted from GS-9 to GS-11 in January 2026. Specializes in computer graphics and machine learning technologies. Received a commendation letter from the OPQA director for the quality of his office actions and achieved a 104% production average.
M.S. Cybersecurity
Johns Hopkins University
Whiting School of Engineering. 3.85 GPA. Coursework in Quantum Computation, Ethical Hacking, Web Security, and Cryptology. Published ML cryptanalysis research paper.
Research Assistant
Johns Hopkins University – CCVL Lab
Whiting School of Engineering. Conducted research at the Computational Cognition, Vision, and Learning (CCVL) lab.
Founder & Software Engineer
District Hut LLC
Led full-stack development projects generating over seven figures in revenue. Built patient-management systems, auto-dealer platforms, and restaurant applications. Registered trademark for company slogan.
B.S. Computer Science
California State University, Sacramento
3.45 GPA. Division 2 Wrestling Team. Overcame serious spine injury to complete degree. Developed Adapted Strength fitness platform as capstone project.