Hello! I’m Edward Praveen.

About Me

I’m a Senior Machine Learning Engineer and AI Architect focused on building production-grade AI systems that solve real-world problems.

My work centers on designing and deploying scalable machine learning and LLM-based applications, with a strong emphasis on reliability, performance, and practical usability in enterprise environments.

What I Do

I specialize in end-to-end AI system development, from model experimentation to production deployment. My core areas include:

  • Large Language Models (LLMs) and multi-agent systems
  • Retrieval-Augmented Generation (RAG) and Text2SQL systems
  • MLOps pipelines and model lifecycle management
  • Fine-tuning techniques such as LoRA and QLoRA
  • Scalable cloud architectures on AWS

Tech Stack

I work extensively with modern cloud-native and AI tooling, including:

  • AWS (Bedrock, EKS, SageMaker, OpenSearch, RDS, AgentCore)
  • Kubernetes, Docker, Helm, ArgoCD
  • Python-based backend systems and API design
  • Observability, monitoring, and performance optimization

Continuous Learning

I believe strongly in continuous learning and regularly invest time in deepening my understanding of both fundamentals and advanced topics.

Currently, I am:

  • Following hands-on tutorials and building systems to reinforce concepts
  • Reading books on deep learning, system design, and AI engineering
  • Revisiting core machine learning and deep learning fundamentals to strengthen my foundation

This ongoing learning keeps me current and helps me apply better engineering practices to real-world systems.

What I Focus On

I’m particularly interested in bridging the gap between AI experimentation and production systems: ensuring models are not just accurate, but also scalable, secure, and maintainable.

My work often involves:

  • Designing multi-agent workflows for complex reasoning tasks
  • Optimizing inference and system performance
  • Building data-driven applications that generate actionable insights

About This Blog

This blog is a collection of hands-on learnings, tutorials, architecture patterns, and practical insights from building AI systems.

I’ll be sharing:

  • Learnings from tutorials and hands-on implementations
  • Key insights from books and research
  • Revisited fundamentals explained with practical context
  • Real-world system design decisions and trade-offs

If you’re interested in applied AI, LLM systems, or modern ML engineering, you’ll find practical and actionable content here.