I am an Intermediate Data Scientist with an engineering background, specializing in building reliable, automated pipelines that turn fragmented, "messy" data into trustworthy results. While many focus only on the final chart, I prioritize Data Integrity: ensuring that every insight is built on a foundation of clean, validated, and well-architected data.
What I Do
Data Engineering & ETL
I specialize in consolidating multi-source data (CSV, SQL, APIs) into unified "Master Tables". I build robust pipelines that automate the cleaning process so your reports remain accurate as you scale.
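As a minimal sketch of that consolidation pattern (table and column names here are illustrative, not from a real client project), the idea is to normalize each source into a DataFrame, validate the join keys, and merge everything into one master table:

```python
import sqlite3
import pandas as pd

# Hypothetical CSV source (stands in for pd.read_csv("orders.csv")).
orders_csv = pd.DataFrame(
    {"customer_id": [1, 2], "order_total": [120.0, 80.5]}
)

# Hypothetical SQL source: an in-memory SQLite table queried into a DataFrame.
conn = sqlite3.connect(":memory:")
pd.DataFrame({"customer_id": [1, 2], "region": ["north", "south"]}).to_sql(
    "customers", conn, index=False
)
customers_sql = pd.read_sql("SELECT customer_id, region FROM customers", conn)

# Join into a single "Master Table"; validate= catches duplicate keys early.
master = orders_csv.merge(
    customers_sql, on="customer_id", how="left", validate="one_to_one"
)

# Integrity check: every order must resolve to a known customer.
assert master["region"].notna().all(), "unmatched customer_id in source data"
```

The `validate=` argument and the post-merge assertion are the kind of guardrails that keep a pipeline's output accurate as new data arrives.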
Advanced Analytics & Modeling
Using Python (Pandas, Scikit-Learn), I perform deep exploratory analysis to find the "why" behind your data. I develop predictive models for complex sectors like Agriculture and Real Estate, backed by a strong foundation in probability and statistics.
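A compressed sketch of that modeling workflow (the data below is synthetic and the feature names are hypothetical, standing in for something like rainfall and extraction rates in a groundwater study):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a real dataset: three numeric features, one target.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))  # e.g. rainfall, extraction rate, season index
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=200)

# Fit and score with cross-validation rather than a single train/test split,
# so the reported performance reflects generalization, not luck.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

Cross-validated scoring is one concrete way the statistics foundation shows up in practice: it quantifies how much of a model's apparent accuracy would survive on unseen data.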
Database Architecture
I design and optimize relational databases using SQL to ensure high-performance data storage and retrieval. My experience building full-stack management systems ensures your data remains organized and accessible.
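A small illustration of those design principles, using SQLite via Python (the schema is a generic example, not a client's actual design): normalized tables, enforced foreign keys, a sanity check on values, and an index on the common lookup column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    PRAGMA foreign_keys = ON;
    CREATE TABLE customers (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        total       REAL NOT NULL CHECK (total >= 0)
    );
    -- Index the foreign key so per-customer lookups stay fast at scale.
    CREATE INDEX idx_orders_customer ON orders(customer_id);
""")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders (customer_id, total) VALUES (1, 49.9)")

row = conn.execute(
    "SELECT c.name, o.total FROM orders o "
    "JOIN customers c ON c.id = o.customer_id"
).fetchone()
```

Constraints like `REFERENCES` and `CHECK` push data validation into the database itself, which is what keeps stored data organized and trustworthy regardless of which application writes to it.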
Why My Approach Is Different
I treat every data project as a Computer Engineering task. This means you don't just get a one-off analysis; you get clean, documented, and scalable code. Whether I'm building predictive models for groundwater levels or developing speech therapy software, I focus on finding creative technical solutions that work under real-world constraints.