I. ALTAR BUYUKDOGAN
SOFTWARE & AI SYSTEMS DEVELOPER
PROFILE// SYSTEM.USER.INFO
Software and AI systems developer with a strong focus on product-oriented engineering. I build practical, maintainable systems by combining modern web technologies with applied artificial intelligence.
My work covers full-stack web and mobile applications, OCR pipelines, and LLM-powered features. I care about turning complex requirements into clear architectures, reliable implementations, and user-facing products that actually ship.
DIRECTIVES:
▸ Product-first engineering mindset
▸ Clean, scalable system architecture
▸ Applied AI, OCR, and automation
▸ Shipping over over-engineering
PROJECTS// INSTALLED.MODULES
Museary
Product-focused digital art and culture platform exploring the intersection of museums, archives, and technology. Designed as a curated experience rather than a content feed, with a strong emphasis on narrative, preservation, and user journey.
QatibLLM
Multimodal OCR and transcription system for Ottoman Turkish documents. Combines vision-language models, OCR pipelines, and LLM-based evaluation to convert Arabic-script sources into Latin-script Ottoman Turkish with measurable accuracy.
StudyWithNoting
Mobile-first study application that transforms user-uploaded PDFs into summaries, flashcards, and quizzes using LLMs. Designed with a daily-use, Duolingo-inspired learning flow.
Earthquake Alert App
Real-time mobile notification system that alerts users during earthquakes and provides immediate safety guidance. Built with a focus on reliability, low latency, and clear UX under stress conditions.
Document Management System (DMS)
Enterprise-focused document management and approval system built on Frappe. Includes internal document workflows, approval chains, notifications, and structured metadata handling.
Offline OCR Pipeline
Fully offline OCR pipeline optimized for industrial documents with tables and mixed text. Uses classical OCR engines with custom preprocessing for reliable, production-ready extraction.
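The specific engines and preprocessing used in the pipeline are not detailed in this summary, but a typical fully offline preprocessing step is binarization before OCR. As an illustrative sketch only (Otsu's method is assumed here as an example, not a description of the actual pipeline), in pure Python over an 8-bit grayscale image:

```python
def otsu_threshold(pixels):
    """Find the binarization threshold that maximizes between-class
    variance (Otsu's method) over 8-bit grayscale pixel values.

    `pixels` is a list of rows, each a list of ints in 0..255.
    """
    hist = [0] * 256
    for row in pixels:
        for p in row:
            hist[p] += 1
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0
    for t in range(256):
        w_bg += hist[t]          # background pixel count (<= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # foreground pixel count (> t)
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        # Between-class variance; Otsu picks the t that maximizes it.
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels, threshold):
    """Map pixels above the threshold to white (255), the rest to black (0)."""
    return [[255 if p > threshold else 0 for p in row] for row in pixels]
```

In practice a step like this runs on each page image before the OCR engine sees it, which is what makes tables and mixed text extract reliably on noisy industrial scans.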
LLM Fine-Tuning Experiments
Fine-tuning and evaluation experiments on Qwen and LLaMA models using cleaned, task-specific datasets. Focused on improving transcription accuracy and comparing character error rate (CER) across models.
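CER is the Levenshtein edit distance between the model's transcription and the reference, divided by the reference length. A minimal sketch of that metric (the actual evaluation harness used in these experiments is not shown here):

```python
def levenshtein(ref, hyp):
    """Edit distance between two character sequences, counting
    insertions, deletions, and substitutions (rolling-row DP)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate: edit distance / reference length."""
    if not reference:
        raise ValueError("reference must be non-empty")
    return levenshtein(reference, hypothesis) / len(reference)
```

Computing this per document and averaging over a held-out set gives a single comparable number per model, which is what makes "measurable accuracy" claims across fine-tuned checkpoints possible.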