Artificial intelligence has moved from the realm of science fiction into our daily lives, from virtual assistants on our phones to sophisticated diagnostic systems in hospitals. But the real power of AI lies not only in global corporations but also in the hands of individuals and small teams who dare to build something personal, purposeful, and transformative.
This is the story of MIKAI — short for Medical Intelligence + Kijakarn’s AI — a custom-built large language model (LLM) designed not by a tech giant, but by a practicing doctor who wanted to bring the future of medical knowledge into his own clinic.
⸻
Why Build My Own LLM?
The motivation behind MIKAI began with a simple but pressing reality: modern medicine evolves at an overwhelming pace. Every month, hundreds of new clinical studies, guidelines, and case reports are published. No single human can possibly read them all, much less apply them efficiently to patient care.
Commercial AI systems, like ChatGPT, are useful but limited:
• They lack up-to-date knowledge in rapidly advancing fields like endocrinology.
• They are black boxes with no control over how data is handled or filtered.
• They cannot be customized deeply for specific workflows in a private clinic.
As an endocrinologist, I wanted an assistant who could:
1. Continuously learn from medical corpora, guidelines, and journals.
2. Provide safe, accurate, and evidence-based answers.
3. Integrate with my practice — handling patient documentation, translation, RAG-based search, and structured data management.
4. Evolve under my guidance, not under the roadmap of a distant tech company.
That vision gave birth to MIKAI.
⸻
Early Foundations: From Off-the-Shelf to Self-Built
Like most AI builders, I didn’t start from scratch. The initial steps were exploratory: testing models like Mistral, LLaMA, Falcon, and GPT-NeoX. Each had strengths, but none were tailored for the medical domain.
The first true breakthrough came with Mistral 7B Instruct, running locally on my workstation. I used llama.cpp to deploy it without requiring cloud servers, ensuring data privacy. At this stage, MIKAI was more of a “mini research assistant” than a doctor’s aide, but the potential was clear.
To make the system practical, I introduced Retrieval-Augmented Generation (RAG):
• A document store for medical PDFs, journals, and clinical guidelines.
• A retrieval pipeline that allows MIKAI to quote and reason from real references.
• A separation of chat history vs. global medical memory, ensuring clean, contextual responses.
This architecture laid the groundwork for MIKAI as a knowledge-augmented medical assistant.
⸻
Building the AI Rig: Hardware for a Personal LLM
Running LLMs isn’t just about clever software — it’s also about serious hardware. For MIKAI, I built a custom AI rig that balances affordability with power:
• Dual Xeon CPUs, 64GB RAM for multitasking.
• Nvidia Tesla P40 (24GB VRAM) as the main AI accelerator.
• Radeon RX 580 for display.
• Ubuntu dual-boot with Hackintosh Clover for flexibility.
This setup allows me to experiment with models ranging from 7B to 24B parameters, running quantized versions (Q4/Q5) that fit within GPU memory. On the software side, I use:
• CUDA 12.4 for GPU acceleration.
• Dockerized services for portability.
• MariaDB for structured storage of conversations, tokens, and medical notes.
The result is a doctor’s personal AI workstation — a private lab where I can test, train, and fine-tune models without depending on corporate servers.
⸻
The RAG Layer: Teaching MIKAI to Learn Continuously
One of the core challenges with LLMs is stale knowledge. A model trained in 2023 won’t automatically know the 2025 ADA Diabetes Guidelines or a paper published last week.
That’s where RAG (Retrieval-Augmented Generation) comes in. For MIKAI, I designed a two-layer memory system:
1. Session-based memory — keeps track of conversations for contextual flow.
2. Global medical memory — updated with feedback and curated sources.
Here’s how it works in practice:
• I upload a new guideline PDF (e.g., ADA 2025 Standards of Diabetes Care).
• MIKAI parses it, indexes it into the vector database.
• When I ask a clinical question, MIKAI first retrieves relevant passages before generating an answer.
This means MIKAI doesn’t just hallucinate — it answers with citations and context, much like a real medical resident preparing for rounds.
⸻
From Mini Chat to Doctor’s Assistant
MIKAI’s interface started as a basic local chat. Over time, I expanded it into a multi-functional workspace:
• Mini Chat Widget: Embeddable on websites like doctornuke.com.
• Patient File System: Auto-generates structured medical forms from scanned documents or speech-to-text dictations.
• Multilingual Support: Translates medical guidelines into Thai while preserving technical terms.
• Secure Access: Two-step authentication and Cloudflare tunneling for remote use.
These features transform MIKAI from “just a chatbot” into a practical clinic assistant that handles real workflows.
⸻
Training, Fine-Tuning, and Safety
No medical AI is useful if it’s unsafe. A careless answer can put a patient at risk. That’s why I’ve built MIKAI with multiple safety layers:
• Filtering out unreliable tokens (e.g., scam coins in blockchain experiments, or low-quality sources in medical data).
• Developer blacklists for AI models trained with misleading content.
• Automatic detection of hallucinations by comparing generated answers to retrieved sources.
• Fine-tuning via LoRA (Low-Rank Adaptation) on curated medical datasets.
For larger-scale training experiments, I’m preparing to test Magistral 24B QLoRA — a balance between accuracy and local hardware feasibility (24GB VRAM).
The goal is clear: MIKAI should never give “guesses” in medicine. It must either retrieve evidence, admit uncertainty, or point to guidelines.
⸻
The Challenges Along the Way
Building MIKAI hasn’t been easy. The journey has been full of technical hurdles:
• GPU memory limits: Fitting 20–24B parameter models on a 24GB card requires careful quantization.
• Prompt management: Ensuring clean separation of user queries, context, and RAG inputs to avoid “prompt leaks.”
• Performance tuning: Balancing speed vs. accuracy (tokens per second vs. depth of reasoning).
• UI/UX design: Creating a modern chat interface with session management and retrieval panes.
But every obstacle has also been an opportunity to refine the system.
⸻
Where MIKAI Stands Today
Today, MIKAI is no longer just an experiment — it’s a functioning assistant that helps in real-world tasks:
• Answers complex medical questions with evidence from current guidelines.
• Generates structured medical notes from speech or scanned files.
• Runs privately on local hardware with full data control.
• Supports multilingual translation for medical literature.
• Embeds into websites for sharing knowledge beyond the clinic.
It’s not perfect — but it’s growing, learning, and adapting every week.
⸻
The Future of MIKAI
Where does MIKAI go next? The roadmap is ambitious:
1. Self-Learning LoRA: Allowing MIKAI to continuously fine-tune on newly retrieved data.
2. Medical QA Benchmarking: Comparing MIKAI’s answers against mainstream LLMs for accuracy.
3. Patient Integration: Building a secure, lightweight mobile app for patient-clinic communication.
4. AI Collaboration: Connecting MIKAI with other open-source AI agents (Whisper for voice, Stable Diffusion for visuals, etc.).
5. Scalable Training: Testing larger models (20–30B) with quantization strategies to push accuracy further.
Ultimately, the goal isn’t just to have “my own ChatGPT.” It’s to have a personal, evolving, trustworthy medical partner — one that grows alongside my practice and improves patient care.
⸻
Reflections: A Doctor Building AI
MIKAI is more than just an LLM project. It represents a philosophy of empowerment: that doctors, researchers, and independent builders don’t have to wait for corporations to solve their problems.
We can build our own tools.
We can take control of AI.
We can shape it for real-world needs, not generic use cases.
For me, MIKAI is not the end of a journey — it’s just the beginning. And as it grows, it reminds me daily of why I became a doctor: not only to treat patients, but also to improve the systems that support their care.
The future of medicine won’t be written only in journals or hospitals. It will also be written in the labs, clinics, and laptops of doctors and builders worldwide. And MIKAI is my contribution to that future.

A web newbie since 1996