AI & ML interests

Specialized Small Language Models; Optimized Fine-Tuning; Efficient Inference

Recent Activity

GabrielPimenta99 updated a Space 3 days ago
Dharma-AI/README
GabrielPimenta99 published a Space 3 days ago
Dharma-AI/README
GabrielPimenta99 published a dataset 4 days ago
Dharma-AI/DharmaOCR-Benchmark

Organization Card

Dharma-AI is a Brazilian AI research lab building best-in-class Specialized Small Language Models (SSLMs) for high-impact, domain-specific problems. Our models are engineered to maximize performance while minimizing latency, cost, and environmental footprint by combining state-of-the-art techniques across the full model development stack, from fine-tuning strategies to inference optimization.

We believe the future of applied AI is not bigger models but smarter specialization.

Research Focus

  • SLM Specialization — fine-tuning pipelines (SFT, RLHF, GRPO, DPO), multi-stage preference optimization, and data curation strategies to push small models to their performance ceiling on domain-specific tasks (see the LoRA sketch after this list)
  • Mechanistic Interpretability of SLMs — understanding the internal representations and circuits of small language models to inform better specialization, diagnose failure modes, and build more trustworthy systems
  • GPU Utilization Optimization — maximizing throughput and minimizing memory footprint through quantization, kernel fusion, batching strategies, and efficient serving infrastructure (see the quantized-loading sketch after this list)
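
To make the parameter-efficient side of that fine-tuning work concrete, here is a minimal sketch of attaching LoRA adapters to a small open model with Hugging Face PEFT. The base model id and hyperparameters are illustrative placeholders, not Dharma-AI's production configuration.

```python
# Minimal LoRA setup sketch: wraps a small causal LM with trainable low-rank
# adapters so only a small fraction of parameters are updated during SFT.
# Model id and hyperparameters are placeholders, not a recommendation.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                 # adapter rank
    lora_alpha=32,                        # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction
```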
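
On the efficiency side, a minimal sketch of memory-reduced inference via 4-bit weight quantization with Transformers and bitsandbytes. Again, the model id is a placeholder and this is not Dharma-AI's serving stack; it only illustrates the kind of footprint reduction the research targets.

```python
# 4-bit quantized loading sketch: NF4 weight quantization cuts the GPU memory
# footprint of a small model at inference time. Requires a CUDA GPU and the
# bitsandbytes package; the model id is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

prompt = "Summarize the benefits of specialized small language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```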

Links

Website