
I’ve been experimenting with building and deploying ML and LLM projects for a while now, and honestly, it’s been a journey.

I had a really good conversation with Dean Pleban (CEO @ DAGsHub, which is built on ChromaDB and other OSS), who shared some great practical insights based on his own experience helping teams go from experiments to real-world production.

Sharing here what he told me, along with what I experienced myself -

Data matters way more than I thought. Initially, I focused a lot on model architectures and less on the quality of my data pipelines. Production performance heavily depends on robust data handling: things like proper data versioning, monitoring, and governance can save you a lot of headaches. This becomes way more important when your toy project turns into a collaborative project with others.
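To make the data-versioning point concrete: before you adopt a proper tool (DVC, lakeFS, etc.), the minimum viable version is just fingerprinting the exact data a run saw and storing that next to the results. This is only an illustrative sketch I put together, not how DAGsHub does it, and the data/train.csv path is made up:

```python
import hashlib
import json
import time
from pathlib import Path

def file_fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Content hash of a data file, so a run can be tied to an exact data version."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def record_data_version(data_path: str, out_path: str = "data_version.json") -> dict:
    """Write a tiny manifest next to the experiment outputs."""
    record = {
        "path": data_path,
        "sha256": file_fingerprint(data_path),
        "size_bytes": Path(data_path).stat().st_size,
        "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    Path(out_path).write_text(json.dumps(record, indent=2))
    return record

# Usage (hypothetical file name):
# record_data_version("data/train.csv")
```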

LLMs need their own rules. Working with large language models introduced challenges I wasn’t fully prepared for, like hallucinations, biases, and heavy resource demands. Dean suggested frameworks like RAES (Robustness, Alignment, Efficiency, Safety) to help tackle these issues, and it’s something I’m actively trying out now. He also mentioned “LLM as a judge”, an evaluation approach that’s been getting a lot of attention recently.
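The “LLM as a judge” pattern is roughly: a second model scores your model’s outputs against a rubric, and you aggregate those scores as an eval metric. A bare-bones sketch of the idea (the call_llm function and the rubric wording are placeholders I made up, not any particular library’s API):

```python
# Minimal sketch of "LLM as a judge": ask a second model to score an answer
# against a rubric. `call_llm` is a placeholder for whatever client you use
# (OpenAI, Anthropic, a local model, ...) -- not a real library call.
import json

JUDGE_PROMPT = """You are grading an answer to a user question.
Question: {question}
Answer: {answer}
Score the answer from 1 (unusable) to 5 (excellent) for factual accuracy
and helpfulness. Reply with JSON: {{"score": <int>, "reason": "<one sentence>"}}"""

def call_llm(prompt: str) -> str:
    # Placeholder: swap in your actual model client here.
    raise NotImplementedError

def judge(question: str, answer: str) -> dict:
    raw = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Judges hallucinate formats too; keep the raw text so you can debug.
        return {"score": None, "reason": raw}
```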

Some practical tips Dean shared with me:

Save chain of thought output (the output text in reasoning models) - you never know when you might need it. This sometimes requires using a verbose output parameter.

Log experiments thoroughly (parameters, hyper-parameters, models used, data versioning…). A rough sketch of what this can look like is after these tips.

Start with a Jupyter notebook, but move to production-grade tooling (all tools mentioned in the guide below 👇🏻)
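To give a feel for the “log everything” and “save the chain of thought” tips, here’s a minimal sketch using MLflow’s tracking API (which, as far as I know, DAGsHub can act as a remote for). The parameter names, metric, and reasoning text below are invented placeholders; the point is the shape: params + metrics + raw outputs all attached to one run.

```python
# Minimal experiment-logging sketch with MLflow (pip install mlflow).
import mlflow

params = {"model": "my-llm", "temperature": 0.2, "data_version": "sha256:..."}
reasoning_text = "...chain-of-thought text returned by the model..."

with mlflow.start_run(run_name="baseline"):
    mlflow.log_params(params)                      # parameters / hyper-parameters
    mlflow.log_metric("judge_score", 4.0)          # e.g. the LLM-as-a-judge score
    mlflow.log_text(reasoning_text, "outputs/reasoning.txt")  # keep the CoT output
```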

To help myself (and hopefully others) visualize and internalize these lessons, I created a guide that breaks down how successful ML/LLM projects are structured. If you’re curious, you can explore it here -

I MARKED the OSS ones!

https://www.readyforagents.com/resources/llm-projects-structure