Valérian de Thézan de Gaussan · Data Engineering for process-heavy organizations

The 9 Schillace Laws for building LLM-based software.

And my take on each.


Using an LLM through a chatbot like ChatGPT seems easy.

But when it’s time to use one in a production application, things change.

Here are the “Schillace Laws” from Sam Schillace, Deputy CTO at Microsoft, which have helped me a lot in building better LLM-based systems, along with my take on each.

1/ Don’t write code if the model can do it; the model will get better, but the code won’t.
👉 This is the one I disagree with the most: if I can do something reliably with imperative code, why would I let the model do it? I do agree for the cases where the imperative approach can’t be 100% reliable; then it makes sense to let the model do it anyway.
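
A minimal sketch of that rule of thumb, assuming a hypothetical call_llm placeholder standing in for whatever LLM client you actually use:

```python
import re

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your actual LLM client.
    raise NotImplementedError

def is_valid_email(text: str) -> bool:
    # Deterministic and 100% reliable in imperative code: keep it in code.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", text) is not None

def classify_ticket(text: str) -> str:
    # No reliable imperative rule exists here: let the model do it.
    return call_llm(
        "Classify this support ticket as 'billing', 'bug' or 'other'. "
        f"Reply with the label only.\n\nTicket:\n{text}"
    )
```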

2/ Trade leverage for precision; use interaction to mitigate.
👉 Imagine you’re using an LLM to automate monthly reports from a database. Instead of programming the LLM to understand and interpret every single database field (which would be very precise but less flexible), you use a more general approach with a more generic prompt, then refine the output through interaction.
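
A sketch of what that could look like, again with a call_llm placeholder (the prompts are invented for illustration):

```python
def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your actual LLM client.
    raise NotImplementedError

def draft_report(schema: str, rows: str) -> str:
    # High leverage: one generic prompt, no per-field logic hard-coded.
    return call_llm(
        f"Database schema:\n{schema}\n\nSample rows:\n{rows}\n\n"
        "Write a monthly activity report for a non-technical audience."
    )

def refine_report(report: str, feedback: str) -> str:
    # Interaction recovers the precision the generic prompt gave up.
    return call_llm(
        f"Draft report:\n{report}\n\nRevise it according to this feedback:\n{feedback}"
    )
```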

3/ Code is for syntax and process; models are for semantics and intent.
👉 Some tasks are better done by code, some by the model. For example, solving an equation is easy with standard code but hard for a model; the model, however, is good at generating the code that computes the equation.
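
For instance, a sketch where the model handles the semantics (turning intent into code) and plain code does the exact computation. It assumes sympy is available, and executing model-generated code like this would of course need sandboxing in production:

```python
def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your actual LLM client.
    raise NotImplementedError

def solve_for_x(equation: str) -> str:
    # The model translates intent into code...
    snippet = call_llm(
        f"Write one Python expression using sympy that solves {equation} for x. "
        "Reply with the expression only, no prose."
    )
    # ...and ordinary code runs it exactly. Sandbox this in production!
    namespace: dict = {}
    exec(f"import sympy\nresult = {snippet}", namespace)
    return str(namespace["result"])
```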

4/ The system will be as brittle as its most brittle part.
👉 Same as with imperative code. Except that here, to be less brittle, you need to keep your prompts flexible and generic: hard-code as little as possible.
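
As an illustration, compare a prompt with hard-coded assumptions to one that injects its context at run time (the column names here are made up):

```python
# Brittle: breaks silently the day the schema changes.
BRITTLE_PROMPT = "Report on the 'order_total_eur' and 'customer_segment' columns."

def flexible_prompt(schema: str, task: str) -> str:
    # Flexible: the current schema is injected at run time.
    return (
        f"Current schema:\n{schema}\n\n"
        f"Task: {task}\n"
        "Use only columns that actually exist in the schema above."
    )
```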

5/ Ask Smart to Get Smart.
👉 The more detail and context you give, the more specific and precise the results will be.
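
A quick before/after to make that concrete (both prompts are invented examples):

```python
# Vague in, vague out.
vague_prompt = "Summarize this contract."

# Detail and context in, specific and precise out.
detailed_prompt = (
    "You are reviewing a SaaS subscription contract for a procurement team. "
    "Summarize it in five bullet points covering term, price, renewal, "
    "termination clauses, and liability caps. Quote the relevant section numbers."
)
```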

6/ Uncertainty is an exception throw.
👉 When the model is uncertain about intent, use a mechanism similar to exceptions: you want the model to bubble the “error” up to a level where it can be handled. That can be back to the user, for example, asking them for more details.
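
One way to implement that (a sketch; the 'UNCERTAIN:' sentinel and call_llm placeholder are my own conventions, not a standard):

```python
class ModelUncertain(Exception):
    """The model could not resolve the user's intent."""

def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your actual LLM client.
    raise NotImplementedError

def interpret(request: str) -> str:
    answer = call_llm(
        "Turn the user's request into a concrete action plan. If the intent is "
        "ambiguous, reply with exactly 'UNCERTAIN: <your question>' instead of "
        f"guessing.\n\nRequest:\n{request}"
    )
    if answer.startswith("UNCERTAIN:"):
        # Bubble the "error" up to a level where it is manageable...
        raise ModelUncertain(answer.removeprefix("UNCERTAIN:").strip())
    return answer

# ...for example the UI layer, where the handler is the user:
# try:
#     plan = interpret(user_request)
# except ModelUncertain as question:
#     ask_user(str(question))
```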

7/ Text is the universal wire protocol.
👉 Generally speaking, pass plain text between prompts instead of XML or JSON, especially as LLMs get better and better.
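
A sketch of two chained steps passing plain prose rather than a structured schema (call_llm is a placeholder):

```python
def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your actual LLM client.
    raise NotImplementedError

def document_to_email(document: str) -> str:
    # Step 1 emits plain prose, not JSON or XML...
    facts = call_llm(f"List the key facts in this document:\n{document}")
    # ...so step 2 consumes it directly, with no schema to parse or keep in sync.
    return call_llm(f"Draft a short customer email based on these facts:\n{facts}")
```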

8/ Hard for you is hard for the model.
👉 This one is really about breaking big tasks into smaller ones, then using a meta-prompt to extract the result. It gives the model room to think, and only the result is extracted.
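
A minimal sketch of that two-step pattern: think first, extract second (call_llm is a placeholder):

```python
def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your actual LLM client.
    raise NotImplementedError

def answer(question: str) -> str:
    # First call: room to think. The verbose reasoning is never shown to the user.
    reasoning = call_llm(f"Think step by step about this question:\n{question}")
    # Second, meta-prompt call: extract only the result.
    return call_llm(
        f"Here is a step-by-step analysis:\n{reasoning}\n\n"
        "State only the final answer, in one sentence."
    )
```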

9/ Beware pareidolia of consciousness; the model can be used against itself.
👉 There is nothing but matrices and vectors in the model. It doesn’t really think, which means it doesn’t feel emotion. So you can ask the model why the code it generated just a minute ago doesn’t work, and it won’t take it the wrong way. Therefore, it can be used to evaluate its own responses. More expensive in terms of compute, yes, but the quality gain can be significant.
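
A sketch of that self-evaluation loop, one extra round-trip of critique and revision (call_llm is a placeholder):

```python
def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your actual LLM client.
    raise NotImplementedError

def generate_with_self_review(task: str) -> str:
    draft = call_llm(task)
    # The model critiques its own output; no feelings involved.
    critique = call_llm(
        f"Task:\n{task}\n\nDraft answer:\n{draft}\n\nList any errors or omissions."
    )
    # The extra calls cost compute, but buy a revised, usually better, answer.
    return call_llm(
        f"Task:\n{task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
        "Produce an improved final answer."
    )
```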

I hope this helps you build great LLM-based tools.

Ref: learn.microsoft.com/en-us/semantic-kernel/when-to-use-ai/schillace-laws