It’s trendy to say you work with LLMs (Large Language Models) and that you use AI (Artificial Intelligence) in your business development projects.
But saying it is one thing, and doing it is another: doing it meaningfully, which in the business world means doing it in a way that adds value to the company and its business model.
In the case of a software development company like taniwa, we always consider the value that any work we do brings to us, to a client, or to an end user. The ultimate validation of a system is finding someone who considers it worth paying for.
If no one wants to pay, we’re still just playing around.
With LLMs there’s a lot of playing around - and a lot at stake, judging by what companies like Microsoft, Google or Amazon are betting - but there are hardly any reference projects that illustrate the value added and, above all, the value returned: the famous ROI (Return on Investment).
Let’s explain why this is happening:
LLMs are dumb
In reality, what an LLM knows how to do is complete sentences, complete texts. It’s trained so that you pass it a text and it returns another text related to the first. Nothing more. That’s why it’s important to provide good context in the text you send. It doesn’t know math, it doesn’t know physics, it doesn’t know chemistry, it doesn’t know anything: it knows how to complete sentences in a context.
LLMs are… generative?
Would you hire a person whose main characteristic is being generative? It could even be counterproductive, right? That trait may have its role, but if you put them in charge of generating news about the real estate world, they will generate it… but would you let them publish without supervision?
LLMs are not precise
They’re generative, not precise. If you ask an LLM to multiply 7271652376532 by 8973298732873, it will give you an answer. But it won’t be the correct one. Or maybe it will be, but you can’t know.
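A conventional runtime, by contrast, gets this right every time. A minimal Python sketch using the numbers from the paragraph above (Python integers have arbitrary precision, so the result is exact):

```python
# Exact integer multiplication -- no LLM needed, no uncertainty.
a = 7271652376532
b = 8973298732873
product = a * b

# We can even cross-check the result independently: the product mod 9
# must equal (a mod 9) * (b mod 9) mod 9 ("casting out nines").
assert product % 9 == (a % 9) * (b % 9) % 9

print(product)
```

The point is not the multiplication itself, but that with ordinary code you know the answer is correct; with an LLM you only know it is plausible.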
LLMs don’t know how to solve problems
In some cases it may seem like they do, but in reality they’ve generated a probable text given the formulation of a problem. They haven’t inferred anything.
LLMs are limited in their context memory
I can’t ask a question about Don Quixote and pass the entire text within the question. I have to select what I include and what I leave out, and in doing so I’m biasing the information that accompanies the question.
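That selection step can be sketched in a few lines. This is a deliberately naive relevance score based on word overlap (real systems typically use embeddings), but the biasing effect is the same: whatever scoring we choose decides what the LLM gets to see.

```python
def select_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Pick the k chunks sharing the most words with the question.

    Naive keyword-overlap scoring, as an illustration of the selection
    (and therefore biasing) that limited context windows force on us.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

chunks = [
    "In a village of La Mancha, the name of which I have no desire to recall...",
    "Sancho Panza rode on his donkey beside his master.",
    "The windmills that Don Quixote took for giants stood on the plain.",
]
print(select_chunks("What did Don Quixote think the windmills were?", chunks, k=1))
```

Only the selected chunk travels with the question; the rest of the book, relevant or not, never reaches the model.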
LLMs are fashionable
Everyone talks about them; everyone wants the credit for working on or leading such a project. This creates a lot of noise, and you find very superficial information that confuses stakeholders.
So, When Should You Use LLMs?
Look, LLMs are really cool. Their generative capabilities are phenomenal for generating, completing, and transforming texts, and they can be applied in situations like:
- Give me a summary of a topic and tell me the key points.
- Help me draw up a plan to “conquer the world”.
- Help me generate a text that helps me sell a product.
Our 2 tips for using LLMs are:
- Use them as one more piece of a more complex system, with more pieces.
- If there’s another piece that does a specific task better, use that other piece and not an LLM.
If you have a generic chatbot that supports users with math questions, you’re going to need:
- A piece like Wolfram Alpha to solve mathematical problems. Not an LLM.
- A piece between the LLM and the user that saves the conversation context and allows the user to return to a previous point in the conversation. The LLM doesn’t know how to do that.
- A database that has chunks of documents relevant to the user. The LLM doesn’t know how to do that.
- A context creation system for the LLM from queries to your documents.
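The first of those pieces - routing math away from the LLM - can be sketched in a few lines. `solve_math` and `ask_llm` here are hypothetical stand-ins; in a real system Wolfram Alpha and an actual model would sit behind them.

```python
import re

def solve_math(expression: str) -> str:
    # Stand-in for a real math engine like Wolfram Alpha.
    # Input is pre-filtered by the router's regex to arithmetic only;
    # still, this eval is a sketch -- never eval untrusted input.
    return str(eval(expression, {"__builtins__": {}}))

def ask_llm(prompt: str) -> str:
    # Stand-in for an actual LLM call.
    return f"[LLM answer to: {prompt}]"

def route(user_message: str) -> str:
    """Send arithmetic to the exact piece; everything else to the LLM."""
    if re.fullmatch(r"[\d\s+\-*/()]+", user_message):
        return solve_math(user_message)
    return ask_llm(user_message)

print(route("7271652376532 * 8973298732873"))  # exact, fast, cheap
print(route("Summarize Don Quixote in two sentences"))
```

The LLM never sees the arithmetic at all; it only handles the questions it is actually good at.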
LangChain - for example - gets it, and offers components to combine with LLMs into systems that are reliable, accurate, and add value.
Before asking your LLM, you can look something up on Wikipedia, check the weather, convert Fahrenheit to Celsius with a calculator, and use all of that to build a context that helps the LLM solve your problem.
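A sketch of that idea: deterministic pieces gather the facts and the conversion, and only then is the prompt assembled for the model. (`ask_llm` and the weather value are hypothetical; a real system would call a weather API and an actual model.)

```python
def fahrenheit_to_celsius(f: float) -> float:
    # Exact, deterministic conversion -- no LLM involved.
    return (f - 32) * 5 / 9

def build_context(city: str, temp_f: float) -> str:
    # Facts gathered by reliable pieces (weather API, calculator,
    # Wikipedia...) become grounded context for the model.
    temp_c = fahrenheit_to_celsius(temp_f)
    return (
        f"Current temperature in {city}: {temp_f}°F ({temp_c:.1f}°C).\n"
        "Answer the user's question using the facts above."
    )

# The assembled prompt is what gets sent to the LLM:
prompt = build_context("Madrid", 77.0) + "\nShould I wear a coat today?"
print(prompt)
```

The model no longer has to guess the temperature or do the arithmetic; it only has to do what it is good at, generating a helpful answer from the context it was given.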
Additionally, an LLM is slow and expensive. If what I want is an integer calculator, the best thing is to have a specialized piece that will do it muuuch faster and muuuch cheaper.
We do have reference projects that use LLMs, and if you think we can help you design or understand a problem and its fit in this generative world, don’t hesitate to contact us at hola@taniwa.es.

