We explain how large language models (LLMs) work, what they’re good at, and where they can fail. You’ll learn what prompts are, why hallucinations happen, and how context helps. This sets the foundation for building AI assistants that understand queries and carry on meaningful conversations.
LLM stands for Large Language Model — a powerful AI system trained on a massive amount of text data. The most well-known models are ChatGPT, Claude, GigaChat, YandexGPT, DeepSeek, and Qwen. Some of them are open-source, and you can even run them on your own server.
For a user, an LLM is a black box:
You give it text — and get a meaningful output: a translation, a summary, or even code.
The key concept here is the prompt — that’s what you feed into the model.
For example:
“Translate this text into Spanish” or “Write a Telegram bot in Python.”
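Under the hood, sending a prompt is just one HTTP request to the model's API. Here's a minimal Python sketch against OpenAI's chat completions endpoint, using only the standard library. The model name and the `OPENAI_API_KEY` environment variable are assumptions — adapt them to your account:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """The chat API expects a list of role/content messages."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt: str) -> str:
    """Send the prompt and return the model's text reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Usage (needs a real key in OPENAI_API_KEY):
#   print(ask("Translate this text into Spanish: Hello, world!"))
```

The text in, text out shape is the whole interface: everything else (translation, summaries, code) is just a different prompt.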
It’s important to understand: an LLM doesn’t look up facts in a database. It predicts the most likely next words based on patterns in its training data.
That’s why it sometimes hallucinates — confidently making things up.
Common and powerful use cases include translation, summarization, answering questions, and writing code. The most important part:
→ Formulate your prompt clearly and thoughtfully.
Let’s take an example. You ask:
“What books did Shakespeare write in the 20th century?”
The model might reply:
“In 1923, Shakespeare published The Queen’s Tragedy…”
Sounds convincing — but it’s a complete fabrication. Shakespeare died in the 17th century.
These confident but false answers are called hallucinations.
To solve this problem, we use the RAG approach — Retrieval-Augmented Generation — where we feed the model real, factual data.
More on that in Lesson 2.
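To make the idea concrete before Lesson 2, here's a toy sketch of the RAG loop: retrieve the facts most relevant to the question, then build a prompt that forces the model to answer from them. The keyword-overlap "retrieval" below is a deliberately simple stand-in for real retrieval (e.g. vector search), and all names are illustrative:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by how many words they share with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved facts so the model answers from real data, not memory."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only these facts:\n{context}\n\nQuestion: {query}"

facts = [
    "Shakespeare died in 1616.",
    "Hamlet was written around 1600.",
    "The First Folio was published in 1623.",
]
print(build_rag_prompt("When did Shakespeare die?", facts))
```

With the true fact sitting right in the prompt, the model has no room to invent a 20th-century Shakespeare.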
Let’s build a simple app on Directual: an assistant that recommends movies.
The user enters a description or preference, and the model suggests three suitable movies. We’ll use the ChatGPT API, with an API key from your OpenAI account.
We’ll set up:
The response is saved to the database and instantly shown in the interface.
→ In the end, you’ll have a working AI-powered service that responds to real users.
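The core of the app is the request it sends to the model. Here's a Python sketch of that request body; the model name and the exact prompt wording are assumptions, and in Directual the equivalent JSON would go into whichever step calls the OpenAI API:

```python
def movie_prompt(preference: str) -> str:
    """Turn the user's free-text preference into a clear instruction."""
    return (
        "You are a movie recommendation assistant. "
        f"Suggest exactly 3 movies for a user who says: {preference!r}. "
        "Reply with a numbered list: title, year, and a one-line reason."
    )

def movie_request_body(preference: str, model: str = "gpt-4o-mini") -> dict:
    """Build the JSON payload for the chat completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": movie_prompt(preference)}],
    }

body = movie_request_body("cozy detective stories set in small towns")
```

Pinning the output format ("exactly 3", numbered list) in the prompt makes the reply easy to save to the database and render in the interface.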
You can download the app snapshot from this lesson.
⚠️ After importing, you’ll need to manually install all the required plugins!
Congratulations — you’ve taken your first step!
You now know what an LLM is, how prompts work, why hallucinations happen, and how to wire a model into a real app.
But things are about to get even more interesting.
In Lesson 2, we’ll learn how to build a smarter AI — one that replies based on your actual data, not just guesses.
We’ll explore the RAG approach: retrieving real data from your own sources and feeding it to the model…
… and build all of that with your own hands.
Hop into our cozy community and get help with your projects, meet potential co-founders, chat with platform developers, and so much more.