
Announcing Yast
Yast is live: describe AI agents in plain English, connect your tools, and let them improve after every run.
A practical look at lazy-loaded agent skills: metadata first, full files only when needed, and no persistent VM required.
Skills are one of the highest-leverage ideas in agent design. A skill is a small package of instructions and supporting files that teaches an agent how to do a task: follow a company's sales research process, use an internal API safely, format a report, or apply a team's coding standard.
The obvious implementation is to give every agent a filesystem or VM, mount every available skill, and let the agent inspect them. That works, but it is wasteful. Most agents only need a few skills per run.
At Yast, we made skills lazy. The agent first sees a tiny index of available skills. Full files are only loaded when the agent selects a specific skill.
Yast runs agents inside short-lived action runtimes. Skills are stored as normal application records with file metadata, while the actual file contents live in object storage.
That gives us a useful split. The backend remains the source of truth, while the execution runtime only creates local files when a tool requires local paths.
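As a rough sketch of that split, a skill record might carry little more than its slug, its discovery text, and a list of file entries that point at object-storage keys. The field names below are illustrative, not Yast's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SkillFile:
    path: str         # relative path inside the skill, e.g. "SKILL.md" or "scripts/run.py"
    storage_key: str  # object-storage key; contents are fetched only on demand

@dataclass
class Skill:
    slug: str         # stable identifier, e.g. "account-research"
    name: str
    description: str  # the only text shown at discovery time
    files: list[SkillFile] = field(default_factory=list)
```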
The challenge is that many skill loaders expect a directory of skill folders, usually with a SKILL.md file and optional supporting files. We wanted that interface without permanently maintaining a filesystem per organization or booting a VM just to expose a handful of markdown files.
When an agent run starts, we create a temporary skills directory containing only one small SKILL.md per skill. Each file contains the skill name and description, enough for discovery and selection.
No supporting files are fetched. No large instructions are loaded. The model sees a list of skills it can choose from, but the expensive part has not happened yet.
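A minimal sketch of that discovery step, reusing the illustrative Skill record above; the exact SKILL.md front-matter layout is an assumption, not Yast's format.

```python
import tempfile
from pathlib import Path

def build_discovery_dir(skills: list[Skill]) -> Path:
    """Lay out one metadata-only SKILL.md per skill for the agent to browse."""
    root = Path(tempfile.mkdtemp(prefix="skills-"))
    for skill in skills:
        skill_dir = root / skill.slug
        skill_dir.mkdir(parents=True, exist_ok=True)
        # Name and description only: enough for selection, cheap enough to
        # show for every available skill.
        (skill_dir / "SKILL.md").write_text(
            f"---\nname: {skill.name}\ndescription: {skill.description}\n---\n"
        )
    return root
```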
When the agent selects a skill, the runtime maps the skill name back to the stored skill record, downloads its files through signed storage URLs, and creates a complete temporary directory for just that skill.
The skill can include markdown, scripts, examples, fixtures, or any other files the execution environment needs. But the run only pays that cost for skills the agent actually uses.
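In sketch form, hydrating a selected skill might look like the following, where sign_url is a hypothetical helper that mints a short-lived signed GET URL for a storage key.

```python
import urllib.request
from pathlib import Path

def materialize_skill(skill: Skill, root: Path, sign_url) -> Path:
    """Download a selected skill's files into root/<slug> and return that directory."""
    skill_dir = root / skill.slug  # slug and path sanitization is covered below
    skill_dir.mkdir(parents=True, exist_ok=True)
    for f in skill.files:
        target = skill_dir / f.path
        target.parent.mkdir(parents=True, exist_ok=True)
        # sign_url is assumed to return a short-lived signed GET URL for the storage key.
        with urllib.request.urlopen(sign_url(f.storage_key)) as resp:
            target.write_bytes(resp.read())
    return skill_dir
```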
The filesystem is an implementation detail, not the source of truth. Temporary directories are just compatibility adapters for tools that expect local paths.
If no skill is selected, no full skill files are downloaded. If one skill is selected, one skill is materialized. If three skills are selected, three skills are materialized. When the run ends, the temporary runtime state can disappear because the canonical data is still in the backend.
Skill slugs and file paths are normalized into safe path segments. The code rejects dot and dot-dot segments, converts nested slugs into safe directory names, and strips redundant leading path segments from stored skill file paths. This prevents a skill file from escaping the generated skill directory.
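Those checks could look roughly like this; the helper names are hypothetical and the rules are simplified.

```python
from pathlib import PurePosixPath

def safe_segment(slug: str) -> str:
    """Collapse a (possibly nested) slug into a single safe directory name."""
    parts = [p for p in slug.split("/") if p not in ("", ".", "..")]
    if not parts:
        raise ValueError(f"unusable skill slug: {slug!r}")
    return "--".join(parts)

def safe_relative_path(stored_path: str, slug: str) -> PurePosixPath:
    """Normalize a stored file path so it cannot escape the generated skill directory."""
    parts = list(PurePosixPath(stored_path).parts)
    # Drop a redundant leading "<slug>/" segment if the stored path carries one.
    if parts and parts[0] == slug:
        parts = parts[1:]
    if not parts or any(p in (".", "..", "/") for p in parts):
        raise ValueError(f"unsafe skill file path: {stored_path!r}")
    return PurePosixPath(*parts)
```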
Selected skills are cached by name within the run. If the agent asks for the same skill again, we reuse the same materialization work instead of refetching and recreating the runtime view.
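A per-run cache keyed by skill is enough for that, sketched here on top of the earlier materialize_skill example.

```python
class SkillCache:
    """Per-run cache: each selected skill is materialized at most once."""

    def __init__(self, root: Path, sign_url):
        self.root = root
        self.sign_url = sign_url
        self._materialized: dict[str, Path] = {}

    def get(self, skill: Skill) -> Path:
        if skill.slug not in self._materialized:
            self._materialized[skill.slug] = materialize_skill(
                skill, self.root, self.sign_url
            )
        return self._materialized[skill.slug]
```

The second request for a skill returns the directory created the first time, so repeated selections within a run cost nothing extra.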
This design is effectively free for the common case because discovery is tiny. A metadata-only SKILL.md is just a name and description. The token cost is close to showing the agent another tool description.
The expensive work scales with selected skills, not available skills. If an organization has 100 skills but an agent needs two, the run pays for two full hydrations. For agents that need only a few skills, which is the common case, the marginal cost is almost indistinguishable from normal tool loading.
It also avoids paying for a VM boundary just to hold files. The skill files are streamed from storage into the runtime only after the model has made a useful narrowing decision.
Agents are good at choosing from compact descriptions. They do not need every implementation detail up front. Giving every skill file to the model at the beginning often makes performance worse because it pollutes context with irrelevant procedures.
The lazy approach matches the way agents reason. First, identify what capability is needed. Then load the procedure for that capability. Then execute with the supporting files available.
For a sales agent, that might mean loading only the "account research" and "outreach tone" skills. For a support agent, it might mean loading only the "refund policy" skill. For an engineering agent, it might mean loading only the repository-specific testing skill. Each run gets the relevant instructions without dragging along the rest of the organization's operating manual.
The broader lesson is that agent context should be progressively disclosed. Tools, files, memories, and skills do not all need to enter the run at the same time. A small index is enough to let the model choose.
Skills still behave like real files when an agent needs them, but the platform does not need to provision file systems or VMs as the primary abstraction. For teams with many specialized skills and agents that usually need only a few, that makes skills feel free in practice.
