Tag view

#llms

Cross-subject tag search for related interview cards.

Tagged with llms

9 cards

Artificial Intelligence · Medium · Theory

Fine-tuning vs prompting

Prompting changes the instructions at inference time, while fine-tuning changes model weights using additional training data.

  • Prompting is faster to iterate
  • Fine-tuning can shape style or behavior
  • Use the lighter tool first
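The "lighter tool first" rule can be sketched as a simple decision helper. This is an illustration only; the `choose_approach` function and the 0.9 accuracy target are assumptions, not a real workflow:

```python
# Decision sketch for "use the lighter tool first" (threshold hypothetical):
# iterate on prompts, and escalate to fine-tuning only when prompt
# iteration stalls below the target eval accuracy.

def choose_approach(prompt_accuracy, target=0.9):
    # Prompting is faster to iterate, so prefer it when it meets the bar.
    return "prompting" if prompt_accuracy >= target else "fine-tuning"
```

The point is that the switch is driven by measured evaluation results, not by intuition about which technique is "stronger".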

Artificial Intelligence · Medium · Theory

RAG basics

RAG retrieves relevant external context and supplies it to the model at generation time.

  • Retrieval before generation
  • Improves freshness and grounding
  • Quality depends on chunking and retrieval
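The retrieval-before-generation flow can be shown with a minimal sketch. The helpers below are hypothetical stand-ins (word-overlap scoring instead of a real embedding retriever, and no actual model call):

```python
# Minimal RAG sketch: retrieve the best-matching chunk, then prepend it
# to the prompt before generation. Word overlap stands in for a real
# retriever; quality in practice depends on chunking and retrieval.

def retrieve(query, chunks, k=1):
    """Score chunks by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, chunks):
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Tokens are chunks of text the model processes.",
    "RAG retrieves external context before generation.",
]
prompt = build_prompt("What does RAG retrieve?", docs)
```

Swapping the overlap scorer for vector search changes retrieval quality, but the shape of the pipeline stays the same.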

Artificial Intelligence · Easy · Theory

What are tokens in LLM systems?

Tokens are the chunks of text a model processes, and they drive both context window limits and usage cost.

  • Not exactly words
  • Context windows are token-based
  • Cost often scales with token count
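A rough sketch makes the token/limit/cost relationship concrete. Real tokenizers use subword schemes like BPE; the 4-characters-per-token heuristic and the price below are assumptions for illustration only:

```python
# Crude token math (assumption: ~1 token per 4 characters of English).
# Real tokenizers differ, but limits and cost are token-based either way.

def rough_token_count(text):
    return max(1, len(text) // 4)

def fits_context(text, window=8192):
    # Context windows are token-based, not word- or character-based.
    return rough_token_count(text) <= window

def estimated_cost(text, usd_per_1k_tokens=0.01):  # hypothetical price
    # Cost often scales with token count.
    return rough_token_count(text) / 1000 * usd_per_1k_tokens
```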

Artificial Intelligence · Medium · Theory

What is an agent in AI applications?

An AI agent is a system that uses a model plus tools, memory, and control flow to take multi-step actions toward a goal.

  • Model is only one component
  • Tools enable actions
  • Control logic matters for reliability
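The model-plus-tools-plus-control-flow idea can be sketched as a toy loop. Everything here is a stand-in: `stub_model` replaces a real LLM call, and the tool registry is hypothetical:

```python
# Toy agent loop: a stub "model" picks the next action, the app executes
# a tool, and explicit control logic (a step cap) bounds the run.

def stub_model(state):
    # Stand-in for an LLM deciding the next action from current state.
    return ("done", state) if state >= 3 else ("increment", state)

TOOLS = {"increment": lambda s: s + 1}  # tools enable actions

def run_agent(state=0, max_steps=10):
    for _ in range(max_steps):  # control logic caps the loop for reliability
        action, _ = stub_model(state)
        if action == "done":
            return state
        state = TOOLS[action](state)
    return state
```

Note that the model is only one component: the loop, the tool registry, and the step limit all live in application code.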

Artificial Intelligence · Easy · Theory

What is hallucination in generative AI?

A hallucination is a confident-looking model output that is unsupported, fabricated, or wrong.

  • Looks fluent but is false
  • RAG can reduce it but not eliminate it
  • Verification still matters
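The "verification still matters" point can be illustrated with a naive groundedness check. The function and its 70% overlap threshold are assumptions; production systems use far stronger checks (entailment models, citation verification):

```python
# Naive groundedness check (illustration only): flag an answer as possibly
# hallucinated when too few of its words appear in any source passage.

def is_supported(answer, sources, threshold=0.7):
    words = set(answer.lower().split())
    for src in sources:
        overlap = words & set(src.lower().split())
        if len(overlap) >= len(words) * threshold:
            return True
    return False
```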

Artificial Intelligence · Easy · Theory

What is a model's context window?

The context window is the maximum number of tokens a model can handle in one interaction, covering the prompt, any supplied context, and the generated output.

  • Includes prompt and retrieved context
  • Longer is useful but not free
  • Too much context can still hurt quality
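Because the window covers both the prompt and retrieved context, apps often budget tokens explicitly. This sketch uses an assumed 4-characters-per-token heuristic; real implementations count with the model's actual tokenizer:

```python
# Context budgeting sketch: keep prompt + retrieved chunks inside the
# window by dropping chunks once the token budget runs out.

def approx_tokens(text):
    # Assumption: ~1 token per 4 characters; use a real tokenizer in practice.
    return max(1, len(text) // 4)

def pack_context(prompt, chunks, window=100):
    budget = window - approx_tokens(prompt)
    packed = []
    for chunk in chunks:
        cost = approx_tokens(chunk)
        if cost > budget:
            break  # longer windows help, but they are not free
        packed.append(chunk)
        budget -= cost
    return packed
```

Capping context deliberately also guards against the last bullet: stuffing the window rarely improves output quality.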

Artificial Intelligence · Medium · Theory

What is model routing?

Model routing chooses different models or configurations based on task complexity, latency needs, cost, or safety requirements.

  • Cheaper models for simple tasks
  • Stronger models for harder tasks
  • Routing policy is a product decision
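A routing policy can be as small as a couple of conditionals. The model names and the length threshold below are made up for illustration; real policies weigh latency, cost, and safety requirements too:

```python
# Model routing sketch: send short, simple requests to a cheaper model
# and anything needing reasoning (or long inputs) to a stronger one.

def route(request, needs_reasoning=False):
    if needs_reasoning or len(request) > 500:
        return "large-model"   # hypothetical stronger, pricier model
    return "small-model"       # hypothetical cheaper, faster model
```

Which requests count as "simple" is the product decision the card refers to; the code only encodes whatever policy the team chooses.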

Artificial Intelligence · Easy · Theory

What is prompt engineering?

Prompt engineering is the practice of structuring instructions and context so a model produces more reliable outputs.

  • Instruction quality matters
  • Examples can shape behavior
  • Evaluation is still required
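Structuring instructions plus examples can be sketched as a few-shot prompt template. The template layout is an assumption, one common shape among many:

```python
# Few-shot prompt sketch: a clear instruction, worked examples that shape
# behavior, then the actual task in the same format.

def few_shot_prompt(instruction, examples, task):
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {task}\nOutput:"

prompt = few_shot_prompt("Add the numbers.", [("2+2", "4")], "3+5")
```

Whether a given template actually improves reliability still has to be checked with evaluation, as the last bullet notes.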

Artificial Intelligence · Easy · Theory

What is tool calling in LLM apps?

Tool calling lets a model request an external function or service when the answer requires real actions or fresh data.

  • Model decides a tool is needed
  • App executes the tool
  • Model continues with the result
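The three steps above can be sketched as a minimal loop. The message format, tool registry, and `stub_model` are all hypothetical stand-ins for a real chat API:

```python
# Tool-calling sketch mirroring the three steps: the model requests a
# tool, the app executes it, and the model resumes with the result.

TOOLS = {"get_time": lambda: "12:00"}  # hypothetical tool registry

def stub_model(messages):
    """Stand-in for an LLM chat call."""
    if not any(m["role"] == "tool" for m in messages):
        # Step 1: the model decides a tool is needed.
        return {"tool_call": {"name": "get_time", "args": {}}}
    # Step 3: the model continues with the tool result.
    result = next(m["content"] for m in messages if m["role"] == "tool")
    return {"content": f"The time is {result}."}

def chat(user_text):
    messages = [{"role": "user", "content": user_text}]
    reply = stub_model(messages)
    if "tool_call" in reply:
        call = reply["tool_call"]
        # Step 2: the app, not the model, executes the tool.
        result = TOOLS[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": result})
        reply = stub_model(messages)
    return reply["content"]
```

Keeping execution in application code is the key design point: the model only requests actions, so the app can validate or refuse them.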
