Tag view

#safety

Cross-subject tag search for related interview cards.

Tagged with safety

2 cards

Artificial Intelligence · Easy · Theory

What are guardrails in AI applications?

Guardrails are checks and controls around the model that reduce unsafe, low-quality, or out-of-policy behavior.

  • Can run before or after model output
  • Includes validation and policy checks
  • Works with human review when needed
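
The bullets above can be sketched as a minimal pre/post guardrail pair in Python. The banned-topic list, the card-number pattern, and the function names are illustrative assumptions, not part of any particular guardrails framework:

```python
import re

# Hypothetical output policy: redact credit-card-like number sequences.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def input_guardrail(prompt: str, banned_topics=("weapons",)) -> bool:
    """Pre-generation check: reject prompts that touch a banned topic."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in banned_topics)

def output_guardrail(text: str) -> str:
    """Post-generation check: redact spans that violate the policy."""
    return CARD_PATTERN.sub("[REDACTED]", text)

# Usage
assert input_guardrail("How do I bake bread?")
print(output_guardrail("Card: 4111 1111 1111 1111"))  # Card: [REDACTED]
```

A prompt that fails the input check would never reach the model; an output that fails the output check is rewritten (or, in a real system, escalated to human review).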

Artificial Intelligence · Easy · Theory

What is hallucination in generative AI?

A hallucination is a confident-looking model output that is unsupported, fabricated, or wrong.

  • Looks fluent but is false
  • Retrieval-augmented generation (RAG) can reduce it but not eliminate it
  • Verification still matters
