Hallucination occurs because LLMs generate text from statistical patterns, not verified facts. Mitigation strategies include RAG (grounding responses in retrieved documents), prompting the model to cite its sources, using lower temperature settings, and adding output validation layers. For high-stakes business workflows, always insert a human review step or an automated fact-checking agent before acting on LLM outputs.
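
As a minimal sketch of one of these strategies, the validation layer below checks that every passage the model cites actually appears in the retrieved documents before the answer is accepted. The function name, JSON output shape, and sample data are illustrative assumptions, not any particular library's API.

```python
# Minimal sketch of an output validation layer (illustrative assumptions throughout).
# Assumes the LLM was prompted to return JSON with "answer" and "citations",
# where each citation quotes a passage from the retrieved documents.
import json


def validate_citations(llm_output: str, retrieved_docs: list[str]) -> tuple[bool, list[str]]:
    """Return (is_grounded, problems). A cited quote must appear verbatim
    (case-insensitive) in at least one retrieved document."""
    problems: list[str] = []
    try:
        parsed = json.loads(llm_output)
    except json.JSONDecodeError:
        return False, ["output is not valid JSON"]

    citations = parsed.get("citations", [])
    if not citations:
        problems.append("no citations provided")

    corpus = [doc.lower() for doc in retrieved_docs]
    for quote in citations:
        if not any(quote.lower() in doc for doc in corpus):
            problems.append(f"citation not found in retrieved documents: {quote!r}")

    return (not problems), problems


if __name__ == "__main__":
    docs = ["Q3 revenue grew 12% year over year, driven by the enterprise segment."]
    output = json.dumps({
        "answer": "Revenue grew 12% in Q3.",
        "citations": ["Q3 revenue grew 12% year over year"],
    })
    ok, issues = validate_citations(output, docs)
    # Route to a human reviewer or reject the response when grounding fails.
    print("grounded" if ok else f"needs review: {issues}")
```

A check like this catches fabricated citations cheaply; outputs that fail it can be routed to the human review step rather than acted on directly.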