Zero-shot prompting works well for common tasks the model has seen frequently in training data: translation, summarization, classification. It fails for highly specialized or nuanced tasks where the model lacks sufficient context. When zero-shot produces poor results, the first intervention is few-shot prompting (providing 2–5 examples), not fine-tuning.
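The few-shot intervention described above can be sketched as a simple prompt-assembly step. The task, helper name, and example data below are hypothetical, chosen only to illustrate the pattern of prepending 2–5 worked examples before the new input:

```python
# Minimal sketch of few-shot prompt construction (illustrative only;
# the task, function name, and examples are hypothetical).

def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: an instruction, a handful of
    worked input/output pairs, then the new input to complete."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")  # the model continues from here
    return "\n".join(parts)

# Hypothetical specialized task where zero-shot labels were inconsistent:
# classifying support tickets by urgency.
examples = [
    ("Server is down, customers can't check out", "critical"),
    ("Typo on the pricing page", "low"),
    ("Export to CSV times out for large reports", "medium"),
]
prompt = build_few_shot_prompt(
    "Classify each support ticket's urgency as critical, medium, or low.",
    examples,
    "Login emails are delayed by several hours",
)
print(prompt)
```

The examples anchor the label set and output format, which is often enough to recover performance on a task the model handles poorly zero-shot; only if this still fails does fine-tuning become worth the cost.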