v0.4.0 — Breaking Changes
📦 ragas · ⚠ 2 breaking · ✨ 5 features · 🐛 9 fixes · 🔧 16 symbols
Summary
This release introduces major architectural updates: numerous metrics migrate to a modular BasePrompt system, and LLM provider support is enhanced via instructor.from_provider and dual adapter support. It also includes several bug fixes for LangChain integration and LLM detection.
⚠️ Breaking Changes
- The migration of several metrics (ContextRelevance, ResponseGroundedness, AnswerAccuracy, Faithfulness, AnswerCorrectness, SummaryScore, AnswerRelevancy, ContextEntityRecall, FactualCorrectness, NoiseSensitivity) to the modular BasePrompt architecture may require updates to custom metric implementations that relied on internal structures that have changed.
- LangChain import statements have been updated; users relying on specific internal LangChain imports within RAGAS integrations may need to adjust their code.
Migration Steps
- Review and update any custom metric implementations due to the migration of numerous metrics to the modular BasePrompt architecture.
- Review and update any code relying on specific internal LangChain import paths that were modified.
- Follow the customization how-to guides to adopt the collections API and LLM factory as documented.
- Consult the migration guide for v0.4 for detailed instructions on upgrading.
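What the BasePrompt migration looks like will vary per codebase. As a rough illustration, a prompt string that previously lived inline in a custom metric becomes a small prompt class. The classes and method names below are hypothetical stand-ins, not the actual ragas API; consult the v0.4 migration guide for the real interfaces.

```python
# Hypothetical sketch of the BasePrompt-style pattern; the real ragas
# base class and method names may differ.

class BasePrompt:  # stand-in for the library's modular prompt base class
    instruction: str = ""

    def to_string(self, **inputs) -> str:
        # Render the instruction template with the caller-supplied inputs.
        return self.instruction.format(**inputs)


class FaithfulnessPrompt(BasePrompt):
    # Previously this string might have been hard-coded inside the metric.
    instruction = (
        "Given the answer:\n{answer}\n"
        "and the retrieved context:\n{context}\n"
        "list each claim in the answer and state whether the context supports it."
    )


prompt = FaithfulnessPrompt()
rendered = prompt.to_string(
    answer="Paris is in France.",
    context="Paris, France ...",
)
```

The benefit of the modular shape is that a custom metric can swap or adapt the prompt object without touching the metric's scoring logic.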
✨ New Features
- Support for GPT-5 and o-series models with automatic temperature and top_p constraint handling.
- Migration to instructor.from_provider for universal provider support.
- Implementation of a prompt class for context precision.
- Dual adapter support added for Instructor and LiteLLM.
- Support for EvaluationDataset backend for experiments.
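The automatic temperature and top_p constraint handling for GPT-5 and o-series models can be pictured as a parameter filter applied before the request is sent. The model-prefix list and filtering rule below are illustrative assumptions for this sketch, not the exact logic ragas ships:

```python
# Illustrative sketch: drop sampling parameters that reasoning-model
# families reject. The prefix tuple is an assumption for this example.
RESTRICTED_PREFIXES = ("o1", "o3", "o4", "gpt-5")

def constrain_params(model: str, params: dict) -> dict:
    """Remove temperature/top_p for models that do not accept them."""
    if model.startswith(RESTRICTED_PREFIXES):
        return {k: v for k, v in params.items() if k not in ("temperature", "top_p")}
    return dict(params)

print(constrain_params("gpt-5", {"temperature": 0.2, "top_p": 0.9, "max_tokens": 64}))
# -> {'max_tokens': 64}
print(constrain_params("gpt-4o", {"temperature": 0.2}))
# -> {'temperature': 0.2}
```

Handling this in one place means callers can pass the same evaluation config to any model and let the wrapper strip what a given family does not support.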
🐛 Bug Fixes
- Fixed import statements for langchain modules.
- Adjusted LLM parameters in evals.py.
- Resolved InstructorLLM detection bug.
- Fixed retrieved_contexts string filtering in LangChain integration.
- Corrected MultiTurnSample user_input validation logic.
- Fixed automatic embedding provider matching for LLMs.
- Fixed detection of async clients in closures for instructor-wrapped litellm routers.
- Fixed quickstart issues.
- Made GoogleEmbeddings handle GenerativeModel clients by auto-extracting the genai module.
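The retrieved_contexts fix concerns LangChain results arriving as mixed types rather than plain strings. A minimal illustration of this kind of string filtering (the function and stand-in class here are hypothetical, not the actual ragas code):

```python
# Hypothetical illustration: normalize retrieved contexts to plain strings,
# coercing objects that carry a page_content attribute (as LangChain
# Document objects do) and dropping anything else.

def normalize_contexts(retrieved) -> list[str]:
    contexts = []
    for item in retrieved:
        if isinstance(item, str):
            contexts.append(item)
        elif hasattr(item, "page_content"):  # LangChain Document-like
            contexts.append(item.page_content)
        # anything else (None, ints, ...) is silently dropped
    return contexts


class Doc:  # stand-in for langchain_core.documents.Document
    def __init__(self, page_content):
        self.page_content = page_content


print(normalize_contexts(["plain text", Doc("doc text"), None]))
# -> ['plain text', 'doc text']
```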
🔧 Affected Symbols
ContextRelevance, ResponseGroundedness, AnswerAccuracy, Faithfulness, AnswerCorrectness, SummaryScore, AnswerRelevancy, ContextEntityRecall, FactualCorrectness, NoiseSensitivity, BasePrompt, InstructorLLM, EvaluationDataset, GoogleEmbeddings, langchain modules