Instructor
AI & LLMs · structured outputs for llms
Release History
v1.14.5 (1 fix): This release fixes an issue where the Author metadata field was not being populated correctly for PyPI statistics by ensuring author names are properly separated from emails.
v1.14.4 (4 fixes): This release focuses on stability and correctness, including fixes for validation errors, configuration label loss, and crashes related to list object processing.
v1.14.3 (2 fixes, 2 features): This release introduces completeness-based validation for partial streaming and fixes bugs related to stream handling and field constraints during streaming.
v1.14.2 (2 fixes, 1 feature): This release addresses critical bugs related to model validation during partial streaming and fixes infinite recursion issues with self-referential models.
v1.14.1 (1 fix, 1 feature): This patch release introduces support for Google GenAI context caching via the cached_content parameter.
v1.14.0 (6 fixes, 4 features): This release focuses on standardizing provider factory methods and exception handling, while adding Bedrock document support and fixing critical bugs in the GenAI, OpenAI, and Cohere integrations.
v1.13.0 (5 fixes, 2 features): This release introduces image support for Bedrock, improves type safety with a py.typed marker, and includes critical fixes for Gemini streaming and Anthropic tool blocks.
v1.12.0 (9 fixes, 6 features): This release introduces enhanced retry tracking, per-call hooks, and xAI streaming support while fixing critical bugs in OpenAI JSON mode and Gemini response handling. It also marks the transition from validation_context to a unified context parameter.
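The unified context parameter mentioned for v1.12.0 reaches your Pydantic validators through ValidationInfo. A minimal sketch, assuming Pydantic v2 (the model and field names are illustrative, and here the model is validated directly rather than through an LLM call):

```python
from pydantic import BaseModel, ValidationError, ValidationInfo, field_validator

class Answer(BaseModel):
    citation: str

    @field_validator("citation")
    @classmethod
    def citation_in_source(cls, v: str, info: ValidationInfo) -> str:
        # Whatever you pass as context= at call time is available on info.context.
        source = (info.context or {}).get("source_text", "")
        if v not in source:
            raise ValueError("citation must be an exact quote from the source text")
        return v

# Validating directly, the way the library applies context under the hood:
ok = Answer.model_validate(
    {"citation": "the sky is blue"},
    context={"source_text": "Observations confirm the sky is blue today."},
)

# A citation not found in the provided context is rejected:
try:
    Answer.model_validate({"citation": "pigs can fly"}, context={"source_text": "no"})
    rejected = False
except ValidationError:
    rejected = True
```

With instructor, the same dict would be supplied via the create call's context argument instead of being passed to model_validate by hand.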
v1.11.3 (1 fix, 3 features): This release introduces enhanced retry tracking, per-call hook support, and llms.txt documentation support, while fixing multimodal import issues.
v1.11.2 (2 fixes, 2 features): This release enhances Google Cloud Storage support for multimodal data types and restores backwards compatibility for exception imports.
v1.11.0 (breaking, 3 fixes, 5 features): This release introduces a major modular reorganization of the codebase, adds support for the xAI, OpenRouter, and Truefoundry providers, and implements in-memory batching.
v1.10.0 (breaking, 5 fixes, 7 features): This release introduces native caching (Redis/AutoCache), expands provider support to include DeepSeek and Anthropic parallel tools, and migrates Google integrations to the new google-genai SDK.
v1.9.2 (4 fixes, 1 feature): This release introduces support for the xAI provider and includes several bug fixes for Gemini API safety settings and GenAI image harm categories.
v1.9.1 (4 fixes, 2 features): This release introduces Azure OpenAI support and simplifies Gemini safety configuration while fixing public API visibility for exceptions and JSON schema issues.
v1.9.0 (breaking, 6 fixes, 7 features): This release introduces Ollama and Writer provider support, improves the Gemini and Anthropic integrations, and standardizes VertexAI async parameters. It also enhances error handling with a new exception hierarchy and resolves several dependency conflicts.
v1.8.3 (4 fixes, 4 features): This release introduces support for asynchronous Bedrock clients and response handling, alongside various bug fixes for the Bedrock converse endpoint and documentation improvements.
v1.8.2 (1 fix): This patch release removes a stray print statement to clean up console output.
v1.8.1 (2 fixes, 2 features): This release introduces a unified provider interface and enables streaming support directly within the create method, alongside fixes for Anthropic web search.
v1.8.0 (6 fixes, 1 feature): This release introduces a unified provider interface with string-based initialization and includes several bug fixes for Google GenAI and Python 3.10 type compatibility.
v1.7.9 (1 fix, 3 features): This release introduces async partial streaming for Gemini, adds Mistral PDF support, and improves type hints for the LiteLLM integration.
v1.7.8 (1 fix, 4 features): This release introduces streaming support for Mistral and VertexAI, fixes a filename-length bug in Google GenAI, and significantly expands the documentation, including Cursor rules and llms.txt support.
v1.7.7 (1 fix, 1 feature): This release introduces SambaNova examples for both sync and async workflows and includes minor dependency fixes.
v1.7.6 (1 fix): This patch release addresses an incorrect import discovered in version 1.7.5.
v1.7.5 (4 features): This release introduces support for Mistral Structured Outputs and the Google GenAI SDK, alongside documentation improvements for SQL models and contributing guidelines.
v1.7.4 (2 fixes, 1 feature): This release introduces support for OpenRouter, updates the Anthropic dependency, and includes several documentation and testing fixes.
v1.7.3 (3 fixes, 7 features): This release introduces support for AWS Bedrock and Perplexity Sonar, adds Claude 3.7 Sonnet reasoning support, and defaults Gemini to JSON mode. It also includes various documentation improvements and a new utility to strip control characters from LLM outputs.
Common Errors
InstructorRetryException (23 reports): InstructorRetryException usually arises from rate limiting or transient errors in the underlying API being called. Implement retry logic with exponential backoff and jitter around the API call within your instructor client. This handles temporary service unavailability or exceeded API usage limits, preventing the exception.
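A stdlib-only sketch of that backoff-with-jitter pattern; the flaky function here stands in for your instructor call, and all names are illustrative:

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=8.0):
    """Retry fn() on any exception, doubling the delay each attempt.

    The random factor (jitter) spreads retries out so many clients
    hitting the same rate limit don't all retry in lockstep.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: let the caller see the error
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay * random.uniform(0.5, 1.0))  # jittered backoff

# Demo: a stand-in call that fails twice (as if rate limited), then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = call_with_backoff(flaky, base_delay=0.01)
```

In practice you would wrap the instructor create call in fn; libraries such as tenacity offer the same pattern with more control.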
ModuleNotFoundError (4 reports): A ModuleNotFoundError in instructor typically arises when required dependencies are not installed or are inaccessible in the Python environment. To resolve this, ensure all necessary packages, especially optional dependencies like the Google libraries, are installed using `pip install instructor[extra]` (or specific extras like `pip install instructor[google]`) and that your Python environment is correctly activated. Verify the package name isn't misspelled in your code as well.
NotImplementedError (2 reports): A NotImplementedError in instructor usually means a function or method required by a chosen model or integration (such as Bedrock) has not been fully implemented in the underlying library (e.g., boto3). To fix this, ensure you are using the latest versions of both instructor and the relevant provider library, and if the error persists, check the provider's documentation or issue tracker for information on supported features or potential workarounds. If no fix is available, consider implementing the missing functionality yourself.
ValidationError (2 reports): A ValidationError in instructor often arises from mismatches between the expected data structure defined in your instructor model and the actual data returned by the language model. Carefully inspect your model's fields, types, and validation constraints. Ensure the LLM response aligns precisely, and use validation hooks (pre/post) to catch and correct discrepancies or provide better guidance to the LLM.
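The mismatch described above can be reproduced without an LLM at all, assuming Pydantic v2 is available (the model and values are illustrative):

```python
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    name: str
    age: int

# A well-formed response validates cleanly, with coercion where sensible:
good = User.model_validate({"name": "Ada", "age": 36})

# A response whose field can't be coerced to the declared type raises
# ValidationError, which instructor surfaces so the model can be reprompted:
try:
    User.model_validate({"name": "Ada", "age": "thirty-six"})
    raised = False
except ValidationError:
    raised = True
```

Inspecting the exception's error list tells you exactly which field and constraint failed, which is the fastest way to decide whether the schema or the prompt needs adjusting.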
InstructorValidationError (2 reports): An InstructorValidationError arises when the response from the LLM fails to conform to the Pydantic model defined for the call. To fix this, check that your model's structure, types, and constraints realistically match what the LLM can produce, and set max_retries on the create call so instructor feeds the validation error back to the LLM and asks it to correct its response.
ParamValidationError (2 reports): A ParamValidationError in instructor usually arises from incorrect data types or missing required fields when calling AWS Bedrock or botocore APIs. Ensure all parameters passed to these APIs match the data types and constraints defined in the API specification, paying close attention to nested structures and required fields. Update your botocore and boto3 packages to the latest versions, as newer versions often include updated parameter validation rules and bug fixes that resolve these errors.
Related AI & LLMs Packages
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
🦜🔗 The platform for reliable agents.
The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
LLM inference in C/C++
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.