Pinecone Client
The Pinecone Python client
Release History
v8.1.0 (2 features): This release introduces configuration options for BYOC index read capacity and adds advanced query parameters (`scan_factor`, `max_candidates`) for improved recall on dedicated node indexes.
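A minimal sketch of how these parameters might be combined in a query, assuming the `scan_factor` and `max_candidates` names above; the index name, embedding dimension, and values are all illustrative, not tuned recommendations.

```python
# Sketch: assembling query options for a dedicated node index using the
# advanced recall parameters introduced in v8.1.0. All values here are
# illustrative.
query_kwargs = {
    "vector": [0.1] * 1536,   # query embedding; dimension must match the index
    "top_k": 10,              # number of matches to return
    "scan_factor": 4,         # scan more of the index per query: higher recall, more latency
    "max_candidates": 1000,   # cap on candidates considered before final ranking
}

# With a live client, these would be passed straight through, e.g.:
#   from pinecone import Pinecone
#   index = Pinecone(api_key="...").Index("example-index")
#   results = index.query(**query_kwargs)
```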
v8.0.1 (Breaking, 1 fix): This release patches a critical denial-of-service vulnerability (CVE-2025-4565) affecting gRPC users by upgrading the protobuf dependency. The minimum required protobuf version is now 6.33.0.
v8.0.0 (Breaking, 4 features): Version 8.x introduces dedicated read capacity configuration for serverless indexes and new ways to query and update vectors using metadata filters. This release requires Python 3.10+ and changes the default handling of the `namespace` parameter in gRPC methods, which is a breaking change.
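Given the change to namespace defaults, a defensive pattern is to pass `namespace` explicitly on every data-plane call rather than relying on the default. A hedged sketch, with illustrative vector IDs, values, and namespace:

```python
# Sketch: after v8.0.0, avoid relying on the default namespace in gRPC
# methods; state it explicitly on every upsert/query/delete so behavior is
# unambiguous across client versions. Data here is illustrative.
upsert_kwargs = {
    "vectors": [("vec-1", [0.1, 0.2, 0.3]), ("vec-2", [0.4, 0.5, 0.6])],
    "namespace": "production",  # explicit, never the implicit default
}

# With a live gRPC client this would be:
#   from pinecone.grpc import PineconeGRPC
#   index = PineconeGRPC(api_key="...").Index("example-index")
#   index.upsert(**upsert_kwargs)
```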
v7.3.0 (2 features): This minor release introduces the Admin API for managing projects and keys via REST, and adds namespace management capabilities to the gRPC client.
v7.2.0 (1 fix, 2 features): This minor release introduces new RESTful methods for managing index namespaces and enables configuration of integrated embedding models for existing indexes.
v7.1.0 (1 fix): This release fixes a bug where gRPC methods called with `async_req=True` ignored user-defined timeouts; timeouts are now enforced correctly.
v7.0.2 (2 fixes, 1 feature): This release primarily addresses a Windows installation bug related to `readline` and corrects a packaging error in the Assistant functionality dependency.
v7.0.1 (2 fixes): A small bugfix release addressing issues with autocompletion and restoring missing type information for exception classes.
v7.0.0 (1 fix, 7 features): Version 7.x introduces major new features such as Pinecone Assistant, Inference API model discovery, serverless backups, and BYOC index management, alongside significant performance improvements. This release aligns with the underlying API moving to version 2025-04.
v6.0.2 (1 fix): This minor release fixes a bug related to fetching sparse vectors via gRPC.
v6.0.1 (1 fix): This release fixes an incompatibility between `pinecone` 6.0.0 and `pinecone-plugin-assistant` by restoring necessary internal attributes.
v6.0.0 (Breaking, 4 features): This release introduces major features including integrated inference indexing, direct Inference API access, and new asyncio client variants for modern asynchronous programming. Users of previous preview plugins must uninstall them.
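A sketch of the asyncio client variant, assuming the `PineconeAsyncio` entry point from the v6 release notes; the import is deferred so the sketch can be defined without the package installed, and the attribute names are illustrative.

```python
# Sketch: using the asyncio client variant introduced in v6.0.0.
# `PineconeAsyncio` is assumed as the entry point; adjust to the installed
# client's actual API.
import asyncio

async def list_index_names(api_key: str) -> list[str]:
    from pinecone import PineconeAsyncio  # deferred: requires pinecone>=6
    # The async context manager cleans up the underlying HTTP session.
    async with PineconeAsyncio(api_key=api_key) as pc:
        indexes = await pc.list_indexes()
        return [ix.name for ix in indexes]

# With a real API key:
#   asyncio.run(list_index_names("YOUR_API_KEY"))
```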
Common Errors
ModuleNotFoundError (1 report): A `ModuleNotFoundError` in pinecone-client usually arises from missing dependencies required by the client, especially on certain operating systems or environments. To fix it, ensure all necessary dependencies are installed by running `pip install pinecone-client` (or `pip install -r requirements.txt` if using a requirements file). If the missing module is system-specific (e.g., `readline` on Windows), install the relevant system packages or a compatibility layer such as `pip install pyreadline3`.
PineconeApiTypeError (1 report): `PineconeApiTypeError` usually results from passing arguments of the wrong type to Pinecone API calls, such as a string where an integer is expected. Review the signature of the method you are calling and ensure every argument matches the data type documented by Pinecone, then correct any mismatches and rerun your code.
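A small pre-flight check can surface the mismatch before the call is made. The helper below is illustrative, not part of the client; its parameter names are merely modeled on `index.query`.

```python
# Sketch: client-side type validation mirroring the kind of mismatch that
# raises PineconeApiTypeError. The helper is hypothetical, not a client API.
def validate_query_args(vector, top_k):
    if not isinstance(top_k, int) or isinstance(top_k, bool):
        raise TypeError(f"top_k must be an int, got {type(top_k).__name__}")
    if not all(isinstance(x, (int, float)) for x in vector):
        raise TypeError("vector must contain only numbers")

validate_query_args([0.1, 0.2, 0.3], 10)  # OK: types match the signature
try:
    validate_query_args([0.1, 0.2, 0.3], "10")  # wrong: top_k passed as a string
except TypeError as exc:
    print(exc)  # top_k must be an int, got str
```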
PineconeApiException (1 report): `PineconeApiException` usually stems from malformed requests to the Pinecone service, often caused by incorrect data types or improperly formatted filter expressions in queries, upserts, or deletes. Review your code, especially your metadata filters and vector data structures, to ensure they conform to Pinecone's API requirements (correct types for metadata values, valid JSON syntax for filters). Enable Pinecone's debug logging to inspect the exact request being sent and compare it against the API documentation for the endpoint you are using to identify the mismatch.
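For reference, a well-formed metadata filter uses Pinecone's documented operators such as `$eq`, `$gte`, and `$in`; the field names and values below are illustrative. Serializing the filter locally is a quick sanity check that it is valid JSON before it ever reaches the service.

```python
import json

# Sketch: a metadata filter that conforms to Pinecone's filter syntax.
# Malformed filters (wrong value types, invalid operators) are a common
# trigger for PineconeApiException. Fields and values are illustrative.
metadata_filter = {
    "genre": {"$eq": "documentary"},          # exact match on a string field
    "year": {"$gte": 2020},                   # numeric comparison
    "tags": {"$in": ["science", "history"]},  # membership in a list
}

# The filter must be JSON-serializable to be sent with a query; if this
# raises, the request would have failed server-side anyway.
payload = json.dumps(metadata_filter)
```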
Related AI & LLMs Packages
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models.
🦜🔗 The platform for reliable agents.
The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
LLM inference in C/C++
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.