v0.35.0-preview.4
Gemini CLI

Summary
This release introduces streaming support for faster API responses and the ability to configure custom inference temperatures. It also resolves critical bugs related to network timeouts and incorrect indentation in generated code.
New Features
- Introduced support for streaming responses from the Gemini API for faster feedback during code generation.
- Added a new command-line flag for specifying custom temperature settings during model inference.
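The benefit of streaming is that output can be shown as each chunk arrives instead of after the full response is complete. The sketch below illustrates that pattern generically; `fake_stream` and `consume_streaming` are illustrative names and stand-ins, not the CLI's actual API.

```python
from typing import Iterator

def fake_stream(chunks: list[str]) -> Iterator[str]:
    """Simulate a streaming API by yielding each chunk as it 'arrives'."""
    for chunk in chunks:
        yield chunk

def consume_streaming(stream: Iterator[str]) -> str:
    """Display each chunk immediately, then return the assembled response."""
    parts = []
    for chunk in stream:
        print(chunk, end="", flush=True)  # user sees feedback right away
        parts.append(chunk)
    return "".join(parts)

consume_streaming(fake_stream(["def add(a, b):\n", "    return a + b\n"]))
```

With a non-streaming API, nothing would be printed until the whole response had been generated; streaming moves the first visible output to the arrival of the first chunk.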
Bug Fixes
- Fixed an issue where the tool would hang indefinitely when encountering a network timeout during API calls.
- Resolved a bug causing incorrect indentation in generated Python code blocks.
- Corrected an error where the history command failed to load sessions older than 30 days.
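The general fix for an indefinite hang on a network call is to bound the wait and fail with a timeout error instead. The following is a minimal sketch of that pattern, assuming a bounded wait around a blocking call; `call_with_timeout` is an illustrative helper, not the CLI's actual code.

```python
import concurrent.futures

def call_with_timeout(fn, timeout_s, *args, **kwargs):
    """Run a blocking call in a worker thread and give up after timeout_s
    seconds, raising TimeoutError rather than hanging indefinitely."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(fn, *args, **kwargs)
        return future.result(timeout=timeout_s)
    finally:
        # Don't block waiting for a stuck worker to finish.
        pool.shutdown(wait=False, cancel_futures=True)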
Improvements
- Improved the parsing logic for markdown code blocks to ensure better compatibility with various IDEs.
- Optimized the startup time of the CLI application by lazy-loading non-essential modules.
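The lazy-loading idea above can be sketched generically: a proxy object defers the real import until a module attribute is first accessed, so unused modules cost nothing at startup. `LazyModule` is an illustrative name, and the CLI's actual implementation (and language) may differ.

```python
import importlib

class LazyModule:
    """Proxy that imports the named module only on first attribute access."""
    def __init__(self, name: str):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        if self._module is None:
            # First real use: pay the import cost now, not at startup.
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

json_lazy = LazyModule("json")        # nothing imported yet
text = json_lazy.dumps({"a": 1})      # triggers the actual import of json
```

Python's standard library offers a similar mechanism in `importlib.util.LazyLoader` for deferring module execution until first use.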