
Version 0.9.8 adds support for new Gemini, Anthropic, OpenAI, Perplexity, and DeepSeek models; introduces LLM tool use (function calling); redesigns gptel-menu; and brings new customization hooks, dry-run options, refined settings, improvements to the rewrite feature, and control over LLM “reasoning” content.

Breaking changes

  • gemini-pro has been removed from the list of Gemini models, as this model is no longer supported by the Gemini API.

  • Sending an active region in Org mode will now apply Org mode-specific rules to the text, such as branching context.

  • The following obsolete variables and functions have been removed:

    • gptel-send-menu: Use gptel-menu instead.
    • gptel-host: Use gptel-make-openai instead.
    • gptel-playback: Use gptel-stream instead.
    • gptel--debug: Use gptel-log-level instead.

New models and backends

  • Add support for several new Gemini models including gemini-2.0-flash, gemini-2.0-pro-exp and gemini-2.0-flash-thinking-exp, among others.

  • Add support for the Anthropic model claude-3-7-sonnet-20250219, including its “reasoning” output.

  • Add support for OpenAI’s o1, o3-mini and gpt-4.5-preview models.

  • Add support for Perplexity. Earlier releases supported Perplexity by reusing the OpenAI backend; there is now first-class support for the Perplexity API, including citations.

  • Add support for DeepSeek. Earlier releases supported DeepSeek by reusing the OpenAI backend; there is now first-class support for the DeepSeek API, including handling of “reasoning” output.

New features and UI changes

  • gptel-rewrite now supports iterating on responses.

  • gptel can simulate (dry-run) requests so you can see exactly what will be sent. This payload preview can now be edited in place and the request continued.

  • Directories can now be added to gptel’s global context. Doing so will add all files in the directory recursively.

  • “Oneshot” settings: when using gptel’s Transient menus, request parameters, directives and tools can now be set for the next request only, in addition to buffer-locally or globally for the Emacs session. This is useful for one-off requests with different settings.

  • gptel-mode can now be used in all modes derived from text-mode.

  • gptel now tries to handle LLM responses that are in mixed Org/Markdown markup correctly.

  • Add gptel-org-convert-response to toggle the automatic conversion of (possibly) Markdown-formatted LLM responses to Org markup where appropriate.
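
    A minimal sketch, assuming gptel-org-convert-response is a boolean user option that is enabled by default:

      ;; Assumed usage: leave LLM responses unconverted in Org buffers
      (setq gptel-org-convert-response nil)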

  • You can now look up registered gptel backends using the gptel-get-backend function. This is intended to make scripting and configuring gptel easier. gptel-get-backend is a generalized variable so you can (un)set backends with setf.
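
    For example (a sketch; the backend name “Claude” is only illustrative):

      ;; Look up a registered backend by name and make it the active one
      (setq gptel-backend (gptel-get-backend "Claude"))

      ;; Because gptel-get-backend is a generalized variable,
      ;; setf can also unregister a backend:
      (setf (gptel-get-backend "Claude") nil)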

  • Tool use: gptel now supports LLM tool use, or function calling. Essentially, you can equip the LLM with capabilities (such as filesystem access, web search, control of Emacs, or introspection of Emacs’ state) that it can use to perform tasks for you. gptel runs these tools with argument values supplied by the LLM. This requires specifying tools, which are elisp functions with plain-text descriptions of their arguments and results. gptel does not yet include any tools out of the box.
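
    A minimal sketch of defining a tool with gptel-make-tool; the tool itself and the exact argument-spec format shown here are illustrative, so check gptel’s documentation for details:

      (gptel-make-tool
       :name "read_buffer"             ; tool name the LLM sees
       :description "Return the contents of an Emacs buffer"
       :args (list '(:name "buffer"
                     :type string
                     :description "Name of the buffer to read"))
       :category "emacs"
       :function (lambda (buffer)      ; called with LLM-supplied arguments
                   (if (get-buffer buffer)
                       (with-current-buffer buffer
                         (buffer-substring-no-properties (point-min) (point-max)))
                     (format "No buffer named %s" buffer))))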

  • You can look up registered gptel tools using the gptel-get-tool function. This is intended to make scripting and configuring gptel easier. gptel-get-tool is a generalized variable so you can (un)set tools with setf.

  • New hooks for customization:

    • gptel-prompt-filter-hook runs in a temporary buffer containing the text to be sent, before the full query is created. It can be used for arbitrary transformations of the source text (see the sketch after this list).
    • gptel-post-request-hook runs after the request is sent, and (possibly) before any response is received. This is intended for preparatory/reset code.
    • gptel-post-rewrite-hook runs after a gptel-rewrite request is successfully and fully received.
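
    For instance, a sketch (not from the release notes) of using gptel-prompt-filter-hook to strip trailing whitespace from the outgoing text; the hook runs in the temporary buffer holding the prompt:

      (add-hook 'gptel-prompt-filter-hook
                (lambda ()
                  ;; Current buffer holds the text about to be sent; edit it freely.
                  (delete-trailing-whitespace)))
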
  • gptel-menu has been redesigned. It now shows a verbose description of what will be sent and where the output will go. This is intended to clarify gptel’s default prompting behavior, as well as the effect of the various prompt/response redirection options it provides. Incompatible combinations of options are now disallowed.

  • The spacing between the end of the prompt and the beginning of the response in buffers is now customizable via gptel-response-separator, and can be any string.
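
    For example (an illustrative value, not the default):

      ;; Separate the prompt from the response with a dashed rule
      (setq gptel-response-separator "\n\n-----\n\n")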

  • gptel-context-remove-all is now an interactive command.

  • gptel now handles “reasoning” content produced by LLMs. Some LLMs include in their response a “thinking” or “reasoning” section. This text improves the quality of the LLM’s final output, but may not be interesting to you by itself. The new user option gptel-include-reasoning controls whether and how gptel displays this content.
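
    A minimal sketch, assuming nil omits reasoning blocks entirely (see the gptel-include-reasoning docstring for the full set of accepted values):

      ;; Assumed: nil drops “thinking”/“reasoning” blocks from the inserted response
      (setq gptel-include-reasoning nil)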

  • (Anthropic API only) Some LLM backends can cache content sent to them by gptel, so that only the newly added part of the text needs to be processed on subsequent conversation turns. This results in faster and significantly cheaper processing. The new user option gptel-cache can be used to specify caching preferences for prompts, the system message and/or tool definitions. This is currently supported only by the Anthropic API.
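
    A minimal sketch, assuming t requests caching for everything that can be cached (see the gptel-cache docstring for finer-grained values):

      ;; Assumed value: cache the prompt, system message and tool definitions
      (setq gptel-cache t)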

  • (Org mode) Org property drawers are now stripped from the prompt text before sending queries. You can control this behavior or specify additional Org elements to ignore via gptel-org-ignore-elements. (For more complex pre-processing you can use gptel-prompt-filter-hook.)
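
    A sketch, assuming the option holds a list of Org element type symbols and that property-drawer corresponds to the default behavior described above:

      ;; Strip property drawers from Org prompt text before sending
      (setq gptel-org-ignore-elements '(property-drawer))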

Notable bug fixes

  • Fix response mix-up when running concurrent requests in Org mode buffers.
  • gptel now works around an Org fontification bug where streaming responses in Org mode buffers sometimes caused source code blocks to remain unfontified.

Full Changelog: https://github.com/karthink/gptel/compare/v0.9.7...v0.9.8
