Geeps offers four built-in providers:
In addition, you can add custom providers that are compatible with the OpenAI Chat Completions API (Responses API support is coming soon!).
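"Compatible with the Chat Completions API" just means the provider accepts the standard endpoint path and request shape. A Python sketch of that shape (the base URL is a placeholder; the path and field names follow the public OpenAI API):

```python
import json

# Hypothetical base URL for a custom provider; any Chat Completions-compatible
# server exposes the same path and accepts the same request body.
BASE_URL = "https://example-provider.local/v1"

def build_chat_request(model: str, user_text: str) -> tuple[str, dict]:
    """Return the endpoint URL and JSON body for a Chat Completions call."""
    url = f"{BASE_URL}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
    }
    return url, body

url, body = build_chat_request("gpt-5-chat", "Hello!")
print(url)               # endpoint the app would POST to
print(json.dumps(body))  # request body
```

If a provider accepts this request at its own base URL, it should work as a custom provider.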
Here you save your API keys and configure models and other parameters.
Select the default provider that will be used for new conversations. You can also switch between providers by long-pressing the compose button. On macOS, left-click and hold the compose button to switch the default provider.
Configure settings for built-in providers or add custom providers.
The API keys are saved in the keychain and are not synced with iCloud. Most other settings are synced across your devices.
You can enable or disable each provider individually, except for OpenAI.
This section is available for custom providers: set the display name and symbol for this provider. The symbol is used for the compose button and in context menus.
For built-in providers you can only change the display name (icon support is coming soon!).
The Model section lets you configure the default model to use with this provider. You can load models directly from the API as long as the /v1/models endpoint is supported.
Choose the model from the dropdown, or enter it manually in the text field below. Enter the model ID exactly as it appears in the API documentation, e.g. gpt-5-chat, claude-sonnet-4-5, or moonshotai/kimi-k2.5 (for OpenRouter).
OpenRouter users: you can filter the model list to show only free models.
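OpenRouter marks its free variants with a ":free" suffix on the model ID (the example IDs below are illustrative; a filter could also inspect the pricing metadata that OpenRouter's model list returns). A Python sketch of a suffix-based filter:

```python
# Illustrative OpenRouter-style model IDs; free variants end in ":free".
models = [
    "moonshotai/kimi-k2.5",
    "meta-llama/llama-3.1-8b-instruct:free",
    "mistralai/mistral-7b-instruct:free",
]

def free_only(model_ids: list[str]) -> list[str]:
    """Keep only the models whose ID carries the ':free' suffix."""
    return [m for m in model_ids if m.endswith(":free")]

print(free_only(models))
```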
Choose the default prompt that will be used for all new conversations with this provider. This prompt (aka system message) will be prepended to every chat. You can also replace the prompt for each conversation individually.
Manage your prompts in the prompt library.
Configure the default parameters for the model: temperature, max tokens, top-p, frequency penalty, and presence penalty. Additional parameters will be available in the future. These parameters apply to all conversations with this provider unless overridden in an individual chat's settings.
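These settings map directly onto the standard Chat Completions request fields. A Python sketch of defaults with a per-chat override applied (the values are examples, not the app's actual defaults):

```python
# Example provider defaults, named as in the Chat Completions API.
defaults = {
    "temperature": 0.7,        # randomness of sampling
    "max_tokens": 1024,        # upper bound on response length
    "top_p": 1.0,              # nucleus sampling cutoff
    "frequency_penalty": 0.0,  # discourage repeated tokens
    "presence_penalty": 0.0,   # encourage new topics
}

def apply_overrides(provider_defaults: dict, chat_overrides: dict) -> dict:
    """Per-chat settings win over the provider defaults."""
    return {**provider_defaults, **chat_overrides}

print(apply_overrides(defaults, {"temperature": 0.2}))
```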
For OpenAI, you can override the default API host by entering a custom base URL or a full custom endpoint. This is useful if you are using a proxy or a custom deployment of the OpenAI API (e.g. Azure). For more information, refer to this guide.
Use this to load the models list directly from the API.
All built-in providers are supported, but some custom providers may not fully support the /v1/models endpoint. If you encounter issues, let me know which provider you are using and I will try to fix it. In the meantime, you can enter the model manually.
Loaded models are cached locally, so you don't need to load them every time, but you can refresh the list at any time to fetch the latest models.
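The standard /v1/models response is a JSON object with a data array of model entries, so listing models amounts to collecting their IDs. A Python sketch (the response below is a hand-made example in the documented shape):

```python
import json

# Example response in the shape returned by GET {base_url}/v1/models.
raw = json.dumps({
    "object": "list",
    "data": [
        {"id": "gpt-5-chat", "object": "model"},
        {"id": "gpt-4o-mini", "object": "model"},
    ],
})

def model_ids(response_text: str) -> list[str]:
    """Extract the model IDs to show in the model dropdown."""
    return [m["id"] for m in json.loads(response_text).get("data", [])]

print(model_ids(raw))  # ['gpt-5-chat', 'gpt-4o-mini']
```

Providers that deviate from this shape are the ones the manual text field exists for.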
Use this to test your API connection and key. If the key is valid, you will see a success message. If not, you will see an error message.
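A key check typically boils down to one cheap authenticated request and an inspection of the HTTP status. A Python sketch of turning the result into a user-facing message (the status-to-message mapping is illustrative, not the app's exact wording):

```python
def describe_key_check(status_code: int) -> str:
    """Translate the status of a test request into a user-facing message."""
    if status_code == 200:
        return "Success: the API key is valid."
    if status_code in (401, 403):
        return "Error: the API key was rejected."
    return f"Error: unexpected response (HTTP {status_code})."

print(describe_key_check(200))
print(describe_key_check(401))
```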