Geeps offers four built-in providers: OpenAI, Anthropic (Claude), Google (Gemini) and OpenRouter. In addition, you can add custom providers that are compatible with the OpenAI API. This is where you save your API keys and configure models and other parameters.
Select the default provider to be used for new conversations. You can also switch between providers by long-pressing (or right-clicking) the compose button.
Configure settings for built-in providers or add custom providers.
The API keys are saved in the keychain and are not synced with iCloud. Most other settings are synced across your devices.
You can enable or disable each provider individually, except for OpenAI.
This section is available for custom providers: set the display name and symbol to be used for this provider. The symbol appears on the compose button and in context menus.
The Model section lets you configure the default model to use with this provider. You can load the model list directly from the API as long as the /v1/models endpoint is supported.
Choose the model from the dropdown, or enter it manually in the text field below. Enter the name exactly as it appears in the provider's API documentation, e.g. gpt-5-chat or claude-sonnet-4-5.
Choose the default prompt that will be used for all new conversations with this provider. This prompt (also known as the system message) is prepended to every chat. You can also replace the prompt for each conversation individually. Check out the prompt library for more information.
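Conceptually, "prepended to every chat" means the prompt is placed first in the message list sent to the API. A minimal sketch, using the OpenAI chat message format (the helper name is hypothetical, not part of the app):

```python
def build_messages(default_prompt: str, history: list[dict]) -> list[dict]:
    """Prepend the provider's default prompt as a system message."""
    messages = []
    if default_prompt:
        messages.append({"role": "system", "content": default_prompt})
    return messages + history

history = [{"role": "user", "content": "Hello!"}]
msgs = build_messages("You are a helpful assistant.", history)
# msgs[0] is the system message; the conversation history follows it
```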
Configure the default parameters for the model. You can set temperature, max tokens, top-p, frequency penalty and presence penalty. Additional parameters will be available in the future. These parameters apply to all conversations with this provider unless overridden in the individual chat settings.
For OpenAI you can override the default API host by entering a custom base URL or a full custom endpoint. This is useful if you are using a proxy or a custom deployment of the OpenAI API (e.g. Azure). For more information, refer to this guide.
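To illustrate the difference between the two forms, here is a hypothetical resolution rule (the app's actual logic may differ): a base URL gets the standard chat-completions path appended, while a URL that already names a completions path is used as-is.

```python
def resolve_endpoint(custom_url: str) -> str:
    """Hypothetical rule: URLs already naming a completions path are full endpoints."""
    if "/chat/completions" in custom_url:
        return custom_url  # full custom endpoint, e.g. a proxy or Azure deployment URL
    return custom_url.rstrip("/") + "/v1/chat/completions"

print(resolve_endpoint("https://proxy.example.com"))
# https://proxy.example.com/v1/chat/completions
```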
Use this to load the model list directly from the API. Some custom providers may not fully support the /v1/models endpoint. If you encounter issues, let me know which provider you are using and I will try to fix it. In the meantime, you can enter the model manually.
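For reference, the OpenAI-compatible /v1/models endpoint returns a JSON object whose `data` array holds model objects, each with an `id`. A minimal parser for that response shape:

```python
import json

def parse_model_ids(response_body: str) -> list[str]:
    """Extract model ids from an OpenAI-compatible /v1/models response."""
    payload = json.loads(response_body)
    return [model["id"] for model in payload.get("data", [])]

sample = '{"object": "list", "data": [{"id": "gpt-5-chat", "object": "model"}]}'
print(parse_model_ids(sample))  # ['gpt-5-chat']
```

Providers that deviate from this shape (e.g. omitting `data`) are what causes the "may not fully support" cases mentioned above.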
Use this to test your API connection and key. If the key is valid, you will see a success message. If not, you will see an error message.
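A connection test of this kind typically amounts to one cheap authenticated request and a mapping from the HTTP status to a message. A hedged sketch (the status handling and wording are illustrative, not the app's exact logic):

```python
def describe_test_result(status_code: int) -> str:
    """Map an HTTP status from a test request to a user-facing message."""
    if status_code == 200:
        return "success: key is valid"
    if status_code in (401, 403):
        return "error: invalid or unauthorized API key"
    return f"error: unexpected status {status_code}"

print(describe_test_result(401))  # error: invalid or unauthorized API key
```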