Providers and Models

Understanding and Managing Configurations in BatcherAI

BatcherAI lets you connect to different Large Language Models (LLMs) through Artificial Intelligence (AI) providers to power your tasks.

To let you tune how AI is used in your workflow, there are two types of configurations you can tweak to your liking:

  • (AI) Provider configurations: sets of instructions that tell BatcherAI which providers you want to use and how you want to use them.

  • LLM configurations: they tell BatcherAI how to use a specific AI model with a given provider configuration.
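As a rough sketch of how these two configuration types relate (the field names here are illustrative, not BatcherAI's documented schema), an LLM configuration can be thought of as referencing a provider configuration:

```python
# Hypothetical sketch: field names are illustrative only,
# not BatcherAI's actual storage schema.

# A provider configuration tells BatcherAI which provider to use and how.
provider_config = {
    "name": "my-provider",  # hypothetical identifier
    "endpoint": "https://api-inference.huggingface.co/models/",  # example endpoint from the docs
}

# An LLM configuration tells BatcherAI how to use a specific AI model
# with a given provider configuration.
llm_config = {
    "provider": "my-provider",  # references the provider configuration above
    "model": "phi3:latest",     # example model name from the docs
    "temperature": 0.7,
}
```

The point of the split is reuse: several LLM configurations (different models or sampling settings) can point at the same provider configuration.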

Several pages let you manage your configurations. Provider-related pages and LLM-related pages follow the same structure.

To switch between provider and LLM configurations, click the corresponding button in the navigation bar of the configuration pages.

Configuration list page:

This is the main page used to manage both your provider and LLM configurations.

This page displays a table in which each row represents a configuration you've created. Some pre-configured defaults are also included to help you get started quickly.

From this table, you can see your configurations at a glance, quickly perform actions, and reach the management pages to create, edit, and remove configurations.

To edit or delete a configuration, use the options available in the list for each entry.

Need to add a new configuration? Click the "+ Add New configuration" button. You will be redirected to the "Add a configuration" page, where you can define a brand-new configuration.

Add a configuration page:

On this page, you can create a new configuration by filling in the corresponding form.

Clicking the "Create configuration" button creates and stores the configuration with the values you provided. It will then appear in the matching configuration list page.

The table below lists the fields of an LLM configuration:

| Name | Description | Default | Example |
|---|---|---|---|
| model | The name of the model you would like to use. | cogito | phi3:latest |
| Max Tokens | The amount of text, in tokens, that the model can consider or "remember" at one time. | 2048 tokens | 1024 tokens |
| Response Format | The expected response format. | text | json |
| Tool choice | The tool(s) you want the model to use. | Ø | myfunction |
| Endpoint | The URL or entry point your requests are sent to. | batcher.ai | https://api-inference.huggingface.co/models/ |
| Seed | A value that makes generation reproducible: the same seed yields the same output for the same input. | Ø | 50 |
| System Prompt | A prompt automatically included with every request sent to the chosen model. | Ø | Can you answer me in French? |
| Temperature | Controls the creativity of the response: the higher the value, the more freedom the model has in its word choices. | Ø | 0.7 |
| Top P | An alternative to sampling with temperature, called nucleus sampling, in which the model considers only the tokens whose cumulative probability mass reaches top_p. | Ø | 0.6 |
| Thinking | Whether thinking/reasoning tokens are shown as part of the response. | Disabled | Enabled |
| Stream | Enable or disable real-time streaming of the response. | no | yes |
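Putting the fields together, a filled-out configuration might look like the following sketch. This is a hypothetical illustration built from the example values above; the exact format BatcherAI uses to store configurations is not specified here.

```python
# Hypothetical example configuration assembled from the fields documented
# above. Field names and structure are illustrative, not BatcherAI's schema.
config = {
    "model": "phi3:latest",    # name of the model to use
    "max_tokens": 1024,        # token budget the model works with
    "response_format": "json", # expected response format ("text" or "json")
    "tool_choice": "myfunction",  # tool the model should use (hypothetical name)
    "endpoint": "https://api-inference.huggingface.co/models/",
    "seed": 50,                # fixed seed for reproducible generation
    "system_prompt": "Can you answer me in French?",
    "temperature": 0.7,        # higher values give more varied output
    "top_p": 0.6,              # nucleus-sampling probability mass
    "thinking": True,          # show reasoning tokens in the response
    "stream": True,            # stream the response in real time
}
```

Note that Temperature and Top P are alternative ways of controlling randomness; in most LLM APIs it is common to adjust one and leave the other at its default rather than tuning both at once.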