Integrating Google Gemini with langchain4j

This article is aimed at junior developers and shows how to integrate Google’s Gemini model using Java and the langchain4j framework. We’ll cover configuration settings, the model-creation strategy, and the implementation, paying special attention to Gemini-specific considerations and walking through the provided Java code.

Configuration Settings

Configuration File Explanation

The config/application-dev.yml file defines key parameters for the Gemini model:

gai:
  models:
    - id: gemini-pro
      name: Google Gemini Pro
      projectId: google-lab
      location: asia-northeast1
      modelName: gemini-pro
      topP: 1.0
      temperature: 0.9
      maxTokens: -1
      modelProvider: GEMINI

These parameters, including the model ID, project, location, and sampling settings such as temperature and topP, are essential for connecting to and using the Gemini model.
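langchain4j itself does not prescribe a configuration format, so these YAML entries are typically bound to plain Java objects. The sketch below shows one way such a binding could look with Spring Boot’s @ConfigurationProperties and Lombok; the field names mirror the YAML keys above, but the actual classes in the project may differ.

import java.util.List;
import lombok.Getter;
import lombok.Setter;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

@Getter
@Setter
@Configuration
@ConfigurationProperties(prefix = "gai")
public class ApplicationProperties {

    // Each entry under gai.models is bound to one LLMProperties instance
    private List<LLMProperties> models;
}

// In a separate file: one entry from the gai.models list
@Getter
@Setter
public class LLMProperties {
    private String id;
    private String name;
    private String projectId;
    private String location;
    private String modelName;
    private Double topP;
    private Double temperature;
    private Integer maxTokens;
    private ModelProvider modelProvider;
}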

ModelProvider Enumeration

The ModelProvider enumeration includes the GEMINI option for identifying different model providers:

public enum ModelProvider {
    // ...
    GEMINI,
    // ...
}

Gemini Chat Model Strategy

Implementation of GeminiChatModelStrategy

GeminiChatModelStrategy defines the strategy for creating and configuring the Gemini model, which is activated when ModelProvider.GEMINI is detected:

/**
 * Ref:
 * https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/gemini
 */
@Slf4j
public class GeminiChatModelStrategy implements ChatModelCreationStrategy {

    @Override
    public boolean applies(ModelProvider provider) {
        return provider == ModelProvider.GEMINI;
    }

    @Override
    public ChatStreamProxy createChat(LLMProperties properties) {
        log.debug("Using GeminiChatModelStrategy to create VertexAiGeminiStreamingChatModel");
        StreamingChatLanguageModel streamingChatLanguageModel = VertexAiGeminiStreamingChatModel.builder()
                .project(properties.getProjectId())
                .location(properties.getLocation())
                .modelName(properties.getModelName())
                .build();
        ChatStreamProxy chatStreamProxy = new ChatStreamProxy(streamingChatLanguageModel);
        chatStreamProxy.setCanHandleSystemMessages(Boolean.FALSE);
        return chatStreamProxy;
    }
}

This class defines when the Gemini model is used: applies() returns true only for ModelProvider.GEMINI, and createChat() then builds a VertexAiGeminiStreamingChatModel from the supplied properties. The resulting ChatStreamProxy is configured not to handle system messages, as Gemini currently doesn’t support them.
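How does this strategy get picked? Typically a factory walks the registered strategies and delegates to the first one whose applies() returns true. The project’s actual ChatModelFactory is not shown in this article, so the version below is only a minimal sketch of that selection logic; it assumes LLMProperties exposes a getModelProvider() accessor matching the modelProvider YAML key.

import java.util.List;

public class ChatModelFactory {

    // Assumed registry; the real project may wire strategies via Spring instead
    private final List<ChatModelCreationStrategy> strategies = List.of(
            new GeminiChatModelStrategy()
            // ... strategies for other providers
    );

    /**
     * Finds the configuration entry that matches the requested provider.
     */
    public LLMProperties getModelProperties(List<LLMProperties> models, ModelProvider provider) {
        return models.stream()
                .filter(m -> m.getModelProvider() == provider)
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException("No model configured for " + provider));
    }

    /**
     * Delegates creation to the first strategy that applies to the provider.
     */
    public ChatStreamProxy createChat(LLMProperties properties) {
        return strategies.stream()
                .filter(s -> s.applies(properties.getModelProvider()))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("No strategy for " + properties.getModelProvider()))
                .createChat(properties);
    }
}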

Chat Stream Proxy

Role and Functionality of ChatStreamProxy

The ChatStreamProxy class plays a pivotal role in the chat functionality, choosing how to generate responses based on the model type (streaming or non-streaming):

/**
 * Proxy handling chat stream capabilities, including support for streaming and system messages.
 * StreamingChatLanguageModel is used if the model supports streaming.
 * Otherwise, ChatLanguageModel is used and streaming is simulated.
 */
@Slf4j
@FieldDefaults(level = lombok.AccessLevel.PRIVATE)
public class ChatStreamProxy {

    /**
     * Indicates if this proxy supports streaming chat.
     */
    Boolean canStreamChat = false;

    /**
     * Indicates if this proxy can handle system messages.
     */
    @Setter @Getter
    Boolean canHandleSystemMessages = true;

    ChatLanguageModel chatLanguageModel = null;
    StreamingChatLanguageModel streamingChatLanguageModel = null;

    // Constructor for non-streaming models
    public ChatStreamProxy(ChatLanguageModel chatLanguageModel) {
        this.chatLanguageModel = chatLanguageModel;
        this.canStreamChat = false;
        log.debug("Established non-streaming chat proxy canStreamChat={}, chatLanguageModel={}", canStreamChat, chatLanguageModel);
    }

    // Constructor for streaming models
    public ChatStreamProxy(StreamingChatLanguageModel streamingChatLanguageModel) {
        this.streamingChatLanguageModel = streamingChatLanguageModel;
        this.canStreamChat = true;
        log.debug("Established streaming chat proxy canStreamChat={}, streamingChatLanguageModel={}", canStreamChat,
                streamingChatLanguageModel);
    }

    /**
     * Generates chat responses based on the model type.
     * Simulates streaming for non-streaming models by breaking down AI messages into characters.
     *
     * @param messages Chat messages list
     * @param handler  Handler for streaming responses
     */
    public void generate(List<ChatMessage> messages, StreamingResponseHandler<AiMessage> handler) {
        if (this.canStreamChat) {
            streamingChatLanguageModel.generate(messages, handler);
        } else {
            // Handling non-streaming models
            Response<AiMessage> response = chatLanguageModel.generate(messages);
            String aiMessageText = response.content().text();
            log.debug("Received complete AI response: {}", aiMessageText);
            // Simulating streaming response
            simulateStreamingResponse(aiMessageText, handler);
        }
    }

    /**
     * Simulates streaming response, sending message content character by character.
     *
     * @param aiMessageText AI message text
     * @param handler       Streaming response handler
     */
    private void simulateStreamingResponse(String aiMessageText, StreamingResponseHandler<AiMessage> handler) {
        for (char character : aiMessageText.toCharArray()) {
            handler.onNext(String.valueOf(character));
        }
        // Completing the process
        Response<AiMessage> response = Response.from(
                AiMessage.from(aiMessageText),
                new TokenUsage(0, 0), // TODO: Token calculation
                FinishReason.STOP);
        handler.onComplete(response);
    }
}

The two constructors cater to the two model types, and for non-streaming models the proxy simulates streaming by emitting the response character by character before completing with the full message.
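As a hedged usage example, the snippet below wraps a non-streaming model in the proxy and consumes the simulated stream; OpenAiChatModel stands in for any langchain4j ChatLanguageModel and is not part of this article’s code.

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.output.Response;
import java.util.List;

ChatLanguageModel model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .build();

// Non-streaming constructor: canStreamChat=false, so generate() simulates streaming
ChatStreamProxy proxy = new ChatStreamProxy(model);

proxy.generate(
        List.of(UserMessage.from("Say hello in one sentence.")),
        new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                System.out.print(token); // arrives one character at a time from the simulation
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                System.out.println(); // the full message is available in response.content()
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });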

Chat Service Implementation

Overview of the ChatService Class

The ChatService class in our implementation manages the interactions with the chat functionality within the application. It plays a crucial role in initializing chat models, handling requests, and generating responses based on user inputs and application configurations.

Key Code Segment in ChatService

An important segment in the ChatService class is as follows:

// It is important to know the model to be used beforehand, as SystemMessage is currently not supported by Gemini
ChatStreamProxy chatStreamProxy = null;
// Some code omitted for brevity
log.debug("Using default model {}", modelConfig.getProvider());
LLMProperties properties = chatModelFactory.getModelProperties(applicationProperties.getModels(),
        modelConfig.getProvider());
chatStreamProxy = chatModelFactory.createChat(properties);

// Applying variables and creating the final prompt
promptVariables.put("prePrompt", app.getPrePrompt());
Prompt prompt = promptTemplate.apply(promptVariables);
log.debug("SystemMessage={}", prompt.toSystemMessage().text());
if (chatStreamProxy.getCanHandleSystemMessages()) {
    chatMemory.add(prompt.toSystemMessage());
    chatMemory.add(UserMessage.from(chatRqDto.getHumanMessage()));
} else {
    chatMemory.add(UserMessage.from(prompt.text() + chatRqDto.getHumanMessage()));
}

Here, determining the model up front is crucial because Gemini does not currently support SystemMessage. A ChatStreamProxy is created from the model configuration, the prompt variables are applied to build the final prompt, and the proxy is then asked whether the model can handle system messages. If it can, the prompt (as a system message) and the user message are added to the chat memory separately; if not, the system message text is concatenated with the user message, working around Gemini’s lack of SystemMessage support.

Explanation of Key Code

  1. Model Determination: Initially, the model to be used is determined. This is crucial because the Gemini model currently does not support SystemMessage, which affects how messages are processed and added to the chat memory.

  2. ChatStreamProxy Creation: Depending on the model selected, a ChatStreamProxy is instantiated. This proxy will determine how the chat model will be interacted with, especially in terms of streaming capabilities and handling system messages.

  3. Prompt Creation and Application: The prePrompt from the application settings is applied to the prompt template. This step is essential for constructing the final message format to be sent to the chat model (see the sketch after this list).

  4. Handling System Messages: The system checks if the model, encapsulated within ChatStreamProxy, can handle system messages. If it can, the system and user messages are added separately to the chat memory. If not, due to the current limitation of Gemini, the system message text is concatenated with the user message before being added to the chat memory. This adaptation is crucial for ensuring compatibility with models that do not support SystemMessage.
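The snippet below is a small, self-contained illustration of steps 3 and 4 using langchain4j’s PromptTemplate; the template text and variable values are invented for the example, and chatStreamProxy and chatMemory are assumed to exist as in the ChatService code above.

import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.input.Prompt;
import dev.langchain4j.model.input.PromptTemplate;
import java.util.HashMap;
import java.util.Map;

PromptTemplate promptTemplate = PromptTemplate.from(
        "{{prePrompt}}\nAnswer concisely."); // hypothetical template text

Map<String, Object> promptVariables = new HashMap<>();
promptVariables.put("prePrompt", "You are a helpful assistant.");

Prompt prompt = promptTemplate.apply(promptVariables);
String humanMessage = "What is langchain4j?";

if (chatStreamProxy.getCanHandleSystemMessages()) {
    // Models with system-message support get a proper SystemMessage
    chatMemory.add(prompt.toSystemMessage());
    chatMemory.add(UserMessage.from(humanMessage));
} else {
    // Gemini path: prepend the would-be system text to the user message instead
    chatMemory.add(UserMessage.from(prompt.text() + humanMessage));
}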

Special Considerations

Handling of SystemMessage

When using Gemini, it’s important to note the lack of support for SystemMessage. This limitation requires adapting the ChatService implementation so that functionality depending on system messages is appropriately modified or redesigned.
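One generic way to redesign around this is to fold any system content into the next user message before the list is sent to the model. The helper below is a hypothetical sketch of that idea, not code from the project:

import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.SystemMessage;
import dev.langchain4j.data.message.UserMessage;
import java.util.ArrayList;
import java.util.List;

/**
 * Folds SystemMessage texts into the next UserMessage so the resulting list
 * is safe for models that reject system messages (hypothetical utility).
 */
static List<ChatMessage> foldSystemMessages(List<ChatMessage> messages) {
    StringBuilder systemText = new StringBuilder();
    List<ChatMessage> result = new ArrayList<>();
    for (ChatMessage message : messages) {
        if (message instanceof SystemMessage systemMessage) {
            // Buffer system content instead of passing it through
            systemText.append(systemMessage.text()).append('\n');
        } else if (message instanceof UserMessage userMessage && systemText.length() > 0) {
            // Prepend the buffered system text to the user message
            result.add(UserMessage.from(systemText + userMessage.text()));
            systemText.setLength(0);
        } else {
            result.add(message);
        }
    }
    return result;
}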

Acknowledgments

I would like to sincerely thank all the contributors of langchain4j for their invaluable work and dedication. Their efforts have significantly eased the process of implementing complex language models like Gemini, allowing developers, including myself, to achieve rapid and efficient integration.