A concise way to stream OpenAI responses from Java with no third-party dependencies beyond Spring Boot! Easily build your own ChatGPT with chat memory and image-generation features!
Preview
Model: GPT-3.5-turbo
Memory
GPT-3.5-turbo has no built-in memory, so the full conversation context must be sent with every request:
```java
// Estimate how many tokens the new message consumes.
int currentToken = (int) (content.length() / TOKEN_CONVERSION_RATE);
// Fetch as much recent history as still fits within the model's token limit.
List<Message> history = userSessionUtil.getHistory(sessionId, MessageType.TEXT,
        (int) ((MAX_TOKEN / TOKEN_CONVERSION_RATE) - currentToken));
log.info("history:{}", history);
// Each message's user-type code (a "Q:"/"A:"-style template) is used as its format string.
String historyDialogue = history.stream()
        .map(e -> String.format(e.getUserType().getCode(), e.getMessage()))
        .collect(Collectors.joining());
// Prepend the history (if any) to the new question.
String prompt = StringUtils.hasLength(historyDialogue)
        ? String.format("%sQ:%s\n\n", historyDialogue, content)
        : content;
```
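The project's `UserSessionUtil` is not shown here, but its trimming behavior can be sketched: keep the most recent messages per session and drop the oldest ones until the history fits a character budget (a rough stand-in for the token budget above). The class and method names below are illustrative, not the project's actual API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch of a per-session history store that trims from the
 * oldest end so the retained history fits within a character budget.
 */
public class SessionHistoryStore {
    private final Map<String, Deque<String>> sessions = new ConcurrentHashMap<>();

    public void append(String sessionId, String message) {
        sessions.computeIfAbsent(sessionId, k -> new ArrayDeque<>()).addLast(message);
    }

    /** Returns the newest messages (oldest first) whose total length fits the budget. */
    public List<String> getHistory(String sessionId, int charBudget) {
        Deque<String> all = sessions.getOrDefault(sessionId, new ArrayDeque<>());
        Deque<String> kept = new ArrayDeque<>();
        int used = 0;
        // Walk backwards from the newest message, stopping once the budget is spent.
        for (var it = all.descendingIterator(); it.hasNext(); ) {
            String m = it.next();
            if (used + m.length() > charBudget) break;
            used += m.length();
            kept.addFirst(m);
        }
        return new ArrayList<>(kept);
    }
}
```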
Streaming responses
Implemented with WebFlux and Server-Sent Events (SSE).
The endpoint must produce the text/event-stream media type:
```java
@GetMapping(value = "/completions/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
```
Return the reactive stream:
```java
log.info("prompt:{}", prompt);
return Flux.create(emitter -> {
    // Bridge the OpenAI response stream to the SSE emitter.
    OpenAISubscriber subscriber = new OpenAISubscriber(emitter, sessionId, this, userMessage);
    Flux<String> openAiResponse =
            openAiWebClient.getChatResponse(sessionId, prompt, null, null, null);
    openAiResponse.subscribe(subscriber);
    // Cancel the upstream subscription when the client disconnects.
    emitter.onDispose(subscriber);
});
```
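The `OpenAISubscriber` above forwards each chunk from the upstream response into the SSE emitter. Setting Reactor types aside, the pattern can be sketched with the JDK's built-in Flow API: a subscriber that relays each chunk downstream, completes when the stream ends (OpenAI's SSE uses a "[DONE]" sentinel), and cancels the upstream subscription when the client disconnects. The names here are illustrative, not the project's actual classes.

```java
import java.util.concurrent.Flow;
import java.util.function.Consumer;

/** Illustrative subscriber: forwards chunks downstream, stops on the "[DONE]" sentinel. */
public class ForwardingSubscriber implements Flow.Subscriber<String> {
    private final Consumer<String> downstream; // e.g. FluxSink::next in the Reactor version
    private final Runnable onComplete;         // e.g. FluxSink::complete
    private Flow.Subscription subscription;

    public ForwardingSubscriber(Consumer<String> downstream, Runnable onComplete) {
        this.downstream = downstream;
        this.onComplete = onComplete;
    }

    @Override public void onSubscribe(Flow.Subscription s) {
        this.subscription = s;
        s.request(Long.MAX_VALUE); // unbounded demand, like a plain subscribe()
    }

    @Override public void onNext(String chunk) {
        if ("[DONE]".equals(chunk)) {   // end-of-stream sentinel
            subscription.cancel();
            onComplete.run();
        } else {
            downstream.accept(chunk);   // relay the chunk to the SSE client
        }
    }

    @Override public void onError(Throwable t) { onComplete.run(); }
    @Override public void onComplete() { onComplete.run(); }

    /** Called when the SSE client disconnects (cf. emitter.onDispose). */
    public void dispose() {
        if (subscription != null) subscription.cancel();
    }
}
```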
If this project helps you, please give it a star! Your support is my motivation.