
Features/integrate response api#1326

Open
iceljc wants to merge 4 commits into SciSharp:master from iceljc:features/integrate-response-api

Open

Conversation

Collaborator

iceljc commented Apr 17, 2026

No description provided.

@qodo-code-review
Contributor

Review Summary by Qodo

Integrate OpenAI Response API with Chat Completion fallback support

✨ Enhancement


Walkthroughs

Description
• Integrate OpenAI Response API alongside existing Chat Completion API
• Add response API implementation with streaming and non-streaming support
• Introduce web search configuration and user location models
• Refactor ChatCompletionProvider into partial classes for better code organization
• Add file metadata fields (FileName, FileExtension) to file handlers
Diagram
flowchart LR
  A["OpenAI Plugin Settings"] -->|UseResponseApi flag| B["ChatCompletionProvider"]
  B -->|true| C["Response API Implementation"]
  B -->|false| D["Chat Completion API"]
  C -->|streaming| E["InnerGetResponseStreamingAsync"]
  C -->|non-streaming| F["InnerGetResponse"]
  D -->|streaming| G["InnerGetChatCompletionsStreamingAsync"]
  D -->|non-streaming| H["InnerGetChatCompletions"]
  I["WebSearchSettings"] -->|config| C
  J["WebSearchUserLocation"] -->|location data| I


File Changes

1. src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs ✨ Enhancement +704/-0

New Chat Completion API implementation file

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs


2. src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Response.cs ✨ Enhancement +733/-0

New Response API implementation with streaming support

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Response.cs


3. src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs ✨ Enhancement +18/-666

Refactor to partial class with API selection logic

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs


4. src/Plugins/BotSharp.Plugin.OpenAI/Settings/OpenAiSettings.cs ⚙️ Configuration changes +10/-0

Add UseResponseApi flag and WebSearch configuration

src/Plugins/BotSharp.Plugin.OpenAI/Settings/OpenAiSettings.cs


5. src/Plugins/BotSharp.Plugin.OpenAI/Settings/WebSearchSettings.cs ✨ Enhancement +18/-0

New web search settings configuration class

src/Plugins/BotSharp.Plugin.OpenAI/Settings/WebSearchSettings.cs


6. src/Plugins/BotSharp.Plugin.OpenAI/Models/Web/WebSearchUserLocation.cs ✨ Enhancement +48/-0

New web search user location model with JSON deserialization

src/Plugins/BotSharp.Plugin.OpenAI/Models/Web/WebSearchUserLocation.cs


7. src/Plugins/BotSharp.Plugin.OpenAI/Using.cs Miscellaneous +1/-0

Add web models namespace import

src/Plugins/BotSharp.Plugin.OpenAI/Using.cs


8. src/Plugins/BotSharp.Plugin.OpenAI/BotSharp.Plugin.OpenAI.csproj Miscellaneous +4/-0

Remove temporary build artifact file

src/Plugins/BotSharp.Plugin.OpenAI/BotSharp.Plugin.OpenAI.csproj


9. src/Plugins/BotSharp.Plugin.GoogleAI/Constants/Constants.cs 🐞 Bug fix +1/-1

Fix namespace casing from GoogleAI to GoogleAi

src/Plugins/BotSharp.Plugin.GoogleAI/Constants/Constants.cs


10. src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs 🐞 Bug fix +4/-4

Fix null check and rename thinking stream variable

src/Plugins/BotSharp.Plugin.GoogleAI/Providers/Chat/ChatCompletionProvider.cs


11. src/Plugins/BotSharp.Plugin.GoogleAI/Using.cs Miscellaneous +0/-1

Remove unused Constants namespace import

src/Plugins/BotSharp.Plugin.GoogleAI/Using.cs


12. src/Plugins/BotSharp.Plugin.FileHandler/Functions/ReadPdfFn.cs ✨ Enhancement +3/-1

Add FileName and FileExtension to BotSharpFile mapping

src/Plugins/BotSharp.Plugin.FileHandler/Functions/ReadPdfFn.cs


13. src/Plugins/BotSharp.Plugin.ImageHandler/Functions/ReadImageFn.cs ✨ Enhancement +3/-1

Add FileName and FileExtension to BotSharpFile mapping

src/Plugins/BotSharp.Plugin.ImageHandler/Functions/ReadImageFn.cs


14. src/Infrastructure/BotSharp.Abstraction/MLTasks/Settings/LlmModelSetting.cs ✨ Enhancement +3/-0

Add Parameters to WebSearchSetting and mark SearchContextSize obsolete

src/Infrastructure/BotSharp.Abstraction/MLTasks/Settings/LlmModelSetting.cs


15. src/Plugins/BotSharp.Plugin.OpenAI/Constants/Constants.cs Additional files +6/-0

...

src/Plugins/BotSharp.Plugin.OpenAI/Constants/Constants.cs



Contributor

qodo-code-review bot commented Apr 17, 2026

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (3) 📎 Requirement gaps (0)



Action required

1. Sync-over-async GetResult() used 📘 Rule violation ☼ Reliability
Description
The new code blocks on HttpClient.GetByteArrayAsync(...).GetAwaiter().GetResult(), which is a
sync-over-async pattern that can deadlock and hides async I/O failures under load. This violates the
async I/O reliability requirement at integration boundaries.
Code

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs[R509-514]

+                        var http = _services.GetRequiredService<IHttpClientFactory>();
+                        using var client = http.CreateClient();
+                        var bytes = client.GetByteArrayAsync(file.FileUrl).GetAwaiter().GetResult();
+                        var binary = BinaryData.FromBytes(bytes);
+                        var contentPart = ChatMessageContentPart.CreateFilePart(binary, contentType, file.FileFullName);
+                        contentParts.Add(contentPart);
Evidence
PR Compliance ID 3 forbids sync-over-async patterns that can deadlock. The added
CollectMessageContentParts implementation downloads FileUrl content using
GetAwaiter().GetResult() on an async HTTP call.

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs[509-514]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`HttpClient.GetByteArrayAsync(...).GetAwaiter().GetResult()` is a sync-over-async call that can deadlock and reduces observability of async I/O failures.
## Issue Context
This code runs during chat completion request building and performs external HTTP I/O; blocking here can stall request threads.
## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs[473-523]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
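The fix the review points at is to push async all the way through the call chain. A minimal sketch of the pattern, with the downloader modeled as a delegate so it stands alone — in the provider it would wrap `client.GetByteArrayAsync(file.FileUrl, ct)`; all names here are illustrative, not BotSharp's actual signatures:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch: await the HTTP I/O instead of blocking with GetAwaiter().GetResult().
// The download is abstracted as a delegate so the pattern compiles standalone.
public static class FilePartFetcher
{
    public static async Task<byte[]> FetchFileBytesAsync(
        Func<string, CancellationToken, Task<byte[]>> download,
        string fileUrl,
        CancellationToken ct = default)
    {
        // Awaiting releases the request thread while the download runs,
        // avoiding the deadlock risk of sync-over-async blocking.
        return await download(fileUrl, ct);
    }
}
```

`CollectMessageContentParts` itself would then become `async` (returning a `Task`) and be awaited by its callers, rather than blocking a request thread on the download.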


2. float.Parse without guard 📘 Rule violation ☼ Reliability
Description
The new code parses temperature using float.Parse on conversation state/config input without
validation, which can throw on invalid values and crash the request. Integration-boundary inputs
should be guarded and default safely instead of throwing.
Code

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs[598]

+        float? temperature = float.Parse(_state.GetState("temperature", "0.0"));
Evidence
PR Compliance ID 2 requires null/empty/invalid guards at API/integration boundaries with safe
fallbacks. The added line parses a potentially user/config-provided string via float.Parse(...)
without TryParse or fallback handling for invalid numeric formats.

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs[598-598]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`float.Parse(_state.GetState("temperature", "0.0"))` can throw if the state/config value is missing/invalid.
## Issue Context
Conversation state/config values are integration-boundary inputs and must not crash execution when malformed.
## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs[592-625]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
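A minimal sketch of the guarded parse, assuming the surrounding code can accept a safe default on malformed input (the helper name is illustrative):

```csharp
using System.Globalization;

// Sketch: TryParse with a fallback instead of float.Parse, so a malformed
// or missing state value cannot crash the request.
public static class StateParsing
{
    public static float ParseTemperature(string raw, float fallback = 0.0f)
    {
        // Invariant culture keeps "0.7" parsing identically under every locale.
        return float.TryParse(raw, NumberStyles.Float, CultureInfo.InvariantCulture, out var value)
            ? value
            : fallback;
    }
}
```

At the call site this would read `float? temperature = StateParsing.ParseTemperature(_state.GetState("temperature", "0.0"));`.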


3. SSRF via FileUrl fetch 🐞 Bug ⛨ Security
Description
OpenAI ChatCompletionProvider downloads non-image attachments by fetching BotSharpFile.FileUrl
server-side without validation, enabling SSRF if FileUrl is attacker-controlled. Incoming instruct
chat requests directly map request FileUrl into BotSharpFile, so a caller can force the server to
request arbitrary internal/external URLs.
Code

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs[R507-515]

+                    try
+                    {
+                        var http = _services.GetRequiredService<IHttpClientFactory>();
+                        using var client = http.CreateClient();
+                        var bytes = client.GetByteArrayAsync(file.FileUrl).GetAwaiter().GetResult();
+                        var binary = BinaryData.FromBytes(bytes);
+                        var contentPart = ChatMessageContentPart.CreateFilePart(binary, contentType, file.FileFullName);
+                        contentParts.Add(contentPart);
+                    }
Evidence
The OpenAI chat provider fetches bytes from file.FileUrl for non-image files, which is a direct
outbound HTTP request to an arbitrary URL. The instruct chat endpoint constructs BotSharpFile
objects from request payload fields (FileUrl/FileData/ContentType) without restricting FileUrl to
internal storage, making FileUrl attacker-controlled in at least this flow.

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs[496-520]
src/Infrastructure/BotSharp.OpenAPI/Controllers/Instruct/InstructModeController.cs[104-131]
src/Infrastructure/BotSharp.Abstraction/Files/Models/FileInformation.cs[5-18]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`ChatCompletionProvider.Chat.cs` downloads bytes from `BotSharpFile.FileUrl` for non-image attachments. Because `FileUrl` can come from external request payloads, this enables SSRF (server making requests to attacker-chosen URLs), plus uncontrolled data download.
## Issue Context
At least one public endpoint (`/instruct/chat-completion`) maps `IncomingInstructRequest.Files[].FileUrl` directly into `BotSharpFile.FileUrl`, so callers can supply arbitrary URLs.
## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs[496-520]
- src/Infrastructure/BotSharp.OpenAPI/Controllers/Instruct/InstructModeController.cs[104-131]
## Recommended fix (one of these approaches)
1. **Do not fetch remote URLs in the provider**:
- Require `FileData` or `FileStorageUrl` for non-image attachments; reject/ignore `FileUrl` for non-image files.
- Alternatively, only allow `FileUrl` pointing to your own storage domain/path (strict allowlist).
2. If remote fetching is required:
- Validate scheme (`https` only), validate host via allowlist, and block private/link-local IP ranges.
- Enforce size limits and timeouts; stream and cap bytes.
- Pass cancellation tokens from conversation/request lifecycle.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
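If remote fetching is kept (approach 2 above), a pre-download guard might look roughly like this. This is only a sketch: a production guard should also validate the DNS-resolved addresses (a hostname can resolve to a private IP), apply a host allowlist, and enforce the size and timeout caps the prompt recommends. The class name is illustrative:

```csharp
using System;
using System.Net;

// Sketch of a pre-download SSRF guard for FileUrl: https only, and reject
// URLs that embed a literal loopback, private, or link-local IPv4 address.
public static class FileUrlGuard
{
    public static bool IsAllowed(string fileUrl)
    {
        if (!Uri.TryCreate(fileUrl, UriKind.Absolute, out var uri)) return false;
        if (uri.Scheme != Uri.UriSchemeHttps) return false;

        if (IPAddress.TryParse(uri.Host, out var ip))
        {
            if (IPAddress.IsLoopback(ip)) return false;
            var b = ip.GetAddressBytes();
            if (b.Length == 4 &&
                (b[0] == 10 ||                                   // 10.0.0.0/8
                 (b[0] == 172 && b[1] >= 16 && b[1] <= 31) ||    // 172.16.0.0/12
                 (b[0] == 192 && b[1] == 168) ||                 // 192.168.0.0/16
                 (b[0] == 169 && b[1] == 254)))                  // link-local
            {
                return false;
            }
        }
        return true;
    }
}
```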



Remediation recommended

4. Culture-sensitive ToLower() parsing 📘 Rule violation ≡ Correctness
Description
The new code normalizes identifier-like settings using ToLower() before comparisons/switches,
which is culture-sensitive and can behave incorrectly under certain locales. Identifier parsing
should use ordinal/ordinal-ignore-case comparisons (or invariant normalization).
Code

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Response.cs[R593-599]

+        return size.ToLower() switch
+        {
+            "low" => WebSearchToolContextSize.Low,
+            "medium" => WebSearchToolContextSize.Medium,
+            "high" => WebSearchToolContextSize.High,
+            _ => null
+        };
Evidence
PR Compliance ID 5 requires ordinal/ordinal-ignore-case handling for identifiers to avoid
culture-sensitive behavior. The added parsing code uses size.ToLower() and then switches on the
result, which is culture-dependent (e.g., Turkish locale casing).

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Response.cs[593-599]
Best Practice: Learned patterns

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Identifier parsing uses culture-sensitive `ToLower()` prior to matching.
## Issue Context
Parsing config/state tokens like `low`/`medium`/`high` should be locale-invariant and use ordinal comparisons.
## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Response.cs[586-600]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
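A locale-invariant version of the parse, using ordinal-ignore-case comparisons as the rule requires. `WebSearchContextSize` here is a local stand-in for the SDK's `WebSearchToolContextSize` so the sketch is self-contained:

```csharp
using System;

// Local stand-in for the SDK enum, for illustration only.
public enum WebSearchContextSize { Low, Medium, High }

public static class ContextSizeParser
{
    public static WebSearchContextSize? Parse(string size)
    {
        // Ordinal-ignore-case matching sidesteps culture pitfalls such as
        // Turkish casing, where "HIGH".ToLower() is not "high".
        if (string.Equals(size, "low", StringComparison.OrdinalIgnoreCase)) return WebSearchContextSize.Low;
        if (string.Equals(size, "medium", StringComparison.OrdinalIgnoreCase)) return WebSearchContextSize.Medium;
        if (string.Equals(size, "high", StringComparison.OrdinalIgnoreCase)) return WebSearchContextSize.High;
        return null;
    }
}
```

Alternatively, switching on `size.ToLowerInvariant()` achieves the same locale independence with a smaller diff to the existing code.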



Advisory comments

5. ContentFilter reason mislabeled 🐞 Bug ≡ Correctness
Description
Response-API paths treat ContentFilter incomplete results as “AI response exceeded max output
length”, which is misleading and inconsistent with the legacy streaming Chat Completions path that
distinguishes ContentFilter. This can confuse users and any downstream logic that keys off these
messages.
Code

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Response.cs[R48-58]

+        else if (value.IncompleteStatusDetails?.Reason == ResponseIncompleteStatusReason.MaxOutputTokens
+            || value?.IncompleteStatusDetails?.Reason == ResponseIncompleteStatusReason.ContentFilter)
+        {
+            _logger.LogWarning($"Action: {nameof(InnerGetResponse)}, Reason: {value.IncompleteStatusDetails.Reason}, Agent: {agent.Name}, MaxOutputTokens: {options.MaxOutputTokenCount}, Content:{text}");
+
+            responseMessage = new RoleDialogModel(AgentRole.Assistant, $"AI response exceeded max output length")
+            {
+                CurrentAgentId = agent.Id,
+                MessageId = conversations.LastOrDefault()?.MessageId ?? string.Empty,
+                StopCompletion = true
+            };
Evidence
In the Response API implementation, both MaxOutputTokens and ContentFilter are mapped to the same
error message. In contrast, the legacy streaming Chat Completions path generates a distinct message
for ContentFilter.

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Response.cs[48-58]
src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Response.cs[359-370]
src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Chat.cs[309-314]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Response API code uses the same assistant message for both `MaxOutputTokens` and `ContentFilter` incomplete reasons.
## Issue Context
Legacy Chat Completions streaming already emits a distinct ContentFilter message; aligning behaviors improves clarity.
## Fix Focus Areas
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Response.cs[48-59]
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.Response.cs[359-371]
## Recommended fix
- Split the conditional:
- If reason is `MaxOutputTokens`, keep the “exceeded max output length” message.
- If reason is `ContentFilter`, use a message like “Content is omitted due to content filter rule.”

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
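Splitting the two reasons, per the recommended fix, could be sketched as a small message selector. `IncompleteReason` stands in for the SDK's `ResponseIncompleteStatusReason`; the surrounding `RoleDialogModel` construction would stay as in the PR:

```csharp
using System;

// Local stand-in for ResponseIncompleteStatusReason, for illustration only.
public enum IncompleteReason { MaxOutputTokens, ContentFilter }

public static class IncompleteMessages
{
    // Map each incomplete reason to its own user-facing message, matching
    // the distinction the legacy Chat Completions streaming path makes.
    public static string For(IncompleteReason reason) => reason switch
    {
        IncompleteReason.MaxOutputTokens => "AI response exceeded max output length",
        IncompleteReason.ContentFilter => "Content is omitted due to content filter rule.",
        _ => throw new ArgumentOutOfRangeException(nameof(reason))
    };
}
```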



ⓘ The new review experience is currently in Beta.


Jicheng Lu and others added 2 commits April 17, 2026 19:19