feat: port reddit + linkedin + twitch channel adapters from TS to Rust #55
Conversation
📝 Walkthrough

Workspace members updated; three TypeScript channel modules removed; new Rust worker crates for LinkedIn, Reddit, and Twitch (with manifests, worker configs, and webhook binaries) added; Mastodon test suites updated/added. The new workers include message parsing, agent::chat calls, outbound API posting, and security audit triggers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant WebhookSource as External platform
    participant III as III runtime
    participant Worker as Channel worker (Rust)
    participant Agent as agent::chat
    participant API as Channel REST API
    participant Vault as vault::get
    WebhookSource->>Worker: POST webhook (message event)
    Worker->>III: state::get / vault::get (resolve agentId, secrets)
    Worker->>Agent: agent::chat {agentId, sessionId, message}
    Agent-->>Worker: {content}
    Worker->>API: POST reply (chunked, Bearer token)
    Worker->>III: security::audit (channel metadata)
    Worker-->>WebhookSource: 200 OK
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~40 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)

⚠️ Review ran into problems: timed out fetching pipeline failures after 30000ms.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@workers/channel-linkedin/src/main.rs`:
- Around line 124-215: The webhook_handler currently accepts POSTs without
validating LinkedIn's challenge/signature and doesn't deduplicate notifications;
implement the three required controls: (1) add GET handling inside
webhook_handler to read the challengeCode query param and return
{"challengeCode": "...","challengeResponse":"..."} where challengeResponse =
HMAC-SHA256(challengeCode, client_secret) using the LinkedIn client secret
(fetch from III config/secrets), (2) on POST compute HMAC-SHA256 over the
JSON-encoded request body and compare it (constant-time) to the X-LI-Signature
header before processing—reject with 401 if it fails, and (3) extract
notificationId from the parsed body (use the existing element/element.get flow)
and deduplicate deliveries by recording recent notificationIds (in a short-lived
in-memory or III-backed cache/store) and short-circuit with a 200 OK if seen;
keep the rest of webhook_handler (resolve_agent, trigger agent::chat,
send_message, audit) unchanged but only run them after signature verification
and dedupe pass.
In `@workers/channel-reddit/src/main.rs`:
- Around line 98-132: send_message currently reuses a cached token forever and
never recovers from auth failures; modify send_message to detect authentication
failures (e.g., res.status() == 401 or 403) after the POST to /api/comment,
invalidate token_cache (clear or replace with empty string), call
refresh_access_token(iii, client) to get a fresh token, update token_cache with
the new token, and retry the request once using the refreshed token (return an
error if the refresh or retry fails); reference send_message, token_cache,
token, refresh_access_token and the POST to /api/comment when making the
changes.
In `@workers/channel-twitch/src/main.rs`:
- Around line 112-125: Add EventSub HMAC-SHA256 signature verification at the
top of webhook_handler: read the raw request body (not the parsed JSON), get
headers "Twitch-Eventsub-Message-Id" and "Twitch-Eventsub-Message-Timestamp" and
compute HMAC-SHA256 over Message-Id + Message-Timestamp + raw_body using your
EventSub secret (obtain from III/config/env), then compare the hex/base64 of
that HMAC to the incoming "Twitch-Eventsub-Message-Signature" header and return
a 401/200 error response if it doesn’t match before processing challenge or
event. Also fix send_message to stop using broadcaster_id for both
broadcaster_id and sender_id — set sender_id to the authenticated app/user id
(the token owner) expected by the Helix Send Chat Message API (obtain that id
from the authenticated credentials used to call Twitch) instead of the channel
broadcaster.
- Around line 92-96: The JSON payload incorrectly sets "sender_id" to
broadcaster_id; change it to the bot/account user ID associated with the access
token (e.g., bot_user_id or token_user_id) when building the payload in the
.json(&json!(...)) call so the "sender_id" matches the token owner; locate the
JSON construction that uses broadcaster_id and replace the sender_id value with
the variable that holds the bot's user ID (and ensure that variable is populated
from your OAuth/token response or config before sending the request).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 38c0945e-1cfc-4dbe-8f66-d99497edea69
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (15)
- Cargo.toml
- src/__tests__/channels-mastodon.test.ts
- src/__tests__/channels-twitch-linkedin.test.ts
- src/channels/linkedin.ts
- src/channels/reddit.ts
- src/channels/twitch.ts
- workers/channel-linkedin/Cargo.toml
- workers/channel-linkedin/iii.worker.yaml
- workers/channel-linkedin/src/main.rs
- workers/channel-reddit/Cargo.toml
- workers/channel-reddit/iii.worker.yaml
- workers/channel-reddit/src/main.rs
- workers/channel-twitch/Cargo.toml
- workers/channel-twitch/iii.worker.yaml
- workers/channel-twitch/src/main.rs
💤 Files with no reviewable changes (5)
- src/__tests__/channels-mastodon.test.ts
- src/channels/twitch.ts
- src/channels/reddit.ts
- src/channels/linkedin.ts
- src/__tests__/channels-twitch-linkedin.test.ts
```rust
async fn webhook_handler(
    iii: &III,
    client: &reqwest::Client,
    input: Value,
) -> Result<Value, IIIError> {
    let body = input.get("body").cloned().unwrap_or(input);

    let element = body
        .get("elements")
        .and_then(|e| e.as_array())
        .and_then(|arr| arr.first())
        .cloned();

    let Some(element) = element else {
        return Ok(json!({ "status_code": 200, "body": { "ok": true } }));
    };

    let msg_event = element
        .get("event")
        .and_then(|e| e.get(MESSAGE_EVENT_KEY))
        .cloned();

    let Some(msg_event) = msg_event else {
        return Ok(json!({ "status_code": 200, "body": { "ok": true } }));
    };

    let Some(text) = extract_message_text(&msg_event) else {
        return Ok(json!({ "status_code": 200, "body": { "ok": true } }));
    };

    if text.is_empty() {
        return Ok(json!({ "status_code": 200, "body": { "ok": true } }));
    }

    let thread_id = element
        .get("entityUrn")
        .and_then(|v| v.as_str())
        .unwrap_or("")
        .to_string();
    let sender_id = element
        .get("from")
        .and_then(|v| v.as_str())
        .unwrap_or("")
        .to_string();

    let agent_id = resolve_agent(iii, "linkedin", &thread_id).await;

    let chat = iii
        .trigger(TriggerRequest {
            function_id: "agent::chat".to_string(),
            payload: json!({
                "agentId": agent_id,
                "message": text,
                "sessionId": format!("linkedin:{thread_id}"),
            }),
            action: None,
            timeout_ms: None,
        })
        .await
        .map_err(|e| IIIError::Handler(format!("agent::chat failed: {e}")))?;

    let reply = chat
        .get("content")
        .and_then(|v| v.as_str())
        .unwrap_or("")
        .to_string();

    if !reply.is_empty()
        && !thread_id.is_empty()
        && let Err(e) = send_message(iii, client, &thread_id, &reply).await
    {
        tracing::error!(error = %e, "failed to post LinkedIn reply");
    }

    let _ = iii
        .trigger(TriggerRequest {
            function_id: "security::audit".to_string(),
            payload: json!({
                "type": "channel_message",
                "agentId": agent_id,
                "detail": {
                    "channel": "linkedin",
                    "threadId": thread_id,
                    "senderId": sender_id,
                },
            }),
            action: Some(TriggerAction::Void),
            timeout_ms: None,
        })
        .await;

    Ok(json!({ "status_code": 200, "body": { "ok": true } }))
}
```
🧩 Analysis chain
🌐 Web query:
official LinkedIn webhook docs GET challengeCode X-LI-Signature notificationId dedup
💡 Result:
LinkedIn webhook docs (challengeCode / X-LI-Signature / notificationId dedup), official flow summary:

1) Endpoint validation (GET with challengeCode)
- LinkedIn generates a challengeCode (type-4 UUID) and sends an HTTP GET to your webhook endpoint with challengeCode as a query parameter: `GET https://{your-webhook}?challengeCode=...`
- Your endpoint must compute challengeResponse as a hex-encoded HMAC-SHA256 of the challengeCode using your application's clientSecret, and return JSON `{"challengeCode": "...", "challengeResponse": "..."}` with HTTP 200 within 3 seconds.
- Integrations using parent-child applications may also receive an applicationId query parameter so you can select the correct clientSecret.

2) Push notification signature verification (X-LI-Signature)
- Incoming push notifications (HTTP POST) include the header X-LI-Signature.
- The value is `hmacsha256={HMAC_SHA256(JSON_stringified_request_body, clientSecret)}`.
- On receipt, compute the HMAC-SHA256 over the JSON-encoded string representation of the POST body, prefix your computed value with "hmacsha256=", and compare to X-LI-Signature. If it doesn't match, discard the event.

3) notificationId dedup / idempotency
- LinkedIn explicitly states that a notification can be delivered multiple times.
- Your webhook must deduplicate notifications using the notificationId included in the payload. Implement idempotent handling by storing processed notificationId values (e.g., in a DB/Redis with a TTL) and skipping duplicates.

Minimal flow (idempotent + signature verify): verify the signature via X-LI-Signature (HMAC-SHA256, clientSecret, JSON-encoded request body); parse the payload and extract notificationId; if the notificationId was already processed, return 2xx without re-processing; otherwise process, persist the notificationId, then return 2xx.
Sources: LinkedIn webhook validation and signature/dedup requirements are documented in Microsoft Learn’s LinkedIn Webhooks / Webhook Validation pages.
Citations:
- 1: https://learn.microsoft.com/en-us/linkedin/shared/api-guide/webhook-validation
- 2: https://learn.microsoft.com/en-us/linkedin/shared/api-guide/webhook-validation?context=linkedin%2Fcontext
- 3: https://learn.microsoft.com/en-us/linkedin/talent/receive-push-events?view=li-lts-2025-10
Implement LinkedIn webhook validation per the official API contract.
LinkedIn requires three mandatory security controls: (1) GET endpoint accepting challengeCode query parameter and returning {"challengeCode": "...", "challengeResponse": "..."} where challengeResponse is HMAC-SHA256(challengeCode, clientSecret), (2) signature verification via X-LI-Signature header on POST requests (compute HMAC-SHA256 of JSON-encoded body and compare), and (3) deduplication by notificationId since LinkedIn may deliver notifications multiple times. The current handler accepts all POST requests without signature validation, performs no GET challenge flow, and never deduplicates by notification ID.
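The dedup control in particular can be sketched without any external dependencies. Below is a minimal std-only TTL cache for recently seen notificationIds; the type name, TTL value, and sample ids are illustrative, not taken from the PR, and a production worker might instead back this with the III state store so it survives restarts.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Small TTL cache: remembers recently processed notification ids so a
// redelivered webhook can be short-circuited with 200 OK.
struct DedupCache {
    ttl: Duration,
    seen: HashMap<String, Instant>,
}

impl DedupCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, seen: HashMap::new() }
    }

    // Returns true if `id` was already recorded within the TTL window;
    // otherwise records it and returns false.
    fn check_and_record(&mut self, id: &str, now: Instant) -> bool {
        // Drop expired entries so the map stays bounded under steady traffic.
        self.seen.retain(|_, t| now.duration_since(*t) < self.ttl);
        if self.seen.contains_key(id) {
            return true;
        }
        self.seen.insert(id.to_string(), now);
        false
    }
}

fn main() {
    let mut cache = DedupCache::new(Duration::from_secs(300));
    let now = Instant::now();
    // First delivery is new; the redelivery is flagged as a duplicate.
    let first = cache.check_and_record("urn:li:notification:example", now);
    let second = cache.check_and_record("urn:li:notification:example", now);
    println!("first seen before: {first}, second seen before: {second}");
}
```

The handler would call `check_and_record` after signature verification and return 200 OK immediately when it yields true.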
```rust
async fn send_message(
    iii: &III,
    client: &reqwest::Client,
    token_cache: &Arc<Mutex<String>>,
    parent_name: &str,
    text: &str,
) -> Result<(), IIIError> {
    let mut token = {
        let guard = token_cache.lock().await;
        guard.clone()
    };
    if token.is_empty() {
        token = refresh_access_token(iii, client).await?;
        let mut guard = token_cache.lock().await;
        *guard = token.clone();
    }

    let url = format!("{REDDIT_API_BASE}/api/comment");
    let res = client
        .post(&url)
        .bearer_auth(&token)
        .form(&[("thing_id", parent_name), ("text", text)])
        .send()
        .await
        .map_err(|e| IIIError::Handler(format!("Reddit comment error: {e}")))?;

    if !res.status().is_success() {
        let status = res.status();
        let body = res.text().await.unwrap_or_default();
        return Err(IIIError::Handler(format!(
            "Reddit comment failed ({status}): {}",
            body.chars().take(300).collect::<String>()
        )));
    }
    Ok(())
}
```
🧩 Analysis chain

🏁 Scripts executed (repository: iii-experimental/agentos):

```shell
# Locate token handling and status checks in the Reddit worker
rg -n "token_cache|refresh_access_token|api/comment|StatusCode::UNAUTHORIZED|status\(\)\.is_success" workers/channel-reddit/src/main.rs
# View the refresh_access_token function
sed -n '51,90p' workers/channel-reddit/src/main.rs
# View the full send_message function with more context
sed -n '98,135p' workers/channel-reddit/src/main.rs
# Check if token_cache is cleared/reset anywhere else
rg -n "token_cache.*=.*String::new|clear\(|truncate" workers/channel-reddit/src/main.rs
# Check for other functions that might have similar token caching issues
rg -n "fn.*message|fn.*send.*request" workers/channel-reddit/src/main.rs | head -20
# Search for how other potential auth failures are handled
rg -n "StatusCode|401|403|UNAUTHORIZED" workers/channel-reddit/src/main.rs
# Check what reqwest version is being used
grep -n "reqwest" workers/channel-reddit/src/main.rs | head -10
# Check imports at the top of the file to see what's available
head -50 workers/channel-reddit/src/main.rs
# Check how send_message is called and if there are retry patterns elsewhere
rg -n "send_message" workers/channel-reddit/src/main.rs
# Verify the full context of webhook_handler to understand the call pattern
sed -n '135,210p' workers/channel-reddit/src/main.rs
# Confirm the reqwest dependency version
cat workers/channel-reddit/Cargo.toml
find . -name "Cargo.toml" -type f | xargs grep -l "reqwest" 2>/dev/null | head -5
# Check how status codes are typically checked in the workers
rg -n "\.status\(\)" workers/channel-reddit/src/main.rs -A 2
rg -n "== |is_client_error|is_server_error" workers/ -r
```
Invalidate the cached Reddit token and retry on authentication failures.
Once a token is cached, it remains in token_cache indefinitely. If the token expires or is revoked, every subsequent /api/comment call reuses the invalid token and fails immediately, with no recovery until the worker restarts.
🔧 Proposed fix
```diff
 async fn send_message(
     iii: &III,
     client: &reqwest::Client,
     token_cache: &Arc<Mutex<String>>,
     parent_name: &str,
     text: &str,
 ) -> Result<(), IIIError> {
     let mut token = {
         let guard = token_cache.lock().await;
         guard.clone()
     };
     if token.is_empty() {
         token = refresh_access_token(iii, client).await?;
         let mut guard = token_cache.lock().await;
         *guard = token.clone();
     }

     let url = format!("{REDDIT_API_BASE}/api/comment");
-    let res = client
+    let mut res = client
         .post(&url)
         .bearer_auth(&token)
         .form(&[("thing_id", parent_name), ("text", text)])
         .send()
         .await
         .map_err(|e| IIIError::Handler(format!("Reddit comment error: {e}")))?;
+
+    if res.status() == reqwest::StatusCode::UNAUTHORIZED {
+        token = refresh_access_token(iii, client).await?;
+        *token_cache.lock().await = token.clone();
+        res = client
+            .post(&url)
+            .bearer_auth(&token)
+            .form(&[("thing_id", parent_name), ("text", text)])
+            .send()
+            .await
+            .map_err(|e| IIIError::Handler(format!("Reddit comment error: {e}")))?;
+    }

     if !res.status().is_success() {
         let status = res.status();
         let body = res.text().await.unwrap_or_default();
         return Err(IIIError::Handler(format!(
             "Reddit comment failed ({status}): {}",
             body.chars().take(300).collect::<String>()
         )));
     }
     Ok(())
 }
```
```rust
.json(&json!({
    "broadcaster_id": broadcaster_id,
    "sender_id": broadcaster_id,
    "message": chunk,
}))
```
Use the bot user ID for sender_id, not the broadcaster ID.
Twitch’s Send Chat Message API requires sender_id to match the user in the access token, and the official examples use the bot user’s ID there. Setting both fields to broadcaster_id only works when the token belongs to the broadcaster account, so normal bot-account setups will fail with auth/permission errors. (dev.twitch.tv)
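A minimal sketch of the corrected payload shape follows; `bot_user_id` is a stand-in name for the token owner's id (the PR would populate it from OAuth/token validation), and the hand-rolled string builder is only for illustration since the real code uses serde_json's json! macro and does no manual escaping.

```rust
// Build the Helix Send Chat Message payload with the token owner's user id
// as sender_id rather than reusing the broadcaster's id.
// Note: naive formatting, assumes inputs contain no characters needing JSON escaping.
fn chat_message_payload(broadcaster_id: &str, bot_user_id: &str, chunk: &str) -> String {
    format!(
        r#"{{"broadcaster_id":"{broadcaster_id}","sender_id":"{bot_user_id}","message":"{chunk}"}}"#
    )
}

fn main() {
    let payload = chat_message_payload("123", "456", "hello chat");
    println!("{payload}");
}
```

The key point is simply that the two ids come from different sources: broadcaster_id from the event, sender_id from the authenticated credentials.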
```rust
async fn webhook_handler(
    iii: &III,
    client: &reqwest::Client,
    input: Value,
) -> Result<Value, IIIError> {
    let body = input.get("body").cloned().unwrap_or(input);

    // EventSub challenge handshake.
    if let Some(challenge) = body.get("challenge").and_then(|v| v.as_str()) {
        return Ok(json!({
            "status_code": 200,
            "body": challenge,
        }));
    }
```
🧩 Analysis chain
🌐 Web query:
official Twitch EventSub webhook verification Twitch-Eventsub-Message-Signature raw body timestamp message id
💡 Result:
To verify a Twitch EventSub webhook, compute an HMAC-SHA256 over a message string formed by concatenating:

1) the Twitch-Eventsub-Message-Id header value
2) the Twitch-Eventsub-Message-Timestamp header value
3) the raw request body (raw bytes of the HTTP body; do not use a parsed JSON string)

Then compare it to the value in the Twitch-Eventsub-Message-Signature header.

Verification steps (per Twitch docs):
1) Ensure you have the raw body bytes (use middleware like `express.raw({ type: 'application/json' })` so the body is raw data).
2) Build the HMAC input (order is important): `hmac_message = message_id + timestamp + raw_request_body`.
3) Compute `hmac = HMAC-SHA256(secret=your_webhook_secret, message=hmac_message)`.
4) Format the comparison signature as `expected = 'sha256=' + hex(hmac)`.
5) Use a time-safe equality check; if mismatch, return 4XX (commonly 403).

Header names you should use:
- Twitch-Eventsub-Message-Id
- Twitch-Eventsub-Message-Timestamp
- Twitch-Eventsub-Message-Signature
- Twitch-Eventsub-Message-Retry (not used in the signature)

Notes: Twitch-Eventsub-Message-Timestamp is the UTC time in RFC3339 format; Twitch-Eventsub-Message-Id is an opaque unique ID for the message; the signature uses the raw bytes of the request body, so frameworks that re-serialize or transform JSON can break verification.
Citations:
- 1: https://dev.twitch.tv/docs/eventsub/handling-webhook-events
- 2: https://dev.twitch.tv/docs/eventsub/handling-websocket-events
- 3: https://discuss.dev.twitch.com/t/eventsub-signature-verification-failing/63240
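The time-safe comparison step from the flow above can be sketched in plain Rust with no extra crates (computing the HMAC itself would typically use the `hmac` and `sha2` crates; the `expected`/`provided` values here are placeholder strings, not real signatures):

```rust
// Constant-time byte comparison: XOR-accumulate every byte pair so the
// running time does not depend on where the first mismatch occurs.
// The early return on length mismatch only leaks the length, which is
// public anyway for a fixed-size hex-encoded HMAC.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    // Placeholder values: expected would be "sha256=" + hex(HMAC-SHA256(...)),
    // provided would come from the Twitch-Eventsub-Message-Signature header.
    let expected = "sha256=deadbeef";
    let provided = "sha256=deadbeef";
    println!("match: {}", constant_time_eq(expected.as_bytes(), provided.as_bytes()));
}
```

A naive `==` on the strings would short-circuit at the first differing byte, which is the timing side channel this pattern avoids.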
🏁 Script executed:
cd workers/channel-twitch && wc -l src/main.rsRepository: iii-experimental/agentos
Length of output: 84
🏁 Script executed:
cd workers/channel-twitch && sed -n '112,197p' src/main.rsRepository: iii-experimental/agentos
Length of output: 2509
🏁 Script executed:
cd workers/channel-twitch && sed -n '1,120p' src/main.rs | tail -30Repository: iii-experimental/agentos
Length of output: 1061
🏁 Script executed:
cd workers/channel-twitch && sed -n '198,256p' src/main.rsRepository: iii-experimental/agentos
Length of output: 1903
Add EventSub signature verification before processing payloads.
The webhook handler (lines 112-197) processes any JSON body containing challenge or event without verifying the Twitch-Eventsub-Message-Signature header. Twitch requires computing HMAC-SHA256 over the concatenated Message-Id, Message-Timestamp, and raw request body, then comparing it to the signature header. Without this, forged or replayed requests can trigger agent::chat and unauthorized Twitch API calls.
Additionally, send_message (line 105) incorrectly uses broadcaster_id for both broadcaster_id and sender_id; the Helix Send Chat Message API expects sender_id to be the authenticated user or app, not the channel broadcaster.
- workers/channel-reddit replaces src/channels/reddit.ts
- function ID preserved: channel::reddit::webhook
- HTTP trigger: POST webhook/reddit (http_method/api_path form)
- env: REDDIT_CLIENT_ID / REDDIT_SECRET / REDDIT_REFRESH_TOKEN (read via vault::get with env fallback)
- OAuth refresh-token grant via reqwest basic_auth + form body
- comment send via oauth.reddit.com/api/comment
- emits security::audit channel_message events (TriggerAction::Void)
- 4 unit tests covering session anchor + skip-empty paths

Test file channels-reddit-mastodon.test.ts renamed to channels-mastodon.test.ts; the Mastodon TS port is owned by another worktree.
- workers/channel-linkedin replaces src/channels/linkedin.ts
- function ID preserved: channel::linkedin::webhook
- HTTP trigger: POST webhook/linkedin (http_method/api_path form)
- env: LINKEDIN_TOKEN (read via vault::get with env fallback)
- send via api.linkedin.com/v2/messages with X-Restli-Protocol-Version 2.0.0
- splits messages > 4096 chars (UTF-8 safe)
- emits security::audit channel_message events (TriggerAction::Void)
- 5 unit tests covering split + message-text extraction

Deletes the shared TS test channels-twitch-linkedin.test.ts; the Twitch portion loses its TS source in the next commit, so the file would no longer compile.
- workers/channel-twitch replaces src/channels/twitch.ts
- function ID preserved: channel::twitch::webhook
- HTTP trigger: POST webhook/twitch (http_method/api_path form)
- env: TWITCH_TOKEN / TWITCH_CLIENT_ID (read via vault::get with env fallback)
- handles EventSub challenge handshake (echoes back challenge as body)
- send via api.twitch.tv/helix/chat/messages with Bearer + Client-Id
- splits messages > 500 chars (UTF-8 safe)
- emits security::audit channel_message events (TriggerAction::Void)
- 3 unit tests covering split helper + multibyte safety
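The "splits messages > 500 chars (UTF-8 safe)" behavior can be sketched as follows. This is an illustrative std-only version, not the worker's actual helper: it cuts on char boundaries so multibyte UTF-8 sequences are never bisected, which a byte-offset split would risk.

```rust
// Split a message into chunks of at most `max_chars` characters, always
// cutting on char boundaries so multibyte UTF-8 sequences stay intact.
fn split_message(text: &str, max_chars: usize) -> Vec<String> {
    let chars: Vec<char> = text.chars().collect();
    chars
        .chunks(max_chars)
        .map(|c| c.iter().collect())
        .collect()
}

fn main() {
    // 550 chars including multibyte characters: splits into a 500-char
    // chunk and a 50-char remainder.
    let msg = "héllo wörld".repeat(50);
    let parts = split_message(&msg, 500);
    println!("{} chunks, last has {} chars", parts.len(), parts[1].chars().count());
}
```

Note this counts Unicode scalar values, not bytes; if the platform limit were in bytes, the split would instead need to walk `char_indices` and cut at the last boundary under the byte budget.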
Force-pushed from 4653503 to f8b9320.
```ts
process.env.MASTODON_INSTANCE = "https://mastodon.social";
process.env.MASTODON_TOKEN = "test-masto-token";
await import("../channels/mastodon.js");
```
🔴 channels-reddit.test.ts was incorrectly replaced with mastodon test content, creating exact duplicate
The file src/__tests__/channels-reddit.test.ts was transformed from testing the Reddit channel into an exact duplicate of the new src/__tests__/channels-mastodon.test.ts. It now sets MASTODON_INSTANCE/MASTODON_TOKEN env vars, imports ../channels/mastodon.js, and tests channel::mastodon::webhook — none of which is related to Reddit. Before this PR, the file properly tested the Reddit channel (verified via git show d5ffdfd:src/__tests__/channels-reddit.test.ts). The ../channels/mastodon.js module does not exist in the repository, so both duplicate files will fail at import time in beforeAll. The file should have been deleted since src/channels/reddit.ts was removed and the Reddit channel is now a Rust worker.
Prompt for agents
The file src/__tests__/channels-reddit.test.ts was accidentally converted to test the Mastodon channel instead of being properly handled during the Reddit TS-to-Rust migration. It is now an exact duplicate of src/__tests__/channels-mastodon.test.ts.
Since src/channels/reddit.ts was deleted (replaced by workers/channel-reddit Rust crate), this TypeScript test file should be deleted entirely. The Reddit Rust worker already has its own unit tests in workers/channel-reddit/src/main.rs (tests module at line 255-296).
Additionally, the new src/__tests__/channels-mastodon.test.ts imports ../channels/mastodon.js which does not exist in the repository. The commit message mentions 'mastodon TS port is owned by another worktree' — this test file should either be removed until mastodon.ts lands, or mastodon.ts needs to be added in this PR.
```rust
) -> Result<Value, IIIError> {
    let body = input.get("body").cloned().unwrap_or(input);

    // EventSub challenge handshake.
```
🔴 Code comment violates CLAUDE.md 'No code comments' rule
Line 119 contains // EventSub challenge handshake. which violates the mandatory CLAUDE.md rule: "No code comments — code should be self-documenting". This is listed under the project's Code Conventions and applies to all code in the repository.
```diff
-    // EventSub challenge handshake.
     if let Some(challenge) = body.get("challenge").and_then(|v| v.as_str()) {
```
Actionable comments posted: 5
♻️ Duplicate comments (3)
workers/channel-twitch/src/main.rs (2)
117-125: ⚠️ Potential issue | 🔴 Critical

**Verify EventSub signatures before handling challenge/events.**

Lines 117-125 trust body content without validating Twitch signature headers; forged requests could trigger `agent::chat` and outbound Twitch posts.

Suggested fix (outline):

```diff
+// Requires crates: hmac, sha2, subtle, hex (or equivalent)
+fn verify_eventsub_signature(
+    secret: &str,
+    message_id: &str,
+    timestamp: &str,
+    raw_body: &str,
+    provided_sig: &str,
+) -> bool {
+    // expected = "sha256=" + HMAC_SHA256(secret, message_id + timestamp + raw_body)
+    // compare with constant-time equality
+}
+
 async fn webhook_handler(
     iii: &III,
     client: &reqwest::Client,
     input: Value,
 ) -> Result<Value, IIIError> {
-    let body = input.get("body").cloned().unwrap_or(input);
+    let body = input.get("body").cloned().unwrap_or_else(|| input.clone());
+    let headers = input.get("headers").cloned().unwrap_or_else(|| json!({}));
+    let raw_body = body.to_string(); // Prefer exact raw bytes/string from HTTP trigger if available.
+
+    let message_id = headers.get("Twitch-Eventsub-Message-Id").and_then(|v| v.as_str()).unwrap_or("");
+    let timestamp = headers.get("Twitch-Eventsub-Message-Timestamp").and_then(|v| v.as_str()).unwrap_or("");
+    let signature = headers.get("Twitch-Eventsub-Message-Signature").and_then(|v| v.as_str()).unwrap_or("");
+    let secret = get_secret(iii, "TWITCH_EVENTSUB_SECRET").await;
+
+    if message_id.is_empty() || timestamp.is_empty() || signature.is_empty() || secret.is_empty()
+        || !verify_eventsub_signature(&secret, message_id, timestamp, &raw_body, signature)
+    {
+        return Ok(json!({ "status_code": 401, "body": { "ok": false } }));
+    }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workers/channel-twitch/src/main.rs` around lines 117 - 125, The EventSub challenge and event handling currently trusts the parsed body; before returning the challenge or invoking agent::chat/outbound posts, validate the Twitch EventSub signature using the raw request body and relevant headers (e.g., message-id, timestamp, signature) — add or call a helper like verify_eventsub_signature(headers, raw_body) at the top of the handler (before the challenge branch) and return an unauthorized response if verification fails; ensure the verification uses the same secret/config used for Twitch subscriptions and that you validate the HMAC using the raw bytes, not the JSON-parsed value.
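The constant-time comparison the outline calls for can be demonstrated with the standard library alone. The review names the `subtle` crate, which is the idiomatic choice in production; this hypothetical helper only illustrates the property being asked for:

```rust
// Compare two byte strings in time independent of where they first differ,
// so a forged signature cannot be brute-forced one byte at a time via timing.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // accumulate differences instead of early-returning
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq(b"sha256=abc123", b"sha256=abc123"));
    assert!(!constant_time_eq(b"sha256=abc123", b"sha256=abc124"));
    assert!(!constant_time_eq(b"short", b"longer"));
    println!("ok");
}
```

The length check leaks only the lengths, which an attacker already knows for a fixed-size HMAC digest; the loop is what removes the position-dependent timing signal.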
92-96: ⚠️ Potential issue | 🟠 Major

**Use the token-owner user ID for `sender_id` (not the broadcaster ID).**

Line 94 sets `sender_id` to `broadcaster_id`; this fails for typical bot-token setups and can return auth/permission errors.

Suggested fix:

```diff
 async fn send_message(
     iii: &III,
     client: &reqwest::Client,
     broadcaster_id: &str,
     text: &str,
 ) -> Result<(), IIIError> {
@@
     let client_id = get_secret(iii, "TWITCH_CLIENT_ID").await;
     if client_id.is_empty() {
         return Err(IIIError::Handler("TWITCH_CLIENT_ID not configured".into()));
     }
+    let sender_id = get_secret(iii, "TWITCH_SENDER_ID").await;
+    if sender_id.is_empty() {
+        return Err(IIIError::Handler("TWITCH_SENDER_ID not configured".into()));
+    }
@@
     .json(&json!({
         "broadcaster_id": broadcaster_id,
-        "sender_id": broadcaster_id,
+        "sender_id": sender_id,
         "message": chunk,
     }))
```
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@workers/channel-twitch/src/main.rs` around lines 92 - 96, The JSON payload incorrectly sets "sender_id" to broadcaster_id; change it to the token-owner user ID (e.g., token_owner_id or bot_user_id) instead of broadcaster_id so the bot token is the sender. Update the code that builds the message JSON (the block using .json(&json!({ "broadcaster_id": broadcaster_id, "sender_id": broadcaster_id, "message": chunk }))) to supply the token owner ID for "sender_id" and ensure that the variable providing the token owner (token_owner_id / bot_user_id) is retrieved earlier from the OAuth/token response or the Twitch API and passed into this scope.

workers/channel-linkedin/src/main.rs (1)
124-148: ⚠️ Potential issue | 🔴 Critical

**Implement the mandatory LinkedIn webhook verification flow (challenge + signature + dedup).**

At Line 237 only POST is registered, and at Line 124+ POST payloads are processed without `X-LI-Signature` validation or `notificationId` deduplication. This is a security gap that enables spoofed/replayed deliveries. LinkedIn webhook validation requires a GET `challengeCode` response, POST `X-LI-Signature` verification, and `notificationId` deduplication.

Also applies to: 237-241
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workers/channel-linkedin/src/main.rs` around lines 124 - 148, The webhook currently only accepts POSTs and processes payloads in webhook_handler without verifying X-LI-Signature, handling GET challenge responses, or deduplicating notifications; update the HTTP route registration to also handle GET challenge requests and implement three changes inside webhook_handler (and related request handling): 1) on GET requests respond to the LinkedIn challenge flow by returning the exact challenge parameter per LinkedIn spec; 2) on POST verify the X-LI-Signature header using the configured LinkedIn webhook secret before any processing (reject or return 401/200 per policy if signature fails); and 3) extract the notificationId from the parsed msg_event (the element/event/MESSAGE_EVENT_KEY path) and check a durable/short-lived dedupe store (cache/DB) to skip repeated notificationIds before proceeding, ensuring you only process unique notifications and log or ignore duplicates. Ensure all early exits still return a 200 body {ok:true} where appropriate after verification/dedupe decisions.
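A minimal in-memory sketch of the deduplication step the prompt describes follows. It is hypothetical; the real fix should go through a durable store such as the runtime's `state::*` API so a worker restart does not replay notifications:

```rust
use std::collections::{HashSet, VecDeque};

// Bounded first-seen window: remembers the last `cap` notification IDs and
// evicts the oldest once the bound is reached.
struct DedupWindow {
    seen: HashSet<String>,
    order: VecDeque<String>,
    cap: usize,
}

impl DedupWindow {
    fn new(cap: usize) -> Self {
        Self { seen: HashSet::new(), order: VecDeque::new(), cap }
    }

    // Returns true exactly once per ID; repeated deliveries return false.
    fn first_seen(&mut self, id: &str) -> bool {
        if self.seen.contains(id) {
            return false;
        }
        if self.order.len() == self.cap {
            if let Some(old) = self.order.pop_front() {
                self.seen.remove(&old);
            }
        }
        self.order.push_back(id.to_string());
        self.seen.insert(id.to_string());
        true
    }
}

fn main() {
    let mut window = DedupWindow::new(2);
    assert!(window.first_seen("n1"));
    assert!(!window.first_seen("n1")); // duplicate delivery is skipped
    assert!(window.first_seen("n2"));
    assert!(window.first_seen("n3")); // evicts n1 from the window
    println!("ok");
}
```

The handler would call `first_seen(notification_id)` before invoking `agent::chat`, acknowledging duplicates with a plain 200 so the platform stops retrying.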
🧹 Nitpick comments (1)
workers/channel-linkedin/src/main.rs (1)
10-30: Guard `split_message` against `max_len == 0` to avoid an infinite loop.

If `max_len` becomes 0, Line 23 yields a split index of 0, and Line 27 never advances `remaining`.

Suggested fix:

```diff
 fn split_message(text: &str, max_len: usize) -> Vec<String> {
+    if max_len == 0 {
+        return vec![];
+    }
     if text.chars().count() <= max_len {
         return vec![text.to_string()];
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workers/channel-linkedin/src/main.rs` around lines 10 - 30, The split_message function can hang when max_len == 0 because split_idx becomes 0 and remaining never advances; add an early guard at the top of split_message that checks for max_len == 0 and returns a sensible value (e.g., return vec![text.to_string()] or an empty Vec depending on desired semantics) to avoid the loop, keeping the rest of the logic (char_indices -> split_idx -> remaining slicing) unchanged; update split_message accordingly so the guard prevents computing split_idx or slicing when max_len is zero.
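Under the nitpick's assumptions, a guarded, char-boundary-safe version of the helper might look like this. It is a sketch of the technique (guard plus `char_indices`-based splitting), not the worker's exact implementation:

```rust
// Split on char boundaries so multibyte text never panics, returning the
// input unsplit when it fits and an empty Vec when max_len is 0.
fn split_message(text: &str, max_len: usize) -> Vec<String> {
    if max_len == 0 {
        return vec![];
    }
    if text.chars().count() <= max_len {
        return vec![text.to_string()];
    }
    let mut chunks = Vec::new();
    let mut remaining = text;
    while !remaining.is_empty() {
        // Byte offset of the (max_len + 1)-th char, or the end of the string.
        let split_idx = remaining
            .char_indices()
            .nth(max_len)
            .map(|(i, _)| i)
            .unwrap_or(remaining.len());
        let (chunk, rest) = remaining.split_at(split_idx);
        chunks.push(chunk.to_string());
        remaining = rest;
    }
    chunks
}

fn main() {
    assert_eq!(split_message("hello", 10), vec!["hello"]);
    assert_eq!(split_message("abcdef", 2), vec!["ab", "cd", "ef"]);
    assert_eq!(split_message("abc", 0), Vec::<String>::new());
    // Multibyte safety: each 'é' is 2 bytes but counts as one char.
    assert_eq!(split_message("ééé", 2), vec!["éé", "é"]);
    println!("ok");
}
```

Because `split_at` takes a byte index derived from `char_indices`, the split can never land inside a UTF-8 sequence, which is the multibyte-safety property the worker's tests assert.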
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/__tests__/channels-mastodon.test.ts`:
- Line 1: Remove the top-line "// `@ts-nocheck`" from the test file so TypeScript
checking is enabled, then run the type checker/tests to see reported errors and
fix them by adding correct types or safe casts in the test code (e.g., annotate
mocked Mastodon client instances, responses, and any helper functions used in
channels-mastodon.test.ts); ensure you import needed types from dependencies or
use explicit "as" casts for test fixtures instead of suppressing checks.
In `@src/__tests__/channels-reddit.test.ts`:
- Around line 60-63: The tests in src/__tests__/channels-reddit.test.ts were
accidentally converted to Mastodon tests (references like
"channel::mastodon::webhook", Mastodon env vars and import of
"../channels/mastodon.js") and now duplicate the Mastodon suite; either delete
this file entirely or restore Reddit-specific behavior by reverting environment
setup to Reddit vars, replacing the mastodon import with
"../channels/reddit.js", and updating suite/test names and assertions to the
Reddit identifiers (e.g., "channel::reddit::...") so the file actually tests
Reddit channel behavior instead of duplicating Mastodon tests.
In `@workers/channel-linkedin/src/main.rs`:
- Around line 131-135: The current code extracts only the first webhook element
by using arr.first(), dropping subsequent events; replace that with iterating
over all elements returned from body.get("elements") (e.g., use
arr.iter().cloned() or map over the array) and for each element attempt to
deserialize/convert into your MessageEvent and run the existing per-event
handling logic (preserve whatever processing currently runs for the single
MessageEvent); specifically update the code around the element variable
extraction so you produce a collection/iterator of elements and loop through
them, deserializing each into MessageEvent and processing each one instead of
just the first.
- Line 224: The current reqwest::Client is created with reqwest::Client::new()
(assigned to variable client) and has no timeout; change creation to use
reqwest::Client::builder().timeout(Duration::from_secs(N)).build().unwrap() (or
propagate the error) and add use std::time::Duration so the POST call that uses
client (the LinkedIn POST request) will fail fast instead of hanging
indefinitely; choose an appropriate seconds value for N.
---
Duplicate comments:
In `@workers/channel-linkedin/src/main.rs`:
- Around line 124-148: The webhook currently only accepts POSTs and processes
payloads in webhook_handler without verifying X-LI-Signature, handling GET
challenge responses, or deduplicating notifications; update the HTTP route
registration to also handle GET challenge requests and implement three changes
inside webhook_handler (and related request handling): 1) on GET requests
respond to the LinkedIn challenge flow by returning the exact challenge
parameter per LinkedIn spec; 2) on POST verify the X-LI-Signature header using
the configured LinkedIn webhook secret before any processing (reject or return
401/200 per policy if signature fails); and 3) extract the notificationId from
the parsed msg_event (the element/event/MESSAGE_EVENT_KEY path) and check a
durable/short-lived dedupe store (cache/DB) to skip repeated notificationIds
before proceeding, ensuring you only process unique notifications and log or
ignore duplicates. Ensure all early exits still return a 200 body {ok:true}
where appropriate after verification/dedupe decisions.
In `@workers/channel-twitch/src/main.rs`:
- Around line 117-125: The EventSub challenge and event handling currently
trusts the parsed body; before returning the challenge or invoking
agent::chat/outbound posts, validate the Twitch EventSub signature using the raw
request body and relevant headers (e.g., message-id, timestamp, signature) — add
or call a helper like verify_eventsub_signature(headers, raw_body) at the top of
the handler (before the challenge branch) and return an unauthorized response if
verification fails; ensure the verification uses the same secret/config used for
Twitch subscriptions and that you validate the HMAC using the raw bytes, not the
JSON-parsed value.
- Around line 92-96: The JSON payload incorrectly sets "sender_id" to
broadcaster_id; change it to the token-owner user ID (e.g., token_owner_id or
bot_user_id) instead of broadcaster_id so the bot token is the sender. Update
the code that builds the message JSON (the block using .json(&json!({
"broadcaster_id": broadcaster_id, "sender_id": broadcaster_id, "message": chunk
}))) to supply the token owner ID for "sender_id" and ensure that the variable
providing the token owner (token_owner_id / bot_user_id) is retrieved earlier
from the OAuth/token response or the Twitch API and passed into this scope.
---
Nitpick comments:
In `@workers/channel-linkedin/src/main.rs`:
- Around line 10-30: The split_message function can hang when max_len == 0
because split_idx becomes 0 and remaining never advances; add an early guard at
the top of split_message that checks for max_len == 0 and returns a sensible
value (e.g., return vec![text.to_string()] or an empty Vec depending on desired
semantics) to avoid the loop, keeping the rest of the logic (char_indices ->
split_idx -> remaining slicing) unchanged; update split_message accordingly so
the guard prevents computing split_idx or slicing when max_len is zero.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 8f2b4346-591d-4aee-997b-9083d6d82bde
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (16)
- Cargo.toml
- src/__tests__/channels-mastodon.test.ts
- src/__tests__/channels-reddit.test.ts
- src/__tests__/channels-twitch-linkedin.test.ts
- src/channels/linkedin.ts
- src/channels/reddit.ts
- src/channels/twitch.ts
- workers/channel-linkedin/Cargo.toml
- workers/channel-linkedin/iii.worker.yaml
- workers/channel-linkedin/src/main.rs
- workers/channel-reddit/Cargo.toml
- workers/channel-reddit/iii.worker.yaml
- workers/channel-reddit/src/main.rs
- workers/channel-twitch/Cargo.toml
- workers/channel-twitch/iii.worker.yaml
- workers/channel-twitch/src/main.rs
💤 Files with no reviewable changes (4)
- src/channels/twitch.ts
- src/channels/linkedin.ts
- src/channels/reddit.ts
- src/__tests__/channels-twitch-linkedin.test.ts
✅ Files skipped from review due to trivial changes (6)
- workers/channel-twitch/iii.worker.yaml
- workers/channel-linkedin/iii.worker.yaml
- workers/channel-reddit/iii.worker.yaml
- workers/channel-linkedin/Cargo.toml
- workers/channel-twitch/Cargo.toml
- workers/channel-reddit/src/main.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- Cargo.toml
- workers/channel-reddit/Cargo.toml
```diff
@@ -0,0 +1,178 @@
+// @ts-nocheck
```
Remove @ts-nocheck from this suite.
Line 1 suppresses type checking for the entire file and also introduces a code comment under src/**, which this repo disallows.
As per coding guidelines: src/**/*.{ts,tsx,js,jsx}: Code should be self-documenting; do not use code comments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/__tests__/channels-mastodon.test.ts` at line 1, Remove the top-line "//
`@ts-nocheck`" from the test file so TypeScript checking is enabled, then run the
type checker/tests to see reported errors and fix them by adding correct types
or safe casts in the test code (e.g., annotate mocked Mastodon client instances,
responses, and any helper functions used in channels-mastodon.test.ts); ensure
you import needed types from dependencies or use explicit "as" casts for test
fixtures instead of suppressing checks.
```ts
const handlers: Record<string, Function> = {};
vi.mock("iii-sdk", () => ({
  registerWorker: () => ({
    registerFunction: (config: any, handler: Function) => {
      handlers[config.id] = handler;
    },
    registerTrigger: vi.fn(),
    trigger: (req: any) =>
      req.action
        ? mockTriggerVoid(req.function_id, req.payload)
        : mockTrigger(req.function_id, req.payload),
    shutdown: vi.fn(),
  }),
  TriggerAction: { Void: () => ({}) },
}));

vi.mock("@agentos/shared/utils", () => ({
  httpOk: (req: any, data: any) => data,
  splitMessage: vi.fn((text: string, limit: number) => {
    const chunks: string[] = [];
    for (let i = 0; i < text.length; i += limit)
      chunks.push(text.slice(i, i + limit));
    return chunks.length ? chunks : [text];
  }),
  resolveAgent: vi.fn(async () => "default-agent"),
}));
```
Use the shared test harness/mocks instead of bespoke local scaffolding.
This file re-implements handler capture, call plumbing, and shared mocks locally. Please switch to src/__tests__/helpers.ts and include the standard shared module mocks (utils.js, logger.js, metrics.js, errors.js) to keep test behavior consistent across suites.
As per coding guidelines: src/__tests__/**/*.{ts,tsx}: Use shared test helpers from src/__tests__/helpers.ts including KV mock and request builders and Mock shared modules utils.js, logger.js, metrics.js, errors.js.
Also applies to: 65-69
```ts
process.env.MASTODON_INSTANCE = "https://mastodon.social";
process.env.MASTODON_TOKEN = "test-masto-token";
await import("../channels/mastodon.js");
});
```
This file no longer tests Reddit and now duplicates the Mastodon suite.
The renamed behavior in this channels-reddit test file is fully Mastodon-focused (channel::mastodon::webhook, Mastodon env/import/assertions). Please either remove this file or restore Reddit-specific intent; keeping both current suites duplicates coverage and obscures test ownership.
Also applies to: 71-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/__tests__/channels-reddit.test.ts` around lines 60 - 63, The tests in
src/__tests__/channels-reddit.test.ts were accidentally converted to Mastodon
tests (references like "channel::mastodon::webhook", Mastodon env vars and
import of "../channels/mastodon.js") and now duplicate the Mastodon suite;
either delete this file entirely or restore Reddit-specific behavior by
reverting environment setup to Reddit vars, replacing the mastodon import with
"../channels/reddit.js", and updating suite/test names and assertions to the
Reddit identifiers (e.g., "channel::reddit::...") so the file actually tests
Reddit channel behavior instead of duplicating Mastodon tests.
```rust
let element = body
    .get("elements")
    .and_then(|e| e.as_array())
    .and_then(|arr| arr.first())
    .cloned();
```
Process all webhook elements, not just the first event.
At Line 134, using arr.first() drops the rest of the batch when multiple elements arrive. Iterate through all elements and process each MessageEvent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workers/channel-linkedin/src/main.rs` around lines 131 - 135, The current
code extracts only the first webhook element by using arr.first(), dropping
subsequent events; replace that with iterating over all elements returned from
body.get("elements") (e.g., use arr.iter().cloned() or map over the array) and
for each element attempt to deserialize/convert into your MessageEvent and run
the existing per-event handling logic (preserve whatever processing currently
runs for the single MessageEvent); specifically update the code around the
element variable extraction so you produce a collection/iterator of elements and
loop through them, deserializing each into MessageEvent and processing each one
instead of just the first.
```rust
let ws_url =
    std::env::var("III_WS_URL").unwrap_or_else(|_| "ws://localhost:49134".to_string());
let iii = register_worker(&ws_url, InitOptions::default());
let client = reqwest::Client::new();
```
🧩 Analysis chain

🏁 Scripts executed against iii-experimental/agentos:

```shell
cat -n workers/channel-linkedin/src/main.rs | sed -n '215,235p'
rg -A 5 -B 5 "reqwest::Client::new" workers/channel-linkedin/src/main.rs
rg "timeout|Duration" workers/channel-linkedin/src/main.rs
rg "use std::time::Duration|use std::time" workers/channel-linkedin/src/main.rs
rg -B 10 "timeout_ms" workers/channel-linkedin/src/main.rs | head -50
rg -A 20 "fn webhook_handler" workers/channel-linkedin/src/main.rs
rg "client\.(get|post|put|delete|request)" workers/channel-linkedin/src/main.rs -A 3
rg "client\." workers/channel-linkedin/src/main.rs
rg -n "client" workers/channel-linkedin/src/main.rs
wc -l workers/channel-linkedin/src/main.rs
cat -n workers/channel-linkedin/src/main.rs | sed -n '1,50p'
cat -n workers/channel-linkedin/src/main.rs | sed -n '74,95p'
cat -n workers/channel-linkedin/src/main.rs | sed -n '126,150p'
cat -n workers/channel-linkedin/src/main.rs | sed -n '84,120p'
cat -n workers/channel-linkedin/src/main.rs | sed -n '150,200p'
```
Add an explicit timeout to the HTTP client.
At line 224, reqwest::Client::new() creates a client without a timeout. The POST request to the LinkedIn API at line 84 can hang indefinitely if the external service stalls.
Suggested fix:

```diff
+use std::time::Duration;
 ...
-    let client = reqwest::Client::new();
+    let client = reqwest::Client::builder()
+        .timeout(Duration::from_secs(10))
+        .build()?;
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-let client = reqwest::Client::new();
+use std::time::Duration;
+let client = reqwest::Client::builder()
+    .timeout(Duration::from_secs(10))
+    .build()?;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@workers/channel-linkedin/src/main.rs` at line 224, The current
reqwest::Client is created with reqwest::Client::new() (assigned to variable
client) and has no timeout; change creation to use
reqwest::Client::builder().timeout(Duration::from_secs(N)).build().unwrap() (or
propagate the error) and add use std::time::Duration so the POST call that uses
client (the LinkedIn POST request) will fail fast instead of hanging
indefinitely; choose an appropriate seconds value for N.
Webhook security
- LinkedIn: HMAC-SHA256 challengeCode GET handshake, X-LI-Signature verification on POST, notificationId dedup via state::*, process all elements (not just the first), 10s HTTP timeout
- Twitch: EventSub HMAC-SHA256 (Message-Id + Timestamp + raw body) signature verification; sender_id now uses TWITCH_BOT_USER_ID instead of broadcaster_id
- WhatsApp: X-Hub-Signature-256 verification (HMAC-SHA256 of raw body via WHATSAPP_APP_SECRET) plus GET hub.challenge endpoint with constant-time verify_token compare
- Teams: validate serviceUrl against Bot Framework host suffixes (with TEAMS_ALLOWED_SERVICE_URLS escape hatch) before sending authenticated outbound replies; reqwest connect/total timeouts (5s/15s)
- Webex: cache bot personId from /people/me and drop self-posted webhook events to break reply loops; propagate non-2xx fetch errors instead of silently acking

Reliability
- Reddit: invalidate token cache on 401 and retry once with refresh
- Signal: empty agent reply now returns 200 (with `ok:false`) so the bridge does not retry; send_message failures are surfaced to the caller rather than logged-and-acked
- WhatsApp: agent::chat failures no longer fail the webhook response; caller-supplied phone numbers redacted (sha256:8) in error logs
- Email: SMTP_* secrets resolved through vault::get with env fallback, recipient address redacted in send-failure logs, blank/whitespace agent replies no longer trigger an empty outgoing email

Hygiene
- Newline-aware split_message no longer leaves a leading '\n' on the next chunk in teams/webex/whatsapp
- Workspace MSRV pinned to 1.88 (let-chains stabilization), and the reddit/linkedin/twitch crates wired into workspace members
Summary
Ports 3 of the 8 requested channel adapters from TS to narrow Rust workers. The other 5 (twitter, facebook, instagram, youtube, lark) have no TS sources in this repo; only reddit, linkedin, and twitch were present under `src/channels/`.
Each ported worker:
Test plan
Files
Summary by CodeRabbit