feat: add embedded adapters (granite switch) to openai backend #881

jakelorocco wants to merge 15 commits into main
Conversation
Enable calling intrinsics on Granite Switch models via the OpenAI backend. Granite Switch models embed adapter weights directly in the checkpoint and activate them via chat template control tokens, so no PEFT loading is needed.

- Add EmbeddedIntrinsicAdapter class that carries only I/O config (no weights) with factory methods to load from a model directory or HuggingFace Hub
- Add register_granite_switch_model() and add_embedded_adapter() to OpenAIBackend
- Add _generate_from_intrinsic() that reuses IntrinsicsRewriter/ResultProcessor and injects intrinsic_name into chat_template_kwargs for the switch model
- Ensure serialized messages always include 'role' (latent issue in rewriter instruction messages, newly exposed by OpenAI API serialization path)
- Add unit tests for adapter loading, registration, and rewriting

Signed-off-by: lastrasl <lastrasl@us.ibm.com>
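To make the "I/O config only, no weights" idea concrete, here is a minimal, self-contained sketch of what such an adapter record could look like. This is a hypothetical illustration, not Mellea's actual `EmbeddedIntrinsicAdapter` API; the `adapter_index.json` layout and all names are assumptions for the example.

```python
# Hypothetical sketch (not the real Mellea API): an embedded-adapter record
# that carries only I/O config -- no weights -- loaded from a model
# directory's index file. Activation would happen via chat-template tokens.
import json
import tempfile
from dataclasses import dataclass
from pathlib import Path


@dataclass
class EmbeddedAdapterSketch:
    intrinsic_name: str
    io_config: dict  # parsed I/O schema for the intrinsic

    @classmethod
    def from_model_dir(cls, model_dir: str, intrinsic_name: str) -> "EmbeddedAdapterSketch":
        # Assumed layout: adapter_index.json maps intrinsic names to I/O configs.
        index = json.loads((Path(model_dir) / "adapter_index.json").read_text())
        if intrinsic_name not in index:
            raise ValueError(f"No adapter found for intrinsic '{intrinsic_name}' in {model_dir}")
        return cls(intrinsic_name, index[intrinsic_name])


# Usage: build a fake model directory and load one adapter from it.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "adapter_index.json").write_text(json.dumps({"answerability": {"output": "bool"}}))
    adapter = EmbeddedAdapterSketch.from_model_dir(d, "answerability")
    print(adapter.intrinsic_name, adapter.io_config)
```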
Signed-off-by: Jake LoRocco <jake.lorocco@ibm.com> Assisted-by: CLAUDE:OPUS
The PR description has been updated. Please fill out the template for your PR to be reviewed.
I'm still in the process of testing some of these changes from an e2e perspective. I also still need to add documentation for how to use embedded models, along with examples. These changes do seem to work, though, and allow utilizing intrinsics with an embedded adapter / granite switch model.
Bug: model options get dropped on the intrinsic generation path. Fix: mirror the forwarding used on the regular path:

```python
if ModelOption.TEMPERATURE in model_options:
    api_params["temperature"] = model_options[ModelOption.TEMPERATURE]
```

Verified: with this fix, 5 back-to-back pipeline runs (guardian → rewrite → answerability → QC → base model, no sleep between calls) produce bit-for-bit identical outputs, including long-form base model answers. Without the fix, outputs vary run-to-run despite identical seed and temperature settings.
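The forwarding pattern described above can be sketched in isolation. This is a minimal, self-contained illustration; `ModelOption` here is a stand-in class, not Mellea's actual constants.

```python
# Sketch of forwarding temperature/seed from the caller's model options
# into the API params so repeated runs are deterministic.
# `ModelOption` is a stand-in for the real constants.
class ModelOption:
    TEMPERATURE = "temperature"
    SEED = "seed"


def build_api_params(model_options: dict) -> dict:
    api_params = {}
    if ModelOption.TEMPERATURE in model_options:
        api_params["temperature"] = model_options[ModelOption.TEMPERATURE]
    if ModelOption.SEED in model_options:
        api_params["seed"] = model_options[ModelOption.SEED]
    return api_params


print(build_api_params({ModelOption.TEMPERATURE: 0.0, ModelOption.SEED: 42}))
```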
@lastras Your comment states that all model options get dropped, but that the fix is to add (another) special case for temperature. It sounds from your comment like there's an underlying bug here that should be fixed. Can you clarify?
Force-pushed from c0b163c to 3227a5b.
I fixed a few random issues and also the issues we talked about (call_intrinsic and seed/temperature setting). I'm working on adding some documentation and minor examples. The functional parts of the code should be stable now. I have additional things I'd like to clean up, but I'll delay them until after we make sure everything is working as intended, and until post-release, so I don't impact any functionality.
Updated the PR description, fixed a few additional errors/bugs, and added documentation and examples. We should be good to start reviewing. I need to run one more round of tests with the latest from the granite-switch team. I have plans to open a few additional issues to clean things up after this PR is merged. Additional changes that should be made before the official release:
ajbozarth left a comment
Good overall direction — the EmbeddedIntrinsicAdapter class is well-structured and the design of delegating I/O config loading to the existing io.yaml / IntrinsicsRewriter machinery makes sense. A few issues need addressing before this merges, ranging from a runtime bug to a dependency footprint concern.
```python
"nltk>=3.9",  # Needed for sentence tokenization in granite citation parsing.
"rouge_score",  # Needed for Majority Voting Sampling Strategies.
"PyYAML",  # Needed for backends/adapters and granite formatters.
"huggingface-hub>=0.33.4",  # Needed for Granite Switch embedded adapter downloads (OpenAI backend).
```
huggingface-hub is now a hard dependency for all Mellea users, including those who only use the Ollama or non-switch OpenAI backends. It's also duplicated — it already appears in the hf extra at line 51.
AGENTS.md §5 calls for optional backend imports to be wrapped in try/except ImportError with a helpful message. The HF download only happens when using a Granite Switch model, so this should either be gated behind an optional extra (e.g., openai or a new switch extra) with a try/except ImportError wrapping calls in EmbeddedIntrinsicAdapter.from_hub, or at minimum the duplicate in the hf extra should be removed.
```python
# Pre-Built Granite Switch Models
IBM_GRANITE_SWITCH_4_1_8B = ModelIdentifier(
    hf_model_name="GrizleeBer/gs-test-2"  # Placeholder.
```
IBM_GRANITE_SWITCH_4_1_8B points to a personal repo (GrizleeBer/gs-test-2). Any user who imports this constant today — or who passes it as a model ID with load_embedded_adapters=True — will silently hit that personal repo.
Please block merge on updating this to the real repo ID, or make it raise explicitly when accessed so it can't accidentally ship in a release.
```python
# adapters during call_intrinsic, or once we support other types of adapters for
# OpenAIBackends.
# OpenAI Backends only support embedded_adapters.
self._uses_embedded_adapters = True
```
_uses_embedded_adapters is unconditionally True for every OpenAIBackend, regardless of whether the served model is actually a Switch model. This means calling any intrinsic on a non-Switch OpenAI backend will attempt a HuggingFace download and fail with a confusing FileNotFoundError (no adapter_index.json) rather than a clear "this model does not support embedded intrinsics" error.
Suggest tying this flag to whether load_embedded_adapters=True was passed (or whether register_embedded_adapter_model has been called), and raising a clear error in _generate_from_intrinsic when no adapters are registered.
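The gating suggested here can be sketched as follows. This is an illustrative stand-in, not the actual `OpenAIBackend`; the constructor flag, method names, and error wording are assumptions.

```python
# Sketch of the suggested fix: the flag is only set when embedded adapters
# were requested, and the intrinsic path raises a clear error instead of
# attempting a doomed HuggingFace download. All names are illustrative.
class BackendSketch:
    def __init__(self, load_embedded_adapters: bool = False):
        self._uses_embedded_adapters = load_embedded_adapters
        self._adapters: dict = {}

    def add_embedded_adapter(self, name: str, adapter: object) -> None:
        self._adapters[name] = adapter

    def generate_from_intrinsic(self, name: str):
        if not (self._uses_embedded_adapters and self._adapters):
            raise RuntimeError(
                "This model does not support embedded intrinsics; "
                "construct the backend with load_embedded_adapters=True "
                "and register an adapter first."
            )
        return self._adapters[name]


backend = BackendSketch(load_embedded_adapters=True)
backend.add_embedded_adapter("answerability", {"output": "bool"})
print(backend.generate_from_intrinsic("answerability"))
```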
```python
raise NotImplementedError("Intrinsics require a chat context.")

# Intrinsics don't support streaming because of their post-processing step.
if model_options.get(ModelOption.STREAM, None) is not None:
```
This guard raises NotImplementedError whenever ModelOption.STREAM appears in model_options at all, including STREAM=False. Compare the analogous check at line 877, which correctly uses:

```python
if model_opts.get(ModelOption.STREAM, False):
```

Fix:

```python
if model_options.get(ModelOption.STREAM, False):
    raise NotImplementedError("Intrinsics do not support streaming.")
```
```python
if len(model_options.items()) > 0:
    MelleaLogger.get_logger().info(
        "passing in model options when generating with an intrinsic; only temperature and seed are kept from model options"
```
The log message says "only temperature and seed are kept", but SYSTEM_PROMPT is also extracted and used a few lines below:

```python
system_prompt = model_options.get(ModelOption.SYSTEM_PROMPT, "")
```

Either update the message to accurately list all supported options, or narrow the log condition to only fire when unrecognized options are present.
```python
# TODO: OpenAIBackend only supports EmbeddedAdapters.
# It should be refactored into a specific adapter.transform() function.
assert isinstance(adapter, EmbeddedIntrinsicAdapter), (
```
assert is disabled when Python runs with -O (optimized mode), making this a silent no-op in production environments. Replace with an explicit guard:

```python
if not isinstance(adapter, EmbeddedIntrinsicAdapter):
    raise TypeError(
        f"OpenAIBackend only supports EmbeddedIntrinsicAdapter, got: {type(adapter).__name__}"
    )
```

```python
# adapter loading: 1. regular adapters, and 2. embedded adapters.
if not has_adapter:
    # EmbeddedAdapters get grabbed directly from the hf repo.
    if getattr(backend, "_uses_embedded_adapters", False):
```
Accessing private attributes (_uses_embedded_adapters, _model_id) of a concrete class via getattr from outside that class couples _util.py to OpenAIBackend internals without going through the AdapterMixin interface.
Suggest adding a method to AdapterMixin (e.g., uses_embedded_adapters() -> bool) so this is an explicit part of the interface contract rather than a duck-typed private attribute check.
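The suggested interface method could look roughly like this. Class names echo the review, but the bodies are an illustrative sketch, not the actual Mellea implementation.

```python
# Sketch: promote the private flag into the mixin interface so callers in
# _util.py query a documented method instead of a private attribute.
class AdapterMixin:
    def uses_embedded_adapters(self) -> bool:
        """Default: backends do not use embedded adapters."""
        return False


class OpenAIBackendSketch(AdapterMixin):
    def __init__(self, load_embedded_adapters: bool = False):
        self._uses_embedded_adapters = load_embedded_adapters

    def uses_embedded_adapters(self) -> bool:
        return self._uses_embedded_adapters


backend = OpenAIBackendSketch(load_embedded_adapters=True)
print(backend.uses_embedded_adapters())
```

Call sites then use `backend.uses_embedded_adapters()` instead of `getattr(backend, "_uses_embedded_adapters", False)`, and the contract lives on the mixin.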
```python
)
adapter_type = AdapterType.ALORA if technology == "alora" else AdapterType.LORA
super().__init__(intrinsic_name, adapter_type)
self.intrinsic_name = intrinsic_name
```
self.intrinsic_name is set here, but super().__init__(intrinsic_name, adapter_type) already sets self.name = intrinsic_name (line 49 of the base class). The test even asserts adapter.intrinsic_name == adapter.name.
Pick one: use self.name (the inherited attribute) or rename self.name to self.intrinsic_name in the base class. Having both on EmbeddedIntrinsicAdapter creates ambiguity about which is canonical.
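One way to keep both spellings while making `name` canonical is a read-only alias; this is an illustrative sketch, not the actual base class.

```python
# Sketch: `name` stays the canonical attribute on the base class and
# `intrinsic_name` becomes a read-only alias, so the two can never diverge.
class Adapter:
    def __init__(self, name: str):
        self.name = name


class EmbeddedIntrinsicAdapterSketch(Adapter):
    @property
    def intrinsic_name(self) -> str:
        return self.name


a = EmbeddedIntrinsicAdapterSketch("answerability")
print(a.name, a.intrinsic_name)
```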
```python
if intrinsic_name is not None:
    raise ValueError(
        f"No adapter found for intrinsic '{intrinsic_name}' in {repo_id}"
    ) from None
```
raise ... from None suppresses the original exception chain, which includes the local model path that was searched — useful information for debugging. Use raise ... from e (or just raise) to preserve the context.
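The effect of chaining is easy to see in isolation: with `from e`, the original exception (and the path it mentions) survives on `__cause__`.

```python
# Demonstrates exception chaining: `from e` preserves the original error
# as __cause__, while `from None` would discard it. The path and intrinsic
# name below are made up for the example.
def lookup_chained():
    try:
        raise FileNotFoundError("/models/foo/adapter_index.json")
    except FileNotFoundError as e:
        raise ValueError("No adapter found for intrinsic 'x'") from e


try:
    lookup_chained()
except ValueError as err:
    # The searched path survives on the chained cause.
    print(repr(err.__cause__))
```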
```python
int(os.environ.get("CICD", 0)) == 1,
reason="Skipping OpenAI intrinsics tests in CI",
),
pytest.mark.skip(
```
The entire E2E test file is unconditionally skipped. The unit tests in test_embedded_adapter.py cover adapter loading well, but the ~200-line _generate_from_intrinsic code path in OpenAIBackend has no test coverage at all.
At minimum, add unit tests that mock the OpenAI client and verify the generate path end-to-end (chat_template_kwargs is set, result_processor is applied, temperature/seed are forwarded). These can live alongside the existing unit tests rather than requiring a real vLLM server.
This is pretty cool; I had Claude explain it to me along with its review. Afaik the review comments look important to address, but I didn't block on them just in case some are intentional. I also ran
```python
import re
from typing import TypeVar

import huggingface_hub
```
TestFromHub.test_missing_huggingface_hub_raises fails (both in CI and locally) because of this top-level import.
The test patches sys.modules["huggingface_hub"] to None to simulate the library being absent, but since the name is already bound here at module load time, the patch has no effect — snapshot_download runs for real, hits HuggingFace with some/repo, gets a 401, and raises RepositoryNotFoundError instead of the expected ImportError.
Two consistent fixes:
Option A — keep huggingface_hub optional (recommended): Remove this top-level import and move it inside from_hub:
```python
@staticmethod
def from_hub(repo_id, ...):
    try:
        import huggingface_hub
    except ImportError:
        raise ImportError(
            "huggingface_hub is required to load embedded adapters from the Hub. "
            "Install it with: pip install huggingface-hub"
        ) from None
    local_root = huggingface_hub.snapshot_download(...)
```

The test then passes as-is, and huggingface_hub can be removed from base dependencies.
Option B — accept it as a hard dependency: Remove test_missing_huggingface_hub_raises (the scenario it tests can never happen with a top-level import) and update the from_hub docstring to drop the ImportError entry from Raises:.
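The failure mode behind Option A generalizes: a name bound at module load time is unaffected by later patches to `sys.modules`, whereas a deferred import inside the function is re-resolved and sees the patch. The sketch below uses the stdlib `json` module as a stand-in for `huggingface_hub` so it runs anywhere.

```python
# Demonstrates why patching sys.modules cannot affect a top-level import:
# the module object is already bound to a local name. A deferred import is
# re-resolved through sys.modules and raises ImportError when the entry is
# None. `json` stands in for huggingface_hub here.
import sys
import json  # bound at module load time, like the top-level huggingface_hub import


def uses_toplevel():
    return json.dumps({})  # keeps working even if sys.modules is patched


def uses_deferred():
    import json as j  # re-resolved through sys.modules on each call
    return j.dumps({})


saved = sys.modules["json"]
sys.modules["json"] = None  # simulate "library absent", as the test does
try:
    print(uses_toplevel())  # unaffected by the patch
    try:
        uses_deferred()
    except ImportError:
        print("deferred import sees the patch")
finally:
    sys.modules["json"] = saved
```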
Type of PR: Misc

Description

Adds support for granite switch models for OpenAIBackends.

Changes:
- `load_embedded_adapters=True` at init.
- `IBM_GRANITE_SWITCH_4_1_8B` that points to the temp repo for now.

Please see most recent comment below for additional context: #881 (comment)
Testing
Attribution