Hi @KRRT7 @gvanrossum,
It looks like we have a regression from PR #231 (commit 02ca2d2).
My .env file contains `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_ENDPOINT_EMBEDDING`, both of which include the API version after the `?`.
Note that this is the default format you get from the Foundry UI when creating new deployments, so it should be the default in .env files and should work out of the box (most users will use this format).
`make test` now fails when my .env file contains a `?` and an API version.
We have two problems:
- `AsyncAzureOpenAI` then appends its own `/openai/deployments/gpt-4o/chat/completions`, resulting in a doubled path → 404.
- The `test_online.py` test uses `create_async_openai_client` (a different code path in `utils.py`), whereas the failing test goes through `create_chat_model` → `_make_azure_provider` → `AzureProvider(openai_client=AsyncAzureOpenAI(...))`.