
Spotfire LLM Orchestrator - Prompts being sent to a public endpoint and timing out


Recommended Posts

Hello Team,

I have successfully deployed the Copilot orchestrator and established an Azure OpenAI Endpoint with the gpt-3.5-turbo model. My test code can access the OpenAI endpoint and receive responses without issues.

However, I encounter a problem when attempting to test the /orchestrator endpoint through the Swagger UI. The requests are being routed to a public endpoint (openaipublic.blob.core.windows.net:443) and consistently time out. Why are the requests directed to this endpoint instead of using my private endpoint?

 

For troubleshooting, here are some details:

I am testing the /orchestrator endpoint via Swagger at this URL: http://10.188.24.106:8080/docs#/default/handle_request_orchestrator__post

[screenshot]

 

In the orchestrator container logs, note the last message:

 

orchestrator  | INFO:root:Processing request.

orchestrator  | INFO:root:LOG_LEVEL: DEBUG

orchestrator  | INFO:root:Param prompt: How to create a bar chart?

orchestrator  | INFO:root:Param index_name: spotfiredocs

orchestrator  | INFO:root:Param index_engine: Milvus

orchestrator  | INFO:root:Param index_score_threshold: 0.3

orchestrator  | INFO:root:Param topk: 5

orchestrator  | INFO:root:Param temperature: 0.0

orchestrator  | INFO:root:Param gpt_engine: spt-deploy-gpt35-turbo

orchestrator  | INFO:root:Param use_context: False

orchestrator  | INFO:root:Param user_intent: QueryIntent

orchestrator  | DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): openaipublic.blob.core.windows.net:443

 

Subsequently, I received a timeout error:

orchestrator  | INFO:     10.218.72.102:56783 - "POST /orchestrator/ HTTP/1.1" 500 Internal Server Error

orchestrator  | ERROR:    Exception in ASGI application

orchestrator  | Traceback (most recent call last):

orchestrator  |   File "/usr/local/lib/python3.10/site-packages/urllib3/connection.py", line 203, in _new_conn

orchestrator  |     sock = connection.create_connection(

orchestrator  |   File "/usr/local/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection

orchestrator  |     raise err

orchestrator  |   File "/usr/local/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection

orchestrator  |     sock.connect(sa)

orchestrator  | TimeoutError: [Errno 110] Connection timed out

 

I can also see that the configuration parameters are available during container runtime:
[screenshot]

 

Could there be something I am overlooking? Any assistance in troubleshooting this issue would be greatly appreciated.

 

Thanks,

Alex


Hello Alex,

I will be happy to assist.

Could you please let us know where the container for the orchestrator is running? It seems to be deployed on a different machine from the one where your client browser is running.

Please verify the network connection between the machine hosting the orchestrator and Azure OpenAI. You may need to manage access to your Azure OpenAI endpoint and explicitly allow access from the machine hosting the orchestrator.
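A quick way to run that check from the orchestrator host is a small connectivity probe. This is only a sketch; the host name `my-resource.openai.azure.com` is a placeholder for your private Azure OpenAI endpoint, not a value from this thread:

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers TimeoutError, ConnectionRefusedError, DNS failures
        return False

# Placeholder host name: substitute your own Azure OpenAI endpoint.
print(can_reach("my-resource.openai.azure.com"))
```

If this prints False from inside the container's network but True from another machine, the endpoint's network access rules (or the container's outbound routing) are the likely culprit.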

Let us know if you need further assistance and we will be happy to help.

Regards,

Marcelo Gallardo



Hi Marcelo,
Thanks a lot for the prompt response!

My question is: why is the orchestrator trying to call openaipublic.blob.core.windows.net when processing requests from the Spotfire client?

I have set up a private endpoint using the Azure OpenAI service, and I don't want my prompts transmitted anywhere else.

Here is the corresponding log entry from the orchestrator container:

orchestrator  | DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): openaipublic.blob.core.windows.net:443

Thanks,
Alex

 

Edited by Alexander Osipov

Alex,

The orchestrator is trying to connect to your private Azure OpenAI endpoint. That is the error shown when the connection to the endpoint fails.

I think the issue is that your private endpoint can't be accessed from the machine where Orchestrator is running.

I will be happy to set up a Zoom call and discuss the details.

Thanks,

Marcelo


  • 4 weeks later...

The solution was found with Marcelo's help.
The following variables in the .env file should be configured like this:

# Plugins Config
RETRIEVER_PLUGIN_ENTRY_POINT="plugins.retrievers.milvus:MilvusRetrieverPlugin"
MODEL_PLUGIN_ENTRY_POINT="plugins.models.az_openai:AzOpenAIPlugin"

This makes the orchestrator use the private Azure OpenAI endpoint.
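For context, entry-point strings of the form "package.module:ClassName" (as in the .env values above) are typically resolved with importlib. This is only an illustrative sketch of that pattern, not the orchestrator's actual loader; a stdlib class stands in for the plugin classes, which are not importable outside the orchestrator:

```python
import importlib

def load_entry_point(entry_point: str):
    """Resolve a "module.path:ClassName" string to the object it names."""
    module_path, _, attr = entry_point.partition(":")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# Illustrative only: resolving a stdlib class the same way a loader would
# resolve "plugins.models.az_openai:AzOpenAIPlugin".
cls = load_entry_point("collections:OrderedDict")
print(cls)  # <class 'collections.OrderedDict'>
```

With this mechanism, pointing MODEL_PLUGIN_ENTRY_POINT at the Azure OpenAI plugin is what directs requests to the private endpoint instead of a default backend.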

