Naoki Nakamichi Posted June 18

Hello, I am looking for guidance on how to configure and deploy an on-premises LLM (e.g., Llama3) with Spotfire Copilot. Specifically, I have two main questions:

1. How should the .env file be configured if we want to use an on-premises LLM instead of a cloud-based LLM such as OpenAI?
2. What are the necessary steps and requirements to deploy the on-premises LLM as a service so it can be integrated with Copilot?

Any detailed instructions, best practices, or examples would be greatly appreciated. Thank you!
Prashant Maske Posted Friday at 09:35 AM

Hello Naoki,

First of all, you will need to run the model locally and then make it accessible as a web service that Spotfire Copilot can communicate with. Once the service is running, point your Spotfire Copilot instance at it by setting the LLM service URL in the .env file.

Thanks
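For illustration only, here is a minimal sketch of those two steps, assuming the model is served locally through Ollama (which exposes an OpenAI-compatible API at http://localhost:11434/v1) and using placeholder .env variable names (LLM_BASE_URL, LLM_API_KEY, LLM_MODEL). These names are assumptions, not the actual Spotfire Copilot keys; check your Copilot version's documentation for the exact variable names it reads.

```python
# Sketch only: assumes the model is served locally with Ollama, which exposes an
# OpenAI-compatible API at http://localhost:11434/v1. The variable names below
# (LLM_BASE_URL, LLM_API_KEY, LLM_MODEL) are hypothetical placeholders, not the
# actual Spotfire Copilot .env keys -- consult the Copilot documentation for those.
#
# Example .env contents (hypothetical keys):
#   LLM_BASE_URL=http://llm-host.internal:11434/v1
#   LLM_API_KEY=not-needed-for-local   # many local servers ignore the key
#   LLM_MODEL=llama3
#
# Start the model first, for example:
#   ollama pull llama3
#   ollama serve          # or run it as a system service

import os
from openai import OpenAI  # pip install openai

# Read the same values Copilot would read from its .env file.
base_url = os.getenv("LLM_BASE_URL", "http://localhost:11434/v1")
api_key = os.getenv("LLM_API_KEY", "not-needed-for-local")
model = os.getenv("LLM_MODEL", "llama3")

client = OpenAI(base_url=base_url, api_key=api_key)

# Quick smoke test: if this returns a reply, the endpoint is reachable, and the
# same URL and model name can be pointed to from the Copilot .env file.
response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```

The general idea is that the local service should speak an API format the Copilot code already understands (typically the OpenAI chat-completions format), so switching from a cloud LLM to an on-premises one is usually mostly a matter of changing the base URL, API key, and model name in the .env file.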