Marcelo Gallardo

Spotfire Team
  • Posts: 15
  • Days Won: 1

Marcelo Gallardo last won the day on September 8 and had the most liked content!


Marcelo Gallardo's Achievements

  1. Hi Naoki, Thank you for your inquiry. For implementing Spotfire Copilot, we recommend allocating the following resources for both the Data Loader and Orchestrator Docker images:
     Minimum specifications:
     - CPU: 2 cores
     - Memory: 4 Gi
     This should provide a baseline for stable performance (see the sketch below for one way to apply these limits). Depending on the workload, increasing these resources may be necessary. If your client anticipates heavy usage, let us know, and we can provide more detailed recommendations based on specific use cases. Feel free to reach out if you need further information or clarification. Best regards, Marcelo Gallardo
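     As a minimal sketch (assuming plain Docker rather than Kubernetes; the image name and tag are placeholders, not the official ones), these limits map to docker run's --cpus and --memory flags:

        # Illustrative only: image name and tag are placeholders.
        docker run -d --name copilot-orchestrator \
          --cpus=2 --memory=4g \
          example/copilot-orchestrator:latest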
  2. Hello @Markku Mikkola. Apologies for the ongoing difficulties with your setup. Since you're working on an on-premise deployment that includes embedding models, could you please provide more details about the specific models you're using? Currently, the data loader is configured to assume the use of OpenAI's embedding model, which is why it’s attempting to connect to OpenAI. Once we have the details of the models you're working with, I can share the correct configuration and URL for a data loader that supports your chosen embedding model. If you'd prefer to discuss your environment further, feel free to schedule a call, and we'd be happy to assist you. Best regards, Marcelo Gallardo
  3. Hello Markku, Thank you for reaching out! We’re happy to assist with your inquiries regarding Spotfire Copilot.
     First, could you let us know which version of Copilot you’re using? The ability to configure local models was introduced in later releases, so this will help us guide you accordingly.
     Additionally, please share the interface your local model uses. We currently support the Ollama interface for Llama models, but if you're using a different interface, we can assist in providing the necessary plugin. Please note that we support all models that are compatible with the Langchain library.
     Regarding the data loaders, if you’re using a local embedding model, some configuration will be required to properly integrate it with the data loader. We can help walk you through this setup if needed.
     Let us know if this answers your question, or if you’d prefer, we can schedule a call to discuss further and provide more detailed guidance. Best regards, Marcelo Gallardo
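     Before wiring a local model into Copilot, it can help to confirm the Ollama endpoint responds on its own. A minimal sketch, assuming Ollama's default port 11434 and a locally pulled model (the model name "llama3" is a placeholder):

        # Ask the local Ollama server for a single non-streamed completion.
        curl http://localhost:11434/api/generate \
          -d '{"model": "llama3", "prompt": "Hello", "stream": false}'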
  4. Niraj, Thank you for bringing up your client's concerns about data security when using large language models (LLMs). We take these concerns seriously and prioritize the protection of your data at every step. I would like to assure you that we follow strict security protocols when interacting with LLMs to mitigate any potential risks. Here are some key measures we have in place to ensure the security and privacy of your data:
     - Data Privacy: We take every precaution to ensure that sensitive data is only shared with LLMs when absolutely necessary. Any data processed is carefully managed, and we actively work to minimize the amount of data sent to the LLM.
     - Secure Transmission: All data interactions with LLMs occur over encrypted channels (using HTTPS), ensuring that no unauthorized parties can access the information during transmission.
     - No Data Retention: Many LLM providers, including OpenAI, have strict policies in place to ensure data is not stored after processing. We routinely review these policies with external vendors to ensure they meet these security standards.
     Furthermore, Copilot is LLM-agnostic, allowing you to select the LLM that best fits your data privacy requirements. Feel free to contact us if you need any additional information. We'd be happy to provide more details and assist you further. Regards, Marcelo Gallardo
  5. @Niraj Thanks for the feedback, and glad you resolved the issue.
  6. @Niraj Could you please share the logs for the Orchestrator? Thanks, Marcelo
  7. Niraj, We have observed those issues in the past, and they were resolved by increasing the rate limit on the models used by the Orchestrator. Also, do you have access to OpenAI? You could try that to see whether you get the same rate-limit issues. Thanks, Marcelo
  8. Hello Niraj, Sorry you are having issues with Copilot. Could you please provide the following information:
     - What version of the Copilot Orchestrator are you using?
     - Are you connecting to Azure OpenAI or OpenAI?
     - Can you access the Cognitive Search URL from the machine running the Orchestrator?
     Thanks, Marcelo Gallardo
  9. @Gerard Conway 2 Thank you for providing the details about the issue you're experiencing. I understand that the increased number of parameters might be confusing. Let me address each of your concerns:
     Orchestrator Required Parameters:
     The required parameters for configuring the Orchestrator container largely depend on the specific Large Language Model (LLM) and vector data store being utilized, which are controlled by the plugin parameters MODEL_PLUGIN_ENTRY_POINT and RETRIEVER_PLUGIN_ENTRY_POINT.
     The LANGCHAIN_... parameters are optional and are used to enable Langsmith. You can set LANGCHAIN_TRACING_V2="false" and leave the other parameters blank if you don't want to use Langsmith.
     VECTOR_DB_TOKEN is indeed used for authentication purposes. If your Milvus setup does not require authentication, you can leave this parameter blank; the system should handle it gracefully. If authentication is enabled, this should be set to username:password.
     Running a Test Query Against the Orchestrator REST Endpoint:
     The endpoint is provided just to test very basic functionality. The documentation for each parameter can be seen by looking at the request schema in the Swagger documentation. To execute a simple test against the Orchestrator and make sure it is functional, set the following parameters in your request body (see the curl sketch below):
     {
       "user_intent": "UserIntent",
       "prompt": "How to create a bar chart?"
     }
     I hope this clarifies your questions. If you have any further doubts or need assistance with any specific configuration, feel free to reach out. We're here to help you get everything running smoothly. Regards, Marcelo Gallardo
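     A minimal sketch of that test call, assuming the Orchestrator listens locally on port 8080 (host, port, and the /query path are placeholders; check your deployment's Swagger documentation for the actual route):

        # Hypothetical endpoint path; verify against the Orchestrator's Swagger docs.
        curl -X POST http://localhost:8080/query \
          -H "Content-Type: application/json" \
          -d '{"user_intent": "UserIntent", "prompt": "How to create a bar chart?"}'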
  10. @Naoki Nakamichi Thank you for providing the details about the issue you're experiencing. Based on the information you shared, everything appears to be configured correctly. However, to further diagnose the issue, we need to review the Orchestrator container logs. We would appreciate it if you could share these logs with us. If you need any assistance with retrieving the logs, please let us know, and we’ll be happy to guide you through the process. Thank you for your cooperation and understanding. We look forward to resolving this issue promptly. Marcelo Gallardo
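     For reference, a hedged example of capturing those logs with the Docker CLI (the container name is a placeholder; use docker ps to find the actual one):

        # Find the Orchestrator container, then save its recent log output to a file.
        docker ps
        docker logs --tail 500 <orchestrator-container-name> > orchestrator.log 2>&1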
  11. Alex, The Orchestrator is trying to connect to your private Azure OpenAI endpoint; that is the error shown when the connection to the endpoint fails. I think the issue is that your private endpoint can't be accessed from the machine where the Orchestrator is running. I will be happy to have a Zoom call and discuss the details. Thanks, Marcelo
  12. Hello Alex, I will be happy to assist. Could you please let us know where the container for the Orchestrator is running? It seems like it is deployed on a different machine from where your client browser is running. Please verify the network connection between the machine hosting the Orchestrator and Azure OpenAI. You might have to manage access to your Azure OpenAI endpoint and allow access from the machine hosting the Orchestrator. Let us know if you need further assistance and we will be happy to help. Regards, Marcelo Gallardo
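     One quick way to test that connectivity from the Orchestrator host (a sketch; the resource name is a placeholder for the actual Azure OpenAI endpoint):

        # Run on the machine hosting the Orchestrator. Any TLS response (even 401/404)
        # means the endpoint is reachable; a timeout suggests a network or firewall block.
        curl -sv https://<your-resource>.openai.azure.com/ -o /dev/null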
  13. @Niraj Good to hear your issue got resolved. Please feel free to contact us with any other issues or suggestions. We will be delivering a new version that fixes a few issues in the next few days. Stay tuned. Marcelo
  14. Hello @Niraj, Please make sure the password you are using matches the value used to obtain the hashed password, as indicated in the instructions:
     - Obtain a secret key, which is used to sign the JWT tokens. You can obtain the key using openssl (e.g., openssl rand -hex 32).
     - Obtain a bcrypt hashed password for the admin user. You can obtain a hashed password online from https://bcrypt-generator.com/.
     During the installation process, an admin user is set up with a bcrypt hashed password. This hashed password will be used to authorize access to the service’s administrative functions and for generating JWT (JSON Web Token) tokens. The tokens will be used for authentication while accessing the service. Please save the password used, as it will be needed to get the JWT token.
     Also note an issue related to Docker Compose when using environment variables that contain the “$” character, especially when working with hashed passwords: Docker Compose treats “$” as the start of a variable interpolation, and bcrypt hashes always contain it (they begin with a prefix such as $2a$ or $2b$). If your hashed password contains a “$”, please replace each “$” character with “$$” to ensure that the password is correctly interpreted. A sketch of these steps follows below.
     Please let us know if this helps you resolve the issue. Marcelo Gallardo
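     A minimal sketch of those steps (step 2 uses htpasswd from apache2-utils as a local alternative to the online generator; the password shown is a placeholder):

        # 1. Generate the secret key used to sign JWT tokens.
        openssl rand -hex 32

        # 2. Generate a bcrypt hash locally (alternative to https://bcrypt-generator.com/).
        htpasswd -bnBC 10 "" 'MyAdminPassword' | tr -d ':\n'

        # 3. Escape each "$" as "$$" before placing the hash in a Docker Compose file.
        echo '<paste-hash-here>' | sed 's/\$/$$/g'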
  15. @Gerard Conway 2 Thanks for your feedback; we are aware of the issue with OpenAI. Our next release, coming in a few weeks, addresses it. You can still try the current release if you have access to Azure OpenAI. Thanks for your patience. Marcelo.