
Ahmad Fattahi

Spotfire Team
  • Posts: 7



  1. Hi Alex, sorry you're running into issues. As per the deployment documentation, you need a vector DB or other search service for document-based questions. When you choose Spotfire mode, the Copilot first consults its local index (the Spotfire documentation we include in the install kit) to retrieve context. The retrieved context is then added to the prompt sent to the generative AI model, or LLM (hence the name Retrieval-Augmented Generation, or RAG). Therefore, you will need a vector DB for any RAG-based query. The upside of this approach is that not only can you easily update the Spotfire documentation over time, you can also add any internal documentation of your own and serve it as additional context to your users. Hope this helps.
  2. We're aiming for the first half of May for the next release.
  3. Hi @John, wondering how your experiment with Spotfire Copilot went.
  4. Is it possible for you to get your desired data view from OpenAI and then have your data source create the aggregated/reshaped view without loading it into Spotfire? Data Virtualization often helps with that. For the initial step, you can load only the metadata (column names, number of rows, column types, etc.) into Spotfire and pass it to OpenAI.
  5. Well, the good news is that we don't do fine tuning. We do prompt engineering to provide the right context and instructions to the foundation model. The architecture is modular, so if and when you prefer to use any LLM endpoint we can simply use the exact same architecture but point it to your LLM of choice instead of Azure or OpenAI.
  6. It is important to note that everything in the Azure architecture happens in your Azure subscription. Therefore, Microsoft will not have access to the data, nor will the data be used to improve the foundation model; this is very similar to other Azure services, and generally in contrast with OpenAI's terms. Let me break the answer down into two scenarios, depending on what you mean by "share with":
     • If you mean your organization needs to keep the data private, that is no problem. As with other Azure services, your data remains your data. The foundation model happens to run on Azure, in your instance only, to operate on your data; the data will not be shared with Microsoft or other organizations.
     • If you mean you cannot or don't want to send your data to any third-party platform whatsoever, that may be an issue today. The few public LLM services available all require data to be sent to them.
  7. Check out the Spotfire Copilot AI tool, now available for download from the Exchange. With this free add-on, any organization using the Spotfire® platform can stand up its own private instance of the Spotfire Copilot tool. What if you could ask Spotfire, "What is the best-selling product and its quarterly revenue in this dashboard?" Or, after installation, ask right inside Spotfire, "How do I add a new Python library?" The way we interact with computers has moved from cryptic punch cards to keyboards and mice to natural language. Through the Spotfire Copilot tool, we are on the cusp of offering our vast user community the same modern and intuitive experience inside the product. See this example, where a Spotfire end user benefits from the Copilot tool's automatic visual analytics prowess while having a smart conversation about formation pressure breakdown in an oil rig. The article below is a window into what we are currently designing: the architecture and the use cases.

The Spotfire Copilot core components

The question answering about oil rigs is done simply through the Spotfire platform's extensibility feature (data functions) calling OpenAI's API on the backend. While this example is very easy to build, we are continuing to work on much more significant enhancements to the Spotfire Copilot tool. Starting from the OpenAI ChatGPT data function, we built a question-answering system centered on Spotfire; the architecture is covered in more detail below. We "fed" the model Spotfire product documentation and multimedia, so the user can ask a Spotfire product question and receive an answer (with citations where applicable). Our latest work is around automatic chart generation in Spotfire. Most major LLMs, including ChatGPT, are currently limited to producing text responses; our Spotfire users, of course, are interested in automating and maintaining Spotfire dashboards composed of visuals.
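The data-function route described above (a Python script calling OpenAI's API from inside Spotfire) can be sketched roughly as follows. This is a minimal, offline sketch under stated assumptions: the function names, the stubbed client, and the model name are illustrative, not the shipped add-on code.

```python
# Minimal sketch of a question-answering data function of the kind described
# above. The client is injectable so the logic can run without a network
# call; in Spotfire you would pass a real OpenAI client instead.
# All names here (ask_llm, FakeClient) are illustrative assumptions.

def ask_llm(question, client, model="gpt-4"):
    """Send a user question to a chat-completion endpoint and return the text."""
    messages = [
        {"role": "system", "content": "You are a Spotfire analytics assistant."},
        {"role": "user", "content": question},
    ]
    response = client.chat_completion(model=model, messages=messages)
    return response["choices"][0]["message"]["content"]

class FakeClient:
    """Stand-in for an OpenAI-style client so the sketch runs offline."""
    def chat_completion(self, model, messages):
        user_text = messages[-1]["content"]
        return {"choices": [{"message": {"content": f"Echo: {user_text}"}}]}

answer = ask_llm("What is the best-selling product?", FakeClient())
print(answer)
```

Because the client is passed in rather than hard-coded, the same script body could point at Azure OpenAI or any other chat-completion service.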
The Spotfire Copilot tool has now evolved into a conversational chatbot with advanced natural-language query abilities: it can answer analytics questions, produce native Spotfire visuals, and perform Spotfire operations.

Short introduction to Large Language Models (LLMs)

NLP and LLMs - Glossary

LLMs are advanced NLP models that use machine learning algorithms to analyze and understand human language. These models are trained on vast amounts of text data, such as books, articles, and websites, allowing them to generate human-like language and perform various language-related tasks. Recent advancements in deep learning and computational power have enabled the development of larger and more powerful language models. These models have been shown to excel at a variety of language tasks, including language generation, text summarization, translation, and sentiment analysis, among others. LLMs employ statistical techniques, or probability distributions over sequences of words, and range in complexity from N-grams to deep learning models. For a more detailed review of language models, refer to this article. For the remainder of this article, we assume that the employed LLMs and accompanying services come from Microsoft Azure unless mentioned otherwise. We have been partnering closely with Microsoft to build a Spotfire Copilot tool that serves various types of Spotfire users and makes their work more efficient.

Why use the Spotfire Copilot tool?

After talking to countless Spotfire users, we have crafted several use cases for the Spotfire Copilot tool. Each use case helps at least one persona be more efficient, less error-prone, or more satisfied working with the product. The use cases cover a few different categories. In one category, the user may be looking for a specific feature of the product itself; the Copilot add-on can expedite and improve the experience of finding the answer to product questions.
In another category, the Spotfire end user may be interested in generating an analytical narration of their data, which may accompany charts and graphs automatically generated from the user's data. In another set of use cases, the user is a developer or data scientist building new analyses inside Spotfire; when they need help with their Python or JavaScript code, the Copilot tool can help them achieve the result faster. The graph below lists a number of these specific use cases with some example prompts.

The architecture

To build a capable, secure, and customizable Copilot tool, several components are required. The logical overview of the architecture is depicted below. In the following short paragraphs, we briefly explain how the Copilot add-on works, keeps the data secure, and still lets the user customize their experience.

At the heart of the user experience is a chat prompt where the user inputs their question or request; that happens in Spotfire. In the simplest architecture, this prompt can be sent directly to an LLM (e.g., Azure OpenAI or OpenAI) through a Spotfire® Data Function, and the result is parsed and presented to the user. While this is useful, we aim to embed a much more powerful Copilot tool inside the product. Under the hood, the prompt is significantly processed before it is sent to the LLM. This whole process is managed by a component called the "Orchestrator." The Orchestrator first runs the raw prompt through a search engine, vector database, or other cognitive service in order to acquire context around the user's prompt. Imagine two scenarios in which the user asks, "How do I install a Python package?" In the first scenario, the question is sent directly to the LLM. The LLM doesn't have the Spotfire context, so the answer will likely be the most common one: installing the package in a notebook through PyPI.
In the second scenario, the raw prompt is first used to search an indexed set of documents, and the resulting content is passed to the LLM as context. That way, the LLM knows how to answer the question in the Spotfire context. The indexed documents are Spotfire product documentation, relevant articles, or any set of curated questions and answers. On top of that, and very importantly, we are designing features that let any admin user plug in other documents per the needs of the business. That way, the user of the Spotfire Copilot tool will be able to converse about internal documents, nomenclature, subject matter, policies, or anything else that is provided to the search engine. In other words, the Copilot add-on is extensible.

All of these processes and data flows are handled with the highest levels of security and privacy standards in mind. Our partner, Microsoft, provides several additional safety components that ensure the security and relevance of the user experience in Azure. We also provide citations with the resulting text so that the user can verify the answer against its references. We specifically instruct the LLM to avoid hallucinations and respond only within the provided context.

Compare fine-tuning and prompt engineering

One of the hot topics in employing LLMs is fine-tuning vs. prompt engineering. LLMs are often called foundation models. These foundation models, such as GPT-4, are trained on massive amounts of data, are very large (many billions of parameters), and are often frozen in time. When users need to customize their experience by providing context, the customization can happen in one of two ways. The more obvious way is to fine-tune the foundation model with a relevant dataset, the same way you would retrain any machine learning model. This process employs a significant set of prompt-completion pairs (at least a few hundred) curated by human vetters.
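To make the "prompt-completion pairs" concrete, here is a sketch of how such a curated set might look serialized as JSONL, the one-object-per-line layout that hosted fine-tuning services typically ingest. The field names and the answer text are illustrative assumptions; the exact schema varies by service.

```python
# Illustrative fine-tuning dataset: a few prompt-completion pairs serialized
# as JSONL (one JSON object per line). Field names and answer text are
# placeholders, not an exact vendor schema.
import json

pairs = [
    {"prompt": "How do I install a Python package in Spotfire?",
     "completion": "See the Spotfire documentation on package management."},
    {"prompt": "How do I add a data function?",
     "completion": "See the Spotfire documentation on data functions."},
]

jsonl = "\n".join(json.dumps(p) for p in pairs)
print(jsonl)
```

In practice a usable fine-tuning set needs hundreds of such vetted pairs, which is exactly the curation cost the article discusses next.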
This curated set is fed into a special process that adjusts a subset of weights in the model based on the provided context. While this approach may sound obvious, it comes at a significant cost. Fine-tuning is often quite pricey, and building the curated list of prompt-completion pairs is itself resource-intensive and expensive. At the time of writing, the fine-tuning processes for LLMs are not quite stable either. The incremental improvement in performance must be large enough to justify the fine-tuning process.

In contrast, the process of adjusting, padding, and otherwise engineering the raw prompt before it is sent to the LLM can achieve many of the desired benefits at a fraction of the cost. This process, aptly called prompt engineering, is much more common these days than fine-tuning. The first step is to send a system message to the LLM to provide operational instructions, the tone, and the parameters of the conversation. The prompt is also passed to a search engine separately, as explained in the last segment, and the results from the search service serve as context for the foundation model. Adding the resulting context to the raw prompt before sending it to the LLM is another significant step in prompt engineering. More advanced features, such as preserving the history of the conversation, bifurcating the search depending on the type of request, and choosing different models based on the prompt, can all be handled by the Orchestrator in the prompt engineering approach. Libraries such as LangChain and Semantic Kernel provide several advanced features for prompt engineering. The table below depicts the two approaches and compares their relative pros and cons.

ChatGPT deployment in Spotfire

The first demo in this article showed an example of ChatGPT in Spotfire.
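The prompt-engineering steps described above (system message, retrieval, context augmentation) can be sketched as follows. The keyword-overlap scorer is a deliberately simple, offline stand-in for a real search service or vector DB, and all document names and helpers are illustrative assumptions.

```python
# Sketch of retrieval-augmented prompt assembly: retrieve context for the
# raw prompt, then build the system message + augmented user message that
# would be sent to the LLM. The keyword scorer stands in for a real
# embedding search; document names are made up for illustration.

DOCS = {
    "install-packages.html": "To install a Python package in Spotfire use the built-in package management tools.",
    "data-functions.html": "Data functions let you run Python or R scripts inside Spotfire.",
}

def retrieve(question, docs, k=1):
    """Rank documents by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, docs):
    """Assemble system message + retrieved context + user question."""
    hits = retrieve(question, docs)
    context = "\n".join(f"[{name}] {text}" for name, text in hits)
    return [
        {"role": "system",
         "content": "Answer only from the provided context and cite sources."},
        {"role": "user",
         "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

messages = build_prompt("How do I install a Python package?", DOCS)
print(messages[1]["content"])
```

Because the retrieved snippets are labeled with their source names, the LLM's answer can carry the citations the article mentions; swapping the toy scorer for an embedding search changes nothing else in the flow.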
Embedding this service inside Spotfire can be easily accomplished through a Spotfire® Data Function. All you need is your own OpenAI API key in the code, which can be obtained from your OpenAI account. If you prefer other APIs, you can simply swap OpenAI for the service of your choice; many simple examples of similar deployments are available as open source online. Note that any data sharing would be as if the data were shared with the API service: always check your organization's policies before using generative AI services, and never share sensitive data or information.

Have questions? You can email datascience@spotfire.com or leave a comment on this post!

For an overview of all NLP and LLM tools for Spotfire, see this article: NLP and LLMs in Spotfire. For the most commonly used terminology in NLP and LLMs, see this article: NLP and LLMs - Glossary