jie li 3 Posted March 1

Hello all, here is the example code in my Python data function:

```python
import warnings
warnings.filterwarnings("ignore")

import pandas as pd
import io
import sys

# pdtable is the input parameter (the Spotfire data table, delivered as a pandas DataFrame)
# str1 is the output parameter
pd_temp = pdtable
str1 = "Hello"
```

My question is: when the size of the input table exceeds 500,000 rows, loading the table is very slow; it takes about 7 minutes. Is there a way to reduce the loading time? Can the data function read SBDF files from the library?
Gaia Paolini Posted March 1

Apart from specific loading timing considerations, have you considered reducing the number of columns that you send to the data function, or reducing the number of rows by filtering or marking?
Andreas Laestadius Posted March 1

There is an API for uploading and downloading SBDF files to and from the library using the server's REST API, available from Spotfire 14. There are also functions for dealing with SBDF here: https://github.com/spotfiresoftware/spotfire-python/tree/main/spotfire

However, the internals already go DataTable -> SBDF -> pandas when using Python data functions, so it is not clear to me that implementing this separately would give obvious performance gains.

Hope this helps, Andreas
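If you want to experiment with reading SBDF files directly (for example, a file exported from the library to a local path), here is a minimal sketch using the spotfire-python package linked above. The file paths are placeholders, and this assumes the package's import_data/export_data helpers behave as documented:

```python
# Minimal sketch, assuming the spotfire-python package is installed (pip install spotfire)
# and that an SBDF file has already been downloaded to a local path (placeholder below).
import spotfire.sbdf as sbdf

df = sbdf.import_data("C:/temp/mytable.sbdf")   # returns a pandas DataFrame
print(df.shape, list(df.columns))

# Writing a DataFrame back out to SBDF works the same way:
sbdf.export_data(df, "C:/temp/mytable_out.sbdf")
```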
jie li 3 Posted March 5 (Author)

Thank you Gaia and Andreas. The purpose of loading the Spotfire table into the Python data function is to offer a feature where users can ask questions about the dataset through the OpenAI API. I send OpenAI:

- the question the user asked in the Spotfire UI
- the column names of the table

OpenAI then returns Python code describing how to filter the dataset and answer the question, the Python data function executes the code OpenAI provided, and finally the user gets the answer from OpenAI.

I have the whole process working, but when the data table is very big the waiting time is unacceptable. I found that the most time-consuming step is loading the Spotfire table as a parameter into the Python data function. That is why I am asking here whether there is a way to reduce the table loading time. I am on Spotfire 11.4 LTS.

Thanks,
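For reference, a rough sketch of how such a question-answering loop inside the data function might look. This assumes the openai Python package (1.x client), a placeholder model name, and input parameters pdtable (the table) and user_question (the question text); apart from pdtable, none of these names come from the original post:

```python
# Sketch only: assumes openai>=1.0 is installed and OPENAI_API_KEY is set in the environment.
# pdtable and user_question are assumed data-function inputs; answer is the assumed output.
import pandas as pd
from openai import OpenAI

client = OpenAI()

prompt = (
    "You are given a pandas DataFrame named df with these columns: "
    f"{list(pdtable.columns)}. "
    f"Write Python code that answers: {user_question}. "
    "Store the final result in a variable named result."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
generated_code = response.choices[0].message.content
# Real responses often wrap the code in markdown fences, which would need to be stripped first.

# Execute the generated code against the DataFrame.
# exec() on model-generated code is risky; review or sandbox it in a real deployment.
namespace = {"df": pdtable, "pd": pd}
exec(generated_code, namespace)
answer = str(namespace.get("result", ""))
```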
Ahmad Fattahi Posted March 6

Is it possible for you to get your desired data view from OpenAI and then have your data source create the aggregated/reshaped view without loading it into Spotfire? Data virtualization often helps with that. For the initial step you could load only the metadata (column names, number of rows, column types, etc.) into Spotfire and pass that to OpenAI.
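As a rough illustration of the metadata-only idea (names here are illustrative, not from the original posts), the data function could receive the table, or even just a small sample of it, and forward only a compact schema description to OpenAI:

```python
# Sketch: build a compact, text-only description of a table so only metadata
# (not the full data) has to be sent to OpenAI. pdtable is the assumed input parameter;
# in practice it could be restricted to a handful of rows before it reaches the function.
import pandas as pd

def describe_table(df: pd.DataFrame) -> str:
    lines = [f"rows: {len(df)}"]
    for name, dtype in df.dtypes.items():
        lines.append(f"column: {name} ({dtype})")
    return "\n".join(lines)

metadata_summary = describe_table(pdtable)
```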
jie li 3 Posted March 7 (Author)

I have one more question. In a Spotfire Python data function, is there any other way to read a Spotfire table besides passing it through an input parameter? Users ask questions such as "the number of unique cases of Adverse event PT with Toxicity grade 1, and list these unique cases". For example, if I have an xyz table in Spotfire, how can I load it into a pandas DataFrame directly?
Gaia Paolini Posted March 7

You cannot bypass the input parameters, as far as I know. If you only want to load unique cases, you could perform some transformations and filtering before you load the data into the data function.