Vijesh
Member · 22 posts
Hi David,

After following the steps in the article below, I was able to achieve this. Reference: https://support.tibco.com/external/article?articleUrl=How-to-use-one-filter-to-control-two-tables-in-TIBCO-Spotfire

Thanks,
Vijesh
-
Hi David,

Thanks for your inputs. I related the data tables (demo and summary) on Case ID through 'Manage Relations' as shown below, and then selected Case ID '13957155' in the 'demo' filter. However, the filter was applied only to the Demo visualization; in the Summary visualization I can still see all the Case IDs (in the screenshot you can see 15309436 coming along with 13957155). So it looks like the filter is not being applied to the Summary visualization/data table even though the data tables are related. Let me know if I am missing something here.

Thanks,
Vijesh
-
Hi Team,

Is it possible to apply a filter on Page 1 and have that filter apply to all pages, i.e. Page 1, Page 2, ... Page n? Each page has its own data table, and each data table is backed by a SQL query:

Data table 1: select col1,col2,col3,col4,col5 from table1; -- used for the Page 1 visualizations
Data table 2: select col1,col6,col7,col8,col9 from table2; -- used for the Page 2 visualizations
Data table n: select col1,col50,col51,col52,col53 from tablen; -- used for the Page n visualizations

The common column in all tables is col1; the remaining columns are different. We are not joining the tables on col1 into one single data table because the record counts run into billions, so we created a separate data table for each page and built each page's visualizations on it.

The filters we expose are col1, col2, col3, col6, col7, col50 and col51. If a filter is applied on col2, then whatever col1 records remain after filtering on Page 1 must also be the col1 records shown on Page 2 ... Page n.

All the queries share a common WHERE condition, i.e. col1 IN (1,2,3,4,5,6,7,8,9,10,...,n). The values inside the IN list change for each run, e.g. col1 IN (4,5,6,7), col1 IN (1,2,3,4,5), col1 IN (5,6,7,8), etc.

For example, if the first run has the condition (1,2,3,4,5,6,7,8) and applying the col2 filter leaves col1 values 1,2,3,4,5,6, then on Page 2 ... Page n col1 must also be restricted to 1,2,3,4,5,6. Further, if on Page 2 a col6 filter is applied and the col1 values come out as 1,2,3,4, then on all pages col1 must be 1,2,3,4. If we clear the col2 filter, then col1 on all pages must be 1,2,3,4,5,6. If we remove the col1 filter, then col1 on all pages must be 1,2,3,4,5,6,7,8.
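The behaviour described above amounts to intersecting the base IN-list with every active per-page restriction on col1. In Spotfire this would typically be wired up with a shared filtering scheme or an IronPython script driven by a document property; the following is only a plain-Python sketch of the intended logic (the filter names and values are illustrative, not Spotfire API):

```python
def effective_col1(base_in_list, active_filters):
    """base_in_list: the col1 values from the common WHERE clause.
    active_filters: dict mapping a filter name (e.g. 'col2') to the set of
    col1 values that filter currently allows. The result is the col1 set
    that every page should display."""
    result = set(base_in_list)
    for allowed in active_filters.values():
        result &= allowed          # each active filter further restricts col1
    return sorted(result)

base = [1, 2, 3, 4, 5, 6, 7, 8]            # col1 IN (1..8) for this run
filters = {"col2": {1, 2, 3, 4, 5, 6}}     # col2 filter leaves these col1 values
print(effective_col1(base, filters))       # [1, 2, 3, 4, 5, 6]

filters["col6"] = {1, 2, 3, 4}             # col6 filter applied on Page 2
print(effective_col1(base, filters))       # [1, 2, 3, 4]

filters.clear()                            # all filters removed
print(effective_col1(base, filters))       # [1, 2, 3, 4, 5, 6, 7, 8]
```

The same intersection would be recomputed whenever any page's filter changes, and the resulting col1 set pushed to the col1 filter of every other data table.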
-
Thanks Soumya for the response. We prefer the 2nd option, i.e. an IronPython script to control the data refresh. In IronPython, is there a way to disable calculated-column processing/computation? I can see options for refreshing a data table (ReloadAllData, ReloadLinkedData, RefreshOnDemandData), and for removing the expression from a calculated column, disabling calculated columns, or removing calculated columns through an IronPython script, but these options will not help us. We are not going with the 1st option because we have 88 calculated columns on a single data table formed by joining 10 CSV files; converting those 88 columns to embedded data would require a re-design of the 10 CSV generation steps and of the visualizations, so we are not preferring that option for now.
-
Thanks, David and Olivier, for the support.
-
Hi Team,

Is there a way to skip calculated-column processing when opening a Spotfire report, or is there a way to store the calculated-column data in the analysis? We have saved the report with the options below, but while loading the report we still get the "Calculating columns" message during analysis load.
-
Hi Team,

I am working on a Spotfire dashboard and trying to dynamically change the file path of the data source (CSV files) for each user, and then reload the data for that table. Below is my script (dynamicPath.py), but I encounter an error at the line op.Reload():

if op.CanReload():
    op.Reload()
    print("DataSourceOperations Reloaded Successfully !")

dynamicPath.py:

from Spotfire.Dxp.Data.DataOperations import DataSourceOperation, DataOperation
from Spotfire.Dxp.Application import DocumentMetadata
from Spotfire.Dxp.Data import *
from Spotfire.Dxp.Data.Import import *
from System.IO import *
from Spotfire.Dxp.Framework.Library import LibraryManager, LibraryItemRetrievalOption

# Define the path to the CSV files (adjust according to your file structure)
final_path = Document.Properties["filepath"] + str(Document.Properties["udrrunid"]) + "_" + str(Document.Properties["masterid"]) + "\\"
Document.Properties["finalpath"] = final_path
print("The value of the document property is:", Document.Properties["finalpath"])

# Construct the path to each CSV file
case_file = Path.Combine(Document.Properties["finalpath"], "case.csv")
demo_file = Path.Combine(Document.Properties["finalpath"], "demo.csv")
drug_file = Path.Combine(Document.Properties["finalpath"], "drug.csv")
reac_file = Path.Combine(Document.Properties["finalpath"], "reac.csv")
drug_seq_file = Path.Combine(Document.Properties["finalpath"], "drug_seq.csv")
therapy_seq_file = Path.Combine(Document.Properties["finalpath"], "therapy_seq.csv")
event_seq_file = Path.Combine(Document.Properties["finalpath"], "event_seq.csv")
smq_file = Path.Combine(Document.Properties["finalpath"], "smq.csv")
product_groups_file = Path.Combine(Document.Properties["finalpath"], "product_groups.csv")
event_groups_file = Path.Combine(Document.Properties["finalpath"], "event_groups.csv")

#print case_file
print demo_file
#print drug_file
#print reac_file
#print drug_seq_file
#print therapy_seq_file
#print event_seq_file
#print smq_file
#print product_groups_file
#print event_groups_file

# Function to print the table name, type, and source
def print_info(table_name, source_type, source):
    print("Data Table:", table_name, "Type:", source_type, "Source:", source)

# Iterate through all data tables in the document and repoint each CSV data source
for tbl in Document.Data.Tables:
    sourceView = tbl.GenerateSourceView()
    ops = sourceView.GetAllOperations[DataOperation]()
    for op in ops:
        if isinstance(op, DataSourceOperation):
            t = op.GetDataFlow().DataSource
            if type(t).__name__ in ["TextFileDataSource"] and "case.csv" in t.FilePath:
                t.FilePath = case_file
                #print_info(tbl.Name, type(t).__name__, t.FilePath)
            elif type(t).__name__ in ["TextFileDataSource"] and "demo.csv" in t.FilePath:
                t.FilePath = demo_file
            elif type(t).__name__ in ["TextFileDataSource"] and "drug.csv" in t.FilePath:
                t.FilePath = drug_file
            elif type(t).__name__ in ["TextFileDataSource"] and "reac.csv" in t.FilePath:
                t.FilePath = reac_file
            elif type(t).__name__ in ["TextFileDataSource"] and "drug_seq.csv" in t.FilePath:
                t.FilePath = drug_seq_file
            elif type(t).__name__ in ["TextFileDataSource"] and "therapy_seq.csv" in t.FilePath:
                t.FilePath = therapy_seq_file
            elif type(t).__name__ in ["TextFileDataSource"] and "event_seq.csv" in t.FilePath:
                t.FilePath = event_seq_file
            elif type(t).__name__ in ["TextFileDataSource"] and "smq.csv" in t.FilePath:
                t.FilePath = smq_file
            elif type(t).__name__ in ["TextFileDataSource"] and "product_groups.csv" in t.FilePath:
                t.FilePath = product_groups_file
            elif type(t).__name__ in ["TextFileDataSource"] and "event_groups.csv" in t.FilePath:
                t.FilePath = event_groups_file
            if op.CanReload():
                op.Reload()
                print("DataSourceOperations Reloaded Successfully !")

# Reload all data tables to reflect the new data sources
for tbl in Document.Data.Tables:
    tbl.ReloadAllData()
    print("Reloaded All Data successfully")
    tbl.ReloadLinkedData()
    print("Reloaded Linked Data successfully")
    Document.Data.Tables.RefreshOnDemandData()
    print("RefreshOnDemand Data successfully")
    tbl.Refresh()

The idea is to reload the DataSourceOperations so that the dashboard loads data from each user's data source when the user accesses it.

Error:

Traceback (most recent call last):
  File "<string>", line 72, in <module>
StandardError: Exception has been thrown by the target of an invocation.
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
---> Spotfire.Dxp.Data.Exceptions.ImportException: Failed to execute data source query for data source "drug".
---> System.IO.IOException: The device is not ready.
   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost)
   at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
   at Spotfire.Dxp.Data.Import.FileDataSource.OpenFileStream(String file, Boolean deleteFileOnClose)
   at Spotfire.Dxp.Data.Import.TextFileDataSource.CreateDataRowReader(IServiceProvider serviceProvider)
   at Spotfire.Dxp.Data.DataSourceConnection.ExecuteQuery2()
   --- End of inner exception stack trace ---
   at Spotfire.Dxp.Data.DataSourceConnection.ExecuteQuery2()
   at Spotfire.Dxp.Data.DataFlow.Execute()
   at Spotfire.Dxp.Data.DataFlow.DataFlowConnection.ExecuteQueryCore2()
   at Spotfire.Dxp.Data.DataSourceConnection.ExecuteQuery2()
   at Spotfire.Dxp.Data.Producers.SourceColumnProducer.<>c__DisplayClass76_0.<CreateView>b__0()
   at Spotfire.Dxp.Framework.ApplicationModel.Progress.ExecuteSubtask(String title, ProgressOperation operation)
   at Spotfire.Dxp.Data.Producers.SourceColumnProducer.CreateView(CxxSession session, DataPropertyRegistry propertyRegistry, GlobalMethodRegistry globalMethodRegistry, DataSourceConnection connection, IDataPropertyContainer defaultColumnProperties, PartialDataLoadReport& partialLoadReport)
   at Spotfire.Dxp.Data.Producers.SourceColumnProducer.GetColumnsAndProperties(DataSourceConnection connection)
   at Spotfire.Dxp.Data.Persistence.DataItem.PerformUpdate(SourceColumnProducer producer, DataSourceConnection connection)
   at Spotfire.Dxp.Data.Persistence.DataItem.Update(SourceColumnProducer producer, DataSourceConnection connection)
   at Spotfire.Dxp.Data.Persistence.DataPool.<LoadData>d__15.MoveNext()
   at Spotfire.Dxp.Internal.EnumerableExtensions.<EnumerateWithExceptionHandling>d__7`2.MoveNext()
   --- End of inner exception stack trace ---
   at Spotfire.Dxp.Data.DataTable.<>c.<Refresh>b__257_0(Exception e)
   at Spotfire.Dxp.Data.DataTable.BeginRefresh(Boolean reportColumnsChangedResultToNotificationService, Boolean async, Action`1 continuation, DisposableCollection disposableCollection, Predicate`1 isLeafAffected, Boolean refreshEmbeddedSources, DataLoadSettings loadSettings)
   at Spotfire.Dxp.Data.DataTable.Refresh(Boolean reportColumnsChangedResultToNotificationService, Predicate`1 isLeafAffected, Boolean refreshEmbeddedSources, DataLoadSettings loadSettings)
   at Spotfire.Dxp.Data.DataOperations.DataOperation.ReloadCore(DataLoadSettings loadSettings)
   at Microsoft.Scripting.Interpreter.ActionCallInstruction`1.Run(InterpretedFrame frame)
   at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)
   at Microsoft.Scripting.Interpreter.LightLambda.Run3[T0,T1,T2,TRet](T0 arg0, T1 arg1, T2 arg2)
   at System.Dynamic.UpdateDelegates.UpdateAndExecute2[T0,T1,TRet](CallSite site, T0 arg0, T1 arg1)
   at Microsoft.Scripting.Interpreter.DynamicInstruction`3.Run(InterpretedFrame frame)
   at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)
   at Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 arg1)
   at IronPython.Compiler.PythonScriptCode.RunWorker(CodeContext ctx)
   at Microsoft.Scripting.Hosting.ScriptSource.Execute(ScriptScope scope)
   at Spotfire.Dxp.Application.IronPython27.IronPythonScriptEngine.ExecuteForDebugging(String scriptCode, Dictionary`2 scope, Stream outputStream)

Kindly help me fix this error. Thanks in advance for any suggestions!

Regards,
Vijesh
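The inner System.IO.IOException "The device is not ready" usually indicates that the drive in the rebuilt path (e.g. a mapped or local drive letter) is not accessible from the machine actually executing the script, such as a Web Player node. One defensive approach is to verify that each rebuilt CSV path exists before triggering any reload. A plain-Python sketch of that check (directory and file names here are illustrative):

```python
import os

def resolve_csv_paths(base_dir, filenames):
    """Build the full path for each CSV under base_dir and separate the
    ones that exist from the ones that are missing, so the caller can
    skip the reload instead of failing with an IOException mid-way."""
    resolved, missing = {}, []
    for name in filenames:
        path = os.path.join(base_dir, name)
        if os.path.isfile(path):
            resolved[name] = path
        else:
            missing.append(path)
    return resolved, missing

# Hypothetical per-run directory; in the script above this would come from
# the "finalpath" document property.
files = ["case.csv", "demo.csv", "drug.csv"]
resolved, missing = resolve_csv_paths("/some/run/dir", files)
if missing:
    print("Not reloading; missing files:")
    for p in missing:
        print(" ", p)
```

Only when `missing` is empty would the script go on to repoint the data sources and call op.Reload().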
-
Yes, indeed. Thanks @Rae Chen. We have used the code below, and it prints the location of the CSV files used in the Spotfire analysis:

# Function to print the table name, type, and source
def print_info(table_name, source_type, source):
    print("Data Table:", table_name, "Type:", source_type, "Source:", source)

# Iterate through all data tables in the document
for tbl in Document.Data.Tables:
    sourceView = tbl.GenerateSourceView()
    ops = sourceView.GetAllOperations[DataOperation]()
    for op in ops:
        if isinstance(op, DataSourceOperation):
            t = op.GetDataFlow().DataSource
            if type(t).__name__ in ["TextFileDataSource"]:
                path = t.FilePath if t.FilePath else "Clipboard"
                print_info(tbl.Name, type(t).__name__, path)

We have one more query: could you please help us with how to set the location of each CSV file dynamically using TextFileDataSource? Below is the sample code for one of the CSV files, which gives an error:

case_file = Path.Combine(Document.Properties["finalpath"], "case.csv")

if type(t).__name__ in ["TextFileDataSource", "Excel2FileDataSource"] and "case.csv" in t.FilePath:
    path = t.FilePath if t.FilePath else "Clipboard"
    print_info(tbl.Name, type(t).__name__, path)
    # create a new data source and set the location for case.csv
    new_data_source = TextFileDataSource(case_file, TextFileFormat.Csv)  # this gives an error
    op.GetDataFlow().DataSource = new_data_source

Error:

Traceback (most recent call last):
  File "<string>", line 53, in <module>
NameError: name 'TextFileFormat' is not defined
IronPython.Runtime.UnboundNameException: name 'TextFileFormat' is not defined
   at IronPython.Runtime.Operations.PythonOps.GetVariable(CodeContext context, String name, Boolean isGlobal, Boolean lightThrow)
   at IronPython.Compiler.LookupGlobalInstruction.Run(InterpretedFrame frame)
   at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)
   at Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 arg1)
   at IronPython.Compiler.PythonScriptCode.RunWorker(CodeContext ctx)
   at Microsoft.Scripting.Hosting.ScriptSource.Execute(ScriptScope scope)
   at Spotfire.Dxp.Application.IronPython27.IronPythonScriptEngine.ExecuteForDebugging(String scriptCode, Dictionary`2 scope, Stream outputStream)

We have also tried to import TextFileFormat.
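Independent of which Spotfire data-source class and constructor ends up being used, the path rewriting itself (keep the file name, swap the directory) can be sketched in plain Python. The directories below are hypothetical, and os.path is used instead of the .NET Path class:

```python
import os

def repoint(old_path, new_dir):
    """Return a path pointing at the same file name as old_path,
    but located in new_dir."""
    return os.path.join(new_dir, os.path.basename(old_path))

# Hypothetical per-run directory; in the Spotfire script this would be
# built from the "finalpath" document property.
new_dir = "D:/CSV/DEV/sf/IN/38737_525265"
print(repoint("D:/CSV/old/case.csv", new_dir))  # ends in case.csv, under new_dir
```

The result of repoint() would then be assigned to the data source's FilePath (or passed to whatever data-source constructor applies), rather than constructing the name by hand for each of the ten CSV files.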
-
We have created a Spotfire Analyst dashboard by loading multiple CSV files. Could you please share sample code to print the locations of these CSV files programmatically using an IronPython script? A sample snippet using the API below would help. https://docs.tibco.com/pub/doc_remote/sfire-analyst/14.2.0/doc/api/TIB_sfire-analyst_api/html/P_Spotfire_Dxp_Data_Import_FileDataSource_FilePath.htm
-
Hi @Olivier Keugue Tadaa, We do not want to join the CSVs again through a script or program, as the join was done manually in the data canvas by loading the CSVs by hand. We are looking for a solution that directly changes the file path of the CSVs. We are trying to get the FileDataSource from the data table (PROD_ANALYTICS) to change the path, but we are not getting the data sources at all. Without joining the CSVs through a program, is there any solution to change the path, or do we have to go through joining the CSVs programmatically?
-
Hi Olivier,

I have followed your suggestion and tried the code below, adding 'table' as a parameter and pointing it to the data table.

from Spotfire.Dxp.Data import AddRowsSettings, DataSource
#from Spotfire.Dxp.Application import Application

# Get the current document
doc = Application.Document

# Construct the path to the source file by combining the base path and the property value
path_to_the_source_file = "D:/CSV/DEV/sf/IN/38737_525265/case.csv"

# Create the file data source using the constructed path
ds = Application.CreateFileDataSource(path_to_the_source_file)

# Replace the data source in the table with the newly created data source
ccres = table.ReplaceData(ds)

# Define the settings for adding rows (if you need any specific settings, you can configure AddRowsSettings)
add_rows_settings = AddRowsSettings()

# at this point you will have to design a loop like this
for path_to_the_source_file in paths:  # paths is the collection containing all the paths; it can be in a document property
    # Create a file data source (this example assumes a CSV file; adjust as necessary for your data source type)
    data_source = Application.CreateFileDataSource(path_to_the_source_file)
    # Add the rows from the external data source to the target table
    columns_changed_result = table.AddRows(data_source, add_rows_settings)

I got the error: TypeError: AddRowsSettings() takes at least 2 arguments (0 given). I verified online and found it expects 2 arguments, DataTable and DataSource. It looks like the AddRowsSettings class appends rows to the data table, but for our use case I think we need the AddColumnsSettings class (DataTable, DataSource, JoinType), because we are joining multiple CSVs to form one data table (columns are added), as shown in the screenshot below. I just need guidance on what syntax/value we need to pass for JoinType; below is how we are joining 2 CSV files in the data canvas.
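For reference, the join kind chosen as JoinType determines which rows survive the column-add. Assuming the data-canvas join shown is a left outer join on the case ID (an assumption; the screenshot is not reproduced here), its semantics can be sketched in plain Python, with illustrative column names:

```python
def left_outer_join(left_rows, right_rows, key):
    """left_rows/right_rows: lists of dicts (one dict per CSV row).
    Every left row is kept; columns from matching right rows are added,
    and unmatched left rows get None for the right-hand columns."""
    index = {}
    for r in right_rows:
        index.setdefault(r[key], []).append(r)
    right_cols = {c for r in right_rows for c in r if c != key}
    joined = []
    for l in left_rows:
        for m in index.get(l[key], [None]):   # [None] keeps unmatched left rows
            row = dict(l)
            row.update({c: (m.get(c) if m else None) for c in right_cols})
            joined.append(row)
    return joined

# Illustrative rows standing in for case.csv and demo.csv
case = [{"case_id": 1, "status": "open"}, {"case_id": 2, "status": "closed"}]
demo = [{"case_id": 1, "age": 34}]
for row in left_outer_join(case, demo, "case_id"):
    print(row)   # case 2 appears with age None: left rows are never dropped
```

An inner join, by contrast, would drop case 2 entirely, which is usually not what you want when stacking supplementary CSVs onto a master case table.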
-
Hi Olivier,

Thanks for the response. When I tried it, I got the error "ImportError: Cannot import name Application", so I tried installing the application package as shown below. The package installation failed with the error below. In the dist.py file I did not find 'distribution-name'; I do see code that replaces the dash '-' with an underscore '_', which looks fine to me. So I am not sure what needs to be done to resolve this issue, since there is no distribution-name that needs to be replaced with distribution_name as the error message suggests. Attached: dist.py
-
Hi Olivier,

For testing purposes I used only 2 CSVs. Below is a screenshot from before running the script, followed by one from after running it: the data is missing, and even the demo.csv file is missing. So is there a way to handle our use case, where multiple CSV files are joined and used as one data table? Note: I added 'table' as a parameter as shown below; without it the script gives an error.