Soeren Christensen — Posted December 13, 2019

I am trying to use the output of an Apache Spark Python notebook from Azure Databricks in Spotfire. Ideally I would like to set document properties from the Spotfire view and use them as input to a Spark job. The job would be triggered manually from the Spotfire view by a Spotfire Cloud user who has no knowledge of this backend.

I downloaded the Apache Spark SQL ODBC driver from https://docs.tibco.com/pub/spotfire/general/drivers/data_sources/connector_apache_spark_sql.htm and then followed the steps on https://docs.tibco.com/pub/sfire-analyst/10.6.0/doc/html/en-US/TIB_sfire-analyst_UsersGuide/connectors/apache-spark/apache_spark_details_on_apache_spark_sql_connection.htm

However, I am stuck at this point, since I have no clue how to connect a Spotfire view to a notebook/job on Databricks.

edit: I found this link: https://docs.microsoft.com/en-us/azure/databricks/bi/jdbc-odbc-bi. Note that the username is "token" and the password is a personal access token that you generate on Databricks. I am now able to connect to the clusters and see the data that is available on the Databricks platform as well. However, I still don't get how I can run a Spark job from this connection and pass input parameters to Spark.
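edit 2: To make it concrete, what I want to end up with is roughly the sketch below: an IronPython script on a Spotfire action control (button) that calls the Databricks Jobs REST API (run-now) with a document property as a notebook parameter. The workspace URL, job id, token, and property name are all placeholders I made up, so treat this as a direction rather than a working configuration:

```python
# IronPython script attached to a Spotfire button (action control).
# "Document" is provided by the Spotfire script context.
# Assumptions: a Databricks job (id 42, hypothetical) wraps the notebook, and a
# document property "sparkInput" holds the value set in the Spotfire view.
import clr
clr.AddReference("System")
from System.Net import WebClient

host = "https://westeurope.azuredatabricks.net"  # placeholder workspace URL
token = "dapiXXXXXXXXXXXX"                       # personal access token from Databricks

client = WebClient()
client.Headers.Add("Authorization", "Bearer " + token)
client.Headers.Add("Content-Type", "application/json")

# Pass the document property as a notebook widget parameter.
param = Document.Properties["sparkInput"]
body = '{"job_id": 42, "notebook_params": {"input": "%s"}}' % param

# Jobs API run-now triggers the job and returns a run_id for polling.
result = client.UploadString(host + "/api/2.0/jobs/run-now", "POST", body)
print result
```

On the notebook side the parameter would then be read with dbutils.widgets.get("input"). I would still prefer a way that goes through the ODBC connection itself, if that is even possible.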