Connect to Apache Spark Python notebook on Azure Databricks


Soeren Christensen

I am trying to use the output of an Apache Spark Python notebook from Azure Databricks.

Ideally, I would like to set document properties from the Spotfire view and use them as input to a Spark job.

This job would be triggered manually from the Spotfire view by a Spotfire Cloud user who has no knowledge of this backend.
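For context, on the Databricks side the notebook would pick up those inputs roughly like this. This is only a sketch; the "region" parameter and the table names are made up:

```python
# Databricks Python notebook (sketch). `dbutils` and `spark` are provided
# by the Databricks notebook runtime; the "region" parameter and the table
# names here are hypothetical.
dbutils.widgets.text("region", "")      # declare the input widget, empty default
region = dbutils.widgets.get("region")  # value supplied when the job is triggered

# Use the parameter in a Spark job.
df = spark.table("sales").where(f"region = '{region}'")
df.write.mode("overwrite").saveAsTable("sales_by_region")
```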

I downloaded the Apache Spark SQL ODBC driver from

https://docs.tibco.com/pub/spotfire/general/drivers/data_sources/connector_apache_spark_sql.htm

 

I then followed the steps described here:

 

https://docs.tibco.com/pub/sfire-analyst/10.6.0/doc/html/en-US/TIB_sfire-analyst_UsersGuide/connectors/apache-spark/apache_spark_details_on_apache_spark_sql_connection.htm

However, I am stuck at this point, since I have no clue how to connect a Spotfire view to a notebook/job on Databricks.

 

Edit:

 

I found this link

https://docs.microsoft.com/en-us/azure/databricks/bi/jdbc-odbc-bi

Note that the username must be the literal string "token", and the password is a personal access token that you generate yourself on Databricks.

I am now able to connect to the clusters and see the data that is available on the Databricks platform as well.
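For reference, the connection boils down to an ODBC connection string like the one below, shown here as a Python sketch via pyodbc. The host, HTTP path, and token are placeholders for my workspace:

```python
# Sketch: the same Databricks connection from Python via pyodbc.
# Host, HTTPPath, and the token are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver=Simba Spark ODBC Driver;"
    "Host=adb-1234567890123456.7.azuredatabricks.net;"  # placeholder workspace host
    "Port=443;"
    "SSL=1;"
    "ThriftTransport=2;"  # HTTP transport
    "HTTPPath=sql/protocolv1/o/1234567890123456/0123-456789-abcde123;"  # cluster HTTP path
    "AuthMech=3;"  # username/password authentication
    "UID=token;"   # literally the string "token"
    "PWD=dapiXXXXXXXXXXXXXXXX;",  # personal access token generated on Databricks
    autocommit=True,
)

# A plain SELECT works; this is the part that already works for me.
cursor = conn.cursor()
cursor.execute("SELECT current_date()")
print(cursor.fetchone())
```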

However, I still don't understand how I can run a Spark job from this connection and pass input parameters to Spark.
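My best guess so far is that triggering the notebook with parameters is not something the ODBC connection can do at all, and that it would have to go through the Databricks Jobs REST API instead, something along these lines. The workspace URL, token, job ID, and "region" parameter are all placeholders:

```python
# Sketch: trigger an existing Databricks job that wraps the notebook and pass
# it parameters via the Jobs REST API (run-now endpoint).
import requests

DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapiXXXXXXXXXXXXXXXX"  # personal access token, same as for ODBC

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "job_id": 42,  # hypothetical job that runs the notebook
        "notebook_params": {"region": "EMEA"},  # read via dbutils.widgets.get("region")
    },
)
resp.raise_for_status()
print("Started run:", resp.json()["run_id"])
```

If that is the right pattern, the remaining question is how to fire such a call from the Spotfire view itself and get the run's output back into the analysis.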
