Spotfire® JDBC Data Access Connectivity Details


    Spotfire® connects to virtually any JDBC-compliant data source via the Spotfire Server Information Services interface. Spotfire Information Services requires a Data Source Template to configure the URL connection string, the JDBC driver class, and other settings. This page provides general information about Spotfire JDBC connectivity and examples of Information Services data source templates.
    • See also the Information Services section in the Spotfire® Server and Environment - Installation and Administration guide.

     

    How to create a new JDBC data source template

    If you cannot find a data source template example for your data source on this page, you can create your own. The different XML settings are documented here.
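
    As a starting point, a minimal template only needs a type name, a JDBC driver class, and a connection URL pattern; the remaining settings fall back to defaults. The skeleton below is a generic sketch with placeholder values (com.example.jdbc.Driver and the jdbc:example URL are illustrative, not a real driver):

    <jdbc-type-settings>
        <type-name>my_data_source</type-name>
        <driver>com.example.jdbc.Driver</driver>
        <connection-url-pattern>jdbc:example://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;</connection-url-pattern>
        <ping-command>SELECT 1</ping-command>
        <supports-catalogs>true</supports-catalogs>
        <supports-schemas>true</supports-schemas>
        <supports-procedures>false</supports-procedures>
    </jdbc-type-settings>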

    How to add a new JDBC data source to Spotfire Server

    1. Find your data source template below or from another resource. If you can't find a template for your specific data source you might be able to modify an existing template to work with your driver and data source.
    2. Launch the Spotfire Server configuration tool.
    3. Click the Configuration tab.
    4. Click Data Source Templates on the left pane.
    5. On the right pane, click New, enter a name, and copy the data source template into the Data source template section.
    6. Click OK and make sure that the message "XML parsed OK" is displayed at the bottom of the page.
    7. Select the check box to confirm that the data source template is enabled.
    8. Save the configuration and restart the Spotfire Server for changes to take effect.

    JDBC Driver Folder Locations

    For Spotfire Server versions 12.1.0 and later, the default location for drivers is the <installation directory>/tomcat/custom-ext-informationservices folder. (For earlier versions, the default location is <installation directory>/tomcat/custom-ext). Placing the drivers there ensures that the files are handled properly by the Spotfire Server Upgrade Tool.

    If files are placed in another directory, for example <installation directory>/tomcat/lib, then you must move them manually during an upgrade. First try to place the jar files in the <installation directory>/tomcat/custom-ext-informationservices directory (or <installation directory>/tomcat/custom-ext for versions earlier than 12.1.0); if you encounter any issues, you might need to use one of the other locations provided below. Depending on the versions of the JDBC jar and the Spotfire Server, there are some other common issues you could run into; they are listed below together with possible solutions:

    Apache Drill JDBC Driver Location

    The current Apache Drill JDBC file must be placed in the tomcat/lib folder. (For Spotfire Server versions 12.1.0 and later, drivers should in general be placed in the <installation directory>/tomcat/custom-ext-informationservices folder. For versions earlier than 12.1.0, place them in <installation directory>/tomcat/custom-ext.)

    Cloudera JDBC Driver Location

    The jar files from Cloudera must be placed in the spotfire/WEB-INF/lib folder. (For Spotfire Server versions 12.1.0 and later, drivers should in general be placed in the <installation directory>/tomcat/custom-ext-informationservices folder. For versions earlier than 12.1.0, place them in <installation directory>/tomcat/custom-ext.)

    Hortonworks, Apache Drill and Phoenix JDBC Driver Location

    The jar files from Hortonworks and Phoenix must be placed in the spotfire/WEB-INF/lib folder if the Apache Drill jar file is in the tomcat/lib folder. (For Spotfire Server versions 12.1.0 and later, drivers should in general be placed in the <installation directory>/tomcat/custom-ext-informationservices folder. For versions earlier than 12.1.0, place them in <installation directory>/tomcat/custom-ext.)

    Oracle JDBC Driver Location

    In Spotfire 12.1.0 and later, if you are using Oracle as the Spotfire database and you want to access data with Information Services (regardless of whether you want to access Oracle data), you must add the same Oracle jar file to both directories: <installation directory>/tomcat/custom-ext and <installation directory>/tomcat/custom-ext-informationservices.

     

    SAP/Sybase jConnect JDBC Driver Location

    In Spotfire 10.10 and later, because of a conflict with Oracle JDBC drivers (when used for Information Services), you cannot place the JDBC driver for SAP/Sybase jConnect (jconn4-jdbc.jar) in the recommended directory for JDBC drivers (<installation directory>/tomcat/custom-ext-informationservices or <installation directory>/tomcat/custom-ext for Spotfire Server versions earlier than 12.1.0.) 

    The conflict happens because of a known issue in the jConnect driver. See SAP Note 2827462.

    To install the JDBC driver for jConnect (jconn4-jdbc.jar), place the driver file in, for example, the following directory:

    <installation directory>/tomcat/lib
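
    With the driver file in place, a data source template is still required. The following is an untested sketch for SAP/Sybase ASE via jConnect; the driver class (com.sybase.jdbc4.jdbc.SybDriver) and the Tds URL format are the standard jConnect ones, but verify them against the documentation for your driver version:

    <jdbc-type-settings>
        <type-name>sybase_jconnect</type-name>
        <driver>com.sybase.jdbc4.jdbc.SybDriver</driver>
        <connection-url-pattern>jdbc:sybase:Tds:&lt;host&gt;:&lt;port&gt;/&lt;database&gt;</connection-url-pattern>
        <ping-command>SELECT 1</ping-command>
        <supports-catalogs>true</supports-catalogs>
        <supports-schemas>true</supports-schemas>
        <supports-procedures>true</supports-procedures>
    </jdbc-type-settings>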


    Access DB

    By default, Information Services generates queries with double quotes around names, as shown in the example below:

    SELECT
       C1."key" AS "KEY",
       C1."odometerdistance" AS "ODOMETERDISTANCE"
    FROM
       "Complete_Bend_Strain_IMU_data" C1
    WHERE
       <conditions>

    Access DB, however, is not compatible with double quotes and requires queries like the following:
     
    SELECT
       C1.key AS KEY,
       C1.odometerdistance AS ODOMETERDISTANCE
    FROM
       Complete_Bend_Strain_IMU_data C1
    WHERE
       <conditions>

    Instead of editing the SQL manually in Information Designer, the data source template can hold this information. Add the xxx-pattern elements to the data source template as in the example below. Note that when quotes are not used, column names cannot contain spaces.

    Data Source Template for Access DB

    <jdbc-type-settings>
        <type-name>
            AccessDB
        </type-name>
        <driver>
            net.ucanaccess.jdbc.UcanaccessDriver
        </driver>
        <connection-url-pattern>
            jdbc:ucanaccess://c:/temp/AccessDB.accdb;showSchema=true
        </connection-url-pattern>
        <table-types>
            TABLE, VIEW
        </table-types>
        <ping-command>
            SELECT 1
        </ping-command>
        <supports-catalogs>
            false
        </supports-catalogs>
        <supports-schemas>
            false
        </supports-schemas>
        <use-ansii-style-outer-join>
            true
        </use-ansii-style-outer-join>
        <column-name-pattern>
            $$name$$
        </column-name-pattern>
        <table-name-pattern>
            $$name$$
        </table-name-pattern>
        <schema-name-pattern>
            $$name$$
        </schema-name-pattern>
        <catalog-name-pattern>
            $$name$$
        </catalog-name-pattern>
        <procedure-name-pattern>
            $$name$$
        </procedure-name-pattern>
        <column-alias-pattern>
            $$name$$
        </column-alias-pattern>
    </jdbc-type-settings>
     

    Apache Drill

    Data Source Template for the Apache Drill JDBC driver (example); see also https://drill.apache.org/docs/configuring-tibco-spotfire-server-with-drill/

    <jdbc-type-settings>
      <type-name>drill</type-name>
      <driver>org.apache.drill.jdbc.Driver</driver> 
      <connection-url-pattern>jdbc:drill:zk=&lt;zk-node&gt;:5181/drill/&lt;drill-cluster-name&gt;-drillbits</connection-url-pattern>
      <ping-command>SELECT 1 FROM sys.version</ping-command>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>false</supports-procedures>
      <table-expression-pattern>[$$schema$$.]$$table$$</table-expression-pattern>
    
      <column-name-pattern>`$$name$$`</column-name-pattern>
      <table-name-pattern>`$$name$$`</table-name-pattern>
      <schema-name-pattern>`$$name$$`</schema-name-pattern>
      <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
      <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
      <column-alias-pattern>`$$name$$`</column-alias-pattern>
    
      <java-to-sql-type-conversions>
       <type-mapping>
        <from max-length="32672">String</from>
        <to>VARCHAR($$value$$)</to>
       </type-mapping>
       <type-mapping>
        <from>String</from>
        <to>VARCHAR(32672)</to>
       </type-mapping>
       <type-mapping>
        <from>Integer</from>
        <to>INTEGER</to>
       </type-mapping>
      </java-to-sql-type-conversions>
      </jdbc-type-settings>
     

    Apache Hive

    Hive does not support the default double quotes (") that Spotfire uses and instead requires backticks (`) around names (on US keyboards, the backtick is the character under the tilde on the key next to the numeral 1, not the single quote that shares a key with the double quote).
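
    If the Hive template you use does not already quote names with backticks, the xxx-name-pattern settings can be added to it. For example, the following elements (the same pattern used in the Cloudera templates further down this page) wrap every name in backticks:

        <column-name-pattern>`$$name$$`</column-name-pattern>
        <table-name-pattern>`$$name$$`</table-name-pattern>
        <schema-name-pattern>`$$name$$`</schema-name-pattern>
        <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
        <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
        <column-alias-pattern>`$$name$$`</column-alias-pattern>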

    One might think that connectivity to Hive would be similar among the Hadoop distributions and that they would use the same JDBC drivers; this is not the case. Some of the implementations have downloads of the required jar driver files from their websites while others require one to get the jar files from the VM.

    Hive details for different Hadoop distributions are available further down this page.

    Data Source Template for Apache Hive (example)

    <jdbc-type-settings>
        <type-name>hive</type-name>
        <driver>org.apache.hadoop.hive.jdbc.HiveDriver</driver>
        <connection-url-pattern>jdbc:hive://&lt;host&gt;:&lt;port&gt;/default</connection-url-pattern>
        <table-types>TABLE,EXTERNAL TABLE</table-types>
        <supports-catalogs>true</supports-catalogs>
        <supports-schemas>true</supports-schemas>
        <ping-command>SHOW TABLES</ping-command>
        <fetch-size>10000</fetch-size>
        <batch-size>100</batch-size>
        <java-to-sql-type-conversions>
        </java-to-sql-type-conversions>
        <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
        <create-temp-table-command>CREATE TABLE $$name$$ $$column_list$$</create-temp-table-command>
        <drop-temp-table-command>DROP TABLE $$name$$</drop-temp-table-command>
        <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    </jdbc-type-settings>
     

    Apache Phoenix

    Since the JDBC connectivity to HBase is based on Apache Phoenix, the Data Source Template for Apache HBase is the same across environments.

    To use Apache Phoenix, one needs to specify the metadata.

    Please also see Hortonworks' article about Spotfire JDBC Connection to Phoenix on Kerberized Cluster.

    Data Source Template for Apache Phoenix (example)

    <jdbc-type-settings>
        <type-name>Phoenix_HBase2</type-name>
        <driver>org.apache.phoenix.jdbc.PhoenixDriver</driver>
        <connection-url-pattern>jdbc:phoenix:&lt;host&gt;:&lt;port2181&gt;:/&lt;path&gt;</connection-url-pattern>
        <supports-catalogs>false</supports-catalogs>
        <supports-schemas>false</supports-schemas>
        <supports-procedures>false</supports-procedures>
        <ping-command></ping-command>
    </jdbc-type-settings>
     

    Amazon Athena

    The Accessing Amazon Athena JDBC page has a link to the latest JDBC drivers for Athena. The Data Source Template below is for version 2.0.x of the Athena JDBC driver.

    Data Source Template for Athena (example)

    <jdbc-type-settings>
    	<type-name>Amazon Athena JDBC</type-name>
    	<driver>com.simba.athena.jdbc.Driver</driver>
    	<connection-url-pattern>jdbc:awsathena://athena.&lt;region&gt;.amazonaws.com:443;s3_staging_dir=&lt;staging_dir&gt;</connection-url-pattern>
    	<table-types>DATA, EXTERNAL_TABLE, TABLE, VIEW, VIRTUAL_VIEW</table-types>
      	<supports-catalogs>true</supports-catalogs>
    	<supports-schemas>true</supports-schemas>
    	<supports-procedures>false</supports-procedures>
    	<always-use-prepared-statement>false</always-use-prepared-statement>
    </jdbc-type-settings>
     
    • When one creates the data source in Spotfire Information Designer, Athena requires that the fetch size be reduced to less than the Spotfire default of 10,000. The fetch size can be set to 0, which uses the default fetch size of the JDBC driver (see also the optional template snippet after this list). The s3_staging_dir URL parameter is mandatory for Athena; an error is thrown if it is missing.
    • The username and password are the AWS access key and AWS secret key, respectively. More complex security can be set up using an AWSCredentialsProvider; details are on the Amazon Athena JDBC page referenced above. Connecting with SQL Workbench can help verify that the connection works and that the correct access is in place.
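
    If you prefer to set a lower default in the template itself rather than changing the fetch size for each data source, the fetch-size element used in other templates on this page can be added to the Athena template above. This is an optional addition, not part of the original template:

    <!-- Optional addition to the Athena template above; 0 means use the JDBC driver's own default fetch size -->
    <fetch-size>0</fetch-size>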

    Amazon Redshift

    Data Source Template for Amazon Redshift JDBC41 (example)

    <jdbc-type-settings>
    	<type-name>AWS Redshift</type-name>
    	<driver>com.amazon.redshift.jdbc41.Driver</driver>
    	<connection-url-pattern>jdbc:redshift://endpoint:port/database</connection-url-pattern>
    	<ping-command>SELECT 1</ping-command>
    	<metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
    	<fetch-size>10000</fetch-size>
    	<batch-size>100</batch-size>
    	<table-types>TABLE, VIEW, EXTERNAL TABLE</table-types>
    	<supports-catalogs>true</supports-catalogs>
    	<supports-schemas>true</supports-schemas>
    	<supports-procedures>true</supports-procedures>
    	<max-in-clause-size>1000</max-in-clause-size>
    	<condition-list-threshold>10000</condition-list-threshold>
    	<expand-in-clause>false</expand-in-clause>
    	<table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
    	<procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
    	<time-format-expression>to_char($$value$$, 'HH24:MI:SS')</time-format-expression>
    	<date-time-format-expression>$$value$$</date-time-format-expression>
    </jdbc-type-settings>
     

    Data Source Template for Amazon Redshift JDBC4 (example)

    <jdbc-type-settings>
      <type-name>AWS Redshift</type-name>
      <driver>com.amazon.redshift.jdbc4.Driver</driver>
      <connection-url-pattern>jdbc:redshift://endpoint:port/database</connection-url-pattern>
      <ping-command>SELECT 1</ping-command>
      <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW, EXTERNAL TABLE</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
      <time-format-expression>to_char($$value$$, 'HH24:MI:SS')</time-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
    </jdbc-type-settings>
     

    Cloudera

    Data Source Template for Cloudera Apache Hive Connectivity (sample)

    <jdbc-type-settings>
        <type-name>cloudera_hive</type-name>
        <driver>com.cloudera.hive.jdbc42.HS2Driver</driver>
        <connection-url-pattern>jdbc:hive2://&lt;host&gt;:&lt;port10000&gt;/&lt;database&gt;</connection-url-pattern>
        <supports-catalogs>false</supports-catalogs>
        <supports-schemas>true</supports-schemas>
        <supports-procedures>false</supports-procedures>
        <ping-command>SHOW TABLES</ping-command>
        <column-name-pattern>`$$name$$`</column-name-pattern>
        <table-name-pattern>`$$name$$`</table-name-pattern>
        <schema-name-pattern>`$$name$$`</schema-name-pattern>
        <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
        <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
        <column-alias-pattern>`$$name$$`</column-alias-pattern>
    </jdbc-type-settings>
     

    Data Source Template for Cloudera Impala Connectivity (sample)

    <jdbc-type-settings>
        <type-name>cloudera_impala</type-name>
        <driver>com.cloudera.impala.jdbc42.Driver</driver>
        <connection-url-pattern>jdbc:impala://&lt;host&gt;:&lt;port21050&gt;/&lt;database&gt;</connection-url-pattern>
        <supports-catalogs>false</supports-catalogs>
        <supports-schemas>true</supports-schemas>
        <supports-procedures>false</supports-procedures>
        <ping-command>SHOW TABLES</ping-command>
        <column-name-pattern>`$$name$$`</column-name-pattern>
        <table-name-pattern>`$$name$$`</table-name-pattern>
        <schema-name-pattern>`$$name$$`</schema-name-pattern>
        <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
        <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
        <column-alias-pattern>`$$name$$`</column-alias-pattern>
    </jdbc-type-settings>
     

    Couchbase

    Data Source Template for Cdata's Couchbase JDBC driver (example)

    <jdbc-type-settings>
      <type-name>couchbase</type-name>
      <driver>cdata.jdbc.couchbase.CouchbaseDriver</driver>
      <connection-url-pattern>jdbc:couchbase:</connection-url-pattern>
      <ping-command>SELECT * FROM Projects LIMIT 1</ping-command>
      <connection-properties>
        <connection-property>
          <key>User</key>
          <value>myuseraccount</value>
        </connection-property>
        <connection-property>
          <key>Password</key>
          <value>mypassword</value>
        </connection-property>
        <connection-property>
          <key>Server</key>
          <value>http://mycouchbaseserver</value>
        </connection-property>
      </connection-properties>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <max-column-name-length>32</max-column-name-length>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>false</supports-procedures>
      <supports-distinct>true</supports-distinct>
      <supports-order-by>true</supports-order-by>
      <column-name-pattern>"$$name$$"</column-name-pattern>
      <table-name-pattern>"$$name$$"</table-name-pattern>
      <schema-name-pattern>"$$name$$"</schema-name-pattern>
      <catalog-name-pattern>"$$name$$"</catalog-name-pattern>
      <procedure-name-pattern>"$$name$$"</procedure-name-pattern>
      <column-alias-pattern>"$$name$$"</column-alias-pattern>
      <string-literal-quote>'</string-literal-quote>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
      <procedure-table-jdbc-type>0</procedure-table-jdbc-type>
      <procedure-table-type-name></procedure-table-type-name>
      <date-format-expression>$$value$$</date-format-expression>
      <date-literal-format-expression>'$$value$$'</date-literal-format-expression>
      <time-format-expression>$$value$$</time-format-expression>
      <time-literal-format-expression>'$$value$$'</time-literal-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
      <date-time-literal-format-expression>'$$value$$'</date-time-literal-format-expression>
      <java-to-sql-type-conversions>VARCHAR($$value$$) VARCHAR(255) INTEGER BIGINT REAL DOUBLE PRECISION DATE TIME TIMESTAMP</java-to-sql-type-conversions>
      <temp-table-name-pattern>$$name$$#TEMP</temp-table-name-pattern>
      <create-temp-table-command>CREATE TABLE $$name$$#TEMP $$column_list$$</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$#TEMP</drop-temp-table-command>
      <data-source-authentication>false</data-source-authentication>
      <lob-threshold>-1</lob-threshold>
      <use-ansii-style-outer-join>false</use-ansii-style-outer-join>
      <credentials-timeout>86400</credentials-timeout>
    </jdbc-type-settings>
     

    DB2

    Data source template for IBM DB2 with IBM JDBC driver.

    <jdbc-type-settings>
      <type-name>db2-ibm</type-name>
      <driver>com.ibm.db2.jcc.DB2Driver</driver>
      <connection-url-pattern>jdbc:db2://&lt;server&gt;:&lt;port(50000)&gt;/&lt;database&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>loginTimeout</key>
          <value>$m{loginTimeoutSeconds}</value>
        </connection-property>
      </connection-properties>
      <ping-command>SELECT 1 FROM sysibm.sysdummy1</ping-command>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW, NICKNAME</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
      <date-format-expression>$$value$$</date-format-expression>
      <time-format-expression>$$value$$</time-format-expression>
      <date-time-format-expression>CHAR(DATE($$value$$), JIS) || ' ' || CHAR(TIME($$value$$), JIS)</date-time-format-expression>
      <java-to-sql-type-conversions>
        <type-mapping>
          <from max-length="32672">String</from>
          <to>VARCHAR($$value$$)</to>
        </type-mapping>
        <type-mapping>
          <from>String</from>
          <to>VARCHAR(32672)</to>
        </type-mapping>
        <type-mapping>
          <from>Integer</from>
          <to>INTEGER</to>
        </type-mapping>
        <type-mapping>
          <from>Long</from>
          <to>BIGINT</to>
        </type-mapping>
        <type-mapping>
          <from>Float</from>
          <to>REAL</to>
        </type-mapping>
        <type-mapping>
          <from>Double</from>
          <to>DOUBLE</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATE</to>
        </type-mapping>
        <type-mapping>
          <from>Time</from>
          <to>TIME</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>TIMESTAMP</to>
        </type-mapping>
      </java-to-sql-type-conversions>
      <temp-table-name-pattern>SESSION.$$name$$</temp-table-name-pattern>
      <create-temp-table-command>DECLARE GLOBAL TEMPORARY TABLE $$name$$ $$column_list$$ ON COMMIT PRESERVE ROWS NOT LOGGED</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$</drop-temp-table-command>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    </jdbc-type-settings>
     

    DB2 (Progress DataDirect JDBC driver)

    <jdbc-type-settings>
      <type-name>db2_datadirect</type-name>
      <driver>com.ddtek.jdbc.db2.DB2Driver</driver>
      <connection-url-pattern>jdbc:datadirect:db2://&lt;host&gt;:&lt;port&gt;;databaseName=&lt;database&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>MaxPooledStatements</key>
          <value>20</value>
        </connection-property>
        <connection-property>
          <key>LoginTimeout</key>
          <value>$m{loginTimeoutSeconds}</value>
        </connection-property>
      </connection-properties>
      <ping-command>SELECT 1 FROM sysibm.sysdummy1</ping-command>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW, NICKNAME</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
      <date-format-expression>$$value$$</date-format-expression>
      <time-format-expression>$$value$$</time-format-expression>
      <date-time-format-expression>CHAR(DATE($$value$$), JIS) || ' ' || CHAR(TIME($$value$$), JIS)</date-time-format-expression>
      <java-to-sql-type-conversions>
        <type-mapping>
          <from max-length="32672">String</from>
          <to>VARCHAR($$value$$)</to>
        </type-mapping>
        <type-mapping>
          <from>String</from>
          <to>VARCHAR(32672)</to>
        </type-mapping>
        <type-mapping>
          <from>Integer</from>
          <to>INTEGER</to>
        </type-mapping>
        <type-mapping>
          <from>Long</from>
          <to>BIGINT</to>
        </type-mapping>
        <type-mapping>
          <from>Float</from>
          <to>REAL</to>
        </type-mapping>
        <type-mapping>
          <from>Double</from>
          <to>DOUBLE</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATE</to>
        </type-mapping>
        <type-mapping>
          <from>Time</from>
          <to>TIME</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>TIMESTAMP</to>
        </type-mapping>
      </java-to-sql-type-conversions>
      <temp-table-name-pattern>SESSION.$$name$$</temp-table-name-pattern>
      <create-temp-table-command>DECLARE GLOBAL TEMPORARY TABLE $$name$$ $$column_list$$ ON COMMIT PRESERVE ROWS NOT LOGGED</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$</drop-temp-table-command>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    </jdbc-type-settings>

     

    Denodo

    Data Source Template for Denodo (example, tested with Denodo 8 beta 1)

    <jdbc-type-settings>     
          <type-name>Denodo</type-name>
          <driver>com.denodo.vdp.jdbc.Driver</driver>
          <connection-url-pattern>jdbc:vdb://&lt;server&gt;:&lt;port9999&gt;/&lt;data_base&gt;</connection-url-pattern>
          <supports-catalogs>true</supports-catalogs>
          <supports-schemas>false</supports-schemas>
          <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
          <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
          <supports-procedures>true</supports-procedures>
    </jdbc-type-settings>
     

    Exasol

    <jdbc-type-settings>
        <type-name>EXASOL</type-name>
        <driver>com.exasol.jdbc.EXADriver</driver>
        <connection-url-pattern>
            jdbc:exa:&lt;host&gt;:&lt;port&gt;;clientname=Spotfire;
        </connection-url-pattern>
        <ping-command>SELECT 1</ping-command>
        <supports-catalogs>false</supports-catalogs>
        <supports-schemas>true</supports-schemas>
        <java-to-sql-type-conversions>
            <type-mapping>
                <from max-length="2000000">String</from>
                <to>VARCHAR($$value$$)</to>
            </type-mapping>
            <type-mapping>
                <from>Integer</from>
                <to>DECIMAL(18,0)</to>
            </type-mapping>
            <type-mapping>
                <from>Long</from>
                <to>DECIMAL(36,0)</to>
            </type-mapping>
            <type-mapping>
                <from>Float</from>
                <to>REAL</to>
            </type-mapping>
            <type-mapping>
                <from>Double</from>
                <to>DOUBLE PRECISION</to>
            </type-mapping>
            <type-mapping>
                <from>Date</from>
                <to>DATE</to>
            </type-mapping>
            <type-mapping>
                <from>DateTime</from>
                <to>TIMESTAMP</to>
            </type-mapping>
        </java-to-sql-type-conversions>
    </jdbc-type-settings>
     

    FileMaker

    For FileMaker connectivity, you need to install the appropriate JDBC driver. The following link contains instructions for installing and configuring the required .jar file:

    http://help.filemaker.com/app/answers/detail/a_id/13603/~/installing-filemaker-jdbc-client-drivers

    <jdbc-type-settings>
      <type-name>Filemaker</type-name>
      <driver>com.filemaker.jdbc.Driver</driver>
      <connection-url-pattern>jdbc:filemaker://&lt;host&gt;/&lt;database&gt;</connection-url-pattern>
      <ping-command>SELECT 1</ping-command>
      <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-expression-pattern>[$$schema$$.]$$table$$</table-expression-pattern>
      <table-types>TABLE</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
    </jdbc-type-settings>
     

    Google BigQuery

    JDBC driver from Google/Simba

    <jdbc-type-settings>
      <type-name>BigQuery</type-name>
      <driver>com.simba.googlebigquery.jdbc41.Driver</driver>
      <connection-url-pattern>jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=&lt;project&gt;;OAuthType=0;OAuthServiceAcctEmail=xxxxxx@developer.gserviceaccount.com;OAuthPvtKeyPath=D:\\path\file</connection-url-pattern>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>false</supports-procedures>
      <column-name-pattern>$$name$$</column-name-pattern>
      <table-name-pattern>$$name$$</table-name-pattern>
      <schema-name-pattern>$$name$$</schema-name-pattern>
      <catalog-name-pattern>$$name$$</catalog-name-pattern>
      <procedure-name-pattern>$$name$$</procedure-name-pattern>
      <column-alias-pattern>$$name$$</column-alias-pattern>
    </jdbc-type-settings>
     

    For Google BigQuery, the default xxx-name-patterns in the data source template will not work.  Google BigQuery does not support double quotes in the schema, table, and column names.  Please see this Spotfire Support Knowledge Base article for more information: Executing Information Link based on Google BigQuery fails with error "Marshalling Error: INVALID_CHARACTER_ERR: An invalid or illegal XML character is specified."

    JDBC driver from CData

    <jdbc-type-settings>
      <type-name>googlebigquery</type-name>
      <driver>cdata.jdbc.googlebigquery.GoogleBigQueryDriver</driver>
      <connection-url-pattern>jdbc:googlebigquery:</connection-url-pattern>
      <ping-command>SELECT * FROM Projects LIMIT 1</ping-command>
      <connection-properties>
        <connection-property>
          <key>DataSetId</key>
          <value>MyDataSetId</value>
        </connection-property>
        <connection-property>
          <key>ProjectId</key>
          <value>MyProjectId</value>
        </connection-property>
        <connection-property>
          <key>InitiateOAuth</key>
          <value>GETANDREFRESH</value>
        </connection-property>
      </connection-properties>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <max-column-name-length>32</max-column-name-length>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>false</supports-procedures>
      <supports-distinct>true</supports-distinct>
      <supports-order-by>true</supports-order-by>
      <column-name-pattern>`$$name$$`</column-name-pattern>
      <table-name-pattern>`$$name$$`</table-name-pattern>
      <schema-name-pattern>`$$name$$`</schema-name-pattern>
      <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
      <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
      <column-alias-pattern>`$$name$$`</column-alias-pattern>
      <string-literal-quote>'</string-literal-quote>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <procedure-expression-pattern>[$$catalog$$.][$$schema$$.]$$procedure$$</procedure-expression-pattern>
      <procedure-table-jdbc-type>0</procedure-table-jdbc-type>
      <procedure-table-type-name></procedure-table-type-name>
      <date-format-expression>$$value$$</date-format-expression>
      <date-literal-format-expression>$$value$$</date-literal-format-expression>
      <time-format-expression>$$value$$</time-format-expression> 
      <time-literal-format-expression>$$value$$</time-literal-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
      <date-time-literal-format-expression>$$value$$</date-time-literal-format-expression>
      <java-to-sql-type-conversions>VARCHAR($$value$$) VARCHAR(255) INTEGER BIGINT REAL DOUBLE PRECISION DATE TIME TIMESTAMP</java-to-sql-type-conversions>
      <temp-table-name-pattern>$$name$$#TEMP</temp-table-name-pattern>
      <create-temp-table-command>CREATE TABLE $$name$$#TEMP $$column_list$$</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$#TEMP</drop-temp-table-command>
      <data-source-authentication>false</data-source-authentication>
      <lob-threshold>-1</lob-threshold>
      <use-ansii-style-outer-join>false</use-ansii-style-outer-join>
      <credentials-timeout>86400</credentials-timeout>
    </jdbc-type-settings>
      

    Google BigQuery with OAuth

    JDBC driver from Google/Simba

    <jdbc-type-settings>
      <type-name>bigquery_simba_user_acct</type-name>
      <driver>com.simba.googlebigquery.jdbc.Driver</driver>
      <connection-url-pattern>jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=&lt;project&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>OAuthType</key>
          <value>2</value>
        </connection-property>
        <connection-property>
          <key>EnableSession</key>
          <value>1</value>
        </connection-property>
      </connection-properties>
      <credentials-provider-class>com.spotfire.ws.im.ds.sql.TokenCredentialsProvider</credentials-provider-class>
      <credentials-provider-init-params>
        <parameter>
          <key>issuer</key>
          <value>https://accounts.google.com</value>
        </parameter>
        <parameter>
          <key>scope</key>
          <value>https://www.googleapis.com/auth/bigquery</value>
        </parameter>
        <parameter>
          <key>access_token_property_name</key>
          <value>OAuthAccessToken</value>
        </parameter>
        <parameter>
          <key>driver_supports_refresh_tokens</key>
          <value>true</value>
        </parameter>
        <parameter>
          <key>refresh_token_property_name</key>
          <value>OAuthRefreshToken</value>
        </parameter>
        <parameter>
          <key>client_id_property_name</key>
          <value>OAuthClientId</value>
        </parameter>
        <parameter>
          <key>client_secret_property_name</key>
          <value>OAuthClientSecret</value>
        </parameter>
      </credentials-provider-init-params>
      <supports-catalogs>true</supports-catalogs>
      <supports-procedures>false</supports-procedures>
      <supports-schemas>true</supports-schemas>
      <column-name-pattern>$$name$$</column-name-pattern>
      <table-name-pattern>$$name$$</table-name-pattern>
      <schema-name-pattern>$$name$$</schema-name-pattern>
      <catalog-name-pattern>$$name$$</catalog-name-pattern>
      <procedure-name-pattern>$$name$$</procedure-name-pattern>
      <column-alias-pattern>$$name$$</column-alias-pattern>
    </jdbc-type-settings>
     

    For more information, see Using OAuth2 with Information Services.

    Greenplum

    Data Source Template for Greenplum (example)

    <jdbc-type-settings>
      <type-name>postgreSQL</type-name>
      <driver>org.postgresql.Driver</driver>
      <connection-url-pattern>
             jdbc:postgresql://&lt;host&gt;:&lt;port&gt;/database
       </connection-url-pattern>
       <supports-catalogs>true</supports-catalogs>
       <supports-schemas>true</supports-schemas>
       <use-ansii-style-outer-join>true</use-ansii-style-outer-join> 
       <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
       <supports-procedures>true</supports-procedures>
      <sql-runtime>com.spotfire.ws.im.ds.sql.postgresql.PostgresSQLRuntime</sql-runtime>
    </jdbc-type-settings>
     

    HBase

    To connect to HBase via JDBC, you can use the Apache Phoenix drivers. A sample data source template for Apache Phoenix is provided above.

    IBM Informix

    Data Source Template for IBM Informix (sample)

    <jdbc-type-settings>
            <type-name>Informix</type-name>
            <display-name>Informix</display-name>
            <driver>com.informix.jdbc.IfxDriver</driver>
            <connection-url-pattern>
              jdbc:informix-sqli://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;:INFORMIXSERVER=servername;user=&lt;username&gt;;password=&lt;password&gt;
            </connection-url-pattern>
            <ping-command></ping-command>
            <supports-catalogs>false</supports-catalogs>
            <column-name-pattern>$$name$$</column-name-pattern>
            <table-name-pattern>$$name$$</table-name-pattern>
            <schema-name-pattern>$$name$$</schema-name-pattern>
            <catalog-name-pattern>$$name$$</catalog-name-pattern>
            <procedure-name-pattern>$$name$$</procedure-name-pattern>
            <column-alias-pattern>$$name$$</column-alias-pattern>
          </jdbc-type-settings>
     

    JBoss Teiid

    Data Source Template for JBoss Teiid (Teiid JDBC driver)

    <jdbc-type-settings>
        <type-name>teiid</type-name>
        <driver>org.teiid.jdbc.TeiidDriver</driver>
        <connection-url-pattern>jdbc:teiid:&lt;vdb-name&gt;@mm:&lt;host&gt;:&lt;port&gt;</connection-url-pattern>
        <ping-command>SELECT 1</ping-command>
        <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
        <fetch-size>10000</fetch-size>
        <batch-size>100</batch-size>
        <table-expression-pattern>[$$schema$$.]$$table$$</table-expression-pattern>
        <table-types>TABLE, VIEW</table-types>
        <supports-catalogs>true</supports-catalogs>
        <supports-schemas>true</supports-schemas>
        <supports-procedures>true</supports-procedures>
        <condition-list-threshold>2000</condition-list-threshold>
        <temp-table-name-pattern>$$name$$</temp-table-name-pattern>
        <create-temp-table-command>CREATE LOCAL TEMPORARY TABLE $$name$$ $$column_list$$;</create-temp-table-command>
        <sql-filter>com.spotfire.ws.im.ds.sql.sassharenet.SASSQLFilter</sql-filter>
    </jdbc-type-settings>
     

    Microsoft SQL Server

    Microsoft SQL Server with Kerberos delegated authentication

    This section provides the steps to create an Information Services data source with Kerberos delegated authentication to a Microsoft SQL Server database.

    Prerequisites

    1) Spotfire Server configured for Kerberos authentication. Refer to the Spotfire Server Installation manual (section "Kerberos Authentication").

    2) Spotfire Web Player (optional), configured for Single Sign-On (SSO) using Delegation with Kerberos authentication. Refer to the Spotfire Web Player installation manual.

    3) Constrained or Unconstrained delegation enabled on the domain controller for the Service Account used for Kerberos authentication on Spotfire Server. Refer to the Spotfire Server installation manual (section "Grant the Spotfire Server Service Account the Right to Delegate Client Credentials").

    4) Microsoft SQL Server database to be used in setting up an information link data source that is configured for Kerberos authentication.

    Steps to be performed on Spotfire Server

    1) The Microsoft SQL Server JDBC driver is now bundled with Spotfire Server so one should not have to download it.  If one does have to download it, then one can search the web or use the following URL.  Once downloaded, copy the sqljdbc_auth.dll file into the Spotfire Server instance's tomcat\custom-ext folder.

    http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=11774

    The sqljdbc_auth.dll and jar files are extracted into the following location:  <installation directory>\sqljdbc_<version>\<language>\auth\x64\ 

    2) Create a new Database Template for Kerberos delegated Authentication to Microsoft SQL Server Database using the Spotfire Server Configuration Tool. Define the JDBC data source template using the XML shown below.

    3) Enable the template via the Server configuration tool and Save the configuration into the database.

    4) On the Spotfire Server, create a keytab file for the MS SQL Server database secured with Kerberos:

    ktpass /princ {dbspnuser}@DOMAIN /ptype krb5_nt_principal /crypto rc4-hmac-nt /out spotfire-database.keytab /kvno 0 /pass ***

    Replace the following:
    • dbspnuser - the user used to register SPNs for the database
    • DOMAIN - your domain

    5) Copy spotfire-database.keytab to <tss_install_dir>\jdk\jre\lib\security folder.

    6) Create spotfire-database.login file in <tss_install_dir>\jdk\jre\lib\security\ containing the following text:

    DatabaseKerberos
    {
        com.sun.security.auth.module.Krb5LoginModule required debug=true storeKey=true useKeyTab=true
        keyTab="${java.home}/jre/lib/security/spotfire-database.keytab" principal="{dbspnuser}@DOMAIN";
    };

    Replace the following:
    • dbspnuser - the user used to register SPNs for the SQL Server database
    • DOMAIN - your domain

    7) Add an entry for the spotfire-database.login in the <tss_install_dir>\jdk\jre\lib\security\java.security file:   login.config.url.1=file:${java.home}/jre/lib/security/spotfire-database.login

    8) Run the Spotfire Server service with the service account used to register SPNs for the server machine.

    9) Restart the Spotfire Server service (as a Windows or Linux service), to make these changes take effect.

    10) Log in to Spotfire Server from the Spotfire desktop client, then select "Tools > Information Designer" from the main menu and create a new Data Source using the newly created template (sqlserver2005_delegated). Leave the username and password fields blank.

    11) If the data source connection fails, purge the Kerberos tickets on the Spotfire client machine using the command:

        <tss_install_dir>\jdk\bin\klist purge

    References

    https://msdn.microsoft.com/en-us/library/ms378428(v=sql.110).aspx

    https://msdn.microsoft.com/en-us/library/gg558122(v=sql.110).aspx

    Data Source Template for Microsoft SQL Server with Kerberos delegated authentication (example)

    <jdbc-type-settings>
      <type-name>sqlserver2005_delegated</type-name>
      <driver>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver>
      <connection-url-pattern>jdbc:sqlserver://&lt;host&gt;:&lt;port&gt;;DatabaseName=&lt;database&gt;;integratedSecurity=true;authenticationScheme=JavaKerberos</connection-url-pattern>
      <ping-command>SELECT 1</ping-command>
      <connection-properties>
        <connection-property>
          <key>spotfire.connection.pool.factory.data.source</key>
          <value>kerberos.data.source</value>
        </connection-property>
        <connection-property>
          <key>selectMethod</key>
          <value>cursor</value>
        </connection-property>
      </connection-properties>
      <metadata-provider>com.spotfire.ws.im.ds.sql.sqlserver.SQLServerMetadataProvider</metadata-provider>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <date-format-expression>$$value$$</date-format-expression>
      <time-format-expression>convert(varchar(8), $$value$$, 108)</time-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
      <java-to-sql-type-conversions>
        <type-mapping>
          <from max-length="8000">String</from>
          <to>VARCHAR($$value$$)</to>
        </type-mapping>
        <type-mapping>
          <from>String</from>
          <to>VARCHAR(8000)</to>
        </type-mapping>
        <type-mapping>
          <from>Integer</from>
          <to>DECIMAL(10, 0)</to>
        </type-mapping>
        <type-mapping>
          <from>Long</from>
          <to>DECIMAL(38, 0)</to>
        </type-mapping>
        <type-mapping>
          <from>Float</from>
          <to>REAL</to>
        </type-mapping>
        <type-mapping>
          <from>Double</from>
          <to>DOUBLE PRECISION</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATETIME</to>
        </type-mapping>
        <type-mapping>
          <from>Time</from>
          <to>DATETIME</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>DATETIME</to>
        </type-mapping>
      </java-to-sql-type-conversions>
      <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
      <create-temp-table-command>CREATE TABLE $$name$$ $$column_list$$</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$</drop-temp-table-command>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    </jdbc-type-settings>
     

    Microsoft SQL Server (Progress DataDirect)

    <jdbc-type-settings>
      <type-name>sqlserver_datadirect</type-name>
      <driver>com.ddtek.jdbc.sqlserver.SQLServerDriver</driver>
      <connection-url-pattern>jdbc:datadirect:sqlserver://&lt;host&gt;:&lt;port&gt;;databaseName=&lt;database&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>MaxPooledStatements</key>
          <value>20</value>
        </connection-property>
        <connection-property>
          <key>LoginTimeout</key>
          <value>$m{loginTimeoutSeconds}</value>
        </connection-property>
        <connection-property>
          <key>selectMethod</key>
          <value>cursor</value>
        </connection-property>
      </connection-properties>
      <ping-command>SELECT 1</ping-command>
      <metadata-provider>com.spotfire.ws.im.ds.sql.sqlserver.SQLServerMetadataProvider</metadata-provider>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <date-format-expression>$$value$$</date-format-expression>
      <time-format-expression>convert(varchar(8), $$value$$, 108)</time-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
      <java-to-sql-type-conversions>
        <type-mapping>
          <from max-length="8000">String</from>
          <to>VARCHAR($$value$$)</to>
        </type-mapping>
        <type-mapping>
          <from>String</from>
          <to>VARCHAR(8000)</to>
        </type-mapping>
        <type-mapping>
          <from>Integer</from>
          <to>DECIMAL(10, 0)</to>
        </type-mapping>
        <type-mapping>
          <from>Long</from>
          <to>DECIMAL(38, 0)</to>
        </type-mapping>
        <type-mapping>
          <from>Float</from>
          <to>REAL</to>
        </type-mapping>
        <type-mapping>
          <from>Double</from>
          <to>DOUBLE PRECISION</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATETIME</to>
        </type-mapping>
        <type-mapping>
          <from>Time</from>
          <to>DATETIME</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>DATETIME</to>
        </type-mapping>
      </java-to-sql-type-conversions>
      <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
      <create-temp-table-command>CREATE TABLE $$name$$ $$column_list$$</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$</drop-temp-table-command>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    </jdbc-type-settings>

     

    MySQL

    MySQL (Connector/J)

    Data source template for MySQL using Connector/J JDBC driver.

    <jdbc-type-settings>
      <type-name>mysql</type-name>
      <driver>com.mysql.cj.jdbc.Driver</driver>
      <connection-url-pattern>jdbc:mysql://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>connectTimeout</key>
          <value>$m{loginTimeoutMilliseconds}</value>
        </connection-property>
      </connection-properties>
      <ping-command>SELECT 1</ping-command>
      <column-name-pattern>`$$name$$`</column-name-pattern>
      <table-name-pattern>`$$name$$`</table-name-pattern>
      <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
      <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
      <sql-runtime>com.spotfire.ws.im.ds.sql.mysql.MySQLDriverSQLRuntime</sql-runtime>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>false</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <column-alias-pattern>`$$name$$`</column-alias-pattern>
      <table-expression-pattern>[$$catalog$$.]$$table$$</table-expression-pattern>
      <date-format-expression>DATE_FORMAT($$value$$, '%Y-%m-%d')</date-format-expression>
      <time-format-expression>DATE_FORMAT($$value$$, '%H:%i:%s')</time-format-expression>
      <date-time-format-expression>DATE_FORMAT($$value$$, '%Y-%m-%d %H:%i:%s')</date-time-format-expression>
      <java-to-sql-type-conversions>
        <type-mapping>
          <from max-length="255">String</from>
          <to>VARCHAR($$value$$)</to>
        </type-mapping>
        <type-mapping>
          <from>String</from>
          <to>TEXT</to>
        </type-mapping>
        <type-mapping>
          <from>Integer</from>
          <to>DECIMAL(10)</to>
        </type-mapping>
        <type-mapping>
          <from>Long</from>
          <to>DECIMAL(38)</to>
        </type-mapping>
        <type-mapping>
          <from>Float</from>
          <to>REAL</to>
        </type-mapping>
        <type-mapping>
          <from>Double</from>
          <to>DOUBLE PRECISION</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATE</to>
        </type-mapping>
        <type-mapping>
          <from>Time</from>
          <to>TIME</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>DATETIME</to>
        </type-mapping>
      </java-to-sql-type-conversions>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    </jdbc-type-settings>
     

    MySQL (Progress DataDirect)

    <jdbc-type-settings>
      <type-name>mysql_datadirect</type-name>
      <driver>com.ddtek.jdbc.mysql.MySQLDriver</driver>
      <connection-url-pattern>jdbc:datadirect:mysql://&lt;host&gt;:&lt;port&gt;;databaseName=&lt;database&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>MaxPooledStatements</key>
          <value>20</value>
        </connection-property>
        <connection-property>
          <key>LoginTimeout</key>
          <value>$m{loginTimeoutSeconds}</value>
        </connection-property>
      </connection-properties>
      <ping-command>SELECT 1</ping-command>
      <column-name-pattern>`$$name$$`</column-name-pattern>
      <table-name-pattern>`$$name$$`</table-name-pattern>
      <catalog-name-pattern>`$$name$$`</catalog-name-pattern>
      <procedure-name-pattern>`$$name$$`</procedure-name-pattern>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>false</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <column-alias-pattern>`$$name$$`</column-alias-pattern>
      <table-expression-pattern>[$$catalog$$.]$$table$$</table-expression-pattern>
      <date-format-expression>DATE_FORMAT($$value$$, '%Y-%m-%d')</date-format-expression>
      <time-format-expression>DATE_FORMAT($$value$$, '%H:%i:%s')</time-format-expression>
      <date-time-format-expression>DATE_FORMAT($$value$$, '%Y-%m-%d %H:%i:%s')</date-time-format-expression>
      <java-to-sql-type-conversions>
        <type-mapping>
          <from max-length="255">String</from>
          <to>VARCHAR($$value$$)</to>
        </type-mapping>
        <type-mapping>
          <from>String</from>
          <to>TEXT</to>
        </type-mapping>
        <type-mapping>
          <from>Integer</from>
          <to>DECIMAL(10)</to>
        </type-mapping>
        <type-mapping>
          <from>Long</from>
          <to>DECIMAL(38)</to>
        </type-mapping>
        <type-mapping>
          <from>Float</from>
          <to>REAL</to>
        </type-mapping>
        <type-mapping>
          <from>Double</from>
          <to>DOUBLE PRECISION</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATE</to>
        </type-mapping>
        <type-mapping>
          <from>Time</from>
          <to>TIME</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>DATETIME</to>
        </type-mapping>
      </java-to-sql-type-conversions>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    </jdbc-type-settings>

    Oracle

    Oracle Database with SSL encryption

    For connecting to an Oracle database with SSL encryption, the recommendation is to review the JDBC Client-Side Security Features documentation, specifically the Support for SSL section. In more recent driver versions, one preferred method is to use a certificate in a Java keystore file. See Configuring LDAPS in the Spotfire Server Installation Manual for options for handling certificates in Spotfire Server. In addition to the certificate file, changes to the connection string may also be required. See the Oracle JDBC driver documentation for details, since the available options can change depending on the database and driver version.
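
    As an illustration of the keystore approach, truststore settings can be passed as connection properties in the template. The property keys below are the standard Oracle/JSSE names described in the Oracle SSL white paper referenced further down; treat this as a sketch and confirm the exact properties and values for your driver version:

    <connection-properties>
        <connection-property>
            <!-- Path to the Java keystore file holding the database certificate (example path) -->
            <key>javax.net.ssl.trustStore</key>
            <value>/path/to/truststore.jks</value>
        </connection-property>
        <connection-property>
            <key>javax.net.ssl.trustStoreType</key>
            <value>JKS</value>
        </connection-property>
        <connection-property>
            <key>javax.net.ssl.trustStorePassword</key>
            <value>mypassword</value>
        </connection-property>
    </connection-properties>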

    Note: The template below is based on Oracle with Java 7 and will only work if SSL is used only for encryption (not for authentication) on the Oracle database. Different connection properties are required for SSL authentication to Oracle DB.

    References

    http://www.oracle.com/technetwork/topics/wp-oracle-jdbc-thin-ssl-130128.pdf

    http://docs.oracle.com/cd/B19306_01/java.102/b14355/sslthin.htm#BABDJBAE

    Data Source Template for Oracle Database with SSL encryption (example)

    <jdbc-type-settings>
      <type-name>OracleSSL</type-name>
      <driver>oracle.jdbc.OracleDriver</driver>
      <connection-url-pattern>jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=&lt;host&gt;)(PORT=&lt;port&gt;))(CONNECT_DATA=(SERVICE_NAME=&lt;servicename&gt;)))</connection-url-pattern>
      <connection-properties>
    	<connection-property> 
    	<key>oracle.net.ssl_cipher_suites</key> 
    	<value>TLS_RSA_WITH_AES_256_CBC_SHA256, SSL_DH_anon_WITH_3DES_EDE_CBC_SHA, SSL_DH_anon_WITH_RC4_128_MD5, SSL_DH_anon_WITH_DES_CBC_SHA</value>
    	</connection-property>
      </connection-properties>
      <ping-command>SELECT 1 FROM DUAL</ping-command>
      <metadata-provider>com.spotfire.ws.im.ds.sql.oracle.OracleMetadataProvider</metadata-provider>
      <sql-filter>com.spotfire.ws.im.ds.sql.oracle.OracleSQLFilter</sql-filter>
      <sql-runtime>com.spotfire.ws.im.ds.sql.oracle.OracleSQLRuntime</sql-runtime>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>7000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$schema$$.]$$table$$[@$$catalog$$]</table-expression-pattern>
      <procedure-expression-pattern>[$$schema$$.]$$procedure$$[@$$catalog$$]</procedure-expression-pattern>
      <date-format-expression>TO_CHAR($$value$$, 'YYYY-MM-DD')</date-format-expression>
      <time-format-expression>TO_CHAR($$value$$, 'HH24:MI:SS')</time-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
      <date-time-literal-format-expression>TO_DATE('$$value$$', 'YYYY-MM-DD HH24:MI:SS')</date-time-literal-format-expression>
      <java-to-sql-type-conversions>
        <type-mapping>
          <from max-length="4000">String</from>
          <to>VARCHAR2($$value$$)</to>
        </type-mapping>
        <type-mapping>
          <from>String</from>
          <to>VARCHAR2(4000)</to>
        </type-mapping>
        <type-mapping>
          <from>Integer</from>
          <to>NUMBER(10)</to>
        </type-mapping>
        <type-mapping>
          <from>Long</from>
          <to>NUMBER(38)</to>
        </type-mapping>
        <type-mapping>
          <from>Float</from>
          <to>REAL</to>
        </type-mapping>
        <type-mapping>
          <from>Double</from>
          <to>FLOAT</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATE</to>
        </type-mapping>
        <type-mapping>
          <from>Time</from>
          <to>DATE</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>DATE</to>
        </type-mapping>
      </java-to-sql-type-conversions>
      <create-temp-table-command>CREATE GLOBAL TEMPORARY TABLE $$name$$ $$column_list$$ ON COMMIT PRESERVE ROWS</create-temp-table-command>
      <drop-temp-table-command>TRUNCATE TABLE $$name$$;DROP TABLE $$name$$</drop-temp-table-command>
      <lob-threshold>4000</lob-threshold>
      <procedure-table-jdbc-type>-10</procedure-table-jdbc-type>
      <procedure-table-type-name>REF CURSOR</procedure-table-type-name>
    </jdbc-type-settings>

    Oracle (Progress DataDirect)

    <jdbc-type-settings>
      <type-name>oracle_datadirect</type-name>
      <driver>com.ddtek.jdbc.oracle.OracleDriver</driver>
      <connection-url-pattern>jdbc:datadirect:oracle://&lt;host&gt;:&lt;port1521&gt;;SID=&lt;sid&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>MaxPooledStatements</key>
          <value>20</value>
        </connection-property>
        <connection-property>
          <key>LoginTimeout</key>
          <value>$m{loginTimeoutSeconds}</value>
        </connection-property>
      </connection-properties>
      <ping-command>SELECT 1 FROM DUAL</ping-command>
      <metadata-provider>com.spotfire.ws.im.ds.sql.oracle.DataDirectMetadataProvider</metadata-provider>
      <sql-filter>com.spotfire.ws.im.ds.sql.oracle.OracleSQLFilter</sql-filter>
      <sql-runtime>com.spotfire.ws.im.ds.sql.oracle.OracleSQLRuntime</sql-runtime>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>7000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$schema$$.]$$table$$[@$$catalog$$]</table-expression-pattern>
      <procedure-expression-pattern>[$$schema$$.]$$procedure$$[@$$catalog$$]</procedure-expression-pattern>
      <date-format-expression>TO_CHAR($$value$$, 'YYYY-MM-DD')</date-format-expression>
      <time-format-expression>TO_CHAR($$value$$, 'HH24:MI:SS')</time-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
      <date-time-literal-format-expression>TO_TIMESTAMP('$$value$$', 'YYYY-MM-DD HH24:MI:SS.FF')</date-time-literal-format-expression>
      <java-to-sql-type-conversions>
        <type-mapping>
          <from max-length="4000">String</from>
          <to>VARCHAR2($$value$$)</to>
        </type-mapping>
        <type-mapping>
          <from>String</from>
          <to>VARCHAR2(4000)</to>
        </type-mapping>
        <type-mapping>
          <from>Integer</from>
          <to>NUMBER(10)</to>
        </type-mapping>
        <type-mapping>
          <from>Long</from>
          <to>NUMBER(38)</to>
        </type-mapping>
        <type-mapping>
          <from>Float</from>
          <to>REAL</to>
        </type-mapping>
        <type-mapping>
          <from>Double</from>
          <to>FLOAT</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATE</to>
        </type-mapping>
        <type-mapping>
          <from>Time</from>
          <to>DATE</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>DATE</to>
        </type-mapping>
      </java-to-sql-type-conversions>
      <create-temp-table-command>CREATE GLOBAL TEMPORARY TABLE $$name$$ $$column_list$$ ON COMMIT PRESERVE ROWS</create-temp-table-command>
      <drop-temp-table-command>TRUNCATE TABLE $$name$$;DROP TABLE $$name$$</drop-temp-table-command>
      <lob-threshold>4000</lob-threshold>
      <procedure-table-jdbc-type>-10</procedure-table-jdbc-type>
      <procedure-table-type-name>REF CURSOR</procedure-table-type-name>
    </jdbc-type-settings>

    Netezza

    Data source template for Netezza (example)

    <jdbc-type-settings>
      <type-name>netezza</type-name>
      <driver>org.netezza.Driver</driver>
      <connection-url-pattern>jdbc:netezza://&lt;host&gt;:5480/&lt;database&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>loginTimeout</key>
          <value>$m{loginTimeoutSeconds}</value>
        </connection-property>
      </connection-properties>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>false</supports-procedures>
    </jdbc-type-settings>
     

    PostgreSQL

    Data Source Template for PostgreSQL (example)

    <jdbc-type-settings>
      <type-name>postgreSQL</type-name>
      <display-name>postgreSQL</display-name>
      <driver>org.postgresql.Driver</driver>
      <connection-url-pattern>
        jdbc:postgresql://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;
      </connection-url-pattern>
      <ping-command>select version()</ping-command>
      <supports-procedures>true</supports-procedures>
      <condition-list-threshold>10000</condition-list-threshold>
      <date-literal-format-expression>{d '$$value$$'}</date-literal-format-expression>
      <time-literal-format-expression>{t '$$value$$'}</time-literal-format-expression>
      <date-time-literal-format-expression>{ts '$$value$$'}</date-time-literal-format-expression>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
      <sql-runtime>com.spotfire.ws.im.ds.sql.postgresql.PostgresSQLRuntime</sql-runtime>
    </jdbc-type-settings>
     

    Presto

    This data source template for Presto is based on the information available on the page Presto 337 Documentation: 3.2. JDBC Driver. The catalog and schema parameters are not required in the connection URL.
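
    Since the catalog and schema parameters are optional, the connection-url-pattern can be reduced to host and port only. The line below is an illustrative variant of the pattern used in the template that follows:

    <connection-url-pattern>jdbc:presto://&lt;host&gt;:&lt;port&gt;</connection-url-pattern>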

    <jdbc-type-settings>
    	<type-name>PrestoKerberousTesting</type-name>
    	<driver>io.prestosql.jdbc.PrestoDriver</driver>
    	<connection-url-pattern>jdbc:presto://&lt;host&gt;:&lt;port&gt;/&lt;catalog&gt;/&lt;schema&gt;</connection-url-pattern>
    	<supports-catalogs>true</supports-catalogs>
    	<supports-schemas>true</supports-schemas>
    	<date-format-expression>$$value$$</date-format-expression>
    	<date-literal-format-expression>date '$$value$$'</date-literal-format-expression>
    	<time-format-expression>$$value$$</time-format-expression>
    	<time-literal-format-expression>time '$$value$$'</time-literal-format-expression>
    	<date-time-format-expression>$$value$$</date-time-format-expression>
    	<date-time-literal-format-expression>timestamp '$$value$$'</date-time-literal-format-expression>
    </jdbc-type-settings>
     

    Progress OpenEdge

    Data Source Template for Progress OpenEdge (example)

    <jdbc-type-settings>
            <type-name>OpenEdge</type-name>
            <display-name>OpenEdge</display-name>
            <driver>com.ddtek.jdbc.openedge.OpenEdgeDriver</driver>
            <connection-url-pattern>
                  jdbc:datadirect:openedge://&lt;HOST&gt;:&lt;PORT&gt;;databaseName=&lt;DB_NAME&gt;
            </connection-url-pattern>
            <ping-command></ping-command>
            <supports-catalogs>false</supports-catalogs>
    </jdbc-type-settings>
     

    SAP HANA

    For SAP HANA connectivity in Spotfire, the SAP HANA JDBC driver, ngdbc.jar (either the 32-bit or 64-bit version), is required.

    Data Source Template for SAP HANA (example)

    <jdbc-type-settings>
      <type-name>Hana</type-name>
      <driver>com.sap.db.jdbc.Driver</driver>
      <connection-url-pattern>jdbc:sap://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;</connection-url-pattern>
      <ping-command>select 1 from dummy</ping-command>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <table-types>TABLE, CALC VIEW, OLAP VIEW, JOIN VIEW, HIERARCHY VIEW, VIEW</table-types>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
      <condition-list-threshold>2000</condition-list-threshold>
      <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
      <create-temp-table-command>CREATE LOCAL TEMPORARY TABLE $$name$$ $$column_list$$;</create-temp-table-command>
      <drop-temp-table-command></drop-temp-table-command>
    </jdbc-type-settings>
     

    Data Source Template for SAP HANA - Kerberos, unconstrained delegation (example)

    <jdbc-type-settings>
      <type-name>Hana_krb</type-name>
      <driver>com.sap.db.jdbc.Driver</driver>
      <connection-url-pattern>jdbc:sap://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;</connection-url-pattern>
      <ping-command>select 1 from dummy</ping-command>
      <connection-properties>
        <connection-property>
          <key>spotfire.connection.pool.factory.data.source</key>
          <value>kerberos.data.source</value>
        </connection-property>
        <connection-property>
          <key>nativeAuthentication</key>
          <value>true</value>
        </connection-property>
        <connection-property>
          <key>authentication</key>
          <value>DatabaseMapping</value>
        </connection-property>
        <connection-property>
          <key>spotfire.kerberos.gsscredential.property</key>
          <value>user</value>
        </connection-property>
      </connection-properties>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <table-types>TABLE, CALC VIEW, OLAP VIEW, JOIN VIEW, HIERARCHY VIEW, VIEW</table-types>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
      <condition-list-threshold>2000</condition-list-threshold>
      <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
      <create-temp-table-command>CREATE LOCAL TEMPORARY TABLE $$name$$ $$column_list$$;</create-temp-table-command>
      <drop-temp-table-command></drop-temp-table-command>
    </jdbc-type-settings>
     

    SAS IOM/Workspace

    Data Source Template for SAS IOM/Workspace (example)

    <jdbc-type-settings>
      <type-name>SAS IOM</type-name>
      <driver>com.sas.rio.MVADriver</driver>
      <connection-url-pattern>jdbc:sasiom://&lt;host&gt;:&lt;port&gt;</connection-url-pattern>
      <ping-command />
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <column-name-pattern>$$name$$</column-name-pattern>
      <table-name-pattern>$$name$$</table-name-pattern>
      <schema-name-pattern>$$name$$</schema-name-pattern>
      <column-alias-pattern>$$name$$</column-alias-pattern>
    </jdbc-type-settings>
      

    Snowflake

    Data Source Template for Snowflake (example)

    <jdbc-type-settings>
      <type-name>Snowflake</type-name>
      <driver>net.snowflake.client.jdbc.SnowflakeDriver</driver>
      <connection-url-pattern>jdbc:snowflake://&lt;ACCOUNT&gt;.snowflakecomputing.com</connection-url-pattern>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>false</supports-procedures>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
      <connection-properties>
        <connection-property>
          <key>application</key>
          <value>TIBCO_Spotfire</value>
        </connection-property>
      </connection-properties>
    </jdbc-type-settings>
     

    Data Source Template for Snowflake reading BLOB data

    If you are reading BLOB data from Snowflake, such as geospatial geometry data, some additional configuration is needed in the data source template, and a custom jar needs to be deployed to the Spotfire Server. By default, Spotfire Server requests BLOB data as a binary stream from the Snowflake JDBC driver, and, as of June 2022, the Snowflake JDBC driver does not support the binary stream request. Instead, binary (BLOB) data must be requested as a byte array.

    You can use the Spotfire Server Information Services API, specifically the SQLRuntime interface, to override the default behavior. This requires copying the custom jar, SpotfireSnowflakeSQLRuntime.jar, to the Spotfire Server directory <version>/tomcat/custom-ext and modifying the data source template. The jar is attached to this page in a zip file, SpotfireSnowflakeSQLRuntime.zip. Unzip the file and copy the jar to the tomcat/custom-ext directory on all Spotfire Servers.

    The Snowflake data source template needs to reference the custom SQLRuntime.  This is done by adding the following line to the Snowflake data source template: 

     <sql-runtime>com.tibco.spotfireps.server.is.SnowflakeSQLRuntime</sql-runtime>
     

    After updating the data source template and saving the configuration to the database, all Spotfire Servers need to be restarted. With the addition of the custom SQLRuntime, the Snowflake data source template with BLOB support becomes the following:

    <jdbc-type-settings>
      <type-name>Snowflake</type-name>
      <driver>net.snowflake.client.jdbc.SnowflakeDriver</driver>
      <connection-url-pattern>jdbc:snowflake://&lt;ACCOUNT&gt;.snowflakecomputing.com</connection-url-pattern>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>false</supports-procedures>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
      <connection-properties>
        <connection-property>
          <key>application</key>
          <value>TIBCO_Spotfire</value>
        </connection-property>
      </connection-properties>
      <sql-runtime>com.tibco.spotfireps.server.is.SnowflakeSQLRuntime</sql-runtime>
    </jdbc-type-settings>
     

    Snowflake with OAuth

    Data Source Template for Snowflake with OAuth

    <jdbc-type-settings>
      <type-name>snowflake_oauth2</type-name>
      <driver>net.snowflake.client.jdbc.SnowflakeDriver</driver>
      <connection-url-pattern>jdbc:snowflake://&lt;account_identifier&gt;.snowflakecomputing.com</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>authenticator</key>
          <value>oauth</value>
        </connection-property>
      </connection-properties>
      <credentials-provider-class>com.spotfire.ws.im.ds.sql.TokenCredentialsProvider</credentials-provider-class>
      <credentials-provider-init-params>
        <parameter>
          <key>issuer</key>
          <value>&lt;issuer&gt;</value>
        </parameter>
        <parameter>
          <key>scope</key>
          <value></value>
        </parameter> 
        <parameter>
          <key>access_token_property_name</key>
          <value>token</value>
        </parameter>
      </credentials-provider-init-params>
      <supports-catalogs>true</supports-catalogs>
      <supports-procedures>false</supports-procedures>
      <supports-schemas>true</supports-schemas>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    </jdbc-type-settings>
     

    For more information, see Using OAuth2 with Information Services.

    SQream DB

    Data Source Template for SQream (example)

    <jdbc-type-settings>
    <type-name>Sqream</type-name>
    <display-name>Sqream</display-name>
    <driver>com.sqream.jdbc.SQDriver</driver>
    <connection-url-pattern> jdbc:SQream;IP=192.168.0.90;PORT=5000;URLCONECTION=demo;</connection-url-pattern>
    <supports-catalogs>false</supports-catalogs>
    <supports-schemas>false</supports-schemas>
    <supports-procedures>true</supports-procedures>
    <java-to-sql-type-conversions>
        <type-mapping>
          <from>VARCHAR(2048)</from>
          <to>STRING</to>
        </type-mapping>
        <type-mapping>
          <from>INT</from>
          <to>Integer</to>
        </type-mapping>
        <type-mapping>
          <from>BIGINT</from>
          <to>LongInteger</to>
        </type-mapping>
        <type-mapping>
          <from>Real</from>
          <to>Real</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATE</to>
        </type-mapping>
        <type-mapping>
          <from>BOOL</from>
          <to>Boolean</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>DATETIME</to>
        </type-mapping>
    </java-to-sql-type-conversions>
    </jdbc-type-settings>
     

    Sybase 

    Data source template for Sybase using the Sybase driver

    <jdbc-type-settings>
      <type-name>sybase</type-name>
      <driver>com.sybase.jdbc3.jdbc.SybDriver</driver>
      <connection-url-pattern>jdbc:sybase:Tds:&lt;host&gt;:&lt;port&gt;/&lt;database&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>LoginTimeout</key>
          <value>$m{loginTimeoutSeconds}</value>
        </connection-property>
      </connection-properties>
      <metadata-provider>com.spotfire.ws.im.ds.sql.sybase.SybaseMetadataProvider</metadata-provider>
      <sql-runtime>com.spotfire.ws.im.ds.sql.sybase.SybaseSQLRuntime</sql-runtime>
      <ping-command>SELECT 1</ping-command>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>false</supports-catalogs>
      <supports-schemas>false</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <date-format-expression>$$value$$</date-format-expression>
      <time-format-expression>convert(varchar(8), $$value$$, 108)</time-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
      <java-to-sql-type-conversions>
        <type-mapping>
          <from max-length="8000">String</from>
          <to>VARCHAR($$value$$)</to>
        </type-mapping>
        <type-mapping>
          <from>String</from>
          <to>VARCHAR(8000)</to>
        </type-mapping>
        <type-mapping>
          <from>Integer</from>
          <to>DECIMAL(10, 0)</to>
        </type-mapping>
        <type-mapping>
          <from>Long</from>
          <to>DECIMAL(38, 0)</to>
        </type-mapping>
        <type-mapping>
          <from>Float</from>
          <to>REAL</to>
        </type-mapping>
        <type-mapping>
          <from>Double</from>
          <to>DOUBLE PRECISION</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATETIME</to>
        </type-mapping>
        <type-mapping>
          <from>Time</from>
          <to>DATETIME</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>DATETIME</to>
        </type-mapping>
      </java-to-sql-type-conversions>
      <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
      <create-temp-table-command>CREATE TABLE $$name$$ $$column_list$$</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$</drop-temp-table-command>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    </jdbc-type-settings>
      

    Sybase (Progress DataDirect)

    <jdbc-type-settings>
      <type-name>sybase_datadirect</type-name>
      <driver>com.ddtek.jdbc.sybase.SybaseDriver</driver>
      <connection-url-pattern>jdbc:datadirect:sybase://&lt;host&gt;:&lt;port&gt;;databaseName=&lt;database&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>MaxPooledStatements</key>
          <value>20</value>
        </connection-property>
        <connection-property>
          <key>LoginTimeout</key>
          <value>$m{loginTimeoutSeconds}</value>
        </connection-property>
        <connection-property>
          <key>selectMethod</key>
          <value>cursor</value>
        </connection-property>
      </connection-properties>
      <metadata-provider>com.spotfire.ws.im.ds.sql.sybase.SybaseMetadataProvider</metadata-provider>
      <sql-runtime>com.spotfire.ws.im.ds.sql.sybase.SybaseSQLRuntime</sql-runtime>
      <ping-command>SELECT 1</ping-command>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>false</supports-catalogs>
      <supports-schemas>false</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>10000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$catalog$$.][$$schema$$.]$$table$$</table-expression-pattern>
      <date-format-expression>$$value$$</date-format-expression>
      <time-format-expression>convert(varchar(8), $$value$$, 108)</time-format-expression>
      <date-time-format-expression>$$value$$</date-time-format-expression>
      <java-to-sql-type-conversions>
        <type-mapping>
          <from max-length="8000">String</from>
          <to>VARCHAR($$value$$)</to>
        </type-mapping>
        <type-mapping>
          <from>String</from>
          <to>VARCHAR(8000)</to>
        </type-mapping>
        <type-mapping>
          <from>Integer</from>
          <to>DECIMAL(10, 0)</to>
        </type-mapping>
        <type-mapping>
          <from>Long</from>
          <to>DECIMAL(38, 0)</to>
        </type-mapping>
        <type-mapping>
          <from>Float</from>
          <to>REAL</to>
        </type-mapping>
        <type-mapping>
          <from>Double</from>
          <to>DOUBLE PRECISION</to>
        </type-mapping>
        <type-mapping>
          <from>Date</from>
          <to>DATETIME</to>
        </type-mapping>
        <type-mapping>
          <from>Time</from>
          <to>DATETIME</to>
        </type-mapping>
        <type-mapping>
          <from>DateTime</from>
          <to>DATETIME</to>
        </type-mapping>
      </java-to-sql-type-conversions>
      <temp-table-name-pattern>#$$name$$</temp-table-name-pattern>
      <create-temp-table-command>CREATE TABLE $$name$$ $$column_list$$</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$</drop-temp-table-command>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    </jdbc-type-settings>

    Teradata

    The recommended way to connect Spotfire to Teradata and Teradata Cloud is to use the native connector; see this article for more details.

    Data source template for Teradata.

    <jdbc-type-settings>
      <type-name>teradata</type-name>
      <driver>com.teradata.jdbc.TeraDriver</driver>
      <connection-url-pattern>jdbc:teradata://&lt;host&gt;</connection-url-pattern>
      <ping-command>SELECT 1</ping-command>
      <metadata-provider>com.spotfire.ws.im.ds.sql.teradata.TeradataMetadataProvider</metadata-provider>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <max-in-clause-size>1000</max-in-clause-size>
      <condition-list-threshold>7000</condition-list-threshold>
      <expand-in-clause>false</expand-in-clause>
      <table-expression-pattern>[$$schema$$.]$$table$$[@$$catalog$$]</table-expression-pattern>
      <procedure-expression-pattern>[$$schema$$.]$$procedure$$[@$$catalog$$]</procedure-expression-pattern>
      <create-temp-table-command>CREATE VOLATILE TABLE $$name$$ $$column_list$$ ON COMMIT PRESERVE ROWS; ET;</create-temp-table-command>
      <drop-temp-table-command>DROP TABLE $$name$$</drop-temp-table-command>
      <lob-threshold>2000</lob-threshold>
      <procedure-table-jdbc-type>-10</procedure-table-jdbc-type>
    </jdbc-type-settings>
     

    Teradata Cloud

    The recommended way to connect Spotfire to Teradata and Teradata Cloud is to use the native connector; see this article for more details.

    However, if you are using Information Links, certain settings might improve performance against Teradata Cloud.

    The Teradata JDBC driver supports quite a few parameters, including message size. They are documented here:

    http://developer.teradata.com/doc/connectivity/jdbc/reference/current/jdbcug_chapter_2.html#BABJIHBJ

    Setting the TCP parameter, TCP=SEND1048576+RECEIVE1048576, expands the socket send and receive buffer sizes to 1 MB from the default 64 KB. This could result in a significant throughput improvement.
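
    As an illustrative sketch only: with the Teradata JDBC driver, such parameters are appended to the connection URL after the host, so the connection-url-pattern in the Teradata template earlier on this page might be adjusted as follows (verify the exact syntax against the Teradata JDBC documentation linked above):

    <connection-url-pattern>jdbc:teradata://&lt;host&gt;/TCP=SEND1048576+RECEIVE1048576</connection-url-pattern>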

    TIBCO Data Virtualization (TDV)

    The recommended way to connect Spotfire to TIBCO Data Virtualization is to use the native connector.

    This is the data source template shipped with the Spotfire Server. It should be ready to use.

    Data Source Template for TIBCO Data Virtualization (TDV)

    <jdbc-type-settings>
      <type-name>Composite</type-name>
      <driver>cs.jdbc.driver.CompositeDriver</driver>
      <connection-url-pattern>jdbc:compositesw:dbapi@&lt;server&gt;:&lt;port9401&gt;?domain=&lt;domain&gt;&amp;dataSource=&lt;data_source&gt;</connection-url-pattern>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
      <metadata-provider>com.spotfire.ws.im.ds.sql.composite.CompositeMetadataProvider</metadata-provider>
      <supports-procedures>true</supports-procedures>
    </jdbc-type-settings>
     

    This template is shipped with the Spotfire Server.

    TIBCO Data Virtualization (TDV) with Kerberos

    The recommended way to connect Spotfire to TIBCO Data Virtualization is to use the native connector.

    This section assumes that the steps outlined in KB Article 000028976 have been completed and that TDV Studio is able to log you on via SSO.

    Data Source Template for TIBCO Data Virtualization (TDV) with Kerberos Constrained Delegation

    <jdbc-type-settings>
    	<type-name>Composite_Kerberos_constrained</type-name>
    	<driver>cs.jdbc.driver.CompositeDriver</driver>
    	<connection-url-pattern>jdbc:compositesw:dbapi@&lt;server&gt;:&lt;port9401&gt;?domain=&lt;domain&gt;&amp;dataSource=&lt;data_source&gt;&amp;authenticationMethod=&lt;auth_method&gt;&amp;kerberos.spn=&lt;spn&gt;&amp;kerberos.useTicketCache=true</connection-url-pattern>
    	<ping-command>SELECT 1 FROM /services/databases/system/all_domains</ping-command>
    	<connection-properties>
          <connection-property>
            <key>spotfire.connection.pool.factory.data.source</key>
            <value>kerberos.data.source</value>
          </connection-property>
          <connection-property>
            <key>spotfire.kerberos.gsscredential.property</key>
            <value>userGSSCredential</value>
          </connection-property>
    	</connection-properties>
    	<supports-catalogs>true</supports-catalogs>
    	<supports-schemas>true</supports-schemas>
    	<use-ansii-style-outer-join>true</use-ansii-style-outer-join>
    	<metadata-provider>com.spotfire.ws.im.ds.sql.composite.CompositeMetadataProvider</metadata-provider>
    	<supports-procedures>true</supports-procedures>
    </jdbc-type-settings>
     

    The exact parameters needed in the connection URL may vary between environments, depending on how Kerberos is configured. For Kerberos with unconstrained delegation, remove the spotfire.kerberos.gsscredential.property connection property, as in the sketch below.
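
    A minimal sketch of the connection-properties block for unconstrained delegation, assuming the rest of the template above stays the same:

    <connection-properties>
      <connection-property>
        <!-- The pooled Kerberos data source is kept; the gsscredential property is removed for unconstrained delegation -->
        <key>spotfire.connection.pool.factory.data.source</key>
        <value>kerberos.data.source</value>
      </connection-property>
    </connection-properties>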

    TIBCO Data Virtualization (TDV) with OAuth

    The recommended way to connect Spotfire to TIBCO Data Virtualization is to use the native connector.

    Data Source Template for TIBCO Data Virtualization (TDV) with OAuth

    <jdbc-type-settings>
      <type-name>tdv_oauth2</type-name>
      <driver>cs.jdbc.driver.CompositeDriver</driver>
      <connection-url-pattern>jdbc:compositesw:dbapi@&lt;server&gt;:&lt;port9401&gt;?domain=&lt;domain&gt;&amp;dataSource=&lt;data_source&gt;</connection-url-pattern>
      <connection-properties>
        <connection-property>
          <key>user</key>
          <value>dummy</value>
        </connection-property>
        <connection-property>
          <key>password</key>
          <value>dummy</value>
        </connection-property>
      </connection-properties>
      <credentials-provider-class>com.spotfire.ws.im.ds.sql.TokenCredentialsProvider</credentials-provider-class>
      <credentials-provider-init-params>
        <parameter>
          <key>issuer</key>
          <value>&lt;issuer&gt;</value>
        </parameter>
        <parameter>
          <key>scope</key>
          <value></value>
        </parameter>
        <parameter>
          <key>access_token_property_name</key>
          <value>AccessToken</value>
        </parameter>
      </credentials-provider-init-params>
      <supports-catalogs>true</supports-catalogs>
      <supports-procedures>true</supports-procedures>
      <supports-schemas>true</supports-schemas>
      <use-ansii-style-outer-join>true</use-ansii-style-outer-join>
      <metadata-provider>com.spotfire.ws.im.ds.sql.composite.CompositeMetadataProvider</metadata-provider>
    </jdbc-type-settings>
     

    For more information, see Using OAuth2 with Information Services.

    Tibero

    Data Source Template for Tibero Database (example)

    <jdbc-type-settings>
      <type-name>tibero</type-name>
      <driver>com.tmax.tibero.jdbc.TbDriver</driver>
      <connection-url-pattern>jdbc:tibero:thin:@&lt;host&gt;:&lt;port1521&gt;:&lt;sid&gt;</connection-url-pattern>
      <ping-command>SELECT 1</ping-command>
      <metadata-provider>com.spotfire.ws.im.ds.sql.BasicJDBCMetadataProvider</metadata-provider>
      <sql-filter>com.spotfire.ws.im.ds.sql.BasicSQLFilter</sql-filter>
      <sql-runtime>com.spotfire.ws.im.ds.sql.BasicSQLRuntime</sql-runtime>
      <fetch-size>10000</fetch-size>
      <batch-size>100</batch-size>
      <table-types>TABLE, VIEW</table-types>
      <supports-catalogs>true</supports-catalogs>
      <supports-schemas>true</supports-schemas>
      <supports-procedures>true</supports-procedures>
      <table-expression-pattern>[$$schema$$.]$$table$$[@$$catalog$$]</table-expression-pattern>
      <procedure-expression-pattern>[$$schema$$.]$$procedure$$[@$$catalog$$]</procedure-expression-pattern>
    </jdbc-type-settings>
     

    Vertica

    Data Source Template for Vertica (sample)

    <jdbc-type-settings>
        <type-name>Vertica</type-name>
        <driver>com.vertica.jdbc.Driver</driver>
        <connection-url-pattern>jdbc:vertica://&lt;host&gt;:&lt;port&gt;/&lt;db&gt;,&lt;dbadmin&gt;,&lt;dbpassword&gt;</connection-url-pattern>
        <ping-command>SELECT 1 FROM DUAL</ping-command>
    </jdbc-type-settings>
     

    Attachment: jdbc_data_source_templates.zip

     

     
