Azure Databricks: Read and write data into a SQL database

In this post I would like to explain how to connect to a SQL Server database from Databricks to read and write data. One of my customer projects needed this, as processed data moves from the Azure Data Lake layer to the aggregate layer, which is a SQL Server database.

The steps to connect to SQL Server from Databricks are clearly described in the Azure documentation, but I would like to share my own experience.

The code is developed in Spark Scala.

// Ensure the SQL Server JDBC driver is available on the cluster's classpath
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver")

val jdbcUsername = dbutils.secrets.get(scope = "dev-cluster-scope", key = "dev-sql-user")
val jdbcPassword = dbutils.secrets.get(scope = "dev-cluster-scope", key = "dev-sql-pwd")
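The two lines above assume the secret scope and keys already exist in the workspace. One way to create them is with the legacy Databricks CLI (the scope and key names below match the code; the CLI must already be configured against your workspace):

```shell
# Create the secret scope used by the notebook
databricks secrets create-scope --scope dev-cluster-scope
# Store the SQL credentials under the keys the notebook reads;
# each command prompts for (or accepts) the secret value
databricks secrets put --scope dev-cluster-scope --key dev-sql-user
databricks secrets put --scope dev-cluster-scope --key dev-sql-pwd
```

Storing credentials in a secret scope keeps them out of the notebook source, which matters once the notebook is checked into version control or scheduled from ADF.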

val jdbcHostname = "SQL Server Name here"
val jdbcPort = 1433
val jdbcDatabase = "database name"
val jdbcUrl = s"jdbc:sqlserver://${jdbcHostname}:${jdbcPort};database=${jdbcDatabase}"
// Create a Properties() object to hold the parameters.
import java.util.Properties
val connectionProperties = new Properties()
connectionProperties.put("user", s"${jdbcUsername}")
connectionProperties.put("password", s"${jdbcPassword}")


val driverClass = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
connectionProperties.setProperty("Driver", driverClass)


spark.table("<<table name / data frame name to be exported to SQL Server>>")
     .write.mode("overwrite")
     .jdbc(jdbcUrl, "SQL Server table name to be created or overwritten", connectionProperties)
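The snippet above covers the write path; reading works the same way in reverse. A minimal sketch, reusing the jdbcUrl and connectionProperties defined above ("dbo.SourceTable" is a placeholder table name):

```scala
// Read a SQL Server table into a Spark DataFrame using the same
// connection settings defined above
val sqlDf = spark.read
  .jdbc(jdbcUrl, "dbo.SourceTable", connectionProperties)

// Inspect what came back
sqlDf.printSchema()
sqlDf.show(10)
```

Spark infers the DataFrame schema from the SQL Server table metadata, so no explicit schema definition is needed for the read.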


Keep this whole code in a notebook so it can be called from ADF, or keep the code inside a SQL Server table and pass it as a parameter to the notebook.
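When the notebook is triggered from an ADF notebook activity, the target table name can arrive as a widget parameter. A sketch, assuming a hypothetical widget named "targetTable" and the jdbcUrl and connectionProperties defined earlier:

```scala
// Declare a widget with a default value; ADF overrides it via
// the notebook activity's "Base parameters" setting
dbutils.widgets.text("targetTable", "dbo.DefaultTable")
val targetTable = dbutils.widgets.get("targetTable")

// "processed_data" is a placeholder for the table/view to export
spark.table("processed_data")
     .write.mode("overwrite")
     .jdbc(jdbcUrl, targetTable, connectionProperties)
```

This keeps one generic notebook serving multiple target tables, with ADF supplying the parameter per pipeline run.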
