Azure Databricks: Read and write data into a SQL database
In this post I would like to explain how to connect to a SQL Server database from Databricks for reading and writing. One of my customer projects needed this, as the processed data moves from the Azure Data Lake layer to the aggregate layer, which is a SQL Server database. The steps to connect to SQL Server from Databricks are clearly described in the Azure documentation, but I would like to share my own experience. The code is developed in Spark Scala.

```scala
// Register the Microsoft SQL Server JDBC driver
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver")

// Fetch the credentials from a Databricks secret scope
val jdbcUsername = dbutils.secrets.get(scope = "dev-cluster-scope", key = "dev-sql-user")
val jdbcPassword = dbutils.secrets.get(scope = "dev-cluster-scope", key = "dev-sql-pwd")

val jdbcHostname = "SQL Server name here"
val jdbcPort = 1433
val jdbcDatabase = "database name"

// Build the JDBC connection URL
val jdbcUrl = s"jdbc:sqlserver://${jdbcHostname}:${jdbcPort};database=${jdbcDatabase}"

// Create a Properties() object to hold the connection credentials
import java.util.Properties
val connectionProperties = new Properties()
connectionProperties.put("user", jdbcUsername)
connectionProperties.put("password", jdbcPassword)
```
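With the connection URL and credentials in place, the actual read and write go through Spark's built-in JDBC data source. The sketch below assumes a `jdbcUrl` and a `connectionProperties` object built as described above; the table names `dbo.SourceTable` and `dbo.AggregateTable` are hypothetical examples, so substitute your own.

```scala
// Read an entire SQL Server table into a DataFrame
// (assumes jdbcUrl and connectionProperties are already defined as above)
val sourceDf = spark.read.jdbc(jdbcUrl, "dbo.SourceTable", connectionProperties)

// Alternatively, push a query down to SQL Server by passing it
// as a subquery with an alias in place of the table name
val filteredDf = spark.read.jdbc(
  jdbcUrl,
  "(SELECT Id, Amount FROM dbo.SourceTable WHERE Amount > 0) AS src",
  connectionProperties)

// Write the processed DataFrame to the aggregate layer.
// SaveMode "append" adds rows to an existing table;
// "overwrite" drops and recreates the table contents.
filteredDf.write
  .mode("append")
  .jdbc(jdbcUrl, "dbo.AggregateTable", connectionProperties)
```

Note that `overwrite` should be used carefully in an aggregate layer, since by default it replaces the whole target table rather than individual partitions.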