Azure Database for PostgreSQL Part 1 — Deployment

I started playing around with this service because a request came in about migrating data from a regular PostgreSQL server to a PostgreSQL server with customer-managed key (CMK) data encryption. Before we can even start testing the migration, we need to deploy a PostgreSQL server with CMK data encryption enabled.


When performing the steps mentioned here, Azure takes care of adding the PostgreSQL server's managed identity (MI) to the AKV access policy with Get, Wrap Key, and Unwrap Key permissions. That is a small discrepancy between the documentation and practice. When it is done, the "Data encryption" blade on the service should show information like below.

Next, we will try to connect to the server through the PostgreSQL command-line interface, psql. However, this is not thoroughly documented, so it could take quite a while to piece everything together on your own. Follow the steps below to save time.

  • Set the PostgreSQL DB server firewall correctly. Make sure "Deny public network access" is set to "No" and "Enforce SSL connection" is set to "Enabled".

In my case, I want to test the connection both from my local working machine (Ubuntu 20.04 on Windows Subsystem for Linux 2) and from Azure Cloud Shell, since the latter already has psql installed. So, I added both my local machine's outbound public IP address and Azure Cloud Shell's public IP address to the PostgreSQL server firewall rules.
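The firewall rule can also be created from the command line. A minimal sketch with the Azure CLI, assuming a hypothetical resource group `my-rg` and the server name used later in this post (replace the IP address with your own outbound address):

```shell
# Allow a single client IP through the server firewall.
# The resource group, server name, and IP below are placeholders.
az postgres server firewall-rule create \
  --resource-group my-rg \
  --server-name jonwpostgressqlsrv \
  --name allow-local-wsl2 \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10
```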

  • Download the required certificate from here and rename the file, dropping the ".pem" extension.
  • Copy the root certificate "BaltimoreCyberTrustRoot.crt" to the right path for each client to leverage.

Ubuntu 20.04 on WSL2:
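On Ubuntu, the download and the rename can be done in one step. The DigiCert URL below is the one the Azure documentation points to; the destination path is just an example:

```shell
# Download the Baltimore CyberTrust Root certificate and
# save it without the .pem extension
wget https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt.pem \
  -O ~/BaltimoreCyberTrustRoot.crt
```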

Azure Cloud Shell:

There is a storage account associated with Cloud Shell. The file structure looks similar to the one below.


Make sure to upload the root certificate into ".cloudconsole". The file path should look similar to the one below.

# File path: <username>-10033fff9ba66a39/.cloudconsole/BaltimoreCyberTrustRoot.crt

Another way to upload the root certificate file is to launch Azure Cloud Shell and upload the file through the console. This article provides the step-by-step details.
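Uploading from the command line is also possible. A sketch with the Azure CLI, where the storage account and file share names are hypothetical placeholders (look up the real ones under the storage account backing your Cloud Shell):

```shell
# Upload the root certificate to the Cloud Shell file share.
# The account name and share name below are placeholders.
az storage file upload \
  --account-name mycloudshellsa \
  --share-name cs-jonw-10033fff9ba66a39 \
  --source ./BaltimoreCyberTrustRoot.crt \
  --path .cloudconsole/BaltimoreCyberTrustRoot.crt
```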

Test Access

Finally, we can try to connect to the PostgreSQL server.

Ubuntu 20.04 on WSL2:

  • Ensure PSPing (or an equivalent TCP probe) gets a proper response on port 5432 from your outbound public IP address.
  • Install the psql client if it is not already present:
# Create the file repository configuration:
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
# Import the repository signing key:
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
# Update the package lists:
sudo apt-get update
# Install the latest version of PostgreSQL:
sudo apt-get -y install postgresql
  • Locate the downloaded root certificate file.
  • Copy and paste the connection command provided by this site.
psql "sslmode=verify-full sslrootcert=./BaltimoreCyberTrustRoot.crt host=jonwpostgressqlsrv.postgres.database.azure.com port=5432 dbname=postgres user=jonw@jonwpostgressqlsrv password=xxxx"
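If the connection succeeds, a quick one-off query over the same TLS connection confirms the server is reachable and reports its version (the server name below matches the example above; substitute your own):

```shell
# Run a single query instead of opening an interactive session
psql "sslmode=verify-full sslrootcert=./BaltimoreCyberTrustRoot.crt host=jonwpostgressqlsrv.postgres.database.azure.com port=5432 dbname=postgres user=jonw@jonwpostgressqlsrv password=xxxx" \
  -c "SELECT version();"
```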

Azure Cloud Shell:

  • Locate the uploaded root certificate file.
  • Copy and paste the command provided by this site.
psql "sslmode=verify-full sslrootcert=/usr/csuser/clouddrive/.cloudconsole/BaltimoreCyberTrustRoot.crt host=jonwpostgressqlsrv.postgres.database.azure.com port=5432 dbname=postgres user=jonw@jonwpostgressqlsrv password=xxxx"


When connecting with psql, if we accidentally put the wrong server name in the user parameter, such as "jonw@pgsqlsrv1" when the actual server should be "pgsqlsrv2", we would end up logged into pgsqlsrv1 as long as all the other information is accurate, because the Azure gateway routes the connection based on the server name suffix in the user parameter.

psql "sslmode=verify-full sslrootcert=./BaltimoreCyberTrustRoot.crt host=pgsqlsrv2.postgres.database.azure.com port=5432 dbname=postgres user=jonw@pgsqlsrv1 password=xxxx"

Make sure the user parameter information is correct. Otherwise, the administration would become messy and hard to manage.
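To double-check which server a session actually landed on, psql's `\conninfo` meta-command prints the database, user, host, and port of the current connection. A sketch using the hypothetical server names from the example above:

```shell
# Verify which server and user the session is really using
psql "sslmode=verify-full sslrootcert=./BaltimoreCyberTrustRoot.crt host=pgsqlsrv2.postgres.database.azure.com port=5432 dbname=postgres user=jonw@pgsqlsrv1 password=xxxx" \
  -c '\conninfo'
```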

That is all for the PostgreSQL server deployment and how to access it! I hope you can set up your own resource and start storing data in it in no time. Happy learning!
