Tchimwa/Frontdoor-Appgw-PrivateEndpoint-Storage
Leveraging Azure Front Door to expose blob containers globally: case scenarios with Azure Front Door Classic and the new Azure Front Door Premium

In this lab, we will show how to configure an Azure Front Door and an Application Gateway in front of a storage account that uses a private endpoint, in order to expose the storage containers globally and secure access to them with custom domains.

Introduction

Azure Front Door delivers your content using global and local POPs distributed around the world, close to end users. Front Door normally needs a public VIP or a publicly available DNS name to route traffic to, so it supports most PaaS services. This scenario is special: the customer wants to limit access to their storage account using a private endpoint, but also wants the content of their blob containers to be available globally. The solution is to place an Application Gateway in front of the storage account using the private endpoint, then use the AppGW as the Front Door endpoint to make the blobs publicly available. We'll study the AppGW + Front Door case first, then work on the scenario with the new Front Door Premium + Private Link directly to the storage account.

Prerequisites and architecture

To complete this lab, not much is needed beyond what is listed below:

  • A valid Azure subscription
  • Git
  • An SSL certificate from a well-known CA for the AppGW and the Front Door custom domain (optional; an Azure-managed certificate can be used here)

Architecture will be as simple as it is shown below:

Architecture

Deployment and configuration

The Terraform template already has most of the essential configuration, but we will review the most important points of the lab for clarity. Feel free to clone the repo and use your own SSL certificate to deploy it.

git clone https://github.com/Tchimwa/Frontdoor-Appgw-PrivateEndpoint-Storage.git
cd ./Frontdoor-Appgw-PrivateEndpoint-Storage
terraform init
terraform plan 
terraform apply

Storage account

Restrict public access to the storage account, allowing only the subnet hosting the AppGW plus your own public IP for storage account management. With this setup, access is only allowed through the AppGW. This can be done from the Networking tab on the left panel of the storage account page.

StorageFW
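The same restriction can be sketched in the Terraform template with an `azurerm_storage_account_network_rules` resource; the resource names and the management IP below are hypothetical, assuming a recent azurerm provider:

```hcl
# Deny public traffic by default; allow only the AppGW subnet and
# a management IP (replace with your own public IP).
resource "azurerm_storage_account_network_rules" "appgwsto_rules" {
  storage_account_id         = azurerm_storage_account.appgwsto.id
  default_action             = "Deny"
  bypass                     = ["AzureServices"]
  virtual_network_subnet_ids = [azurerm_subnet.appgw_subnet.id]
  ip_rules                   = ["203.0.113.10"] # management public IP
}
```

Note that a subnet rule like this only applies when the subnet has the Microsoft.Storage service endpoint enabled; traffic arriving over the private endpoint bypasses these firewall rules.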

From the "Configuration" tab, leave the default TLS version (1.2, which will also be used on the Front Door and the AppGW) and the secure transfer setting, and make sure that "Allow Blob public access" is set to "Enabled".

BlobAccess

Let's make sure the private endpoint is successfully set up on the storage account. Use the Networking tab and select "Private endpoint connections".

pe

Private endpoint DNS configuration :

pednsconf
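In the Terraform template, this corresponds roughly to a private endpoint targeting the blob sub-resource, wired to the privatelink DNS zone; the resource names below are hypothetical:

```hcl
# Private DNS zone so the storage FQDN resolves to the private IP.
resource "azurerm_private_dns_zone" "blob" {
  name                = "privatelink.blob.core.windows.net"
  resource_group_name = azurerm_resource_group.appgwsto.name
}

# Private endpoint on the storage account's blob sub-resource.
resource "azurerm_private_endpoint" "sto_pe" {
  name                = "appgwsto-pe"
  location            = azurerm_resource_group.appgwsto.location
  resource_group_name = azurerm_resource_group.appgwsto.name
  subnet_id           = azurerm_subnet.pe_subnet.id

  private_service_connection {
    name                           = "appgwsto-pe-conn"
    private_connection_resource_id = azurerm_storage_account.appgwsto.id
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = "blob-dns"
    private_dns_zone_ids = [azurerm_private_dns_zone.blob.id]
  }
}
```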

Application Gateway

When it comes to the AppGW, some of the configuration will depend on your own preferences. I chose a Multi-site listener in case I want to add more backend targets in the future, but as of now we only have one hostname, "data.ced-sougang.com", and of course we are doing HTTPS, so port 443. I would like to mention that even the "Basic" listener type would have worked in this scenario!

AppgwListener

When it comes to the HTTP settings, the most important point is the well-known domain "blob.core.windows.net", which belongs to Microsoft; consequently its certificate is recognized by the AppGW as issued by a well-known CA. This actually saves you from having to upload a trusted root certificate for your HTTP settings.

WellKnownCA

Now, the custom probe used in the HTTP settings is quite interesting: the storage account is not a website, so it will not return the HTTP 200 OK that we're all accustomed to.

  • From public access, you will receive an HTTP 400 error response, as shown below from the browser Dev tools

    DevToolError

  • If you choose to have a private frontend IP on the AppGW, you will receive an HTTP 409 response instead of the 400

Based on those two response codes, we can set up our custom probe to match them so that incoming requests are forwarded to the storage account.

CustomProbe
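In Terraform, the equivalent probe block inside the `azurerm_application_gateway` resource would look roughly like this (names are hypothetical), with a match block listing the expected status codes:

```hcl
# Custom health probe accepting the 400/409 responses the storage
# endpoint returns instead of 200 OK.
probe {
  name                                      = "storage-probe"
  protocol                                  = "Https"
  path                                      = "/"
  interval                                  = 30
  timeout                                   = 30
  unhealthy_threshold                       = 3
  pick_host_name_from_backend_http_settings = true

  match {
    status_code = ["400", "409"]
  }
}
```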

As a result, the probe succeeds, since these HTTP error codes are expected. Mine shows 400 because I have a public frontend IP configuration on my AppGW.

ProbeResult

As a result, we have successful access to the storage file through the AppGW:

A DNS record mapping the hostname "data.ced-sougang.com" to the public IP of the AppGW was already created.

AppGW URL: https://data.ced-sougang.com/media/cloud-automation-logo.png

C:\Users\tcsougan>curl -I https://data.ced-sougang.com/media/cloud-automation-logo.png
HTTP/1.1 200 OK
Date: Mon, 11 Apr 2022 16:45:54 GMT
Content-Type: image/png
Content-Length: 40080
Connection: keep-alive
Content-MD5: HGv9IhxMvhtR+IS7npkLog==
Last-Modified: Mon, 28 Mar 2022 04:31:55 GMT
ETag: 0x8DA1073E3E197DE
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: e57426e7-501e-0011-61c3-4db674000000
x-ms-version: 2009-09-19
x-ms-lease-status: unlocked
x-ms-blob-type: BlockBlob

A connection troubleshoot run against the storage account FQDN from the AppGW confirms that the AppGW is currently using the private endpoint set up on the storage account:

ConnectionTB

Front Door configuration

Front Door Classic is the tier used in this first configuration. Add the backend pool with a full path to a file available in the blob container so the health probe succeeds. AFD doesn't accept any response other than 200 OK as a successful probe; there is no way to customize the matching codes as we did with the AppGW.

AFDBEPool
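In the Terraform template, the classic Front Door probe can be pointed at a real blob path so it gets a 200 back, roughly like this fragment of the `azurerm_frontdoor` resource (names are hypothetical):

```hcl
# Probe a real file so the backend returns 200 OK, the only
# status classic AFD accepts as healthy.
backend_pool_health_probe {
  name                = "blob-probe"
  protocol            = "Https"
  path                = "/media/cloud-automation-logo.png"
  interval_in_seconds = 30
}
```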

The backend host type here will be "Custom Host", and the hostname will be the hostname of the AppGW, which is also used as the backend host header.

AFDBackend

Enabling the frontdoor custom domain "media.ced-sougang.com":

az network front-door frontend-endpoint enable-https --front-door-name appgwsto-afd \
    --name media.ced-sougang.com \
    --resource-group appgwsto-rg \
    --certificate-source FrontDoor \
    --minimum-tls-version 1.2

On the afd-rule routing rule, which is the principal rule, the accepted protocol will be "HTTPS Only", as will the forwarding protocol. I also configured a second rule to redirect HTTP to HTTPS:

az network front-door routing-rule create --front-door-name appgwsto-afd \
    --frontend-endpoints media-ced-sougang-com \
    --custom-host media.ced-sougang.com \
    --name afd-rule-http \
    --resource-group appgwsto-rg \
    --route-type Redirect \
    --disabled false \
    --redirect-protocol HttpsOnly \
    --redirect-type Found

As a result, our storage account is exposed globally using Azure Front Door.

Frontdoor URL: https://media.ced-sougang.com/media/cloud-automation-logo.png

C:\Users\tcsougan>curl -I https://media.ced-sougang.com/media/cloud-automation-logo.png
HTTP/1.1 200 OK
Content-Length: 40080
Content-Type: image/png
Content-MD5: HGv9IhxMvhtR+IS7npkLog==
Last-Modified: Mon, 28 Mar 2022 04:31:55 GMT
ETag: 0x8DA1073E3E197DE
x-ms-request-id: 0f0beb1e-d01e-000f-12cb-4d5aac000000
x-ms-version: 2009-09-19
x-ms-lease-status: unlocked
x-ms-blob-type: BlockBlob
X-Cache: CONFIG_NOCACHE
X-Azure-Ref: 0NGlUYgAAAADpVziXMDlkTKGriNM0oPwCQVRMMzMxMDAwMTEwMDMxADJkMzA5NmVhLWE3MDgtNDE0Zi1hMjUzLTdjMWI3ZDIxMDU4Ng==
Date: Mon, 11 Apr 2022 17:45:23 GMT


C:\Users\tcsougan>curl -I http://media.ced-sougang.com/media/cloud-automation-logo.png
HTTP/1.1 302 Found
Content-Length: 0
Location: https://media.ced-sougang.com/media/cloud-automation-logo.png
X-Azure-Ref: 0PmlUYgAAAACg+S9DXT+hR4K5Qu5+vGZIQVRMMzMxMDAwMTA5MDM3ADJkMzA5NmVhLWE3MDgtNDE0Zi1hMjUzLTdjMWI3ZDIxMDU4Ng==
Date: Mon, 11 Apr 2022 17:45:33 GMT

Using the New Azure Front Door

The next step is to use the new Azure Front Door to expose the same storage account using Private Link, getting rid of the Application Gateway.

New AFD URL: https://newlink.ced-sougang.com/media/cloud-automation-logo.png

NewAFDArchitecture

Following the link below, I was able to connect my storage account to Azure Front Door Premium (the tier offering Private Link) using an Azure private endpoint.

Public doc: https://docs.microsoft.com/en-us/azure/private-link/tutorial-private-endpoint-storage-portal

Key points with the new Azure Front Door:

  • The endpoint is created while you are creating a route

Route

  • The usual Backend Pool from the classic AFD is replaced by the Origin Group, and the backend target by the Origin

Origin:

Origin

  • The private endpoint is created and managed by the AFD. The only thing you need to do is approve it on the storage account side once it is created. Supported origin types are Azure Blob Storage, App Service, and internal load balancers as of now.

newpe
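With the azurerm provider, this managed private endpoint is expressed as a private_link block on the Front Door origin, roughly as below (resource names are hypothetical; the Premium SKU is required):

```hcl
# Origin pointing at the blob endpoint over a Front Door-managed
# private link; the connection still has to be approved on the
# storage account side.
resource "azurerm_cdn_frontdoor_origin" "blob_origin" {
  name                           = "blob-origin"
  cdn_frontdoor_origin_group_id  = azurerm_cdn_frontdoor_origin_group.blob.id
  enabled                        = true
  host_name                      = azurerm_storage_account.appgwsto.primary_blob_host
  origin_host_header             = azurerm_storage_account.appgwsto.primary_blob_host
  certificate_name_check_enabled = true

  private_link {
    request_message        = "AFD private link to blob"
    target_type            = "blob"
    location               = azurerm_storage_account.appgwsto.location
    private_link_target_id = azurerm_storage_account.appgwsto.id
  }
}
```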

  • Finally, domain validation when you choose the Azure-managed certificate is done via a TXT record. Also, the custom domain has to be associated with the default endpoint and the route created earlier.

DomainValidation

As a result, our storage account is accessed securely using the new Azure Front Door.

C:\Users\tcsougan>curl -I https://newlink.ced-sougang.com/media/cloud-automation-logo.png
HTTP/1.1 200 OK
Content-Length: 40080
Content-Type: image/png
Content-MD5: HGv9IhxMvhtR+IS7npkLog==
Last-Modified: Mon, 28 Mar 2022 04:31:55 GMT
Accept-Ranges: bytes
ETag: 0x8DA1073E3E197DE
x-ms-request-id: b2605b13-501e-005c-58e5-4d7998000000
x-ms-version: 2009-09-19
x-ms-lease-status: unlocked
x-ms-blob-type: BlockBlob
x-azure-ref: 0RpRUYgAAAAAj+68ipG+yRrJpyjdCnTJvQVRBRURHRTEyMjAAOTQ2ZWRmNjQtMmNmMC00MDlhLWI1NDYtY2IyOGE0MWUyN2E4
X-Cache: CONFIG_NOCACHE
x-fd-int-roxy-upstream-error-info: NoError
X-Cache: CONFIG_NOCACHE
Date: Mon, 11 Apr 2022 20:49:10 GMT

Conclusion

The new Azure Front Door has certainly changed the game: it can now connect privately to some PaaS services in a few regions, and it offers more features than the classic tier. As we just saw, by accessing the blob via private endpoint, the new AFD lets the customer get rid of the Application Gateway and likely reduces latency between the customer and the blob files. However, the classic AFD remains available in those regions where the new AFD isn't yet supported.

More about the new AFD here.
