normalian blog

Let's talk about Microsoft Azure, ASP.NET and Java!

Let's utilize Azure Front Door to route requests globally

Azure Front Door is a useful service for managing and monitoring your web traffic with global routing. It enables you to serve your global (multi-region) customers easily with managed, optimized routing. I believe readers of my blog want to acquire practical knowledge, so let's give it a try!

At first, you should note that Azure Front Door is a global resource, as you can confirm on the Azure Portal below. This means you no longer need to be bothered by regional considerations, at least for Azure Front Door itself.

What resources can we set up on Azure Front Door?

Azure Front Door can route to various types of backend resources, as shown below. You can also route requests outside the Azure platform by choosing "Custom host" and specifying FQDNs. In this post, you will learn how to use the "Storage", "Public IP Address" and "Custom host" backend types.

Set up a sample scenario

Here is an example architecture that I have set up as a sample for Azure Front Door.

After creating your Azure Front Door instance, choose "Front Door designer" and add your own domain on the Azure Portal like below.

Next, you can add your backend resources in the "Backend pools" menu like below.

You can add the "Microsoft Cloud Workshop" site like below by setting it up as a "Custom host".

Finally, you can set up rules for how HTTP/HTTPS requests are forwarded or redirected to backend pools. In this example, all requests matching "/storage01/*" will be forwarded to the "/" path of my Azure Storage account. Don't forget to specify "/storage01/*", not "/storage01/".

Access a sample with Azure Front Door

You can confirm each request routing like below.
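You can also check the routing from a shell. This is a minimal sketch; the host name below is hypothetical, so replace it with your own Front Door frontend endpoint.

```shell
# Hypothetical Front Door endpoint; replace with your own frontend host.
# Requests matching /storage01/* should be forwarded to the storage backend,
# while other paths hit the default backend pool.
curl -I "https://mycontoso.azurefd.net/storage01/index.html"
curl -I "https://mycontoso.azurefd.net/"
```

Comparing the response headers of the two requests is an easy way to confirm that each path reaches the intended backend.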

Azure NAT Gateway enables Azure VMs to access the internet without assigning a Public IP

I guess some folks are not familiar with Azure NAT Gateway; the feature is quite useful, but its use cases are a little hard to recognize. Here are my ideas for Azure NAT Gateway use cases.

  1. Azure VMs attached to a Standard internal load balancer used to require a PIP (Public IP) to access the internet. Now your Azure VMs can access the internet through Azure NAT Gateway without PIPs
  2. When Azure VMs access external services, they are identified by their individual PIPs, which takes a lot of effort to allow those accesses from Azure into your environments. Now you can simplify this by using Azure NAT Gateway, so all VMs share the gateway's outbound IP

Of course, there should be many more use cases for Azure NAT Gateway. Please let me know such use cases in the comments of this blog. Here are architecture diagrams for the #1 and #2 scenarios.

You can see that each Azure VM accesses the internet via Azure NAT Gateway, and their outbound global IP is identified as the PIP assigned to the Azure NAT Gateway.

Create and attach Azure NAT Gateway to subnets

Go to the Azure Portal and start creating one like below. You need to enter your Azure NAT Gateway name and choose a region here.

Next, choose the PIP to assign to your Azure NAT Gateway.

Finally, you need to associate this Azure NAT Gateway to your subnets like below.
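The portal steps above can also be scripted with the Azure CLI. This is a minimal sketch with hypothetical resource names, assuming the VNET and a Standard Public IP already exist:

```shell
# Create a NAT gateway and attach an existing Standard Public IP
# (all resource names here are hypothetical placeholders).
az network nat gateway create \
  --resource-group myRG \
  --name myNatGateway \
  --location westus \
  --public-ip-addresses myNatGatewayPip

# Associate the NAT gateway with a subnet; VMs in that subnet
# gain outbound internet access without individual PIPs.
az network vnet subnet update \
  --resource-group myRG \
  --vnet-name myVNet \
  --name default \
  --nat-gateway myNatGateway
```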

PIP access via Azure NAT Gateway

Log in to WildFlyVM0, which has no PIP, only a private IP. Next, run "curl ''" to confirm the global IP like below.

Access Azure VMs individually through Private Link connections

I have posted about Azure Private Link, covering both "private endpoint" and "private link service". There you can learn how to exclusively expose your endpoints to your other VNETs and how to utilize such endpoints from VMs in those VNETs.
That previous post introduced load balancing rules, but I believe you will also need to access specific VMs to take logs, confirm settings, and so on. Let's talk through an example of this case.

Expose WildFly endpoints with Private Link

As you may know, WildFly is one of the most popular Java application servers. WildFly exposes its webapp endpoint on port 8080 and its management endpoint on port 9990, so you have to meet requirements like below.

  • Set up a load balancing rule for the webapp endpoint - 8080
  • Access VMs individually for the management endpoint - 9990

At first you need to enable both "private endpoint" and "private link service" so the VNETs can communicate with each other. Then you can satisfy these requirements with "Load Balancing Rules" and "Inbound NAT Rules" on your Standard Load Balancer like below.
Use "Load Balancing Rules" for the webapp endpoints and "Inbound NAT Rules" to access each VM by assigning one port per VM. Don't forget to pass the bind parameters to WildFly; here is an example of launching WildFly.

/opt/wildfly/bin/ -b -bmanagement

Load Balancing Rules

Create a rule for port 8080 of WildFly like below.

Just specify a port mapping and a backend pool.
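If you prefer the CLI, the same load balancing rule can be sketched as follows. The resource names are hypothetical and assume the frontend IP configuration and backend pool already exist on the load balancer:

```shell
# Forward TCP 8080 on the load balancer frontend to port 8080
# of the WildFly backend pool (hypothetical names throughout).
az network lb rule create \
  --resource-group myRG \
  --lb-name myStandardLB \
  --name wildfly-webapp \
  --protocol Tcp \
  --frontend-port 8080 \
  --backend-port 8080 \
  --frontend-ip-name myFrontendIp \
  --backend-pool-name myWildFlyPool
```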

You can access the WildFly VMs with the VNIC IP like below.

Inbound NAT Rules

You need to create one rule per VM.

Here is the setting for VM1. Please note that "Port" is set to "9991"; you need to mimic this setting for the other VMs, e.g. "9992" for VM2.

You can access each VM by changing the port like below.
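The per-VM NAT rules can also be scripted. Here is a sketch with hypothetical names, mapping frontend ports 9991 and 9992 to the management port 9990 of VM1 and VM2 respectively:

```shell
# One inbound NAT rule per VM: frontend 9991 -> VM1:9990,
# frontend 9992 -> VM2:9990 (names are hypothetical).
az network lb inbound-nat-rule create \
  --resource-group myRG --lb-name myStandardLB \
  --name mgmt-vm1 --protocol Tcp \
  --frontend-port 9991 --backend-port 9990

az network lb inbound-nat-rule create \
  --resource-group myRG --lb-name myStandardLB \
  --name mgmt-vm2 --protocol Tcp \
  --frontend-port 9992 --backend-port 9990
```

After associating each rule with the corresponding VM's NIC, you reach VM1's management console on port 9991 and VM2's on 9992.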

How to expose your endpoints exclusively by using "private endpoint" and "private link service" of Azure Private Link

I believe Azure Private Link is a really essential feature, especially for enterprise customers, because it enables you to exclusively expose your Azure PaaS resources and Azure VM resources. At first, we should confirm that Azure Private Link offers two types of features.

  • private link service: Exposes endpoints through a Standard Load Balancer. These endpoints are consumed via a "private endpoint"
  • private endpoint: Lets you access Azure PaaS services (for example, Azure Storage and SQL Database) and endpoints exposed by a "private link service" over a private endpoint in your virtual network.

This is a simple architecture with Private Link. Private Link (Private Endpoint and Private Link Service) automatically creates VNICs in the VNETs where Private Link is enabled, like below.
Azure resources communicate with each other exclusively through the created NICs. In this image, CentOSVM01 on myVNet exposes its endpoints with an SLB (Standard Load Balancer), and the SLB privately exposes those endpoints with a Private Link Service, so WinVM01 can access CentOSVM01 by using a Private Endpoint.
You can see that the IP address spaces of the two VNETs overlap, but this still works thanks to Private Link.

What's benefits?

I believe one of the biggest benefits is that you no longer need to worry about overlapping IP addresses. VNET Peering is also a quite useful feature, but you always have to watch out for address overlaps: you will get error messages like below if you try to connect overlapping VNETs.

Try "private link service"

You can create your "private link service" just by following the article below. Please note that you must use a "Standard Load Balancer". In addition, you have to choose an "Internal" load balancer to expose your endpoints exclusively.
Quickstart - Create a Private Link service by using the Azure portal | Microsoft Docs

Go to the Private Link Center page on the Azure Portal and click "Create private link service" below.

You can find only Standard Load Balancers in this menu; set up each item properly by following the wizard.

Finally, you can confirm the result like here.

Try "private endpoint"

It's easy to enable a "private endpoint" for Azure PaaS features, but we have to use the command line for "private link service" endpoints. Here is a sample; also refer to az network private-endpoint | Microsoft Docs.

az login
az account set -s "your subscription ID"
az network private-endpoint create \
 --resource-group "resource group name of the VNET to connect your endpoint to" \
 --name "name of the private endpoint" \
 --vnet-name "VNET name to connect your endpoint to" \
 --subnet "subnet name of the VNET" \
 --private-connection-resource-id "/subscriptions/your subscription ID/resourceGroups/your resource group name/providers/Microsoft.Network/privateLinkServices/your private link service name" \
 --connection-name "name of the private link service connection" \
 --location "region, e.g. westus"

You can confirm your private endpoint like below if the command succeeds.
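You can also verify the result from the CLI; a sketch with hypothetical names:

```shell
# Show the provisioning state of the private endpoint and the ID of
# the NIC that Private Link created for it (names are hypothetical).
az network private-endpoint show \
  --resource-group myRG \
  --name myPrivateEndpoint \
  --query "{state:provisioningState, nic:networkInterfaces[0].id}"
```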

Access via Private Endpoint

Access the Windows VM using Remote Desktop at first, then access CentOSVM01 with its private IP like below.

It's also possible to utilize Azure Private DNS, so you can access it via an internal FQDN.

Object Replication - the easiest way to replicate Block BLOBs into other regions?

Object Replication is a new feature of Azure Storage. It enables you to transfer BLOB objects to different regions easily while minimizing latency. You might know Data redundancy - Azure Storage | Microsoft Docs, but that feature can replicate your BLOBs only to your paired region. In addition, it's a little tricky to reach your data in the paired region.

Object Replication replicates your Block BLOBs to containers in any region with just a few settings on your Storage accounts, so this feature should be quite useful for replicating your data across countries. I believe most readers of this article are quite busy, so here is a summary of Object Replication at this time - please note that Object Replication is in preview as of June 2020.

  • This feature is for Block BLOBs, so you can't use it for VHD files, i.e., Page BLOBs
  • It takes about two minutes to transfer BLOB objects regardless of region, but this will depend on size
  • All "Public Access Level" settings can be used: "Private", "Container" or "Blob". (Corrected on 6/14/2020; this post originally stated the source container had to be "Container" or "Blob", excluding "Private")
  • Replication is configured at the container level on your Azure Storage accounts. You can set up to two outbound policies per Storage account
  • Available only in France Central, Canada East and Canada Central as of June 2020
  • Pricing tiers ("Hot", "Cool" and "Archive") won't be propagated accurately; refer to the section below - updated on 6/14/2020

How to enable Object Replication on your subscription

Follow this article at first. You need to register a couple of resource providers because this feature depends on other features such as Change feed and Versioning. For reference, it took about a week to enable Object Replication on my subscription.
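As a sketch, the preview registration looks like the following from the CLI. The feature name is the one documented during the June 2020 preview and may change after GA, so treat it as an assumption and verify it against the current docs:

```shell
# Register the Object Replication preview feature on Microsoft.Storage
# (feature name from the June 2020 preview docs; verify before use).
az feature register --namespace Microsoft.Storage --name AllowObjectReplication
az provider register --namespace Microsoft.Storage

# Check the registration state; approval can take a while
# (about a week in my case).
az feature show --namespace Microsoft.Storage --name AllowObjectReplication \
  --query properties.state
```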

Setup and try Object Replication on your Storage accounts

After Object Replication is provisioned on your subscription, you will find the Object Replication menu on your storage accounts like below. You can confirm both destination and source accounts at once.

You can set up policies by specifying containers, filters and the "Copy over" option. This also makes it possible to control which objects should be copied into other accounts.
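The same policy can be sketched with the CLI. Account and container names below are hypothetical, and the `or-policy` commands shipped with the preview, so confirm the exact parameters against the current Azure CLI docs:

```shell
# Create an object replication policy on the destination account,
# replicating "src-container" on the source account to "dst-container".
# All names are hypothetical placeholders.
az storage account or-policy create \
  --account-name mydststorage \
  --resource-group myRG \
  --source-account mysrcstorage \
  --destination-account mydststorage \
  --source-container src-container \
  --destination-container dst-container
```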

As you can confirm below, it takes about one minute to replicate a file of less than 1 MB.

Today, it's possible to set up to two outbound policies per Storage account like below.

Pricing Tier propagation

I have tried three tier scenarios.

  • Red Box: Upload a file as "Hot" tier at first and change the tier into "Archive"
  • Green Box: Upload a file as "Archive" tier
  • Blue Box: Upload a file as "Cool" tier


As you can confirm with the screenshot, here are the results.

  • Red Box: Upload a file as "Hot" tier at first and change the tier to "Archive" -> the pricing tier isn't propagated to the destination blob
  • Green Box: Upload a file as "Archive" tier -> the blobs won't be copied into the destination containers
  • Blue Box: Upload a file as "Cool" tier -> "Cool" tier blobs are copied into the destination containers as "Hot" tier.

Azure VMs cost reduction tips for dev and test environment

I believe Azure VMs are the most popular feature among all Azure users, and Azure VM usage probably accounts for most of your Azure bill. You may require a high-performance VM at the beginning, because setting up your development or test environments takes some horsepower, but the requirements are not so high after the setup. Here are two good options that are friendly to your wallet.

  • Choose B-Series type for Azure VMs
  • Change disk type from Premium to Standard when your VMs are deallocated

Keep in mind not to adopt this approach for your production environments.

Choose B-Series type for Azure VMs

I believe there is no need to explain this topic too much. The B-Series offers burstable instances, which are a quite cost-effective way to utilize Azure VMs.

Change disk type from Premium to Standard when your VMs are deallocated

This is a little trickier than just choosing B-Series VMs. You might assume that you can't change disk types after attaching disks to your Azure VMs. That's partially true: it's not possible to change disk types while your Azure VMs are running, as shown below.

But you can find a quite interesting note in the red box in this image: you can change your disk types while your Azure VMs are deallocated, like below.
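Here is a minimal CLI sketch of this tip, with hypothetical resource names: deallocate the VM, downgrade the disk, and start the VM again when needed.

```shell
# Deallocate first: the disk SKU can only be changed while the VM is deallocated.
az vm deallocate --resource-group myRG --name myDevVM

# Switch the OS disk from Premium SSD to Standard HDD to save cost
# (disk name is a hypothetical placeholder).
az disk update --resource-group myRG --name myDevVM_OsDisk --sku Standard_LRS

# Start the VM again when you need it; switch back to Premium_LRS the same way.
az vm start --resource-group myRG --name myDevVM
```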

You can choose from three disk types - "Premium SSD", "Standard SSD" and "Standard HDD". What are the pros and cons? You can confirm details from both performance and pricing perspectives by referring to the articles below.

"Premium SSD" has much better IOPS than cheapest type "Standard HDD" but the price is almost three times. In addition this, test and development environments won't be utilized so much IOPS in most of cases. You should acquire quite good cost reduction by following this tips.

Tips to utilize Windows Server containers on AKS

Microsoft has announced that Azure Kubernetes Service (AKS) support for Windows Server containers is GA. This is a quite useful and essential feature for containerizing your ASP.NET Framework applications. In this article, you can pick up a few tips for utilizing Windows Server containers on AKS.

Enable Azure CNI (advanced) for Windows Server Container

Note that AKS requires the "Azure CNI (advanced)" network plugin to utilize Windows Server containers. Choose "Advanced" as the network configuration like below when you create your AKS clusters.

You can confirm on the Azure Portal that Azure CNI is enabled for your AKS clusters.

Next, you need to create a node pool with Windows as the OS type to deploy your Windows Server container applications like below.
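Adding a Windows node pool can also be done with the CLI; a sketch with hypothetical names, assuming the cluster was already created with Azure CNI:

```shell
# Add a Windows Server node pool to an existing AKS cluster.
# Windows node pool names are limited to 6 characters.
az aks nodepool add \
  --resource-group myRG \
  --cluster-name myAKSCluster \
  --name npwin \
  --os-type Windows \
  --node-count 1
```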

Windows Server Container size

Windows Server containers require far more capacity than Linux images. I just pushed a simple hello-world ASP.NET application into my Azure Container Registry (ACR), and it uses 1.08 GB in my ACR. It will take a long time to upload your container images the first time, so please mind your network bandwidth, not only your ACR capacity, when you push container images into ACRs.