normalian blog

Let's talk about Microsoft Azure, ASP.NET and Java!

Access Azure VMs individually through Private Link connections

I have already posted about Azure Private Link, covering both "private endpoint" and "private link service". That post explains how to expose your endpoints exclusively to other VNETs and how to consume those endpoints from VMs in other VNETs.
normalian.hatenablog.com
The previous post covered load balancing rules, but I believe you will also need to access specific VMs to collect logs, confirm settings and so on. Let's walk through an example.

Expose WildFly endpoints with Private Link

As you probably know, WildFly is one of the most popular Java application servers. WildFly exposes its web application endpoint on port 8080 and its management endpoint on port 9990, so you have to meet the requirements below.

  • Set up a load balancing rule for the web application endpoint - port 8080
  • Access each VM individually for the management endpoint - port 9990

First, you need to enable both "private endpoint" and "private link service" so the VNETs can communicate with each other. You can then satisfy these requirements with "Load Balancing Rules" and "Inbound NAT Rules" on your Standard Load Balancer, as shown below.
f:id:waritohutsu:20200626104601p:plain
Use "Load Balancing Rules" for the web application endpoints and "Inbound NAT Rules" to reach each VM individually by assigning a dedicated frontend port per VM. Don't forget to pass the bind parameters to WildFly; here is an example of launching it.

/opt/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8888 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false

Load Balancing Rules

Create a rule for port 8080 of WildFly like below.
f:id:waritohutsu:20200626105626p:plain

Just specify a port mapping and a backend pool.
f:id:waritohutsu:20200626105653p:plain

You can access the WildFly VMs through the private endpoint's VNIC IP as shown below.
f:id:waritohutsu:20200626105932p:plain
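
If you prefer the CLI over the portal, the rule can also be sketched with az like below; the resource group, load balancer, frontend IP configuration and backend pool names are placeholders, not values from this post.

# Sketch: load balancing rule that forwards port 8080 to the WildFly backend pool
az network lb rule create \
 --resource-group myResourceGroup \
 --lb-name myStandardLB \
 --name wildfly-http \
 --protocol Tcp \
 --frontend-port 8080 \
 --backend-port 8080 \
 --frontend-ip-name LoadBalancerFrontEnd \
 --backend-pool-name myBackendPool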

Inbound NAT Rules

You need to create one rule per VM, as shown below.
f:id:waritohutsu:20200626105741p:plain

Here is the setting for VM1. Note that "Port" is set to "9991"; mirror this setting for the other VMs, e.g., 9992 for VM2.
f:id:waritohutsu:20200626105955p:plain

You can access each VM by changing the port as shown below.
f:id:waritohutsu:20200626110158p:plain
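
A rough CLI equivalent for VM1 could look like the sketch below, assuming the target is WildFly's management port 9990 as described above; all resource names are placeholders, and you would repeat it with frontend port 9992 for VM2 and so on.

# Sketch: map frontend port 9991 to WildFly's management port 9990 on VM1
az network lb inbound-nat-rule create \
 --resource-group myResourceGroup \
 --lb-name myStandardLB \
 --name wildfly-mgmt-vm1 \
 --protocol Tcp \
 --frontend-port 9991 \
 --backend-port 9990 \
 --frontend-ip-name LoadBalancerFrontEnd

# Attach the rule to VM1's NIC IP configuration
az network nic ip-config inbound-nat-rule add \
 --resource-group myResourceGroup \
 --nic-name vm1-nic \
 --ip-config-name ipconfig1 \
 --lb-name myStandardLB \
 --inbound-nat-rule wildfly-mgmt-vm1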

How to expose your endpoints exclusively by using "private endpoint" and "private link service" of Azure Private Link

I believe Azure Private Link is a really essential feature, especially for enterprise customers, because it enables you to expose your Azure PaaS resources and Azure VM resources exclusively. First, let's confirm that Azure Private Link consists of two types of features.

  • private link service: Exposes endpoints by using a Standard Load Balancer. These endpoints are consumed by a "private endpoint"
  • private endpoint: You can access Azure PaaS Services (for example, Azure Storage and SQL Database) and your endpoints exposed by "private link service" over a private endpoint in your virtual network.

Here is a simple architecture with Private Link. Private Link (Private Endpoint and Private Link Service) automatically creates VNICs in the VNETs where Private Link is enabled, as shown below.
f:id:waritohutsu:20200620084516p:plain
Azure resources communicate with each other exclusively through the created NICs. In this image, CentOSVM01 on myVNet exposes its endpoints through an SLB (Standard Load Balancer), the SLB privately exposes those endpoints with Private Link Service, and WinVM01 can then access CentOSVM01 by using a Private Endpoint.
You can see that the IP address spaces of both VNETs overlap, but it still works thanks to Private Link.

What are the benefits?

I believe one of the biggest benefits is that you no longer need to worry about overlapping IP address spaces. VNET Peering is also a quite useful feature, but you always have to watch out for address overlaps. You will get the error message below if you try to peer VNETs with overlapping address spaces.
f:id:waritohutsu:20200620065347p:plain

Try "private link service"

You can create your "private link service" by just following the article below. Note that you must use a "Standard Load Balancer". In addition, you have to choose an "Internal" load balancer to expose your endpoints exclusively.
Quickstart - Create a Private Link service by using the Azure portal | Microsoft Docs

Go to the Private Link Center page on the Azure Portal and click "Create private link service" as shown below.
f:id:waritohutsu:20200620070858p:plain

You can find only Standard Load Balancers in this menu; set up each item properly by following the wizard.
f:id:waritohutsu:20200620071413p:plain

Finally, you can confirm the result as shown here.
f:id:waritohutsu:20200620071557p:plain
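
If you prefer scripting these steps instead of the portal, here is a minimal az sketch; the resource group, VNet, subnet, internal Standard Load Balancer and frontend IP configuration names are placeholders you would replace with your own.

# Sketch: create a private link service in front of an internal Standard Load Balancer
az network private-link-service create \
 --resource-group myResourceGroup \
 --name myPrivateLinkService \
 --vnet-name myVNet \
 --subnet mySubnet \
 --lb-name myInternalLB \
 --lb-frontend-ip-configs LoadBalancerFrontEnd \
 --location westus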

Try "private endpoint"

It's easy to enable a "private endpoint" for Azure PaaS features, but we have to use the command line for "private link service" endpoints. Here is a sample; also refer to az network private-endpoint | Microsoft Docs.

az login
az account set -s "your subscription ID"
az network private-endpoint create \
 --resource-group "resource group name having a vnet to connect your endpoints" \
 --name "name of private endpoint" \
 --vnet-name "vnet name to connect your endpoints" \
 --subnet "subnet name of the vnet" \
 --private-connection-resource-id "/subscriptions/your subscription ID/resourceGroups/your resource group name/providers/Microsoft.Network/privateLinkServices/your endpoint name" \
 --connection-name "name of the private link service connection" \
 --location "region ex. westus"

You can confirm your private endpoint like below if the command works well.
f:id:waritohutsu:20200620081627p:plain
f:id:waritohutsu:20200620081636p:plain
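
You can also check the result from the CLI, for example by printing the provisioning state with the same placeholder names as above.

az network private-endpoint show \
 --resource-group "resource group name having a vnet to connect your endpoints" \
 --name "name of private endpoint" \
 --query provisioningState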

Access via Private Endpoint

First, access the Windows VM by using Remote Desktop, then access CentOSVM01 through its private IP as shown below.
f:id:waritohutsu:20200620085034p:plain

It's also possible to utilize Azure Private DNS, so you can access the VM by its internal FQDN.
f:id:waritohutsu:20200620085211p:plain
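
As a rough sketch of such a Private DNS setup, you could create a private zone, link it to the VNet of WinVM01, and register the private endpoint NIC's IP as an A record; the zone name, link name, VNet name and IP address below are only illustrative.

# Create a private DNS zone and link it to the consumer VNet
az network private-dns zone create --resource-group myResourceGroup --name internal.example.local
az network private-dns link vnet create --resource-group myResourceGroup --zone-name internal.example.local --name myDnsLink --virtual-network myVNet2 --registration-enabled false
# Register the private endpoint NIC's IP as an A record, e.g. centosvm01.internal.example.local
az network private-dns record-set a add-record --resource-group myResourceGroup --zone-name internal.example.local --record-set-name centosvm01 --ipv4-address 10.0.0.5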

Object Replication - easiest way to replicate Block BLOBs into other regions?

Object Replication is a new feature of Azure Storage. It enables you to transfer BLOB objects to different regions easily while minimizing latency. You might know Data redundancy - Azure Storage | Microsoft Docs, but that feature can only replicate your BLOBs to your paired region. In addition, it's a little tricky to reach your data in the paired region.

Object Replication lets you replicate your Block BLOBs to containers in any region with just a few settings on your Storage accounts, so this feature should be quite useful for replicating your data across countries. I believe most readers of this article are quite busy, so here is a summary of Object Replication at this time - please note that Object Replication is still in preview as of June 2020.

  • This feature is for Block BLOBs, so you can't utilize it for VHD files, namely, Page BLOBs
  • It takes about 2 minutes to transfer BLOB objects regardless of region, but it depends on size.
  • (Outdated - updated on 6/14/2020) Source containers needed their "Public Access Level" set to "Container" or "Blob", which meant the feature couldn't be used with "Private".
  • All "Public Access Level" settings can now be used - "Private", "Container" or "Blob"
  • Replication is configured at the container level on your Azure Storage accounts. You can set up to two outbound policies per Azure Storage account
  • Available only in France Central, Canada East and Canada Central as of June 2020.
  • Pricing tiers - "Hot", "Cool" and "Archive" - are not propagated accurately. Refer to the section below. - updated on 6/14/2020

How to enable Object Replication on your subscription

Follow this article first. You need to register a couple of resource providers because this feature depends on other features such as Change feed and Versioning. For reference, it took about a week to enable Object Replication on my subscription.
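
The registration itself follows the usual preview pattern with az feature register and az provider register; the feature name below is just a placeholder, so take the exact names from the registration article.

# Register the required preview feature(s) - replace FEATURE_NAME with the names from the article
az feature register --namespace Microsoft.Storage --name FEATURE_NAME
az feature show --namespace Microsoft.Storage --name FEATURE_NAME --query properties.state
# Re-register the resource provider once the feature shows "Registered"
az provider register --namespace Microsoft.Storage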

Setup and try Object Replication on your Storage accounts

After Object Replication is provisioned on your subscription, you can find the Object Replication menu on your storage accounts as shown below. You can confirm both destination and source accounts at once.
f:id:waritohutsu:20200608025444p:plain

You can set up policies by specifying containers, filters and "Copy over". This also makes it possible to control which objects should be copied into other accounts.
f:id:waritohutsu:20200608030439p:plain

As you can confirm below, it takes about one minute to replicate a file of less than 1 MB.
f:id:waritohutsu:20200608031250p:plain

Today, it's possible to set up to two outbound policies per Storage account as shown below.
f:id:waritohutsu:20200608031718p:plain
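
For reference, a policy can also be created from the command line if the preview storage CLI extension is installed; the command and parameter names below are my assumption and may differ in your CLI version, and all account and container names are placeholders.

# Sketch: create an object replication policy from a source container to a destination container
az storage account or-policy create \
 --resource-group myResourceGroup \
 --account-name mydestaccount \
 --source-account mysrcaccount \
 --destination-account mydestaccount \
 --source-container srccontainer \
 --destination-container destcontainer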

Pricing Tier propagation

I have tried three pricing tier cases.

  • Red Box: Upload a file as "Hot" tier at first and change the tier into "Archive"
  • Green Box: Upload a file as "Archive" tier
  • Blue Box: Upload a file as "Cool" tier

f:id:waritohutsu:20200615033736p:plain

As you can confirm in the screenshot, here are the results.

  • Red Box: Upload a file as "Hot" tier at first and change the tier into "Archive" -> the pricing tier isn't propagated to the destination blob
  • Green Box: Upload a file as "Archive" tier -> the blob isn't copied into the destination container
  • Blue Box: Upload a file as "Cool" tier -> "Cool" tier blobs are copied into the destination containers as "Hot" tier.

Azure VMs cost reduction tips for dev and test environment

I believe Azure VMs are the most popular feature among all Azure users, and VM usage probably accounts for most of the charges on your Azure bill. You may require a high-performance VM at the beginning to set up your development or test environments, but you won't need that much performance after the setup. Here are two wallet-friendly tips.

  • Choose B-Series type for Azure VMs
  • Change disk type from Premium to Standard when your VMs are deallocated

Keep in mind that you should not apply these tips to your production environments.

Choose B-Series type for Azure VMs

I believe there is no need to explain this topic too much. The B-Series consists of burstable instances, which offer a quite cost-effective way to utilize Azure VMs.
https://azure.microsoft.com/en-us/blog/introducing-b-series-our-new-burstable-vm-size/
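
For an existing dev or test VM, switching to a B-Series size is just a few commands; the sketch below uses placeholder resource names and Standard_B2ms as an example size.

# Resize an existing VM to a burstable B-Series size (deallocate first to avoid allocation constraints)
az vm deallocate --resource-group myResourceGroup --name myDevVM
az vm resize --resource-group myResourceGroup --name myDevVM --size Standard_B2ms
az vm start --resource-group myResourceGroup --name myDevVM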

Change disk type from Premium to Standard when your VMs are deallocated

This is a little trickier than just choosing B-Series VMs. You might believe that you can't change disk types after attaching disks to your Azure VMs. That's partially true, because it's not possible to change disk types while your Azure VMs are running, as shown below.
f:id:waritohutsu:20200607074035p:plain

But you can find a quite interesting note in the red box in this image: you can change your disk types while your Azure VMs are deallocated, as shown below.
f:id:waritohutsu:20200607074322p:plain
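
Here is a minimal az sketch of that workflow, assuming placeholder resource group, VM and disk names; the disk SKU is switched to Standard HDD while the VM is deallocated, and switching back to Premium SSD works the same way with --sku Premium_LRS.

# Deallocate the VM, downgrade the disk SKU, then start the VM again
az vm deallocate --resource-group myResourceGroup --name myDevVM
az disk update --resource-group myResourceGroup --name myDevVM_OsDisk --sku Standard_LRS
az vm start --resource-group myResourceGroup --name myDevVM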

You can choose from three disk types - "Premium SSD", "Standard SSD" or "Standard HDD". What are the pros and cons? You can confirm the details from both performance and pricing perspectives by referring to the articles below.

"Premium SSD" has much better IOPS than the cheapest type, "Standard HDD", but the price is almost three times higher. In addition, test and development environments don't use that much IOPS in most cases, so you should achieve a quite good cost reduction by following these tips.

Tips to utilize Windows Server containers on AKS

Microsoft has announced that Azure Kubernetes Service (AKS) support for Windows Server containers is generally available (GA). This is a quite useful and essential feature for containerizing your ASP.NET Framework applications. In this article, you will pick up a few small tips for utilizing Windows Server containers on AKS.

Enable Azure CNI (advanced) for Windows Server Container

Note that AKS requires the "Azure CNI (advanced)" network plugin to utilize Windows Server containers. Choose "Advanced" as the network configuration, as shown below, when you create your AKS cluster.
f:id:waritohutsu:20200511055511p:plain

You can confirm that Azure CNI is enabled for your AKS clusters on the Azure Portal.
f:id:waritohutsu:20200511055548p:plain

Next, you need to create node pools with the Windows OS type to deploy your Windows Server container applications, as shown below.
f:id:waritohutsu:20200511055835p:plain
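
If you create the cluster from the CLI instead of the portal, the sketch below shows the same idea: enable the Azure CNI plugin at cluster creation and add a Windows node pool afterwards. All names and the password are placeholders.

# Create an AKS cluster with the Azure CNI network plugin and Windows credentials
az aks create \
 --resource-group myResourceGroup \
 --name myAksCluster \
 --network-plugin azure \
 --windows-admin-username azureuser \
 --windows-admin-password "ReplaceWithASecurePassword123!" \
 --node-count 1 \
 --generate-ssh-keys

# Add a Windows node pool (name must be 6 characters or fewer)
az aks nodepool add \
 --resource-group myResourceGroup \
 --cluster-name myAksCluster \
 --name npwin \
 --os-type Windows \
 --node-count 1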

Windows Server Container size

Windows Server container images are much larger than Linux images. I pushed just a simple hello world ASP.NET application into my Azure Container Registry (ACR), and it uses 1.08 GB on my ACR. It will take a long time to upload your container images the first time, so keep your network bandwidth in mind, not only ACR capacity, when you push your container images into ACRs.
f:id:waritohutsu:20200511060031p:plain
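
For reference, pushing an image to ACR looks roughly like the sketch below; the registry name and image tag are placeholders.

# Log in to the registry, tag the local image and push it
az acr login --name myregistry
docker tag aspnet-hello:latest myregistry.azurecr.io/aspnet-hello:latest
docker push myregistry.azurecr.io/aspnet-hello:latest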

Manage authorization for your application with user account attributes

Azure AD offers quite useful features to manage access to your applications. I believe most Azure developers have already utilized user groups to assign privileges easily, but I guess many people don’t know about the “Dynamic User” group type. This kind of group enables you to authorize users based on user account attributes.

Let's set up access management based on job title by using a Dynamic User group. Here are the accounts that are matched by the group.
f:id:waritohutsu:20200509043431p:plain
f:id:waritohutsu:20200509043445p:plain

How to create Dynamic User group

First, go to the Azure Portal, choose Azure Active Directory, and click “New Group”.
f:id:waritohutsu:20200509043119p:plain

You can choose “Dynamic User” as membership type like below.
f:id:waritohutsu:20200509043133p:plain

Click “Add dynamic query” to set up a query to authorize users. This sample authorizes users whose job title contains “Principal”. It’s also possible to create more complex queries to meet your business requirements.
f:id:waritohutsu:20200509043143p:plain
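
For reference, a membership rule like the one in the screenshot would be written with the dynamic group rule syntax roughly as follows, using the job title attribute and the “Principal” value from this example.

user.jobTitle -contains "Principal"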

Click “Validate Rules (Preview)” as shown below. You can confirm whether your queries work as expected.
f:id:waritohutsu:20200509043152p:plain

Reduce AKS cluster costs by setting a zero node count for user mode node pools

Here is an interesting article - Release Release 2020-04-13 · Azure/AKS · GitHub. You can find the note "AKS now allows User nodepools to scale to 0" in the article. This feature enables you to reduce AKS costs in your environments. You would probably try to change the node count by using the az command, but it doesn't work at this time - 5/1/2020. Please note this setting is possible only for User mode node pools, not System mode.

$subscriptionId = "YOUR SUBSCRIPTION ID"
$rg = "YOUR RESOURCE GROUP"
$clustername = "YOUR AKS CLUSTER NAME"
$poolname = "YOUR NODE POOL NAME"
$count = 0
az aks scale --resource-group $rg --name $clustername --node-count $count --nodepool-name $poolname

f:id:waritohutsu:20200502043921p:plain

This issue is caused by the az command not supporting a zero node count for user mode node pools at this time. There are two options to achieve this setting.

Change node count on https://resources.azure.com/

Open https://resources.azure.com/ and find the user mode node pool of your AKS cluster. Press the "Edit" button to make the Azure resource settings editable and change the value of "count" to zero.
f:id:waritohutsu:20200502044622p:plain

Please note this setting is possible only for User mode node pools. Changing the node count to zero will fail for System mode node pools.
f:id:waritohutsu:20200502044934p:plain

Use REST API to change node count

You can call the REST API by using the az command. Here is an example of setting a zero node count for a user mode node pool.

$subscriptionId = "YOUR SUBSCRIPTION ID"
$rg = "YOUR RESOURCE GROUP"
$clustername = "YOUR AKS CLUSTER NAME"
$poolname = "YOUR NODE POOL NAME"
$count = 0

$body = "{  \`"properties\`": {    \`"count\`": ${count} } }"
$header = "{\`"Content-Type\`": \`"application/json\`"}"
az rest -u "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${rg}/providers/Microsoft.ContainerService/managedClusters/${clustername}/agentPools/${poolname}?api-version=2020-03-01" --method put --headers $header --body $body
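
You can also verify the result from the CLI with the same variables, for example:

az aks nodepool show --resource-group $rg --cluster-name $clustername --name $poolname --query count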

You can confirm this setting on Azure Portal.
f:id:waritohutsu:20200502045455p:plain