normalian blog

Let's talk about Microsoft Azure, ASP.NET and Java!

How to copy comma separated CSV files into Azure SQL Database with Azure Data Factory

This post shows how to copy data from CSV files into Azure SQL Database with Azure Data Factory. Please note that you must install SSMS from "Download SQL Server Management Studio (SSMS)" before following this post.
After setting up SSMS, follow the steps below to copy the CSV data into SQL Database.

  • Upload CSV files into Azure Storage
  • Create a table in your SQL Database
  • Copy CSV files into your SQL Database with Azure Data Factory

After downloading the "USDJPY.csv" file from http://www.m2j.co.jp/market/historical.php, upload it to your Azure Storage account.
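If you prefer the command line to the portal, you can upload the file with the Azure CLI. This is a minimal sketch that assumes a hypothetical storage account named "mystorageaccount" and a container named "financedata"; replace them and the account key with your own values.

normalian> az storage blob upload --account-name mystorageaccount --account-key <storage account key> --container-name financedata --file USDJPY.csv --name USDJPY.csv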

Create a table in your SQL Database

Set up your SQL Database instance if you don't have one. After creating the instance, configure the firewall following "Azure SQL Database server-level and database-level firewall rules" so that you can access it with the "sqlcmd" command from your computer.
Execute the command below from your client computer. Note that it must be run as a single line; the line breaks are added here only for readability.

normalian> sqlcmd.exe -S "server name".database.windows.net -d "database name" -U "username"@"server name" -P "password" -I -Q 
"CREATE TABLE [dbo].[USDJPY]
(
    [ID] INT NOT NULL PRIMARY KEY IDENTITY(1,1), 
    [DATE] DATETIME NOT NULL, 
    [OPEN] FLOAT NOT NULL, 
    [HIGH] FLOAT NOT NULL, 
    [LOW] FLOAT NOT NULL, 
    [CLOSE] FLOAT NOT NULL
)"

If you make a mistake in this setup, you can remove the table with the command below.

normalian> sqlcmd.exe -S "server name".database.windows.net -d "database name" -U "username"@"server name" -P "password" -I -Q "DROP TABLE [dbo].[USDJPY]"

Copy CSV files into your SQL Database with Azure Data Factory

First, create your Azure Data Factory instance and choose the "Copy data (PREVIEW)" button as shown below.
f:id:waritohutsu:20170904232426p:plain

Next, choose "Run once now" to copy your CSV files.
f:id:waritohutsu:20170904232454p:plain

Choose "Azure Blob Storage" as your "source data store", specify your Azure Storage which you stored CSV files.
f:id:waritohutsu:20170904232523p:plain
f:id:waritohutsu:20170904232551p:plain

Choose your CSV files from your Azure Storage.
f:id:waritohutsu:20170904232635p:plain

Choose "Comma" as your CSV files delimiter and input "Skip line count" number if your CSV file has headers.
f:id:waritohutsu:20170904232717p:plain

Choose "Azure SQL Database" as your "destination data store".
f:id:waritohutsu:20170904232750p:plain

Input your "Azure SQL Database" info to specify your instance.
f:id:waritohutsu:20170904232823p:plain

Select your table from your SQL Database instance.
f:id:waritohutsu:20170904232852p:plain

Check your data mapping.
f:id:waritohutsu:20170904232921p:plain

Confirm the remaining wizard steps to execute the data copy from the CSV files to SQL Database.
f:id:waritohutsu:20170904232958p:plain

After the pipeline completes, execute the command below on your machine to query the data from SQL Database.

normalian> sqlcmd.exe -S "server name".database.windows.net -d "database name" -U "username"@"server name" -P "password" -I -Q "SELECT * FROM [dbo].[USDJPY] ORDER BY 1;"

You should see the data returned from SQL Database if everything is set up correctly.

Los Angeles Life Diary, Part 3: How Are American Banks Different from Japanese Ones?

Hello, it's been a while since my last post. This time I'll introduce banks in the United States. Bank accounts in the US work a bit differently from those in Japan, so I'll cover the basics along with a few things that tripped me up when I actually went through the various procedures. Taking at face value the rumor that it is friendly to Japanese customers, I opened my account at Union Bank, which was acquired by Bank of Tokyo-Mitsubishi UFJ, but there are many banks such as Bank of America, Chase, and Wells Fargo, so choose the bank that fits your needs.

Do You Need a Social Security Number?

The Social Security Number, commonly known as the SSN, is something like Japan's My Number for the United States (searching will turn up a mountain of information, so I'll skip the details here). In principle, everything (bank accounts, credit cards, your US driver's license, and so on) is tied to it. According to someone I talked to, even your criminal record (including speeding tickets and accidents you caused) is all linked to your SSN, and on top of that it is for life: once you get one it doesn't disappear even if you return to Japan, and if you re-enter the US the original SSN is used again.
Now for the main question, "Do you need an SSN to open a bank account?" The short answer is no, not necessarily. An acquaintance of mine opened an account without an SSN, and I completed opening my account by reporting that my SSN application was in progress (I did send the number in properly after receiving it later).

What Are Checking and Savings Accounts? What Is a Routing Number?

I think the existence of the checking account and the savings account is one of the biggest differences from Japanese bank accounts: when you open a bank account in the US, these two accounts (checking and savings) are created automatically. If I force an analogy with Japanese banks, a checking account is closest to an ordinary deposit account and a savings account is closest to a time deposit account, although you can withdraw money from the savings account fairly casually. I opened my account at Union Bank, and the online banking screen looks like the following.
f:id:waritohutsu:20170904071135p:plain

According to locals, you link your credit cards and the like to the checking account, and money you want to save (literally "saving" money) is either moved from the checking account to the savings account or deposited directly into the savings account. You can also move money from the savings account back to the checking account, but if you move money out of savings several times a month you will be charged a fee as shown below, so be careful about that.
f:id:waritohutsu:20170904071355p:plain

Also, when you ask your employer to deposit your salary, or otherwise need to specify a bank account, you will be asked for an Account Number and a Routing Number. There are a couple of points to be careful about here.

What Is a Check?

I believe this corresponds to what the Japanese banking system calls a kogitte (check). In Japan checks are basically only used between companies, but in the US individuals use them all the time. When I first arrived, I was sometimes asked just "check?" (meaning "are you paying by check?") and was completely confused because I had no idea what was meant (regardless of English ability, it's these differences in how society works that are the hardest part).
In particular, many places accept "only check" for the first rent payment, and checks are also often used for electricity bills and other more formal payments (although the electricity bill could also be paid online). People coming from Japan probably don't know how to write one, so I think the easiest approach is to ask the person requesting payment to fill in the check, including the amount and the payee, confirm that it looks right, and then sign it.
Normally the bank sends you what is called a checkbook, but for some reason mine never arrived, so I've been stuck with my supply of checks running out...

Transferring Money to Others

In the Union Bank online banking screen above you can see a "Transfer" menu; you can use it to send money to other people (I have only ever sent money to another Union Bank account, but I believe it also works with other banks). The thing to watch out for is that it takes a few days to register the recipient, and then a few more days after submitting the transfer before the money actually reaches them. I bought a car from an acquaintance and tried to send them the purchase price, and it took quite a while (it wasn't a problem because they had lived here long enough to understand how this works), so keep this in mind when transferring money.
f:id:waritohutsu:20170904071529p:plain

Debit Card

I don't know whether this is specific to Union Bank, but when you open an account you are issued a debit card. I thought it was convenient because you can use it like a credit card, but there are some caveats, as noted below, so be careful.

Using the debit card requires activation, which has to be done by phone, a fairly high hurdle on its own, and combined with the automated voice system it was a pain to deal with, so I went to a branch in person and the staff kindly took care of it. Thank you, Union Bank staff.

Create joined query result from Nikkei and DJIA using Spark APIs with HDInsight

In a previous topic, "How to use Hive tables in HDInsight cluster with Nikkei and DJIA", I introduced how to use Hive tables with HDInsight. In this topic, I will introduce how to use Spark APIs with HDInsight.

Requirements

You have to complete the requirements below to follow this topic.

Modify CSV file headers

You have already downloaded the USDJPY.csv and nikkei_stock_average_daily_jp.csv files, but their column headers are written in Japanese. Change the headers to English so the files are easier to use from the Spark APIs, as shown below.

  • USDJPY.csv file
Before:
日付,始値,高値,安値,終値
2007/04/02,117.84,118.08,117.46,117.84
After:
DATE,OPEN,HIGH,LOW,CLOSE
2007/04/02,117.84,118.08,117.46,117.84
  • nikkei_stock_average_daily_jp.csv file
Before:
データ日付,終値,始値,高値,安値
"2014/01/06","15908.88","16147.54","16164.01","15864.44"
After:
DATE,CLOSE,OPEN,HIGH,LOW
"2014/01/06","15908.88","16147.54","16164.01","15864.44"

Save the edited files as "USDJPY_en.csv" and "nikkei_stock_average_daily_en.csv", and upload them into the Azure Storage account associated with your Spark cluster as shown below.
f:id:waritohutsu:20170903124441p:plain

Refer to the URL and path example below if you can't figure out where to put the CSV files; this sometimes causes confusion.
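As an illustration (with hypothetical placeholder names; substitute your own storage account and container), a blob uploaded to the following URL is referenced from Spark with the corresponding wasb path:

Blob URL:  https://mystorageaccount.blob.core.windows.net/mycontainer/financedata/USDJPY_en.csv
wasb path: wasb://mycontainer@mystorageaccount.blob.core.windows.net/financedata/USDJPY_en.csv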

Create Spark application with Scala

First, refer to https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-apache-spark-intellij-tool-plugin and follow it up to the "Run a Spark Scala application on an HDInsight Spark cluster" section. Now you have a skeleton of your Spark application. Update your Scala file as shown below.

import org.apache.spark.sql.{SaveMode, SparkSession}

object MyClusterApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("MyClusterApp").getOrCreate()

    val dataset_djia = "wasb://hellosparkxxxxxxx-2017-08-77777-33-yy-zzz@hellosparkatxxxxxxxtorage.blob.core.windows.net/financedata/DJIA.csv"
    val dataset_nikkei = "wasb://hellosparkxxxxxxx-2017-08-77777-33-yy-zzz@hellosparkatxxxxxxxtorage.blob.core.windows.net/financedata/nikkei_stock_average_daily_en.csv"
    val dataset_usdjpy = "wasb://hellosparkxxxxxxx-2017-08-77777-33-yy-zzz@hellosparkatxxxxxxxtorage.blob.core.windows.net/financedata/USDJPY_en.csv"

    // Load the CSV files and register them as temp views; revisit this approach if your data becomes massive
    val df_djia = spark.read.options(Map("header" -> "true", "inferSchema" -> "true", "ignoreLeadingWhiteSpace" -> "true")).csv(dataset_djia)
    df_djia.createOrReplaceTempView("djia_table")
    val df_nikkei = spark.read.options(Map("header" -> "true", "inferSchema" -> "true", "ignoreLeadingWhiteSpace" -> "true")).csv(dataset_nikkei)
    df_nikkei.createOrReplaceTempView("nikkei_table")
    val df_usdjpy = spark.read.options(Map("header" -> "true", "inferSchema" -> "true", "ignoreLeadingWhiteSpace" -> "true")).csv(dataset_usdjpy)
    df_usdjpy.createOrReplaceTempView("usdjpy_table")

    // Spark reads the DJIA date as a date/timestamp type but reads the Nikkei and USDJPY dates as strings, so cast them as below.
    val retDf = spark.sql("SELECT djia_table.DATE, djia_table.DJIA, nikkei_table.CLOSE/usdjpy_table.CLOSE as Nikkei_Dollar FROM djia_table INNER JOIN nikkei_table ON djia_table.DATE = from_unixtime(unix_timestamp(nikkei_table.DATE , 'yyyy/MM/dd')) INNER JOIN usdjpy_table on djia_table.DATE = from_unixtime(unix_timestamp(usdjpy_table.DATE , 'yyyy/MM/dd'))")
    //val retDf = spark.sql("SELECT * FROM usdjpy_table")
    retDf.write
      .mode(SaveMode.Overwrite)
      .format("com.databricks.spark.csv")
      .option("header", "true")
      .save("wasb://hellosparkxxxxxxx-2017-08-77777-33-yy-zzz@hellosparkatxxxxxxxtorage.blob.core.windows.net/financedata/sparkresult")
  }
}

After updating your Scala file, run the application following https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-apache-spark-intellij-tool-plugin. You can find the result files in your Azure Storage account as shown below if your setup is correct.
f:id:waritohutsu:20170903124521p:plain
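Alternatively, instead of running from IntelliJ, you can copy the assembled jar to a cluster head node over ssh and submit it with spark-submit. This is a sketch with a hypothetical jar file name; the class name matches the object defined above.

sshuser@xxxxxxxx:~$ spark-submit --class MyClusterApp ./myclusterapp-0.0.1.jar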

Download the part-xxxxxxxxxxxxxxx-xxx-xxxxx.csv file and check its content; you should see the date, DJIA (in dollars), and Nikkei (converted to dollars) data like below.

DATE,DJIA,Nikkei_Dollar
2014-01-06T00:00:00.000Z,16425.10,152.64709268854347
2014-01-07T00:00:00.000Z,16530.94,151.23238022377356
2014-01-08T00:00:00.000Z,16462.74,153.77193819152996
2014-01-09T00:00:00.000Z,16444.76,151.54432674873556
2014-01-10T00:00:00.000Z,16437.05,152.83892037268274
2014-01-14T00:00:00.000Z,16373.86,148.021883098186
2014-01-15T00:00:00.000Z,16481.94,151.22182896498947
2014-01-16T00:00:00.000Z,16417.01,150.96539162112933
2014-01-17T00:00:00.000Z,16458.56,150.79988499137434
2014-01-20T00:00:00.000Z,.,150.18415746519443
2014-01-21T00:00:00.000Z,16414.44,151.47640966628308
2014-01-22T00:00:00.000Z,16373.34,151.39674641148324
2014-01-23T00:00:00.000Z,16197.35,151.97414794732765
2014-01-24T00:00:00.000Z,15879.11,150.54342723004694

How to use Hive tables in HDInsight cluster with Nikkei and DJIA

As you know, the Nikkei Stock Average, called Nikkei, and the Dow Jones Industrial Average, called DJIA, are both famous stock market indexes. We can easily get their daily data from the sites below.

This topic introduces how to use Hive tables with Nikkei and DJIA data.

Create a HDInsight cluster

Go to the Azure Portal and create a new HDInsight cluster. In this sample I choose an HDInsight Spark cluster, but it doesn't matter if you choose another cluster type, as long as Hive is available. Please create or associate an Azure Storage account with your cluster when you create it, as shown below, because the CSV data will be stored in that Azure Storage account.
f:id:waritohutsu:20170901075326p:plain

Create Nikkei and DJIA Hive Tables

Go to the cluster portal, called Ambari, at https://"your cluster name".azurehdinsight.net/. Click the button at the top right of the portal and choose "Hive View" to run Hive queries.
f:id:waritohutsu:20170901075359p:plain

Then execute the Hive queries below in the portal.

CREATE DATABASE IF NOT EXISTS FINANCEDB;

DROP TABLE FINANCEDB.DJIATABLE;
CREATE EXTERNAL TABLE FINANCEDB.DJIATABLE
(
    `DATE` STRING,
    `DJIA` DOUBLE
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' lines terminated by '\n' STORED AS TEXTFILE LOCATION 'wasbs:///financedata/DJIA.csv' TBLPROPERTIES("skip.header.line.count"="1");

DROP TABLE FINANCEDB.NIKKEITABLE;
CREATE EXTERNAL TABLE FINANCEDB.NIKKEITABLE
(
    `DATE` STRING,
    `NIKKEI` DOUBLE,
    `START` DOUBLE,
    `HIGHEST` DOUBLE,
    `LOWEST` DOUBLE
) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
   "separatorChar" = ",",
   "quoteChar"     = "\""
) STORED AS TEXTFILE LOCATION 'wasbs:///financedata/nikkei_stock_average_daily_jp.csv' TBLPROPERTIES("skip.header.line.count"="1");

Now you can see blobs corresponding to your Hive table locations in the Azure Storage account associated with your HDInsight cluster.
f:id:waritohutsu:20170901075434p:plain

Note that those blob files are zero bytes in size. Avoid uploading the data before executing the queries above; if you upload it first, you will get the error below when you run the queries.

 java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:wasbs://helloxxxxxxxxxxxxxx-2017-08-31t01-26-06-194z@hellosparyyyyyyyyyy.blob.core.windows.net/financedata/DJIA.csv is not a directory or unable to create one)

After executing the CREATE TABLE queries, upload the CSV data into the Azure Storage account and overwrite the existing blob files as shown below.
f:id:waritohutsu:20170901075451p:plain

Confirm Hive table data

Go to "Hive View" in Ambari again, and execute below queries separately to avoid override. You can get some result data if your setup is correct.

SELECT * FROM FINANCEDB.DJIATABLE LIMIT 5;
SELECT * FROM FINANCEDB.NIKKEITABLE LIMIT 5;

If you can't get any data from the queries, check the "TEXTFILE LOCATION" path and the HDInsight cluster's "default container" in the Azure Storage account. The full path of a CSV file is "https://hellosparyyyyyyyyyy.blob.core.windows.net/helloxxxxxxxxxxxxxx-2017-08-31t01-26-06-194z/financedata/DJIA.csv", but some people get confused about the "default container" part of the path.
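To illustrate with the example names above, the relative wasbs path used in the Hive queries resolves against the cluster's default container, so the two forms below point at the same blob (a sketch; substitute your own account and container names):

wasbs:///financedata/DJIA.csv
wasbs://helloxxxxxxxxxxxxxx-2017-08-31t01-26-06-194z@hellosparyyyyyyyyyy.blob.core.windows.net/financedata/DJIA.csv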

Extract joined data from the Nikkei and DJIA Hive tables

Execute the query below to get the joined data. Note that the Nikkei CSV file expresses dates as "2014/01/06" while the DJIA one expresses them as "2013-12-16", so the join normalizes both formats.

SELECT d.`DATE`, d.DJIA, n.NIKKEI
FROM FINANCEDB.DJIATABLE d JOIN FINANCEDB.NIKKEITABLE n 
ON ( regexp_replace(d.`DATE`, '-', '') = regexp_replace(n.`DATE`, '/', '') ) LIMIT 5;

You can view the query result below if you have set everything up correctly, but please note that Nikkei is expressed in yen while DJIA is expressed in dollars. Please improve this sample to express both in the same currency if possible!
f:id:waritohutsu:20170901075512p:plain

Data copy from FTP server to Azure Data Lake using Azure Data Factory

By following this article, you can set up a data copy scenario with Azure Data Factory from your FTP server to your Azure Data Lake. The scenario is quite simple, but this article should help you avoid confusion.

How to setup FTP server on Microsoft Azure

First, create a Linux (CentOS 7) virtual machine. After that, connect to the VM with ssh and run the commands below.

[root@ftpsourcevm ~]# sudo su -
[root@ftpsourcevm ~]# yum -y update && yum -y install vsftpd

Please set up this vsftpd server in passive mode as in the sample below. As far as I have confirmed, Azure Data Factory supports only passive mode FTP servers.

[root@ftpsourcevm ~]# vi /etc/vsftpd/vsftpd.conf

# When "listen" directive is enabled, vsftpd runs in standalone mode and
# listens on IPv4 sockets. This directive cannot be used in conjunction
# with the listen_ipv6 directive.
listen=YES
#
# This directive enables listening on IPv6 sockets. By default, listening
# on the IPv6 "any" address (::) will accept connections from both IPv6
# and IPv4 clients. It is not necessary to listen on *both* IPv4 and IPv6
# sockets. If you want that (perhaps because you want to listen on specific
# addresses) then you must run two copies of vsftpd with two configuration
# files.
# Make sure, that one of the listen options is commented !!
listen_ipv6=NO

pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES

pasv_enable=YES
pasv_addr_resolve=YES
# you need to add the port range below (pasv_min_port-pasv_max_port) to this VM's NSG
pasv_min_port=60001
pasv_max_port=60010
# replace the value below with the global IP address of your FTP server VM, e.g. 52.1xx.47.xx
pasv_address=<global IP address of your FTP server VM>

Run the commands below to apply your configuration changes.

[root@ftpsourcevm ~]# systemctl restart vsftpd
[root@ftpsourcevm ~]# systemctl enable vsftpd

Finally, you need to add an allow rule to the network security group (NSG) for the port range between pasv_min_port and pasv_max_port. Please refer to the image below.
f:id:waritohutsu:20170829212808p:plain
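If you prefer the Azure CLI to the portal, a rule like the one below opens the passive port range. This is a minimal sketch with hypothetical resource group and NSG names; the FTP control port 21 also needs to be reachable.

normalian@DESKTOP-QJCCAGL:~$ az network nsg rule create --resource-group myftpgroup --nsg-name ftpsourcevm-nsg --name allow-ftp-pasv --priority 1010 --direction Inbound --access Allow --protocol Tcp --destination-port-range 60001-60010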

How to setup Azure Data Lake for Azure Data Factory

Just create your Azure Data Lake Store instance and add a directory for Azure Data Factory as shown below.
f:id:waritohutsu:20170829212827p:plain

How to setup Azure Data Factory to copy from your FTP server to your Azure Data Lake

After creating your Azure Data Factory instance, choose "Copy data (PREVIEW)" to set this up.
f:id:waritohutsu:20170829214430p:plain

Change this schedule period if it's needed.
f:id:waritohutsu:20170829212934p:plain

Choose "FTP" as "CONNECT TO A DATA SOURCE", but you can also choose other data sources such like S3 and other cloud data sources.
f:id:waritohutsu:20170829212957p:plain

Change to "Disable SSL" at "Secure Transmission" in this sample, and please setup SSL when you will deploy this pipeline in your production environments. Input a global in address of your ftp server and credential account info of your ftp server. You will get a connection error if you setup active mode FTP servers.
f:id:waritohutsu:20170829213020p:plain

Choose a folder as the data source for Azure Data Factory. In this sample we use binary copy mode, but you can choose other copy formats such as CSV.
f:id:waritohutsu:20170829213121p:plain

Choose "Azure Data Lake Store" as "CONNECT TO A DATA STORE" in this article.
f:id:waritohutsu:20170829213141p:plain

Choose your Azure Data Lake Store instance for storing data like below.
f:id:waritohutsu:20170829213202p:plain

Choose a folder as the destination for the copied data.
f:id:waritohutsu:20170829213224p:plain

Confirm your setup info, and submit to deploy this pipeline.
f:id:waritohutsu:20170829213246p:plain

Confirm your setup

You can view your data copy pipeline in Azure Data Factory as shown below. Azure Data Factory will copy the data on your FTP server into your Azure Data Lake according to your schedule.
f:id:waritohutsu:20170829215545p:plain

Get started with Apache Storm on HDInsight for your jar files

HDInsight lets you create Apache Storm clusters easily. Please read the reference articles in this post if you are not familiar with Apache Storm.

Create Storm Cluster on HDInsight

Follow the https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-apache-storm-tutorial-get-started-linux article up to the "Create a Storm cluster" section. It takes about 15 minutes to create your Storm cluster. Note the following information to connect to your cluster.

  • SSH login URL: "your cluster name"-ssh.azurehdinsight.net
  • Dashboard URL: https://"your cluster name".azurehdinsight.net/
  • StormUI URL: https://"your cluster name".azurehdinsight.net/stormui/index.html

Deploy your jar files into your Storm cluster

Create a jar file that includes your topology class to deploy into your Storm cluster. Please refer to the example below if you don't have such a Java project.
https://github.com/apache/storm/tree/master/examples/storm-starter
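If you start from the storm-starter example or a similar Maven project, packaging the jar is typically a single command (a sketch, assuming Maven is installed and the project uses the standard layout; the resulting jar name depends on your pom.xml):

normalian@DESKTOP-QJCCAGL:~$ mvn clean package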

After building your jar file, connect to your cluster via ssh. Here is a sample of connecting to your cluster using WinSCP.
f:id:waritohutsu:20170724143720p:plain
Transfer the jar file from your computer to the cluster. Now you can run it on the cluster.

Connect to your cluster via ssh. Here is a sample of connecting to your cluster using PuTTY.
f:id:waritohutsu:20170724143739p:plain
Run the commands below to submit your jar file. Specify the topology class as the second argument and the topology name as the third argument.

sshuser@xxxxxxxx:~$ storm jar /home/sshuser/hellostorm-0.0.1-SNAPSHOT.jar com.mydomain.hellostorm.HelloTopology hello-topology
sshuser@xxxxxxxx:~$ storm list
6244 [main] INFO  o.a.s.u.NimbusClient - Found leader nimbus : 10.0.0.10:6627
Topology_name        Status     Num_tasks  Num_workers  Uptime_secs
-------------------------------------------------------------------
hello-topology       ACTIVE     8          3            8758

Monitor your application

Open https://"your cluster name".azurehdinsight.net/stormui/index.html via your browser. You can find you topology in Storm UI.
f:id:waritohutsu:20170724143827p:plain

Azure Container Service overview of Kubernetes for Java applications

Here is a sample architecture for ACS Kubernetes. People are sometimes confused by the components involved in Container Service, because there are so many of them, such as Java, Docker, Windows, a private registry, the cluster itself, and others. This architecture should help such people understand the overview of ACS Kubernetes.
f:id:waritohutsu:20170713014405p:plain

Steps to run your Java applications using ACS Kubernetes

Follow the steps below to run your Java applications; a minimal command-line sketch follows the list.

  1. Build your Java applications
  2. Create Docker images
  3. Push your Docker images into Private Registry on Azure
  4. Get Kubernetes credentials
  5. Deploy your docker images using “kubectl” command
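The commands below are a rough sketch of steps 1-5 with hypothetical names ("myjavaapp" for the image and "myregistry" for the Azure Container Registry instance); they assume a Dockerfile in your project root and that the Azure CLI and Docker are already installed.

normalian@DESKTOP-QJCCAGL:~$ mvn clean package
normalian@DESKTOP-QJCCAGL:~$ docker build -t myjavaapp:1.0 .
normalian@DESKTOP-QJCCAGL:~$ az acr login --name myregistry
normalian@DESKTOP-QJCCAGL:~$ docker tag myjavaapp:1.0 myregistry.azurecr.io/myjavaapp:1.0
normalian@DESKTOP-QJCCAGL:~$ docker push myregistry.azurecr.io/myjavaapp:1.0
normalian@DESKTOP-QJCCAGL:~$ az acs kubernetes get-credentials --resource-group=<resource group name> --name=<cluster name> --ssh-key-file=<ssh key file>
normalian@DESKTOP-QJCCAGL:~$ kubectl run myjavaapp --image=myregistry.azurecr.io/myjavaapp:1.0 --port=8080
normalian@DESKTOP-QJCCAGL:~$ kubectl get pods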

How to install kubectl into your client machine on "Bash on Ubuntu on Windows"

Run the commands below.

normalian@DESKTOP-QJCCAGL:~$ echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
[sudo] password for normalian:
deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main
normalian@DESKTOP-QJCCAGL:~$ sudo apt-key adv --keyserver packages.microsoft.com --recv-keys 417A0893
normalian@DESKTOP-QJCCAGL:~$ sudo apt-get install apt-transport-https
normalian@DESKTOP-QJCCAGL:~$ sudo apt-get update && sudo apt-get install azure-cli
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.5tm3Sb994i --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver packages.microsoft.com --recv-keys 417A0893

...


normalian@DESKTOP-QJCCAGL:~$ az

Welcome to Azure CLI!
---------------------
Use `az -h` to see available commands or go to https://aka.ms/cli.

...


normalian@DESKTOP-QJCCAGL:~$ az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code XXXXXXXXX to authenticate.

...


normalian@DESKTOP-QJCCAGL:~$ az acs kubernetes install-cli
Downloading client to /usr/local/bin/kubectl from https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl
Connection error while attempting to download client ([Errno 13] Permission denied: '/usr/local/bin/kubectl')
normalian@DESKTOP-QJCCAGL:~$ sudo az acs kubernetes install-cli
Downloading client to /usr/local/bin/kubectl from https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl
normalian@DESKTOP-QJCCAGL:~$ az acs kubernetes get-credentials --resource-group=<resource group name> --name=<cluster name>  --ssh-key-file=<ssh key file>
normalian@DESKTOP-QJCCAGL:~$ kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE