This is my attempt at the Capstone Project challenge from LearnToCloud. The project's goal is to create:
A bash-based CLI tool that allows users to quickly upload files to a specified cloud storage solution, providing a simple and seamless upload experience similar to popular storage services.
What should the tool do?
The tool should be able to upload a file when run like below:
clouduploader /path/to/file.txt
The script will be a wrapper around your cloud service provider's CLI tool, letting you upload the specified file without typing all the flags required when using the CLI tool by itself.
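Since I'll be working with Azure for the rest of this post, here's a rough sketch of the kind of command the wrapper will hide. The account, container, and file names below are just example values, and it assumes a storage account and container already exist:

# Uploading a single file the long way, without the wrapper (example values)
az storage blob upload \
    --account-name mystorageaccount \
    --container-name mycontainer \
    --name file.txt \
    --file /path/to/file.txt \
    --auth-mode login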
Prerequisites
Choose a cloud provider. As a starting point, I chose to work with Microsoft Azure as I am more familiar with Microsoft products. I created an account with Azure and chose their free tier, which gives me US$200 in free credits good for 30 days, plus free access to some of Azure's offerings for 12 months.
Set up authentication. Once you have your account created, we need to install the Azure CLI tool and set up authentication so that we can use it from the command line. You can find installation instructions in the official Azure CLI documentation.
For setting up the authentication, you need to type in:
az login --use-device-code
Follow the generated link and enter the code from the command's output to set up your credentials.
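To confirm the login worked, you can display the subscription the CLI is now using; this is just an optional sanity check:

# Optional: show the active subscription the CLI is authenticated against
az account show --output table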
Create a Resource Group. A resource group is a logical container into which Azure resources are deployed and managed. We can create one using the below command:
az group create --name $(resource-group) --location $(location)
💡 Remember to replace the placeholders ($(value)) with your intended values.
Create a storage account. A storage account is used for all four services: blobs, tables, files, and queues.
az storage account create --name $(storage-account) --resource-group $(resource-group) --location $(location) --encryption-services blob
๐กRemember to replace the placeholders ($(value)
) with your intended values.๐ฉYou might encounter an error like the one above. This is caused if you are not yet subscribed the Microsoft.Storage resource provider.Enroll to the Microsoft.Storage resource provider. You can use the Azure CLI to enroll.
az provider register --namespace "Microsoft.Storage"
If you check your Azure Portal, you should see that the Microsoft.Storage resource provider is now being registered (if it was not previously registered).
It will take a few minutes for the portal to show the resource provider as Registered.
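If you prefer to stay in the terminal, you can also check the registration state with the CLI; the query below is just one way to do it:

# Check the registration state of the Microsoft.Storage resource provider
az provider show --namespace Microsoft.Storage --query "registrationState" --output tsv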
Once it shows as Registered, you can re-run the storage account creation command above to create your storage account.
Create a container. We'll create a container where we can upload our blobs. This is similar to organizing files into directories.
As per Microsoft Azure docs:
Before you create the container, assign the Storage Blob Data Contributor role to yourself. Even if you are the account owner, you need explicit permissions to perform data operations against the storage account.
So we'll do just that using the below code:
az ad signed-in-user show --query id -o tsv | az role assignment create \
    --role "Storage Blob Data Contributor" \
    --assignee @- \
    --scope "/subscriptions/$(subscription)/resourceGroups/$(resource-group)/providers/Microsoft.Storage/storageAccounts/$(storage-account)"
az storage container create --account-name $(storage-account) --name $(container) --auth-mode login
💡 Remember to replace the placeholders ($(value)) with your intended values.
Our blob storage is now ready to receive our file(s).
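As a quick sanity check, you can list the containers in the storage account; the placeholders follow the same convention as before:

# List containers to confirm the new container was created
az storage container list --account-name $(storage-account) --auth-mode login --output table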
To make our lives a bit easier, we will set the values for AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_KEY. We will do that by first listing the account keys with the command below:
az storage account keys list --account-name $(storage-account) --resource-group $(resource-group)
This will output your keys in JSON format. We can use the jq command to simplify the output and just give us the key values:
az storage account keys list --resource-group $(resource-group) --account-name $(account-name) | \
    jq '.[0,1].value?'
💡 Remember to replace the placeholders ($(value)) with your intended values.
Now we can export the values as environment variables:
export AZURE_STORAGE_ACCOUNT="$(storage-account-name)" ; export AZURE_STORAGE_KEY="$(storage-key-value)"
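If you don't want to copy the key by hand, a small variation of the earlier jq pipeline can export it directly. This is just a sketch that assumes jq is installed and reuses the same placeholder names:

# Export the first account key without pasting it manually
export AZURE_STORAGE_KEY="$(az storage account keys list --resource-group $(resource-group) --account-name $(storage-account) | jq -r '.[0].value')"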
If all goes well, we should be able to run commands without specifying the storage-account-name, resource-group-name, connection-string, etc. This will simplify our commands and prevent the above secrets from showing up in our .bash_history file.
az storage blob list --container-name $(container-name) --auth-mode login
I appended --auth-mode login to the command since we granted ourselves the Storage Blob Data Contributor role in the previous steps. If we don't include it, the command throws a warning.
Steps to create the bash script
Let's place a shebang so that our script knows that it needs to use bash as its interpreter:
#!/bin/bash
Our bash script needs arguments for it to work, so we need to check if there are any arguments passed to our command, and if we have the correct number of arguments:
if [[ $# -eq 0 ]]; then
    echo '[X] No arguments supplied'
    echo '[!] Usage: ./clouduploader <container-name> <filename1>'
    exit 1
elif [[ $# -eq 1 ]]; then
    echo '[!] Too few arguments!'
    echo '[!] Usage: ./clouduploader <container-name> <filename1>'
    exit 1
elif [[ $# -ge 3 ]]; then
    echo '[!] Too many arguments'
    echo '[!] Usage: ./clouduploader <container-name> <filename1>'
    exit 1
[...snip...]
I placed echo commands here to help our user know how to use our tool. We need our arguments in this exact order: container-name first, and the file second. Since the az command asks for a container name when uploading to our blob storage, we need to reflect that in our bash script. We also need to be specific about the number of arguments passed, so I made some checks here.
Next, we will check if the file exists. If it does, we proceed with the program; if not, we throw an error.
[...snip...]
elif [[ ! -f $2 ]]; then
    echo '[X] The file does not exist!'
    exit 1
else
    [...snip...]
fi
Inside the elif-else block, we will nest another if-else statement. This handles the logic to check if the file already exists in the storage. If it does, the program exits and displays an error to the user. If it does not, the script uploads the file to the storage.
else
    does_exist=$(az storage blob exists --container-name "$1" --name "$2" --auth-mode login | grep exists)
    if [[ $does_exist == *"true"* ]]; then
        echo '[!] The file with the same name already exists in your storage.'
        exit 1
    else
        az storage blob upload --container-name "$1" -f "$2" --auth-mode login
        echo '[✓] File successfully uploaded!'
        az storage blob list --container-name "$1" --output table --auth-mode login
        exit 0
    fi
fi
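For reference, az storage blob exists returns a small JSON object, which is what the grep and the *"true"* pattern match are parsing. A quick manual check looks roughly like this (the container and blob names are just examples):

# Manually checking whether a blob already exists
az storage blob exists --container-name mycontainer --name file.txt --auth-mode login
# Typical output: { "exists": true }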
Our final script will look like this:
#!/bin/bash

# Validate the number of arguments: we expect exactly two,
# the container name first and the file to upload second.
if [[ $# -eq 0 ]]; then
    echo '[X] No arguments supplied'
    echo '[!] Usage: ./clouduploader <container-name> <filename1>'
    exit 1
elif [[ $# -eq 1 ]]; then
    echo '[!] Too few arguments!'
    echo '[!] Usage: ./clouduploader <container-name> <filename1>'
    exit 1
elif [[ $# -ge 3 ]]; then
    echo '[!] Too many arguments'
    echo '[!] Usage: ./clouduploader <container-name> <filename1>'
    exit 1
# Make sure the local file actually exists before trying to upload it.
elif [[ ! -f $2 ]]; then
    echo '[X] The file does not exist!'
    exit 1
else
    # Check whether a blob with the same name is already in the container.
    does_exist=$(az storage blob exists --container-name "$1" --name "$2" --auth-mode login | grep exists)
    if [[ $does_exist == *"true"* ]]; then
        echo '[!] The file with the same name already exists in your storage.'
        exit 1
    else
        # Upload the file, confirm, and list the container's contents.
        az storage blob upload --container-name "$1" -f "$2" --auth-mode login
        echo '[✓] File successfully uploaded!'
        az storage blob list --container-name "$1" --output table --auth-mode login
        exit 0
    fi
fi
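Before running it, the script needs to be executable. Assuming we saved it as clouduploader (the filename is just my choice), that looks like:

# Make the script executable, then run it with a container name and a file
chmod +x clouduploader
./clouduploader <container-name> /path/to/file.txt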
Now let's try testing it with a few cases to see if it works (example runs follow the list):
- with the file already existing in storage
- with the storage empty (i.e. no file with the same name)
- with a file that does not exist
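Here is roughly what those test runs look like; the container name and filenames are just examples from my setup:

# Case 1: a blob with the same name already exists in the container
./clouduploader mycontainer file.txt
# Case 2: no blob with the same name exists yet, so the upload proceeds
./clouduploader mycontainer newfile.txt
# Case 3: the local file does not exist
./clouduploader mycontainer missing.txt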
Making this script took me a lot longer than I would like to admit. That's because I also needed to learn how to set up and interact with the Azure platform.
All in all, this was a good exercise to practice my scripting skills, as well as my research and cloud skills.
I will be making more of these blogs to document my journey into the cloud industry.
Thank you for reading my blog!