Backup Files from Ubuntu to Azure

Hey there, how are you doing? I have been busy setting up a VPS (not in Azure) with PostgreSQL and wanted to set up a recovery mechanism for it. Because the VPS is not in Azure, I have no option to take a snapshot or backup of the server itself. So, I decided to write a simple script that takes backups of files and uploads them to Azure Blob Storage using the Azure CLI. If you want to use something other than Azure, feel free to do so. I use Azure on my account, and the charges are low. You can check the pricing here.

Alright then, let's start by taking a backup of the database using the pg_dump command:

#$USERNAME is the admin username that you have set
#$DATABASE_NAME is the Database name
#$BACKUP_FILE is the path where the file will be dumped
pg_dump -U $USERNAME -d $DATABASE_NAME > $BACKUP_FILE

Now that we have a way to export a dump of the database, let's go ahead and grab the access key for the Storage Account that holds the Container where I want to store the files. This can be found under Access keys in the Security + networking section of the Storage Account. If you want to learn more about Azure Blob Storage, please visit the documentation.

Here, either the key1 or key2 value can be used.

Next, to use this, we need to install the Azure CLI. Let’s execute this command to install the Azure CLI:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az --version

A successful installation would result in az --version giving the version info.
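By the way, once the CLI is installed, you can also fetch the storage account keys from the terminal instead of the portal. This is a minimal sketch that assumes you have signed in with az login, and my_resource_group and mybackup are placeholders for your own resource group and storage account names:

az login
az storage account keys list --resource-group my_resource_group --account-name mybackup --output table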

Now that the dependencies are sorted out, we can look into the backup script. Note that this script can be modified to back up any files. Save this file as backup_script.sh in the /home/vps_backup/scripts directory.
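Before saving it, make sure the directories the script expects actually exist. A small setup step, assuming the same paths used below:

mkdir -p /home/vps_backup/scripts /home/vps_backup/backups

With the directories in place, here is the backup script itself: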

#!/bin/bash

# Set the password for PostgreSQL user
export PGPASSWORD='YOUR_PASSWORD'

# Replace these values with your actual Azure Blob Storage account details
AZURE_STORAGE_ACCOUNT="mybackup"
AZURE_STORAGE_ACCESS_KEY="STORAGE_ACCESS_KEY"
CONTAINER_NAME="backups"

# Define variables for the backup
USERNAME=postgres
DATABASE_NAME=mydatabase
# Get the current date and time
today=$(date +"%Y-%m-%d_%H-%M-%S")
todayDate=$(date +"%Y-%m-%d")

# Set the filenames based on today's date
BACKUP_FILE=/home/vps_backup/backups/backup_$today.sql
BACKUP_ZIPFILE=/home/vps_backup/backups/backup_$today.tar.gz

# Perform the backup using pg_dump
pg_dump -U $USERNAME -d $DATABASE_NAME > $BACKUP_FILE

# Unset the password to clear it from the environment
unset PGPASSWORD

# Generating a compressed file using tar
tar -czvf $BACKUP_ZIPFILE $BACKUP_FILE

# Upload the backup files to Azure Blob Storage using Azure CLI
# -d sets the destination directory; the $todayDate variable groups the uploads by date
az storage blob directory upload --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_ACCESS_KEY --container $CONTAINER_NAME --source $BACKUP_ZIPFILE -d $todayDate
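One note on the upload command: depending on your Azure CLI version, the az storage blob directory commands may come from the storage-preview extension rather than the core CLI (this is an assumption about your setup). If the command is not recognized, adding the extension should make it available:

az extension add --name storage-preview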

The final step is to set up a cron job so that our script gets executed every hour.

crontab -e
0 * * * * /home/vps_backup/scripts/backup_script.sh
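Also make sure the script is executable; otherwise cron will not be able to run it:

chmod +x /home/vps_backup/scripts/backup_script.sh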

Although I have used the Azure CLI to upload to Azure, you can use any storage medium for your files. The script itself can back up anything: a database dump from PostgreSQL, MongoDB, MySQL, etc., or any other file.

Here is a snapshot of the Azure Storage Browser showing the Container with a specific date directory:
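If you would rather verify the uploads from the terminal than from the Storage Browser, listing the blobs in the container works just as well. This uses the same placeholder account details as the script above:

az storage blob list --account-name mybackup --account-key STORAGE_ACCESS_KEY --container-name backups --output table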

Read Gzip Log Files without Extracting

Lately, I have been getting many warnings on my MySQL server, which has been running into crashes and restarting.

I looked at the status of MySQL and found the log stating:

mysql.service: Main process exited, code=killed, status=9/KILL

To analyse this correctly, I looked at the MySQL logs in the /var/log/mysql directory.
However, log rotation was enabled, so the previous log files were compressed.

Reading all of these files would mean extracting each one and then opening it.

But there is a better way: the zcat command.

To use this command, pass the path of the gzip file as the argument.

zcat /var/log/mysql/error.log.1.gz
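Since zcat simply writes the decompressed contents to standard output, it composes nicely with the usual tools. A couple of handy variations, using example file names from my setup:

# Search every rotated log at once for the kill message
zcat /var/log/mysql/error.log.*.gz | grep -i "killed"

# zgrep does the same without the explicit pipe
zgrep -i "killed" /var/log/mysql/error.log.*.gz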

Find the Top 10 Memory-Consuming Processes in Ubuntu

Let's first talk about the reason I started looking for this. I have several services running on Ubuntu, including databases like MySQL and MongoDB, along with Nginx and other services.

However, I sometimes notice that memory consumption creeps upwards, and it's wise to know which process is responsible for it.

I decided to look into this using the ps command:

ps -eo pmem,pcpu,pid,args | tail -n +2 | sort -rnk 1 | head
(Screenshot: output of the above ps command.)

Let’s look at the arguments provided:

ps: Current process snapshot report.
-e: Select all processes. Identical to -A.
-o format: format is a single argument in the form of a blank-separated or comma-separated list, which offers a way to specify individual output columns.
pmem: The ratio of the process's resident set size to the physical memory on the machine, expressed as a percentage.
pcpu: CPU utilization of the process in the "##.#" format. Currently, it is the CPU time used divided by the time the process has been running (cputime/real time ratio), expressed as a percentage.
pid: A number representing the process ID.
args: Command with all its arguments as a string.
tail -n +2: Output lines starting from the second line (i.e., skip the header).
sort -rnk 1: r (reverse), n (numeric sort), k 1 (by column 1, i.e., pmem).
head: Output the first 10 lines.
(Based on man ps.)
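If you prefer to let ps do the sorting itself, a roughly equivalent variation (not the exact command above) is:

# Sort by memory usage in descending order; head keeps the header plus the top 10 entries
ps -eo pmem,pcpu,pid,args --sort=-pmem | head -n 11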

Hope it helps!

Server reached pm.max_children

Has there been an instance where you are trying to load your site, say even a simple WordPress site, and it feels so slow that you might not even get a proper response? You might just end up seeing an error.

This is exactly what happened to me today. I have an Ubuntu VPS with 4GB of RAM, and all of a sudden I got emails from Jetpack saying my site appeared to be down. It hit me with so many questions, especially whether something had happened to my VPS. Did I lose my data? Blah… Blah…

But then I thought, let me just jump into the terminal and see what's going on. The very first thing I did was restart my Nginx server.

sudo service nginx restart

Now that Nginx had restarted, I was still seeing weird responses from my pages and other services. It was as if the entire system was choking. But my graphs were still showing a huge amount of free memory. So I decided to dig deeper, but before doing that, let's restart the Ubuntu server itself.

sudo shutdown -r now

Once my system restarted, I first checked the Nginx logs to see if something had gone wrong, but I didn't find anything useful. So I went to check the logs of the php-fpm engine, and this is what I found:

server reached pm.max_children setting (5), consider raising it

It hit me what might have happened all of a sudden, and I remembered: it was probably due to some of the changes I had made to one of the image caching servers. Anyway, I started digging around the error message, especially the setting pm.max_children = 5.

After spending some time, I found this setting in /etc/php/7.0/fpm/pool.d/www.conf.

Now, this is where the tricky part begins, as you have to ensure that the values you set are not going to overload your server, while still being sufficient to handle the load. Before we jump into it, let's understand what we are trying to do here.

The log message mentions pm.max_children; pm stands for process manager. This process manager is responsible for spawning child processes to handle the requests sent to the php-fpm engine.

The image above shows all the values that need to be tweaked to get optimum performance from the server. Let's look at the configuration I currently have:

pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

So, now I needed to ensure that my configuration was reasonable for my VPS. But in order to do so, I needed to know the maximum size I could expect my child processes to reach. So I executed this command to see the memory currently being used by the child processes.

ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | grep php-fpm

Now, once I hit the Enter key, I got this result in my terminal:

If you look closely, the maximum that any one of the child processes reached was 89.00 MB. So computing an approximate value for pm.max_children can be as simple as this:

pm.max_children = Total RAM dedicated to the webserver / Max of child process size

pm.max_children = 3948 / 89 = 44.36
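If you would rather not do the arithmetic by hand, here is a rough shell sketch of the same calculation. It assumes the php-fpm processes can be matched with a simple grep and that the total RAM reported by free is what you dedicate to the web server, so treat the result as an estimate only:

# Total RAM in MB
TOTAL_MB=$(free -m | awk '/^Mem:/ {print $2}')

# Largest php-fpm process size in MB (the [p] keeps grep itself out of the match)
MAX_CHILD_MB=$(ps -eo size,command | grep "[p]hp-fpm" | awk '{ mb = $1 / 1024; if (mb > max) max = mb } END { printf "%d", max }')

echo "Suggested pm.max_children: $(( TOTAL_MB / MAX_CHILD_MB ))"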

Considering the memory needed by other processes, I believe it's safe to say that I can comfortably set my pm.max_children to 40.

pm.max_children = 40
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

[09-May-2020 13:24:45] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 5 total children
[09-May-2020 13:24:46] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 0 idle, and 6 total children

As you can see above, I also had to adjust the values for pm.start_servers, pm.min_spare_servers, and pm.max_spare_servers; without doing so, I kept getting those warnings. Here are the latest values I set after considering the warnings:

pm.max_children = 40
pm.start_servers = 15
pm.min_spare_servers = 15
pm.max_spare_servers = 30
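Remember that php-fpm has to be restarted (or reloaded) for these values to take effect. On my setup that is PHP 7.0, so the service name below reflects that; adjust it to your PHP version:

sudo service php7.0-fpm restart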

Please note that very high values may not do your system any good; they might even overburden it.

Hope it helps 🙂

Installing Wine 5.0 in Ubuntu

Before we dive into the commands for installing Wine, let's first talk about what Wine is in general in the Linux world.

As described in Wine’s Site:
Wine (originally an acronym for “Wine Is Not an Emulator”) is a compatibility layer capable of running Windows applications on several POSIX-compliant operating systems, such as Linux, macOS, & BSD. Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls into POSIX calls on-the-fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop.

In short, it allows you to run Win32 (.exe) applications built for Windows on Linux.

Let's look at the steps required for installing Wine 5.0 on an Ubuntu 18.04 LTS system using the apt package manager.

1. Set up the PPA

If this is a 64-bit system, we need to enable the 32-bit architecture. Once that's done, install the key used to sign the Wine packages:

$ sudo dpkg --add-architecture i386 
$ wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add - 

2. Enable the Wine Apt repository

$ sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main' 
$ sudo add-apt-repository ppa:cybermax-dexter/sdl2-backport 

3. Install Wine on Ubuntu

Time to install the Wine packages from the apt repository.
The --install-recommends option will install all the packages recommended by winehq-stable on your Ubuntu system.

$ sudo apt update 
$ sudo apt install --install-recommends winehq-stable 

In case the install fails due to some unforeseen circumstances, you can try installing it using aptitude.

$ sudo apt install aptitude 
$ sudo aptitude install winehq-stable 

4. Check Wine version

You can check the installed Wine version by running the command below:

$ wine --version 

wine-5.0 
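To give the installation a quick smoke test, you can launch one of the built-in Windows applications that ship with Wine, for example Notepad:

$ wine notepad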

Hope this helps.

Happy Coding!