Use TOTP for securing API Requests

Did you know that if APIs are left unprotected, anyone can use them, potentially resulting in numerous calls that can bring down the API (DoS/DDoS Attack) or even update the data without the user’s consent?

Let’s look at this from the perspective of a curious developer. Sometimes, I only want to trace the network request in the browser by launching the Dev Tools (commonly by pressing the F12 key) and looking at the Network Tab.

In the past, WebApps were tied directly to server-side Sessions. A request would go through only if the session was valid, and if no request arrived within a given time frame, the session would simply expire (20 minutes is the default on some servers unless configured otherwise). Now, we build WebApps purely on the client side and have them consume REST-based APIs. Our services are strictly API-first because that lets us scale them quickly and efficiently. Modern WebApps are built using frameworks like ReactJS, NextJS, Angular, and VueJS, resulting in single-page applications (SPAs) that are purely client-side.

Let’s look at this technically: HTTP is a Stateless Protocol. This means the server doesn’t need to remember anything between requests when using HTTP. It just receives a URL and some headers and sends back data.

We use attributes like [Authorize] in our ASP.NET Core-based Web APIs to secure them. This involves generating a JWT (JSON Web Token) and sending it along with the Login Response to be stored on the client side. Any future request includes the JWT, which gets validated automatically. JWTs are an open, industry-standard (RFC 7519) method for representing claims securely between two parties and can be used with technologies beyond ASP.NET Core, such as Spring.
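
For context, here is a minimal sketch of how this is typically wired up; the issuer, audience, signing key, and ProfileController below are placeholders for illustration, not code from my project:

// Sketch: JWT bearer authentication in ASP.NET Core
// Package: Microsoft.AspNetCore.Authentication.JwtBearer
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.IdentityModel.Tokens;

// In ConfigureServices / Program.cs (also call app.UseAuthentication() and app.UseAuthorization())
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateIssuerSigningKey = true,
            ValidIssuer = "https://api.example.com",      // placeholder
            ValidAudience = "https://app.example.com",    // placeholder
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes("REPLACE_WITH_A_LONG_RANDOM_SECRET"))
        };
    });

// Any controller (or action) marked with [Authorize] now rejects requests
// that do not carry a valid JWT in the Authorization header.
[Authorize]
[ApiController]
[Route("api/[controller]")]
public class ProfileController : ControllerBase
{
    [HttpGet]
    public IActionResult Get() => Ok(new { Message = "Visible only with a valid token" });
}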

When you send the JWT back to the server, it is typically sent as a header in subsequent requests. The server looks for the Authorization header, whose value comprises the keyword Bearer followed by the JWT.

<code>Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkFudWJoYXYgUmFuamFuIiwiaWF0IjoxNTE2MjM5MDIyfQ.ALU4G8LdHbt6FCqxtr2hgfJw1RR7nMken2x0SC_hZ3g</code>

The above is an example of the token being sent. You can copy the JWT Token and visit https://jwt.io to check its values.

If I missed something about JWT, feel free to comment.

JWT Decoded
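
For the sample token above, the decoded header and payload come out as:

{ "alg": "HS256", "typ": "JWT" }

{ "sub": "1234567890", "name": "Anubhav Ranjan", "iat": 1516239022 }

The third part of the token is the signature, which can only be verified with the secret key.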

This is one of the reasons I thought of protecting my APIs further. I stumbled upon it while implementing a Remember Me feature in a WebApp built with an ASP.NET Core Web API backend and a ReactJS front end. Although a simple feature, Remember Me can be implemented in various ways. Initially, I thought: why complicate things with cookies? I will be building mobile apps for the site anyway! However, it always bugged me that my JWT was stored in LocalStorage. The reason is simple: the users of this website could range from someone with zero knowledge of how it works to someone like me or, at worst, a potential attacker. If my JWT is accessed, impersonation becomes a simple attack vector, because any JavaScript running on the page can read a token stored in Local Storage. So yes, the JWT could be stored in a cookie sent from the server with properties like HttpOnly, but what if my API is also used from a mobile app? Extracting the token from a cookie there is not the norm (although it is doable). Thus, I started looking into TOTP.

Now, let’s explore TOTP (Time-based One-Time Password) and its role in securing API requests. TOTP authenticates users based on a shared secret key and the current time. It generates a unique, short-lived password that changes every 30 seconds.
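
Under the hood (per RFC 6238), the code is essentially HOTP applied to a time counter: T = floor((current Unix time - T0) / 30), and the 6-digit code is a truncation of HMAC(secret, T). Both sides only need the shared secret and reasonably synchronized clocks.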

Have you heard of TOTP before? It is the same mechanism your Microsoft Authenticator or Google Authenticator app uses to provide a 6-digit code when you log in with 2FA (Two-Factor Authentication).

Why Use TOTP for API Security?

While JWTs provide a robust mechanism for user authentication and session management, they are not immune to attacks, especially if the tokens are stored insecurely or intercepted during transmission. TOTP adds an extra layer of security by requiring a time-based token in addition to the JWT. This ensures that even if a JWT is compromised, the attacker still needs the TOTP to authenticate, significantly reducing the risk of unauthorized access.

Implementing TOTP in APIs

Here’s a high-level overview of how to implement TOTP for API requests:

1. Generate a Shared Secret: When a user registers or logs in, generate a TOTP secret key for that user and keep it out of any client-accessible storage. This key is used to create TOTP tokens.

2. TOTP Token Generation: Use libraries to generate TOTP tokens based on the shared secret and the current time.

3. API Request Validation: On the server side, validate the incoming JWT as usual. Additionally, require the TOTP token in the request header or body, and validate it using the same shared secret and the current time.

<code>// Example code snippet for validating TOTP in Node.js

const speakeasy = require('speakeasy');

// Secret stored on the server
const secret = 'KZXW6YPBOI======';

function validateTOTP(token) {
  const verified = speakeasy.totp.verify({
    secret: secret,
    encoding: 'base32',
    token: token,
  });
  return verified;
}

// On receiving an API request
const tokenFromClient = '123456'; // TOTP token from client
if (validateTOTP(tokenFromClient)) {
  console.log('TOTP token is valid!');
} else {
  console.log('Invalid TOTP token!');
}
</code>
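
The snippet above is Node.js; since my backend is ASP.NET Core, here is a rough C# equivalent as well. It is a minimal sketch assuming the Otp.NET NuGet package, with key handling simplified for illustration:

// Sketch using the Otp.NET NuGet package
using OtpNet;

// 1. Generate a per-user secret once and store it server-side
//    (it is also shared with the client, e.g. as a QR code for an authenticator app)
var secretKey = KeyGeneration.GenerateRandomKey(20);
var base32Secret = Base32Encoding.ToString(secretKey);

// 2. On each API request, validate the TOTP code sent by the client
bool ValidateTotp(string tokenFromClient)
{
    var totp = new Totp(Base32Encoding.ToBytes(base32Secret));
    // A small verification window tolerates minor clock drift between client and server
    return totp.VerifyTotp(tokenFromClient, out _, new VerificationWindow(previous: 1, future: 1));
}

In a real API, something like ValidateTotp would sit in a middleware or action filter that reads the code from a request header, alongside the usual JWT validation.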

Using TOTP ensures that even if the JWT is compromised, unauthorized access is prevented because the TOTP token, which changes every 30 seconds, is required.

Backup Files from Ubuntu to Azure

Hey there, how are you doing? I have been busy setting up a VPS (not in Azure) with PostgreSQL and wanted to have a recovery mechanism set for it. Because the VPS is not in Azure, I have zero options that would allow me to take a snapshot or backup of the server. So, I decided to write a simple script allowing me to take backups of files and upload them to Azure Blob Storage using Azure CLI. If you want to use something other than Azure, feel free to do so. I use Azure on my account, and the charges are low. You can check the pricing here.

Alright then, let’s start taking a backup of the database using the pg_dump command:

#$USERNAME is the admin username that you have set
#$DATABASE_NAME is the Database name
#$BACKUP_FILE is the path where the file will be dumped
pg_dump -U $USERNAME -d $DATABASE_NAME > $BACKUP_FILE

Now that we have a way to export a dump of the database, let’s go ahead and identify the Access Key for the Container where I want to store the files. This can be found in the Access keys within the Security + networking section of the Storage Account. If you want to learn more about Azure Blob Storage, please visit the documentation.

Here, either of the key1 or key2 values can be used.

Next, to use this, we need to install the Azure CLI. Let’s execute this command to install the Azure CLI:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az --version

A successful installation would result in az --version giving the version info.

Now that the dependencies are sorted out, we can look into the backup script. Note that this script can be modified to back up any files. Save this file as backup_script.sh in /home/vps_backup/scripts directory.

#!/bin/bash

# Set the password for PostgreSQL user
export PGPASSWORD='YOUR_PASSWORD'

# Replace these values with your actual Azure Blob Storage account details
AZURE_STORAGE_ACCOUNT="MYBACKUP"
AZURE_STORAGE_ACCESS_KEY="STORAGE_ACCESS_KEY"
CONTAINER_NAME="backups"

# Define variables for the backup
USERNAME=postgres
DATABASE_NAME=mydatabase
# Get the current date and time
today=$(date +"%Y-%m-%d_%H-%M-%S")
todayDate=$(date +"%Y-%m-%d")

# Set the filenames to today's date
BACKUP_FILE=/home/vps_backup/backups/backup_$today.sql
BACKUP_ZIPFILE=/home/vps_backup/backups/backup_$today.tar.gz

# Perform the backup using pg_dump
pg_dump -U $USERNAME -d $DATABASE_NAME > $BACKUP_FILE

# Unset the password to clear it from the environment
unset PGPASSWORD

# Generating a compressed file using tar
tar -czvf $BACKUP_ZIPFILE $BACKUP_FILE

# Upload the backup files to Azure Blob Storage using Azure CLI
# using -d for directory using the $todayDate variable to store the files based on dates
az storage blob directory upload --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_ACCESS_KEY --container $CONTAINER_NAME --source $BACKUP_ZIPFILE -d $todayDate

The final step is to set the cron so that our script gets executed every hour.

crontab -e
0 * * * * /home/vps_backup/scripts/backup_script.sh

Although I have used the Azure CLI to push to Azure, you can use any storage provider, and the script can back up any files: database dumps from PostgreSQL, MongoDB, MySQL, SQL Server, etc., or anything else on disk.

Here is a snapshot of the Azure Storage Browser showing the Container with a specific date directory:

Read Gzip Log Files without Extracting

Lately, my MySQL server has been throwing many warnings, crashing, and restarting.

I looked at the status of MySQL and found the logs stating,

mysql.service: Main process exited, code=killed, status=9/KILL

To analyse this properly, I looked at the MySQL logs in the /var/log/mysql directory. However, log rotation was enabled, so the previous log files were compressed.

Reading all of these files would mean extracting each one and then opening it.

But there is a better way: the zcat command, which prints the contents of a gzip file without extracting it.

To use it, pass the path of the gzip file as the argument:

zcat /var/log/mysql/error.log.1.gz

Update Property in a Nested Array of entities in MongoDB


Many of us have already used MongoDB for storing data as Documents. However, I come from the SQL world, where everything is stored in tables following the normalization process; anything that can be broken down further should be considered for a separate table.

In the world of MongoDB, or rather NoSQL, we do not have anything like that. Here, the more closely related the data is, the more sense it makes to store it together. Let’s look at the example below:

{
    "_id":"d5ebb427",
    "name":"Anubhav Ranjan",
    "email":"[email protected]",
    "subscriptions":[
        {
            "subscriptionId":"1abc",
            "showId":"d060b8ca",
            "notificationEnabled":true
        },
        {
            "subscriptionId":"2abc",
            "showId":"d060b8cb",
            "notificationEnabled":true
        }
    ]
}

Let’s consider the document above: a User object with two Subscriptions. Now I want to set notificationEnabled to false for one of those Subscriptions, and I want to do it from C#.

First, let’s see how this can be achieved using the Mongo shell:

db.users.find({ _id: "d5ebb427", "subscriptions.subscriptionId":"1abc"});

db.users.update({ _id: "d5ebb427", "subscriptions.subscriptionId":"1abc"}, { $set: {"subscriptions.$.notificationEnabled": false}});

Let’s go ahead and check out the C# implementation

// C# Mongo Schema
public class User
{
    [BsonId]
    public string Id { get; set; }
    [BsonElement("name")]
    public string Name { get; set; }
    [BsonElement("email")]
    public string Email { get; set; }
    [BsonElement("subscriptions")]
    public Subscription[] Subscriptions { get; set; }
}


public class Subscription
{
    [BsonElement("subscriptionId")]
    public string SubscriptionId{ get; set; }
    [BsonElement("showId")]
    public string ShowId { get; set; }
    [BsonElement("notificationEnabled")]
    public bool NotificationEnabled { get; set; }
}

In order to do this, we use the positional $ operator. As per the MongoDB docs, "the positional $ operator identifies an element in an array to update without explicitly specifying the position of the element in the array."

// id, subId, and usersCollection (IMongoCollection<User>) come from the surrounding code
var filter = Builders<User>.Filter;
var userSubscriptionFilter = filter.And(
    filter.Eq(u => u.Id, id),
    filter.ElemMatch(u => u.Subscriptions, s => s.SubscriptionId == subId)
);
// Find the User with the given Id and Subscription Id
var user = await usersCollection.Find(userSubscriptionFilter).SingleOrDefaultAsync();

// Update using the positional operator
var update = Builders<User>.Update;
var subscriptionSetter = update.Set("subscriptions.$.notificationEnabled", false);
var updateResult = await usersCollection.UpdateOneAsync(userSubscriptionFilter, subscriptionSetter);
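
As a quick sanity check (reusing the updateResult variable from above), the returned result tells us whether anything actually changed:

if (updateResult.ModifiedCount == 0)
{
    // Either the user/subscription was not found, or notificationEnabled was already false
    Console.WriteLine("No document was updated.");
}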

After further reading, I have even found that you can change the line from this:

var subscriptinonSetter = update.Set("subscriptions.$.notificationEnabled", false);

to this:

var subscriptionSetter = update.Set(s => s.Subscriptions[-1].NotificationEnabled, false);

Happy Coding!

Find the Top 10 Memory-Consuming Processes in Ubuntu

Let’s first talk about why I started looking into this. I have a couple of services running on Ubuntu, including databases like MySQL and MongoDB, along with Nginx and other services.

However, I sometimes noticed memory consumption creeping upwards, and it is wise to know which process is responsible for it.

I decided to look into this using the ps command

ps -eo pmem,pcpu,pid,args | tail -n +2 | sort -rnk 1 | head
Output for the above ps command

Let’s look at the arguments provided:

ps: current process snapshot report.
-e: select all processes. Identical to -A.
-o: format is a single argument in the form of a blank-separated or comma-separated list, which offers a way to specify individual output columns.
pmem: the ratio of the process’s resident set size to the physical memory on the machine, expressed as a percentage.
pcpu: CPU utilization of the process in the “##.#” format. Currently, it is the CPU time used divided by the time the process has been running (cputime/realtime ratio), expressed as a percentage.
pid: a number representing the process ID.
args: the command with all its arguments as a string.
tail -n +2: output lines starting from the second line.
sort -rnk 1: r (reverse), n (numeric sort) by column 1, i.e., pmem.
head: output the first 10 lines.

(Descriptions based on man ps.)

Hope it helps!

Server reached pm.max_children

Has there ever been an instance where you try to load your site, say even a simple WordPress install, and it feels so slow that you might not even get a proper response? You might just end up seeing an error.

This is exactly what happened to me today. I have an Ubuntu VPS with 4 GB RAM, and all of a sudden I started getting emails from Jetpack saying my site appeared to be down. It hit me with so many questions: did something happen to my VPS? Did I lose my data? Blah... blah...

But then I thought, let me just jump into the terminal and see what’s going on. The very first thing I did was to restart my Nginx server.

sudo service nginx restart

Now that the web server had restarted, I was still seeing weird responses from my pages and other services. It was as if the entire system was choking, yet my graphs still showed plenty of free memory. So I decided to dig deeper, but before doing that, let’s restart the Ubuntu server itself:

sudo shutdown -r now

Once the system restarted, I first checked the Nginx logs to see if something had gone wrong, but I didn’t find anything useful. So I checked the logs of the php-fpm engine, and this is what I found:

server reached pm.max_children setting (5), consider raising it

It hit me what had happened: it was probably due to some of the changes I made to one of the image caching servers. Anyway, I started digging around the error message, especially the setting pm.max_children = 5.

After spending some time, I found this setting in /etc/php/7.0/fpm/pool.d/www.conf.

Now, this is where the tricky part begins: you have to ensure that the values you set won’t overload your server, while still being high enough to handle the load. Before we jump into it, let’s understand what we are trying to do here.

The log message mentions pm.max_children; pm stands for process manager. The process manager is responsible for spawning child processes to handle the requests sent to the php-fpm engine.

The comments in www.conf explain the values that need to be tweaked to get optimum performance from the server. Let’s look at the configuration I currently have:

pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

Now I needed to make sure my configuration was reasonable for my VPS. To do that, I needed to know how much memory a child process can be expected to use, so I executed this command to see the memory currently being used by each php-fpm child process:

ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | grep php-fpm


Now, once I hit the enter key, I got this result in my terminal

If you look closely, the largest child process was using 89.00 MB. Computing an approximate value for pm.max_children can then be as simple as this:

pm.max_children = Total RAM dedicated to the webserver / Max of child process size

pm.max_children = 3948 / 89 = 44.36

Leaving room for other processes, I believe it is safe to say I can set my pm.max_children to around 40.

pm.max_children = 40
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

[09-May-2020 13:24:45] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 5 total children
[09-May-2020 13:24:46] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 0 idle, and 6 total children

As you can see above, I also had to adjust the values for pm.start_servers, pm.min_spare_servers, and pm.max_spare_servers; without doing so, I kept getting those warnings. Here are the latest values I set, taking the warnings into account:

pm.max_children = 40
pm.start_servers = 15
pm.min_spare_servers = 15
pm.max_spare_servers = 30

Please note that very high values may not do your system any good; they might even overload it.

Hope it helps 🙂

Reinstall NuGet packages after upgrading a project

We have all had projects that have been running for quite some time. However, when we open them after a few months, we find many upgrades that can be performed.

Well, this happened to me this week. A project of mine that has been running for almost 5 years started off with .NET Framework 4.5 and was still targeting .NET Framework 4.5.2, so I decided to upgrade it to .NET Framework 4.7.2. The upgrade itself went fine, but then I started seeing some warnings.

Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information. Packages affected:

To be honest, we programmers tend to ignore warnings and focus on errors. However, this is not one to ignore, especially if the project is a library that will be referenced elsewhere.

How to fix this?

The easiest way to do this is by executing this command in Package Manager Console

Update-Package -Reinstall -ProjectName Project.Name.Here

In the above command, we can see a parameter -Reinstall. It instructs the NuGet Package Manager to remove the NuGet packages and reinstall the same versions. This gives NuGet a chance to determine which assembly is most appropriate for the current framework targeted by the project.

To conclude, this warning can appear whenever a project is upgraded to a different target framework, and it is really easy to get rid of.

Happy Coding!

Installing Wine 5.0 in Ubuntu

Before we dive into the commands for installing Wine, let’s first talk about what Wine is in General in the Linux World.

As described in Wine’s Site:
Wine (originally an acronym for “Wine Is Not an Emulator”) is a compatibility layer capable of running Windows applications on several POSIX-compliant operating systems, such as Linux, macOS, & BSD. Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls into POSIX calls on-the-fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop.

In short, it allows you to run Win32 (.exe) applications built for Windows on Linux.

Let’s look at the steps required to install Wine 5.0 on an Ubuntu 18.04 LTS system using the apt package manager.

1. Setup PPA

If this is a 64-bit system, then we need to enable the 32-bit architecture. Once done, install the key used to sign the Wine packages:

$ sudo dpkg --add-architecture i386 
$ wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add - 

2. Enable the Wine Apt repository

$ sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main' 
$ sudo add-apt-repository ppa:cybermax-dexter/sdl2-backport 

3. Install Wine on Ubuntu

Time to install the Wine packages from the apt repository.
The --install-recommends option will install all the packages recommended by winehq-stable on your Ubuntu system.

$ sudo apt update 
$ sudo apt install --install-recommends winehq-stable 

In case the install fails due to some unforeseen circumstances, you can try and install the same using aptitude.

$ sudo apt install aptitude 
$ sudo aptitude install winehq-stable 

4. Check Wine version

You can check the wine version installed by running the below command:

$ wine --version 

wine-5.0 

Hope this helps.

Happy Coding!

Akavache losing data in Xamarin.iOS

Akavache Logo
Akavache Logo - Image from https://github.com/reactiveui/Akavache

Today I am sharing an issue that I believe many have faced and probably spent a lot of time trying to resolve. It is mostly seen when using Akavache in Xamarin.iOS (even when using Xamarin.Forms). The issue is this:

When running the app, either in the Simulator or on a device, we use Akavache’s BlobCache to store some data. While the app is running, we can read the data back from the BlobCache and work with it just fine. However, the real reason we use BlobCache is to persist data, such as user information, across app restarts.

But we see that the information is lost when the app restarts.

The main reason behind this is that Akavache is built on top of SQLite. If the required plugin, or rather the NuGet package, is not found, the data is only persisted temporarily because SQLite is never initialized for use with Akavache.

In order to fix this, I installed the NuGet Package SQLitePCLRaw.bundle_e_sqlite3 in all the projects.

The moment it got installed, I could see that my data was getting cached and persisted during the App restarts. I hope it helps.

Do ensure that you are initializing the Application Name:

BlobCache.ApplicationName = "AkavacheExperiment";
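
For completeness, here is a minimal sketch of storing and reading a value so that it survives restarts; the key name and value are just examples:

// Requires the Akavache and SQLitePCLRaw.bundle_e_sqlite3 NuGet packages
using Akavache;
using System.Reactive.Linq;

BlobCache.ApplicationName = "AkavacheExperiment";

// Persist a value in the user account cache
await BlobCache.UserAccount.InsertObject("user_name", "Anubhav");

// Read it back (e.g. after an app restart)
var name = await BlobCache.UserAccount.GetObject<string>("user_name");

// Flush pending writes before the app suspends or exits
await BlobCache.Shutdown();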

Happy Coding!

Page Navigation using Messaging Center

Messaging Center
Messaging Center- Image used from https://docs.microsoft.com/en-us/xamarin/xamarin-forms/app-fundamentals/messaging-center

MessagingCenter needs no introduction in the Xamarin world. As the name suggests, it is used for messages. Now the questions arise: What messages? Are these chats? Who is sending? Who is receiving? Etc., etc.

Let’s look at the gist of Messaging Center from here:
The publish-subscribe pattern is a messaging pattern in which publishers send messages without having knowledge of any receivers, known as subscribers. Similarly, subscribers listen for specific messages, without having knowledge of any publishers. You can read more about the common explanation here. If you are still not aware of MessagingCenter, then kindly read this Documentation on MessagingCenter.

We know that while using MessagingCenter, there are two actors in the scene, Publisher and Subscriber. Now, what happens is that the Subscriber subscribes for a specific message and performs an action whenever the Publisher publishes the desired message.

Now let’s look at the use case for this:
We often see questions from developers trying to perform page navigation from the ViewModel, using calls like Navigation.PushAsync, Navigation.PushModalAsync, Navigation.PopAsync, or Navigation.PopModalAsync. However, we all know that the Navigation property is only accessible as part of a Page.

The whole reason for using patterns like MVVM is to isolate the View from the ViewModel. Consider a scenario where we have a ListView with an Add button in its header. This Add button is bound to an Add command in the ViewModel, which needs to call Navigation.PushModalAsync().

So when we think about performing operations like PushModalAsync from the ViewModel, the challenge is that we cannot simply use the Navigation object unless we store the root/parent page in another variable somewhere.

If you create a new blank Xamarin.Forms app with a Master-Detail page, you get a templated app with a few lines of code.
In your MainPage constructor, you can try this:

public partial class MainPage : MasterDetailPage
{
    Dictionary<int, NavigationPage> MenuPages = new Dictionary<int, NavigationPage>();
    public MainPage()
    {
        InitializeComponent();
        MasterBehavior = MasterBehavior.Popover;

        //Subscribing to the Message NavTo
        MessagingCenter.Subscribe("AppName", "NavTo", async (sender, arg) =>   
        {
            await NavigateFromMenu(arg);
        });
    }
 
    public async Task NavigateFromMenu(int id)
    {
        if (!MenuPages.ContainsKey(id))
        {
            switch (id)
            {
                case (int)MenuItemType.Home:
                    MenuPages.Add(id, new NavigationPage(new HomePage()));
                    break;
                case (int)MenuItemType.About:
                    MenuPages.Add(id, new NavigationPage(new AboutPage()));
                    break;
                case (int)MenuItemType.Add:
                    MenuPages.Add(id, new NavigationPage(new AddPage()));
                    break;
            }
        }

        var newPage = MenuPages.ContainsKey(id) ? MenuPages[id] : null;
        if (newPage != null && Detail != newPage)
        {
            Detail = newPage;
            IsPresented = false;
        }
    }
} 

Now we can send the Message from our MainViewModel like this:

private void AddCommand(object obj)
{
    MessagingCenter.Send<string, int>("AppName", "NavTo", (int)MenuItemType.Add);
    //MenuItemType is an Enum.
}

In the above example, AddCommand sends the NavTo message with the Add parameter. The main page, which is already subscribed to this message, then navigates based on the parameter provided.

As shown above, you can easily perform Page Navigation using MessagingCenter.

Happy Coding!!!