From Insomnia to Innovation: Building SleepScape

This probably happens to a lot of people: lying awake at night, taking almost an hour to simply fall asleep. To make sure that I do sleep, I use apps like Calm, Headspace, and often YouTube to listen to something that calms my mind down and stops it from wandering. I know it’s hard, but then again, the goal is to sleep!

While these apps do work, they often don’t match the exact requirements that I have. Thus, the developer in me started brainstorming. I started thinking of building a mobile app using React Native for Android and iOS, and it took me around a weekend to develop the interface. For the APIs, I decided to use Cloudflare Workers with KV, connected to S3-compatible DigitalOcean Spaces. I will, of course, be using Google’s Firebase for lots of things, especially Auth.

At the moment, the important thing that I have been trying to achieve is generating human-like speech using various LLMs and TTS engines. I have tried ElevenLabs, but it becomes too costly in the long run for generating these sleep stories. ChatGPT helped me compare some of the engines, and based on my requirements, I tried a few others like Coqui TTS, Bark, and Tortoise TTS. Just to let you know, the configuration of my PC is an i7-12700K, 64GB DDR5 RAM, 3TB SSD, and, most importantly, an NVIDIA RTX 3060 Ti (which can run these TTS engines easily).

I thought I would first start with the easy task of setting up and fine-tuning the code to get better audio, and then, if needed, rent a faster GPU like an RTX 4090 from vast.ai or Azure Spot Virtual Machines to run these at a faster pace.

Imagine…
You find yourself standing barefoot upon soft, sun-warmed sand…
Gazing out at an endless ocean, stretching toward the horizon…

That is a sample I generated, and it felt decent at least. Now I need to fine-tune and keep testing to make sure we can arrive at a set of voice configurations that can be used to generate the audio for each story.

Use TOTP for securing API Requests

Did you know that if APIs are left unprotected, anyone can use them, potentially resulting in numerous calls that can bring down the API (DoS/DDoS Attack) or even update the data without the user’s consent?

Let’s look at this from the perspective of a curious developer. Sometimes, I only want to trace the network request in the browser by launching the Dev Tools (commonly by pressing the F12 key) and looking at the Network Tab.

In the past, WebApps were directly tied to sessions: a request would go through only if the session was valid, and if no request was made within a given time frame, the session would simply expire after 20 minutes (the default on some servers unless configured otherwise). Now, we build WebApps purely on the client side, allowing them to consume REST-based APIs. Our services are strictly API-first because it enables us to scale them quickly and efficiently. Modern WebApps are built using frameworks like ReactJS, NextJS, Angular, and VueJS, resulting in single-page applications (SPAs) that are purely client-side.

Let’s look at this technically: HTTP is a Stateless Protocol. This means the server doesn’t need to remember anything between requests when using HTTP. It just receives a URL and some headers and sends back data.

We use attributes like [Authorize] in our Asp.Net Core-based Web APIs to authorize them securely. This involves generating JWT (JSON Web Tokens) and sending them along with the Login Response to be stored on the Client Side. Any future requests include the JWT Token, which gets automatically validated. JWTs are an open, industry-standard RFC 7519 method for representing claims securely between two parties and can be used with various technologies beyond Asp.Net Core, such as Spring.
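For context, here is a minimal sketch of wiring up JWT bearer validation in an ASP.NET Core Web API; the issuer, audience, and signing key below are placeholders, not the values from my actual project:

<code>// Program.cs: minimal JWT bearer setup (all values below are placeholders)
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
builder.Services.AddAuthorization();
builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateLifetime = true,
            ValidateIssuerSigningKey = true,
            ValidIssuer = "https://example.com",
            ValidAudience = "https://example.com",
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes("replace-with-a-long-random-signing-key"))
        };
    });

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
// Controllers or endpoints marked with [Authorize] now expect an
// Authorization: Bearer <JWT> header on every request.
app.MapControllers();
app.Run();</code>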

When you send the JWT back to the server, it’s typically sent using the Header in subsequent requests. The server generally looks for the Authorization Header values, which comprise the keyword Bearer, followed by the JWT Token.

<code>Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkFudWJoYXYgUmFuamFuIiwiaWF0IjoxNTE2MjM5MDIyfQ.ALU4G8LdHbt6FCqxtr2hgfJw1RR7nMken2x0SC_hZ3g</code>

The above is an example of the token being sent. You can copy the JWT Token and visit https://jwt.io to check its values.

If I missed something about JWT, feel free to comment.

JWT Decoded

This is one of the reasons why I thought of protecting my APIs. I stumbled upon features like Remember Me when developing a WebApp using Asp.Net Core WebAPI as the backend and ReactJS as the front end. Although a simple feature, Remember Me can be implemented in various ways. Initially, I thought: why complicate things with cookies? I will be building mobile apps for the site anyway! However, it always bugged me that my JWT was stored in LocalStorage. The reason is simple: the users of this website can range from someone with zero knowledge of how it works to someone like me or, at worst, a potential hacker. A simple attack vector is impersonation if my JWT token is accessed, and any JavaScript can easily read a token stored in Local Storage. Yes, the JWT can instead be saved in a cookie sent from the server, but it needs properties like HttpOnly, etc. And what if my API is used from a mobile app? Extracting the token from a cookie there isn’t the norm (although it is doable). Thus, I started looking into TOTP.

Now, let’s explore TOTP (Time-based One-Time Password) and its role in securing API requests. TOTP authenticates users based on a shared secret key and the current time. It generates a unique, short-lived password that changes every 30 seconds.

Have you heard of TOTP before? It’s the same thing when you use your Microsoft Authenticator or Google Authenticator Apps to provide a 6-digit code for Login using 2FA (2-Factor Authentication).

Why Use TOTP for API Security?

While JWTs provide a robust mechanism for user authentication and session management, they are not immune to attacks, especially if the tokens are stored insecurely or intercepted during transmission. TOTP adds an extra layer of security by requiring a time-based token in addition to the JWT. This ensures that even if a JWT is compromised, the attacker still needs the TOTP to authenticate, significantly reducing the risk of unauthorized access.

Implementing TOTP in APIs

Here’s a high-level overview of how to implement TOTP for API requests:

1. Generate a Shared Secret: When a user registers or logs in, generate a TOTP secret key dynamically and keep it out of any client-accessible storage. This key is used to create TOTP tokens.

2. TOTP Token Generation: Use libraries to generate TOTP tokens based on the shared secret and the current time.

3. API Request Validation: On the server side, validate the incoming JWT as usual. Additionally, require the TOTP token in the request header or body, and validate it using the same shared secret and the current time.

<code>// Example code snippet for validating TOTP in Node.js

const speakeasy = require('speakeasy');

// Secret stored on the server
const secret = 'KZXW6YPBOI======';

function validateTOTP(token) {
  const verified = speakeasy.totp.verify({
    secret: secret,
    encoding: 'base32',
    token: token,
  });
  return verified;
}

// On receiving an API request
const tokenFromClient = '123456'; // TOTP token from client
if (validateTOTP(tokenFromClient)) {
  console.log('TOTP token is valid!');
} else {
  console.log('Invalid TOTP token!');
}
</code>

Using TOTP ensures that even if the JWT is compromised, unauthorized access is prevented because the TOTP token, which changes every 30 seconds, is required.
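On the .NET side, the same flow can be sketched with the Otp.NET NuGet package; this is only an illustration using the sample secret from above, not the code from any specific library:

<code>// TOTP generation and validation sketch using Otp.NET (sample secret, illustrative only)
using OtpNet;

// Shared secret, Base32-encoded, known to both client and server
var secretBytes = Base32Encoding.ToBytes("KZXW6YPBOI======");
var totp = new Totp(secretBytes); // defaults: 30-second step, 6 digits, SHA-1

// Client side: compute the current code and send it in a request header
// (the header name is entirely up to you)
string code = totp.ComputeTotp();
Console.WriteLine($"TOTP to send with the request: {code}");

// Server side: verify the received code, allowing a small window for clock drift
bool isValid = totp.VerifyTotp(code, out long matchedStep,
    VerificationWindow.RfcSpecifiedNetworkDelay);
Console.WriteLine(isValid ? "TOTP token is valid!" : "Invalid TOTP token!");</code>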

I have published two libraries for authentication using TOTP, one on NuGet and one on NPM.

Backup Files from Ubuntu to Azure

Hey there, how are you doing? I have been busy setting up a VPS (not in Azure) with PostgreSQL and wanted to have a recovery mechanism in place for it. Because the VPS is not in Azure, I have no option that would let me take a snapshot or backup of the server. So, I decided to write a simple script that takes backups of files and uploads them to Azure Blob Storage using the Azure CLI. If you want to use something other than Azure, feel free to do so. I use Azure on my account, and the charges are low. You can check the pricing here.

Alright then, let’s start by taking a backup of the database using the pg_dump command:

#$USERNAME is the admin username that you have set
#$DATABASE_NAME is the Database name
#$BACKUP_FILE is the path where the file will be dumped
pg_dump -U $USERNAME -d $DATABASE_NAME > $BACKUP_FILE

Now that we have a way to export a dump of the database, let’s go ahead and identify the Access Key for the Container where I want to store the files. This can be found in the Access keys within the Security + networking section of the Storage Account. If you want to learn more about Azure Blob Storage, please visit the documentation.

Here, either of the key1 or key2 values can be used.

Next, to use this, we need to install the Azure CLI. Let’s execute this command to install the Azure CLI:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az --version

A successful installation would result in az --version giving the version info.

Now that the dependencies are sorted out, we can look into the backup script. Note that this script can be modified to back up any files. Save it as backup_script.sh in the /home/vps_backup/scripts directory.

#!/bin/bash

# Set the password for PostgreSQL user
export PGPASSWORD='YOUR_PASSWORD'

# Replace these values with your actual Azure Blob Storage account details
AZURE_STORAGE_ACCOUNT="MYBACKUP"
AZURE_STORAGE_ACCESS_KEY="STORAGE_ACCESS_KEY"
CONTAINER_NAME="backups"

# Define variables for the backup
USERNAME=postgres
DATABASE_NAME=mydatabase
# Get the current date and time
today=$(date +"%Y-%m-%d_%H-%M-%S")
todayDate=$(date +"%Y-%m-%d")

# Set the filenames to today's date.
BACKUP_FILE=/home/vps_backup/backups/backup_$today.sql
BACKUP_ZIPFILE=/home/vps_backup/backups/backup_$today.tar.gz

# Perform the backup using pg_dump
pg_dump -U $USERNAME -d $DATABASE_NAME > $BACKUP_FILE

# Unset the password to clear it from the environment
unset PGPASSWORD

# Generating a compressed file using tar
tar -czvf $BACKUP_ZIPFILE $BACKUP_FILE

# Upload the backup files to Azure Blob Storage using Azure CLI
# use -d (destination directory) with the $todayDate variable so files are grouped by date
az storage blob directory upload --account-name $AZURE_STORAGE_ACCOUNT --account-key $AZURE_STORAGE_ACCESS_KEY --container $CONTAINER_NAME --source $BACKUP_ZIPFILE -d $todayDate

The final step is to set up a cron job so that our script gets executed every hour:

crontab -e
0 * * * * /home/vps_backup/scripts/backup_script.sh

Although I have used the Azure CLI to upload to Azure, you can use any storage medium for your files. The same approach works for backing up anything, whether it’s a database dump from PostgreSQL, MongoDB, MySQL, etc., or any other file.

Here is a snapshot of the Azure Storage Browser showing the Container with a specific date directory:

Read Gzip Log Files without Extracting

Lately, my MySQL server has been throwing many warnings, running into crashes, and restarting.

I looked at the status of MySQL and found the logs stating,

mysql.service: Main process exited, code=killed, status=9/KILL

To analyse this correctly, I looked at the MySQL logs in the /var/log/mysql directory.
However, log rotation was enabled, which had compressed the previous log files.

Reading all of these files would mean extracting each one and then opening it.

But there is a better way: the zcat command.

To use it, pass the path of the gzip file as the argument:

zcat /var/log/mysql/error.log.1.gz

Deconstructing TV Shows Reminder

TV Shows Reminder - Get Reminders for your Favorite TV Shows

In this post, we’ll dive deep into how TV Shows Reminder is architected, exploring everything from the frontend to backend, infrastructure choices, and integrations that make the WebApp perform smoothly.

Introduction

TV Shows Reminder is designed to help users effortlessly keep track of their favorite TV shows, receiving timely notifications about upcoming episodes. The architecture behind this app blends modern frontend technologies, robust backend services, and cloud infrastructure, ensuring scalability, performance, and security.

Architecture Overview

At a high level, TV Shows Reminder employs a microservices-inspired architecture. The frontend uses ReactJS with Redux for state management, the backend relies on a .NET WebAPI, and Strapi is utilized as a separate content management service, all orchestrated seamlessly through Cloudflare’s infrastructure and various Azure services.

Frontend: ReactJS, Redux, TailwindCSS

The frontend is built with ReactJS, providing a responsive and dynamic user experience. TypeScript ensures type safety and robustness, minimizing bugs at compile-time. TailwindCSS offers a highly maintainable styling solution.

State management is streamlined with Redux, offering predictable state transitions. Data fetched from APIs and search results are cached in local storage, significantly enhancing response times.

The app leverages Cloudflare Pages for hosting, combined with Cloudflare Workers and Workers KV for serving static data such as show details, seasons, and episodes, minimizing backend hits and ensuring rapid content delivery.

Backend and Data Management

The backend services are powered by ASP.NET WebAPI (.NET 8), hosted on Ubuntu servers. Crucial data is cached using Redis, dramatically improving response times and minimizing database latency.
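As a rough illustration (not the actual code from the app), the cache-aside pattern with StackExchange.Redis looks something like this; the key format, TTL, and data-access helper are assumptions for the example:

// Cache-aside sketch with StackExchange.Redis (key format and TTL are illustrative)
using StackExchange.Redis;

var redis = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var cache = redis.GetDatabase();

Console.WriteLine(await GetShowDetailsAsync("42"));

async Task<string> GetShowDetailsAsync(string showId)
{
    var cacheKey = $"show:{showId}";

    // 1. Try Redis first
    var cached = await cache.StringGetAsync(cacheKey);
    if (cached.HasValue)
        return cached.ToString();

    // 2. Fall back to the database / upstream API (placeholder call)
    var details = await LoadShowFromDatabaseAsync(showId);

    // 3. Store the result with an expiry so stale entries age out
    await cache.StringSetAsync(cacheKey, details, TimeSpan.FromHours(6));
    return details;
}

// Placeholder for the real data access layer
Task<string> LoadShowFromDatabaseAsync(string showId) =>
    Task.FromResult($"details-for-{showId}");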

Strapi acts as a separate microservice, managing user-related information and homepage content. This modular approach helps maintain separation of concerns, easy updates, and better security by abstracting unnecessary details away from the frontend.

Firebase Authentication simplifies user credential management, eliminating the overhead of storing sensitive data on internal servers.

Image Handling and Optimization

Images are managed via Imbo, an image server deployed on DigitalOcean Spaces. Imbo offers real-time image resizing and manipulation capabilities, ensuring optimal image delivery speed and size.

Metadata for image lifecycle management is stored in Azure Table Storage, where image identifiers track images older than 60 days, enabling timely cleanup. Meanwhile, Imbo maintains duplicate copies of these images in DigitalOcean Spaces until deletion, ensuring consistency and availability.
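To make that concrete, here is a hedged sketch with the Azure.Data.Tables SDK; the table name, entity shape, and row keys are assumptions for illustration, not the app's real schema:

// Image-lifecycle metadata sketch using Azure.Data.Tables (names and shape are illustrative)
using Azure.Data.Tables;

var tableClient = new TableClient("<storage-connection-string>", "ImageMetadata");
await tableClient.CreateIfNotExistsAsync();

// Record an image identifier together with its creation time
var entity = new TableEntity("images", "show-12345-poster")
{
    { "CreatedOn", DateTimeOffset.UtcNow }
};
await tableClient.AddEntityAsync(entity);

// Later, list images older than 60 days that are due for cleanup
var partition = "images";
var cutoff = DateTimeOffset.UtcNow.AddDays(-60);
var filter = TableClient.CreateQueryFilter($"PartitionKey eq {partition} and CreatedOn lt {cutoff}");

await foreach (var stale in tableClient.QueryAsync<TableEntity>(filter))
{
    Console.WriteLine($"{stale.RowKey} can be removed from Imbo / DigitalOcean Spaces");
}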

Search and External Integrations

Search functionality integrates directly with TMDB’s robust search engine. Results are combined with optimized images from Imbo, ensuring accuracy and visual appeal.

When searches occur, resulting show IDs are queued into Azure Service Bus. This action triggers a .NET-based Worker Service deployed on a separate Ubuntu server. This service fetches detailed show data and updated images through imgcdn.in, communicating with both Imbo and the primary .NET WebAPI.
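To sketch that hand-off (the queue name and connection string are placeholders), the producer and the worker would look roughly like this with the Azure.Messaging.ServiceBus SDK:

// Queueing a show ID and processing it in the worker (names are placeholders)
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");

// Producer: when a search result comes back, enqueue the TMDB show ID
ServiceBusSender sender = client.CreateSender("show-refresh-queue");
await sender.SendMessageAsync(new ServiceBusMessage("12345"));

// Consumer: the .NET Worker Service on the other server processes the queue
// (in the real app this would run inside a long-lived BackgroundService)
ServiceBusProcessor processor = client.CreateProcessor("show-refresh-queue");
processor.ProcessMessageAsync += async args =>
{
    string queuedShowId = args.Message.Body.ToString();
    // Fetch detailed show data and refreshed images here...
    await args.CompleteMessageAsync(args.Message);
};
processor.ProcessErrorAsync += args => Task.CompletedTask;
await processor.StartProcessingAsync();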

Notification Management

Notifications are orchestrated via Azure Logic Apps, triggered every 12 hours, and run on Azure App Services. This service processes upcoming shows and user subscriptions to generate personalized email notifications.

To prevent duplication and ensure robustness, notification data is first stored in MongoDB. Emails are dispatched primarily through SendGrid, with AWS Simple Email Service (SES) serving as a reliable backup.
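A rough sketch of that dispatch logic with the SendGrid .NET client follows; the API key, addresses, and the SES fallback helper are placeholders, not the app's actual implementation:

// Email dispatch sketch: try SendGrid, fall back to SES on failure (placeholders throughout)
using SendGrid;
using SendGrid.Helpers.Mail;

await SendReminderAsync("user@example.com", "New episode airing tonight", "<p>Your show is back!</p>");

async Task SendReminderAsync(string toEmail, string subject, string htmlBody)
{
    var client = new SendGridClient("<sendgrid-api-key>");
    var message = MailHelper.CreateSingleEmail(
        new EmailAddress("noreply@example.com", "TV Shows Reminder"),
        new EmailAddress(toEmail),
        subject,
        null,        // plain-text content omitted in this sketch
        htmlBody);

    var response = await client.SendEmailAsync(message);

    // If SendGrid rejects the request, hand the email over to the AWS SES backup path
    if ((int)response.StatusCode >= 400)
        await SendViaSesAsync(toEmail, subject, htmlBody);
}

// Hypothetical SES fallback, omitted here
Task SendViaSesAsync(string toEmail, string subject, string htmlBody) => Task.CompletedTask;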

Furthermore, OneSignal enables browser-based push notifications, extending user engagement beyond emails.

Security and Performance

Security is integral to the system architecture. JWT tokens and TOTP codes protect API endpoints, preventing replay attacks and ensuring authenticated access.

Redis caching dramatically reduces latency, enabling faster response times from backend services. Cloudflare Workers play an additional crucial role, managing caching and API security efficiently, offering protection against common web vulnerabilities.

Lessons Learned and Future Improvements

Building TV Shows Reminder provided several key insights:

  • Separation of frontend and backend services significantly eases development and maintenance.
  • Leveraging dedicated microservices like Strapi can significantly simplify content and user-data management.
  • Caching and image optimization considerably enhance performance and scalability.

Looking ahead, there is potential for:

  • Enhanced automation and refinement in image lifecycle management.
  • Integration of machine learning for personalized user recommendations.
  • Continuous improvements to the notification engine to deliver even more targeted and timely alerts.

Conclusion

TV Shows Reminder exemplifies a well-thought-out, scalable architecture using modern frontend frameworks, robust backend services, and strategic cloud integrations. The blend of best practices ensures an optimal user experience and a maintainable codebase poised for future growth and enhancements.

Update Property in a Nested Array of entities in MongoDB

MongoDB

Many of us would have already used MongoDB for storing data as documents. However, I come from the world of SQL, where everything is stored in tables following the normalization process. Hence, anything that can be broken down further should be considered for storage in a separate table.

In the world of MongoDB, or rather NoSQL, we do not have anything like that. Here, the more closely connected the data is, the more sense it makes to store it together. Let’s look at the example below:

{
    "_id":"d5ebb427",
    "name":"Anubhav Ranjan",
    "email":"[email protected]",
    "subscriptions":[
        {
            "subscriptionId":"1abc",
            "showId":"d060b8ca",
            "notificationEnabled":true
        },
        {
            "subscriptionId":"2abc",
            "showId":"d060b8cb",
            "notificationEnabled":true
        }
    ]
}

Let’s consider the document above. I have a User object with two subscriptions. Now I want to set notificationEnabled to false for one of the subscriptions, and I want to do this using C#.

Let’s see how this can be achieved using Mongo shell

db.users.find({ _id: "d5ebb427", "subscriptions.subscriptionId":"1abc"});

db.users.update({ _id: "d5ebb427", "subscriptions.subscriptionId":"1abc"}, { $set: {"subscriptions.$.notificationEnabled": false}});

Let’s go ahead and check out the C# implementation

// C# Mongo Schema
public class User
{
    [BsonId]
    public string Id { get; set; }
    [BsonElement("name")]
    public string Name { get; set; }
    [BsonElement("email")]
    public string Email { get; set; }
    [BsonElement("subscriptions")]
    public Subscription[] Subscriptions { get; set; }
}


public class Subscription
{
    [BsonElement("subscriptionId")]
    public string SubscriptionId { get; set; }
    [BsonElement("showId")]
    public string ShowId { get; set; }
    [BsonElement("notificationEnabled")]
    public bool NotificationEnabled { get; set; }
}

In order to do this, we are using the positional $ operator.
As per the MongoDB docs, the positional $ operator “identifies an element in an array to update without explicitly specifying the position of the element in the array.”

var filter = Builders<User>.Filter;
var userSubscriptionFilter = filter.And(
    filter.Eq(u => u.Id, id),
    filter.ElemMatch(u => u.Subscriptions, s => s.SubscriptionId == subId)
);
// Find User with Id and Subscription Id
var user = await usersCollection.Find(userSubscriptionFilter).SingleOrDefaultAsync();

// Update using the positional operator
var update = Builders<User>.Update;
var subscriptionSetter = update.Set("subscriptions.$.notificationEnabled", false);
var updateResult = await usersCollection.UpdateOneAsync(userSubscriptionFilter, subscriptionSetter);

After further reading, I found that you can even change the line from this:

var subscriptionSetter = update.Set("subscriptions.$.notificationEnabled", false);

to this:

var subscriptionSetter = update.Set(s => s.Subscriptions[-1].NotificationEnabled, false);

Here, the -1 index is the C# driver’s strongly-typed way of expressing the positional $ operator.

Happy Coding!

Find 10 Memory Consuming Processes in Ubuntu

Let’s first talk about the reason I started looking into this. I have a couple of services running on Ubuntu, including databases like MySQL and MongoDB, along with Nginx and other services.

However, I sometimes noticed that memory consumption creeps upward, and it’s wise to know which process could be responsible for it.

I decided to look into this using the ps command:

ps -eo pmem,pcpu,pid,args | tail -n +2 | sort -rnk 1 | head
Output for the above ps command

Let’s look at the arguments provided:

ps: Current process snapshot report.
-e: Select all processes. Identical to -A.
-o: Takes a single argument in the form of a blank-separated or comma-separated list, which offers a way to specify individual output columns.
pmem: The ratio of the process’s resident set size to the physical memory on the machine, expressed as a percentage.
pcpu: CPU utilization of the process in the “##.#” format. Currently, it is the CPU time used divided by the time the process has been running (cputime/real time ratio), expressed as a percentage.
pid: A number representing the process ID.
args: Command with all its arguments as a string.
tail -n +2: Output lines starting from the second line, skipping the header.
sort -rnk 1: r (reverse), n (numeric sort), by column 1, i.e., pmem.
head: Output the first 10 lines.
(Based on man ps.)

Hope it helps!

Server reached pm.max_children

Has there been an instance where you were trying to load your site, even a simple WordPress one, and it felt so slow that you might not even get a proper response? You might just end up seeing an error.

This is exactly what happened to me today. I have an Ubuntu VPS with 4GB RAM, and all of a sudden I started getting emails from Jetpack saying my site appears to be down. It hit me with so many questions, especially whether something had happened to my VPS. Did I lose my data? Blah… blah…

But then I thought, let me just jump into the terminal and see what’s going on. The very first thing I did was restart my Nginx server.

sudo service nginx restart

Now that my server had restarted, I was still seeing weird responses from my pages and other services. It was as if the entire system were choked. But my graphs were still showing a huge amount of free memory. So I decided to dig deeper, but before doing that, let’s run the command to restart the Ubuntu server.

sudo shutdown -r now

Once my system restarted, I first went to check the Nginx logs to see if something had gone wrong, but I didn’t find anything useful. So I went to check the logs of the php-fpm engine, and this is what I found.

server reached pm.max_children setting (5), consider raising it

Then it hit me what had happened: I remembered it was probably due to some of the changes I had made to one of the image caching servers. Anyway, I started digging around the error message, especially the setting pm.max_children = 5.

After spending some time, I found this setting in /etc/php/7.0/fpm/pool.d/www.conf.

Now, this is where the tricky part begins, as you have to ensure that the settings you choose are not going to overload your server while still being sufficient to handle the load. Before we jump into it, let’s understand what we are trying to do here.

The log message mentions pm.max_children; pm stands for process manager. The process manager is responsible for spawning child processes to handle the requests sent to the php-fpm engine.

The pool configuration file contains all the values that need to be tweaked to get optimum performance from the server. Let’s look at the current configuration that I have:

pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

So, now I needed to ensure that my configuration was reasonable for my VPS. But in order to do so, I needed to know the maximum memory a child process can be expected to reach. So I executed this command to see the current memory being utilized by the php-fpm child processes.

ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | grep php-fpm

Now, once I hit the enter key, I got this result in my terminal

If you look closely, the maximum that one of the child processes reached was 89.00 MB. So computing an approximate value for pm.max_children can be as simple as this:

pm.max_children = Total RAM dedicated to the webserver / Max of child process size

pm.max_children = 3948 / 89 = 44.36

Considering the memory for other processes, I believe it’s safe to say that I can easily set my pm.max_children to 40 at least.

pm.max_children = 40
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

[09-May-2020 13:24:45] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 5 total children
[09-May-2020 13:24:46] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 0 idle, and 6 total children

As you can see above, I also had to adjust the values for pm.start_servers, pm.min_spare_servers, and pm.max_spare_servers; without doing so, I kept getting those warnings. Here are the latest values that I set after considering the warnings:

pm.max_children = 40
pm.start_servers = 15
pm.min_spare_servers = 15
pm.max_spare_servers = 30

Please note that very high values may not do your system any good; they might even overburden it.

Hope it helps 🙂

Reinstall NuGet packages after upgrading a project

We all have projects which have been running for quite some time. However, when we open them after a few months, we find many upgrades that can be performed.

Well, this happened to me this week. I found that my project, which has been running for almost 5 years now, started off with .NET Framework 4.5 and was still targeting .NET Framework 4.5.2. So I decided to upgrade the target framework to .NET Framework 4.7.2. Although this is good, I then started seeing some warnings.

Some NuGet packages were installed using a target framework different from the current target framework and may need to be reinstalled. Visit http://docs.nuget.org/docs/workflows/reinstalling-packages for more information. Packages affected:

To be honest, we programmers always tend to ignore warnings and focus more on errors. However, this is not one that should be ignored, especially if this is a library that will be referenced somewhere.

How to fix this?

The easiest way to do this is by executing this command in Package Manager Console

Update-Package -Reinstall -ProjectName Project.Name.Here

In the above command, we can see a parameter -Reinstall. It instructs the NuGet Package Manager to remove the NuGet packages and reinstall the same versions. This gives NuGet a chance to determine which assembly is most appropriate for the current framework targeted by the project.

To conclude, it was really easy to get rid of this warning, which can occur whenever a project is upgraded to a different target framework.

Happy Coding!

Installing Wine 5.0 in Ubuntu

Before we dive into the commands for installing Wine, let’s first talk about what Wine is in General in the Linux World.

As described in Wine’s Site:
Wine (originally an acronym for “Wine Is Not an Emulator”) is a compatibility layer capable of running Windows applications on several POSIX-compliant operating systems, such as Linux, macOS, & BSD. Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls into POSIX calls on-the-fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop.

In short, it allows you to run Win32 .exe applications built for Windows on Linux.

Let’s look at the steps required for installing Wine 5.0 on an Ubuntu 18.04 LTS system using the apt-get package manager.

1. Setup PPA

If this is a 64-bit system, then we need to enable the 32-bit architecture. Once done, install the key used to sign the Wine packages.

$ sudo dpkg --add-architecture i386 
$ wget -qO - https://dl.winehq.org/wine-builds/winehq.key | sudo apt-key add - 

2. Enable the Wine Apt repository

$ sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main' 
$ sudo add-apt-repository ppa:cybermax-dexter/sdl2-backport 

3. Install Wine on Ubuntu

Time to install Wine packages from the apt repository.
The --install-recommends option will install all the packages recommended by the winehq-stable package on your Ubuntu system.

$ sudo apt update 
$ sudo apt install --install-recommends winehq-stable 

In case the install fails due to some unforeseen circumstances, you can try and install the same using aptitude.

$ sudo apt install aptitude 
$ sudo aptitude install winehq-stable 

4. Check Wine version

You can check the wine version installed by running the below command:

$ wine --version 

wine-5.0 

Hope this helps.

Happy Coding!