Different Ways You Must Know To Deploy Your Application

A theoretical overview of the approaches you can adopt to deploy your application.

As software developers, we all want to build something genuinely useful for an audience. Once a new project has been developed, we need to showcase it to potential clients.

Photo by [Patrick Tomasso](https://unsplash.com/@impatrickt?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)

Now, these clients could be other team members, the QA team, the actual client or the end users. Consider any big organisation, whether Facebook, Google or Uber: if their source code only ever sat on a local machine and never saw the light of the real world, what would their fortunes be?

So any application developed with passion eventually needs to be hosted somewhere. Unlike in older times, we now have multiple cloud providers with various offering models such as PaaS (Platform as a Service), SaaS (Software as a Service) and IaaS (Infrastructure as a Service).

The tutorial is packed with information, so read carefully and upskill yourself with something new.

Difference between PaaS, SaaS and IaaS

These are three different strategies for consuming cloud services. They all accomplish roughly the same thing; it is only the approach that differs.

Neither of them is above another; it is simply a question of which use case you want to follow. Some options, such as Firebase or Vercel, give you the ease of one-command deployment, while others require a bit more work on the setup side. A standalone deployment on a server gives you the ability to design your infrastructure in your own way.

We will discuss the differences between all three models very briefly. These models are not specific to code deployment; they are general categories for any cloud product, but here we will talk about them with respect to deploying source code.

PaaS (Platform as a Service)

PaaS stands for Platform as a Service. In this type of service, the user is provided with a platform on which they only do the basic initialisation before using the service.

The end user needs to learn the platform, and the rest of the architecture setup is handled by the platform provider, so the hassle of setting up each and every server is saved.

Netlify, Firebase etc. come under this bracket, as they give you a web interface for bootstrapping your application. There you can provide information regarding scalability, instances and so on, and save yourself the hassle of setting up the entire infrastructure.

IaaS (Infrastructure as a Service)

Infrastructure as a Service, in simple terms, can be defined as giving users the leverage to set up the entire infrastructure themselves. IaaS is for users who are willing to design their whole infrastructure, want everything to run their way, and have the technical knowledge to do all the bootstrapping.

The end user has the freedom to choose the operating system, the RAM, the storage of the server, the type of load balancer and so on. Suppose you are more comfortable using Linux than a Windows server; you can choose a Linux-based operating system for your server.

SaaS (Software as a Service)

In SaaS, by contrast, the operating system and other details are not something you can opt for; you get already-deployed software in which you can perhaps host your application, mostly static content, a bit like serving your app from Google Drive. It is not a way I would particularly recommend.

Strategies that can be followed to deploy

The following are the different strategies I have followed to deploy my applications to the cloud. This is not an exhaustive list of the strategies that can be used. Let's get started with each of them.

  1. Standalone Servers

  2. Services Like Firebase/Vercel

Standalone Servers

A standalone server falls under one of two brackets: either Infrastructure as a Service, or the in-house server setup that various organisations still prefer to maintain.

Photo by [İsmail Enes Ayhan](https://unsplash.com/@ismailenesayhan?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)

Most organisations, though, use services from cloud providers instead of setting up their own servers, since in-house hardware is an overhead the company has to maintain. Regardless of that, we are more concerned with the deployment part of the picture.

Here we will assume that a standalone server has been provided to us. We are the ones who choose what we want: a Linux-powered server with 10 GB of RAM and 80 GB of storage, or anything else. We pick these configurations based on the application's requirements, depending on whether it needs more processing power or more storage.

In this instance, we have a Linux virtual machine ready for us to deploy the application on. It works almost the same way as your local machine.
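For instance, connecting to such a machine is a single SSH command once the provider hands you its public IP address and a key pair. A minimal sketch, assuming a hypothetical IP address, key file and the default ubuntu user:

```bash
# Connect to the remote Linux server over SSH
# (the IP address, key path and username below are placeholders)
ssh -i ~/.ssh/my-server-key.pem ubuntu@203.0.113.10

# Once connected, the usual Linux tooling is available, for example:
sudo apt-get update && sudo apt-get install -y nginx nodejs
```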

The one piece of additional knowledge you need is how to keep the service running in the background. For Node.js applications, I mainly use the pm2 process manager; you can also run the app as a system service. All of this needs to be configured.
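A minimal sketch of that setup with pm2, assuming a Node.js entry point called app.js and a process name of my-api (both placeholders):

```bash
# Install pm2 globally and start the app under its supervision
npm install -g pm2
pm2 start app.js --name my-api

# Persist the process list and generate a boot script so the app
# also survives server reboots (pm2 prints the exact command to run)
pm2 save
pm2 startup
```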

Some of the major players in the market are Amazon Web Services, Google Cloud Platform and Microsoft Azure. You can try any of these services; almost every platform gives you some free credit to experiment with, provided you have a valid debit/credit card.

Disclaimer: these platforms give you limited-time free credits, so be careful while experimenting; a misconfigured setup can lead to a huge bill. Do your research before using any of their services.

Services Like Firebase/Vercel

We have now discussed one way of deploying your application, i.e. using a standalone server. In the case of a server, we have an IP address which we map to a DNS name.

The DNS (Domain Name System) name is what the end user is given to access the system, since raw IP addresses are far too hard to remember. If you want to learn more, do read the article "What Happens Whenever You Type a Website URL in the Browser?"; it will give you good context on the same.
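To see that mapping in action, you can resolve a domain to its IP address straight from your terminal; a quick sketch using standard tools (example.com stands in for your own domain):

```bash
# Resolve a domain name to the IP address(es) it points to
dig +short example.com

# An alternative if dig is not installed
nslookup example.com
```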

Vercel (Image by author)

Now the other way we will discuss is using services like Vercel, Firebase etc. These services provide a user interface in which you can configure your application. You can also set up a CI/CD pipeline there, which eases the work of deployment.

Firebase Platform (Image by author)

You also get a CLI tool that you can configure in your terminal. In Firebase, for example, you simply install the CLI tool; post-installation, you merely log in to your account using the firebase login command. This gives you the power to push and deploy directly from your terminal.

Firebase CLI tool (Image by author)
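As a rough sketch of that workflow for Firebase Hosting, assuming a static site already built into a local folder:

```bash
# Install the Firebase CLI and authenticate in the browser
npm install -g firebase-tools
firebase login

# Initialise Hosting in the project directory (interactive prompts
# ask for the public folder, single-page-app rewrites, etc.)
firebase init hosting

# Push the built assets live
firebase deploy
```

Vercel's CLI follows a similar pattern: install it with npm install -g vercel and run vercel from the project root to deploy.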

You can also map your own DNS name to these platforms. The setup for all of this is easy and well documented on the respective platforms. Small applications/projects are mostly free on almost all of them, but as your load increases they start charging you, which is fair enough.

Basics

Since I assume you are exploring different ways to deploy a project, you should already be aware of the basic information required for a successful deployment.

But to brush up on the facts, I will discuss them very briefly.

Operating System

The operating system is the software that connects the user to the hardware. macOS, Windows (XP, 10 and so on) and Linux all come under this category. The main purpose of an operating system is to give the user the best possible experience while doing their work.

Photo by [Dmitry Chernyshov](https://unsplash.com/@oneor0?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)

An operating system is not a mere user interface; a lot goes on behind the scenes, such as memory management, garbage collection, context switching, conversion of user actions into actual tasks and so on.

Most people are comfortable with one particular operating system. As developers, if we choose a standalone server, we get to pick the operating system of our choice, since we will be connecting to the server through our terminal. On an operating system we are comfortable with, we already know the commands, and using the server becomes easier.

Auto-Scaling and Load Balancing

The need for load balancing and auto-scaling comes into force once we need to scale up our application, because a scalable application requires the code base to be deployed on more than one server.

Photo by [Austin Neill](https://unsplash.com/@arstyy?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)

Once the code is deployed on multiple servers, the end user still needs to be given a single DNS name. How do you redirect each request to the right server? The simple answer is by using a load balancer.

We configure the load balancer with a strategy such as round robin, first come first served, priority scheduling and so on, and based on this the load is distributed. We can also manually assign weights to the servers if we want a specific server to take extra load.
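As a concrete sketch, here is what weighted round robin can look like with an nginx reverse proxy in front of two application servers (the IP addresses, port and file path are placeholders; managed load balancers expose the same ideas through their own consoles):

```bash
# Write a minimal nginx config: two upstream servers, weighted round robin
sudo tee /etc/nginx/conf.d/myapp.conf > /dev/null <<'EOF'
upstream myapp_backend {
    server 10.0.0.11:3000 weight=3;   # takes roughly three times the traffic
    server 10.0.0.12:3000 weight=1;
}
server {
    listen 80;
    location / {
        proxy_pass http://myapp_backend;
    }
}
EOF

# Validate the configuration and reload nginx
sudo nginx -t && sudo nginx -s reload
```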

As an example of the need for auto-scaling, suppose you have a shopping website and all of a sudden its load increases manifold because a sale is on.

Now you don't have enough servers to handle that load; you are doomed. This is where auto-scaling comes in, so that your system can cope with the new traffic by adding more servers instantly. I hope you get the point.

CAP Theorem

CAP stands for Consistency, Availability and Partition tolerance. The CAP theorem states that we cannot achieve all three guarantees at once.

If we have a distributed system with data stored in two different partitions, and a write lands on only one of them, then either the data served will not be consistent or the system cannot remain available all the time.

For example, you update your name on one partition and some other user asks for your name from the other partition. In that case, the other partition will return the older name until the modified name has been updated in both partitions. Syncing takes some time, which we can buy either by making the system unavailable for a while or by adding latency to the request.

The system can be eventually consistent only if we accept that latency, which gives the data time to sync. That is why the CAP theorem is so significant in the distributed-systems domain.

Final Thoughts

As a software developer, you should be aware of the ways an application can be deployed, even though in most organisations this work is handled by the DevOps team.

A software developer's work is often limited to developing the code locally and pushing it to a repository. By learning about the different deployment strategies, along with containerisation using Docker and leveraging the power of Kubernetes and OpenShift, you can make your deployments far more fluid.

I hope this article gives you the theoretical knowledge behind the different ways of deployment.

Originally Published at Medium

About The Author

Apoorv Tomar is a software developer and part of Mindroast. You can connect with him on Twitter, LinkedIn, Telegram and Instagram. Subscribe to the newsletter for the latest curated content, and don't hesitate to say 'Hi' on any platform, mentioning where you found his profile.