Why move to the cloud?

The web is moving to the cloud, and we are no exception. We are gradually migrating our biggest projects, Crossuite and SportNet, to the cloud, and we also use it for all new clients in the e-commerce area. What are the benefits of the cloud, and who is this solution suitable for? That’s what we asked our colleague Erik, who has a wealth of experience with the cloud.

What are the main differences between a physical server and a cloud solution?

From my point of view, the biggest difference is flexibility. What I need is available in a few clicks; I have everything in my own hands. If something needs to be changed, I don’t have to call support, order RAM or drives, and wait. And if I find out that a project is running out of space, I can solve it literally in a few minutes.

The second difference is, let’s say, user-friendliness. With a physical server, all components must be installed and configured manually. You create a database, and then you have to set up its backups, monitoring and a lot of other things. In the cloud, you create a database and everything else is set up and runs automatically.

And the third one, at least as far as I can see, is speed. My experience so far is that even with identical configurations, a cloud server is faster than a physical one.

All of this sounds like benefits. Are there any risks?

Definitely the price. Cloud solutions, by being able to optimise themselves, can also “collect” huge amounts of money by themselves. For example, if an administrator configures automatic scaling wrong, then at some point during operation, when the monitoring detects a high load, the cloud itself decides to add resources – and the user, of course, pays for that increase. Therefore you shouldn’t click through the setup blindly; study it, understand the environment and set safe limits.
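The danger Erik describes boils down to scaling logic without a hard ceiling. Here is a minimal, purely illustrative sketch of that idea – real autoscalers (AWS Auto Scaling, for instance) implement the loop for you, and the function name and thresholds below are invented for illustration; the point is the explicit `max_capacity` safety limit:

```python
def next_capacity(current, cpu_load, max_capacity,
                  scale_up_at=0.8, scale_down_at=0.3):
    """Decide the next instance count, never exceeding a hard cap.

    Illustrative sketch only: thresholds and names are hypothetical.
    """
    if cpu_load > scale_up_at:
        desired = current + 1          # high load -> add an instance
    elif cpu_load < scale_down_at and current > 1:
        desired = current - 1          # low load -> remove an instance
    else:
        desired = current              # load is fine -> keep as is
    # The safety limit: without it, a sustained spike (or a bug in the
    # scaling rule) could keep adding instances and the bill would grow
    # unbounded.
    return min(desired, max_capacity)

# A spike under a cap of 4 instances stays at 4, no matter the load.
print(next_capacity(current=4, cpu_load=0.95, max_capacity=4))
```

The whole "set safe limits" advice is that one `min(...)` line: scaling decisions are fine to automate, but the upper bound should always be a number a human chose deliberately.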

With a good setup, in the long run, the monthly price of the cloud can be roughly the same as that of a physical server. These costs are recouped in other forms – for example, the administrator has more time for other things, because they don’t have to deal with the architecture as much. It also pays off thanks to the monitoring, through which we discover a potential problem in time and solve it before it causes an outage and, with it, a financial loss. However, you need to do the maths properly and make the right choice.

Another risk I can think of is possible limitations. For example, Amazon Web Services (AWS) blocks port 25 (SMTP) by default, which we used as a fallback to send e-mails for clients who hadn’t set up their own SMTP. After moving the project, we had to rewrite part of the code to avoid this port so that everything worked on AWS as well. However, some of these things can also be solved with AWS support, which is very fast and often proactively proposes optimisations and solutions. Perhaps they would even allow us to send over port 25 for our project if we asked them to.
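The usual way around a blocked port 25 is to submit mail on port 587 with STARTTLS instead. A minimal sketch with Python’s standard library – the host names, addresses and credentials are placeholders, not anything from the project:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    """Assemble a plain-text e-mail message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_submission(msg, host, user, password, port=587):
    """Send over the mail submission port (587) with STARTTLS,
    instead of the blocked port 25. Host and credentials are
    placeholders for whatever SMTP relay the project uses."""
    with smtplib.SMTP(host, port) as server:
        server.starttls()                # upgrade to TLS before auth
        server.login(user, password)
        server.send_message(msg)

msg = build_message("noreply@example.com", "client@example.com",
                    "Order confirmation", "Thank you for your order.")
# send_via_submission(msg, "smtp.example.com", "user", "secret")
```

The rewrite Erik mentions was essentially this kind of change: same message-building code, different (authenticated, TLS-protected) delivery path.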

You talked about the right choice. What are the current options, and by what criteria should you choose the right cloud?

The best-known solutions are Amazon Web Services, Google Cloud and Microsoft Azure, but there are certainly many more. We approached the selection very pragmatically – we created a test account on each cloud and set the same requirements everywhere. Then we compared the prices of the services, tried out the user interfaces, the special features, etc. Eventually, we chose Amazon.

Those special features are something that can be quite important when choosing (for example, support for certain types and versions of databases, the possibility of extending them, etc.). If a client needs a specific service that only one of the cloud solutions offers, then there isn’t much to choose from.

And the last aspect of the choice is the human factor. If we have a person on the team who knows Amazon inside out and nothing about the others, it’s definitely better to use Amazon than to train someone on the others.

You said it wouldn’t work without the human factor. So how does the administration work and what’s the task of the person who takes care of a cloud server?

There are some extremists who say that as long as it’s working, you don’t even have to look at it. But as I said, a client could needlessly lose a lot of money that way. I check the server at least twice a day. For example, I open the memory monitoring and see that we only use a third of the 16 GB of RAM we initially provisioned, so I make a change immediately – saving the client’s money, of course. I also check the database. If a value has spiked, I check whether it’s a normal increase or some kind of error; in the latter case, I look for a solution right away. Currently, in cooperation with Mr. Madliak, we’re working to ensure that, in the event of unusual increases or decreases in certain values, such as database utilisation, I receive a notification and can deal with it before the values reach critical levels.

The first month or two of operation are the most time-consuming, because we’re looking for the optimum. But over time, it gets fine-tuned, and then it really only takes a few seconds of checking a day.

For whom is it efficient to move to the cloud? 

For anyone whose physical server can’t keep up. Specifically, for fast-growing companies that need to address scalability effectively. For them, the cloud is definitely a safer and more efficient solution than building their own server room.

Perhaps also for companies that offer similar services to ours. Our e-commerce team has a cloud and hosts the projects of dozens of clients on it, which ultimately makes it worthwhile for us and our clients to operate.

Is there any way to test the transfer, simulate it, so that the client knows in advance that everything will run smoothly?

Yes. For example, with Crossuite, we made a copy of the “live” database from the original application (internally called Gama), which currently runs on our physical server, and moved it to the application’s cloud twin – Delta. We tested this setup manually and automatically for three to four weeks. Only when we all – developers, testers and the client – agreed that everything was set up correctly did we actually migrate the almost 2,000 users.
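One common automated check in this kind of dry run is verifying that the copied data actually matches the source, table by table. A hedged sketch of the idea – the interview doesn’t say exactly how the Gama/Delta comparison was done, and the function names and sample rows here are invented:

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent checksum of a table's rows (rows as tuples).
    Illustrative: a real migration check would also compare schemas."""
    digest = hashlib.sha256()
    for encoded_row in sorted(str(row).encode() for row in rows):
        digest.update(encoded_row)
    return digest.hexdigest()

def tables_match(source_rows, target_rows):
    """True if the copied table has the same rows as the original,
    regardless of the order they come back in."""
    return (len(source_rows) == len(target_rows)
            and table_fingerprint(source_rows) == table_fingerprint(target_rows))

gama_users = [(1, "alice"), (2, "bob")]       # rows from the source DB
delta_users = [(2, "bob"), (1, "alice")]      # rows from the cloud copy
print(tables_match(gama_users, delta_users))  # same data, different order
```

Checks like this can run repeatedly during the trial weeks, so that by the time everyone signs off, "everything is set up correctly" is backed by data rather than impressions.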

How does such a project transfer take place?

The first step is definitely the analysis associated with choosing a cloud, followed by designing the architecture and creating the account itself. This should be followed by an update – for example, to the latest version of the language the project is written in, and so on. This isn’t a requirement, but it allows the client to take full advantage of the cloud, because older versions may have limitations or may not be supported at all.

The easiest way is to set up the cloud server exactly like the original server and monitor it. Monitoring will then show you where to scale up and where to scale down.

Instead of just “copying” the original server’s settings, however, it’s better to take advantage of things that only the cloud can do. For example, you can package the code into a Docker image and run it in the cloud, or use serverless code execution. Of course, everything has its advantages as well as disadvantages, and even new “features” need to be used wisely.
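To make "serverless code execution" concrete: instead of running a long-lived server process, you deploy a single function that the cloud invokes per event. The handler shape below follows the AWS Lambda convention for Python (`lambda_handler(event, context)`); the payload and response are invented examples:

```python
import json

def lambda_handler(event, context):
    """A minimal AWS Lambda-style handler. 'event' carries the request
    payload; 'context' carries runtime info. The cloud starts, scales
    and bills this per invocation -- there is no server to administer."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally you can call it like any function; in the cloud, an HTTP
# request or queue message becomes the 'event'.
print(lambda_handler({"name": "Erik"}, None))
```

This is also a good example of Erik’s caveat: per-invocation execution is great for spiky workloads, but for constant heavy traffic a conventionally provisioned server can be cheaper, so even this "feature" needs to be used wisely.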

Thank you for the interview.