If you could not attend the re:Invent conference at the end of last year, we will tell you about the new products AWS has prepared for 2023.

So let’s get started:

AWS Supply Chain

Created to help address the problems that exist in the global supply chain.

We know that Amazon has been in the market for more than 30 years, and in that time it has been building and refining the logistics network and supply chain behind every product it sells.

And, as the saying goes, no one can take away the experience you have earned. Amazon has put those 30 years to good use, and that experience has been captured in AWS Supply Chain for all AWS customers.

The service gives your users the ability to:

  • View stock levels.
  • Get predictions based on machine learning.
  • Project risks and reduce costs.
  • Receive alerts for products that will soon be out of stock.
  • Put the parties involved in contact to resolve problems.

It is initially available in preview in the Northern Virginia, Oregon, and Frankfurt regions.

AWS Clean Rooms

As we know, these days it is not enough to have data; companies also need to collaborate on it with partners without exposing it.

Hence the creation of AWS Clean Rooms. It works on the concept of a “clean room”: a controlled space where multiple parties can analyze their combined data together. It is something that has been brewing for a while and came to light in the past year.

AWS Clean Rooms will allow you and your partners to analyze combined data sets and generate insights without the need to duplicate or share the underlying data itself with third parties.

All this with the intention of “taking care” of the information received and “protecting” it.

Amazon Security Lake

Security will always be important for businesses.

That is why AWS has created Amazon Security Lake, which centralizes an organization’s security data into a purpose-built data lake so that access to information can be better governed.

This product promises to manage, aggregate, and analyze the records and events generated as data is accessed throughout its entire life cycle.

In this way, threat detection is enabled, and incidents can be investigated and acted on more quickly.


Amazon Omics

As we have said, AWS has focused on data management over the years. Raising the bar, it has now turned to the biological data of human beings, which is why Amazon Omics has emerged.

It is designed for large-scale storage, research, and analysis of “human genome” data from entire populations.

It targets companies such as Pfizer and Moderna, among others, that need the genomic data of the people participating in their trials, both current participants and those who will join these research projects.


AWS Glue Data Quality

In the same vein of data storage and analysis, AWS created this new service to improve the quality of the data managed across multiple information sources and “data lakes”.

AWS Glue Data Quality’s core offering is to automatically measure, monitor, and manage data quality.

In addition, it can be scheduled to run periodically as the stored data changes, simplifying the process of maintaining the quality of the information being processed.
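To make the idea of declarative quality rules concrete, here is a rough sketch in plain Python. This is illustrative only, not the Glue API; the dataset, rule names, and helper functions are all invented for the example:

```python
# Illustrative sketch of declarative data-quality rules, in the spirit of
# services like AWS Glue Data Quality. Plain Python only; not the Glue API.

def is_complete(rows, column):
    """Rule: every row has a non-null value for `column`."""
    return all(row.get(column) is not None for row in rows)

def is_unique(rows, column):
    """Rule: no duplicate values appear in `column`."""
    values = [row.get(column) for row in rows]
    return len(values) == len(set(values))

def evaluate(rows, rules):
    """Run each (name, rule, column) check and report pass/fail."""
    return {name: rule(rows, column) for name, rule, column in rules}

rows = [
    {"order_id": 1, "amount": 30.0},
    {"order_id": 2, "amount": None},   # incomplete amount
    {"order_id": 3, "amount": 12.5},
]

rules = [
    ("order_id is complete", is_complete, "order_id"),
    ("order_id is unique", is_unique, "order_id"),
    ("amount is complete", is_complete, "amount"),
]

report = evaluate(rows, rules)
# The "amount is complete" rule fails because of the None value above.
```

Scheduling something like this to re-run as the data changes is what turns one-off checks into ongoing quality monitoring.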


Amazon Athena for Apache Spark and Amazon DocumentDB Elastic Cluster

AWS builds these products for data storage and analysis.

In the case of Amazon Athena, support has been added for the popular open-source Apache Spark analytics engine.

The benefit Amazon Athena offers is that it automatically provisions the necessary resources and speeds up both the execution time and the delivery of results for queries run on that engine.

In the case of Amazon DocumentDB Elastic Cluster, it’s able to support millions of writes per second and store up to 2 petabytes of data, for those who need to manage JSON data at scale.

Amazon DocumentDB Elastic Cluster achieves this by using a distributed storage system, which automatically partitions data sets across multiple nodes without the need for administrators to manage these complex processes.
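The automatic-partitioning idea can be illustrated with a toy sketch (conceptual only, not how DocumentDB itself is implemented): hash a shard key and map each document to a node, so documents spread out without anyone managing placement by hand.

```python
import hashlib

def shard_for(key, num_shards):
    """Map a shard key to a node index by hashing it (conceptual sketch only)."""
    digest = hashlib.sha256(str(key).encode()).hexdigest()
    return int(digest, 16) % num_shards

# Documents with the same customer_id always land on the same node, so the
# cluster can spread writes across nodes without the operator managing
# the placement of any individual document.
docs = [{"customer_id": cid, "total": cid * 10} for cid in range(6)]
placement = {d["customer_id"]: shard_for(d["customer_id"], 3) for d in docs}
```

The key property is determinism: the same key always hashes to the same node, so reads know exactly where to look.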

AWS SimSpace Weaver

On a different note, AWS has created SimSpace Weaver, a service for running massive spatial simulations without having to manage the infrastructure.

What does this mean?

It means that with AWS SimSpace Weaver you will be able to build quite complex 3D simulations and models. It also ships with an SDK so you can assemble things yourself, including the possibility of incorporating digital twins.

As if you were Bob the builder.

Amazon SageMaker

AWS has improved the popular machine learning service.

It has added 8 new capabilities to optimize, train, and deploy machine learning models, mindful that AI is growing exponentially.

Among these new capabilities are:

  • Access control and permissions.
  • Documentation and review of models throughout the machine learning life cycle.
  • A single interface that unifies the control and monitoring of models.
  • Improved data preparation to increase data quality.
  • Real-time collaboration for data scientists and automated model validation using real-time inference requests.

And if that were not enough, it will also be able to support geospatial data.

What is this for?

Scientists can keep finding ways to understand and influence weather, agriculture, and other natural domains where humans continue to intervene, now with the help of AI.

Tell us what you think of the AWS news. Which ones are you interested in implementing in your business or venture?

A few days ago, we discussed some aspects related to cloud migration. Why migrate? What should we consider? Among other topics. We even talked about some servers.

But… What is a multi-server environment? What benefits does it bring us? Why implement it? Does it apply to everyone?

We are going to work on these questions for you in this post. We will try to delve into which is the most beneficial option for your business.

The first thing is to know or recognize what a multiserver environment is.


A multi-server environment is a type of server infrastructure that uses multiple physical servers to provide users with access to different services and applications.

In other words, multiple servers can work better than a single server, because you get better performance, reliability, and availability.

However, this proposal does not apply to everyone, hence the importance of answering the following questions:

·       Does your company need a multiserver environment?

Once you’ve answered that with members of the IT team and the executives who lead the most important areas of your business, you can move on with the process.

If your business is small, it’s important to consider what your core business is and what that infrastructure will look like.

If you have already migrated to the cloud, why are you considering this?

Additionally, it is important to know at the cost/benefit level how this investment benefits you.

Now, it is important to know what are the benefits of a multi-server environment.

Let’s see what are the benefits of dividing the resources between several servers.

·       Raise the security level

One of the main concerns of enterprises is online security: as technology, access to it, and digitization increase, so does the exposure of your data and resources.

Therefore, having a multi-server environment increases security.


Because you don’t have all your eggs in one basket.

By allocating resources across multiple servers, the data is separated from the central servers, just like the resources. Because everything lives in different places, there is less chance of all the resources or data being damaged at once.

For companies that handle confidential data, this is non-negotiable: keeping the databases separated from the application servers is a priority.

·       Cost-effectiveness improves with a multi-server environment

This seems like a contradiction. We will try to explain why it is not.

For example: if you are having problems with server performance, or with resources such as CPU or RAM, a multi-server environment will be more cost-effective.

Separating your server resources allows each kind of task to run on its own, and it is one of the most effective ways to get the most out of your environment.

In addition, improving reliability will allow you to reduce your costs. Because database and application servers have different needs and functions, separating them improves both.

Another example of cost optimization is assigning each server a dedicated role, which offers better performance at a lower price compared to traditional single-server setups.

·       Resources under effective supervision

A multi-server environment will allow for more effective monitoring of resources because it is directly related to your server’s configuration.

For a company, tracking functions and resources is essential to ensure greater efficiency and obtain better performance. A careful eye often helps to maximize uptime and the wisest allocation of resources. This can be done thanks to multiple servers.

If your business is riding a wave of rapid growth, effective resource monitoring is essential.

·       A multi-server environment reduces dependency

If you liked playing Monopoly, the multi-server environment is one place where it doesn’t apply.

In a multi-server environment, each server is responsible for its own functions and tasks.

Each one fends for itself, which allows more connections to be handled.

Also, this will make your company or online business more secure and reliable.

And if one server in a multi-server environment goes down, the other servers can continue providing access to the services and applications that users need.

In other words, there will be less downtime and fewer interruptions for users.

·       Increased scalability and resilience

This is another benefit: the load is spread across multiple servers, allowing more traffic and less chance of the system locking up.
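The load-spreading and failover ideas above can be sketched in a few lines of Python: a toy round-robin balancer over a hypothetical pool of servers (the names are invented) that skips any server marked down, so traffic keeps flowing after a failure.

```python
class RoundRobinPool:
    """Toy round-robin load balancer that skips unhealthy servers."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)
        self._i = 0

    def mark_down(self, server):
        """Record that a server has failed and should receive no traffic."""
        self.healthy.discard(server)

    def next_server(self):
        """Return the next healthy server, rotating through the pool."""
        for _ in range(len(self.servers)):
            server = self.servers[self._i % len(self.servers)]
            self._i += 1
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

pool = RoundRobinPool(["app-1", "app-2", "app-3"])   # hypothetical names
first_four = [pool.next_server() for _ in range(4)]  # load spreads evenly
pool.mark_down("app-2")                              # one server fails...
after_failure = [pool.next_server() for _ in range(3)]  # ...traffic continues
```

Real balancers add health checks, weights, and connection draining, but the core idea is exactly this rotation.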


Multi-server environments are an excellent option if diversification is important to you as an entrepreneur (Monopoly aside) and your business has reached the point that requires this change.


Call us if you want more information.

In another post, we talked about why to migrate to the cloud. This time we will talk about how to know which is the best strategy to migrate to the cloud.


The first thing that is recommended to do before migrating to the cloud is:


  • Select the right cloud infrastructure for your needs

This means it will need the processing power and the storage space to handle the workloads you have today.

Not to mention the capacity and power for your growth projections.


  • Vendor Price Assessment

Once the infrastructure is defined, we move on to see the prices of the cloud provider.

Evaluate the one that best suits the budget and meets the system requirements worked out in the previous point.


  • Security

Decisions need to be made about cloud security and compliance, including cloud provider metering policies, where the data will reside (public cloud or private cloud), permission settings, encryption options, and much more. 


And now, what should we do?

Once everything suggested above is resolved, we are ready to draw up a project plan to migrate to the cloud.

It’s important to note that with a clear, well-understood plan or strategy, the migration will be done much faster and cost much less than without one.


  1. What is the best strategy or plan for cloud migration?

Some providers, such as Liquid Web, consider that the best plan or strategy is to:

  • Make a map detailing each part of the current infrastructure.
  • Know what will be migrated and where
  • What will happen to existing assets that will no longer be needed?
  • Communicate and share this information with the parties involved, early enough to learn their concerns. Based on that, make whatever changes are needed.

Instead, other industry references, such as AWS, recommend the “6 migration strategies”, better known as “the 6 R strategies”.

What’s this about?

They are the most used in the market and are detailed below:

  1. Rehosting – lift and shift

It consists of moving applications without making any changes. It suits large-scale legacy migrations, where organizations want to move quickly to meet their business goals.

These applications are usually rehosted, and the process can be automated with some cloud tools.

However, some prefer to do it manually, because they find it easier to optimize or re-architect apps once they are running.

Because they consider that the most complex part (migrating the application, data, and traffic) has already been done.


   2. Replatforming — sometimes called “lift-tinker-and-shift.”

This plan or strategy refers to making “some optimizations” in the cloud.

For what?

To see the fruits of our labor tangibly.

This will allow the company to maintain the core application architecture and save money that would be spent on issues like licenses, electricity, etc.

   3. Refactoring = Re-architecting   

In this strategy it would be good to ask yourself:

Do I want to migrate from a monolithic to a service-oriented architecture? 

What is a business need I have that would be difficult to achieve in the current infrastructure?

This strategy tends to be the most expensive, but it can also be the most beneficial if you have a well-positioned product in the market.


  4. Repurchasing  

Migrate from perpetual licenses to a software-as-a-service model. Moving a CRM to Salesforce.com, an HR system to Workday, a CMS to Drupal, and so on.

     5. Retire

In this case, it would be good to ask yourself:

Which applications are no longer needed? What should I remove?
Once you have the answers, you need to know who the people affected or in charge of that area are.

According to AWS, “it is estimated that between 10% and 20% of enterprise IT portfolios are no longer useful.”

This strategy can reduce the number of applications to protect and can be used as an engine to drive the business forward.


    6. Retain = Revisit

Maintain business-critical applications that require major refactoring before they can be migrated. You can revisit all the apps that fall under this category later.


Amazon Web Services – AWS Migration White Paper


As you can see, it is important to assess what is good for your business before implementing it. That way, you will be optimizing time, money, and resources.

If you need to migrate or switch to the AWS cloud, at HADO, we can help you.

Contact us for more information.

If you have not yet decided to migrate your business to the cloud, then you are in the right place.

Because today we are going to give you several reasons that will answer the famous question Why migrate to the cloud?

Before we jump right into the why, we want to explain:

What is migrating to the cloud?

Well, migrating to the cloud is a move.

That’s right, you’re moving, but instead of taking your stuff with you physically, you’re moving digitally.

It means taking all your IT resources, services, applications, databases, and digital assets, either partially or completely, to the CLOUD; that is, you will no longer host them on the hardware and software solutions inside your company.

Note that this term also applies when we move from one cloud to another.

We understand that nowadays it is rare to find business owners and/or entrepreneurs who have not yet at least partially migrated their companies or ventures to the cloud.

However, it may be that you have already gone through this process and are still not clear on why it is so important, or that you are only just starting to consider it.

Hence, we are launching this post to give you new nuances on this topic, and here we go.

Why migrate to the cloud?

Migrating to the cloud is important because:

  1. You will get rid of obsolete software, hardware, and unreliable firewalls that can cause problems in the short or medium term and that fail to support you when work expands beyond the company, for example with the home-office model, which today is one of the standard ways of working.


  2. Reduction of operating costs: this is the spearhead for many companies, especially in Latin American countries.

This happens because you only pay for what you use.

You do not have to have data centers, nor have computer equipment dedicated to its maintenance, or equipment that works as a server (which is not cheap at all).

Nor will you have to pay high electricity bills, and you will have a reduction in labor liabilities since the DevOps team and system administrators will not spend their time doing backups or hardware maintenance.

At HADO, we help you reduce these costs and we will also optimize your investment during the migration process.



  3. Security: just like when we move physically, it is very important to feel that we are in a safe place.

That is why reliable cloud providers have taken care to build their respective clouds from scratch, and their services follow the latest industry standards to reduce the risk of cyber-attacks.

And within the cloud you can request that maintenance and disaster recovery be managed within the day-to-day needs, allowing the workload of the IT team to be much lighter and easier to manage.


  4. Scalability: this point has a lot to do with business movements.

 What does this mean?

That when the business is at a peak or in a lean period, it will be able to quickly adapt, either by reducing or expanding its capacity.

This can be done automatically and will not require the use of obsolete applications and/or technologies, nor a lot of time, money, and effort.

You will be able to adapt to the market with new, advanced technology, quickly and elegantly, and you will feel in a tangible way that the growth of your company is in your hands.

One example is Yedpay. This payment platform decided to migrate to the cloud after experiencing a problem in its data center.

By not needing to invest heavily in IT or in personnel to maintain the physical infrastructure, they obtained a 40% reduction in their costs as a result.


  5. Availability: imagine having access 24/7, 365 days a year; this is feasible when you are in the cloud.

Accessing your data and applications will no longer depend on whether or not you are in the office. You can do it without any problem from any country and any place, as long as you have an internet connection with good speed, valid access credentials, and the willingness to do so.

A clear example of this benefit: you work remotely, an unexpected meeting comes up while you are on vacation in Bali, and you need access to certain confidential information.

These are some of the reasons we can tell you that answer the initial question of why migrate to the cloud.

However, this is just the tip of the iceberg.

In these times it is also important to ask:  when to migrate? And how to do it?

If you need advice do not hesitate to contact us now, our highly qualified team will be able to clarify your concerns and accompany you in your migration process.


What is the positive impact of DevOps and Agile adoption on your project? What are those metrics?

Suppose we were to use a sports analogy to describe DevOps and Agile. In that case, they are a duo like Jordan and Pippen or Clemens and Jeter, or in soccer terms, Messi and Suarez (or Julian Alvarez); i.e., there is no losing when they are together.

DevOps makes a difference when the two primary columns of an enterprise ecosystem, development and operations, work together in an integrated way.

Why do we say this?

Because DevOps cannot be efficient without an Agile setup.

There is no point in accelerating delivery and automating code deployment and infrastructure if the development team only releases builds at huge intervals.

Nor does it make any sense to optimize development and accelerate build processes if new code only reaches users in some far-off next release.

So what is Agile’s role in all this?

Agile will facilitate continuous and on-time delivery while you are implementing DevOps. That’s why these two work so well together. It’s kind of like Pippen making the pass to Jordan for the basket =P

Now, are there any metrics to measure the effectiveness of this duo?

The answer is Yes.

Not only because it is useful for seeing what to improve, but because it is a necessary, almost fundamental requirement.


More and more companies provide DevOps services from Latin America, so the competition is greater and the chances of a customer switching service providers are increasing.

We say this taking as an example statistics from Fortune, which indicate that the LATAM countries where these companies have grown the most are:

– Brazil

– Mexico

– Argentina/ Colombia

– Chile

– Peru

Therefore, it is increasingly easy for a customer to make a change if they feel that the experience, the speed of response, and the cost are not aligned with their expectations or business needs.

So, back to the previous question:

How can you measure the effectiveness of DevOps and Agile?

We will tell you about some of the most commonly used metrics in DevOps + Agile:


  1. Quality across multiple tests

This is not just a metric for DevOps and Agile but arguably for everything in life.

Taking it to this level is vital because it is a great differentiator and may even be the philosopher’s stone of Harry Potter.

The reason is simple: if your automation does not guarantee quality, or is not properly managed, you can make a big mistake and jeopardize the entire project and its security.

Hence the importance of measuring quality throughout the development-deployment tests.

Note that a recent survey conducted by Forrester indicates that:

72% of companies say software testing is critical to the DevOps + Agile lifecycle and continuous delivery. We agree.

These companies focus on developing their skills while budgeting and testing enough to implement Agile and DevOps practices across the organization.

For example, we recommend doing all of these:

– Perform continuous testing implementation in response to the demand for much faster releases.

– Automate end-to-end functional testing.

– Make testers part of the delivery team.

– Start testing early in the development lifecycle (shift-left testing).

However, not all companies do all of this; many alternate between these kinds of tests.

 But what if we look at them individually?

For example:

Imagine you could make the midfield play that wins the game, but you skipped the drills in training and don’t know the basics needed to score cleanly.

Unit tests seem to be a waste of time.

However, they become critical as the code base evolves, because they give information to developers and testers and, if prioritized, can lower risk levels.
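As a tiny illustration of what a unit test written alongside the code looks like, here is a sketch using Python's unittest module. The discount function and its rule are invented for the example:

```python
import unittest

def apply_discount(price, percent):
    """Business rule under test: a discount must stay within 0-100%."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        # 25% off 200.0 should be 150.0
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_is_rejected(self):
        # Out-of-range discounts must raise, not silently misprice
        with self.assertRaises(ValueError):
            apply_discount(200.0, 120)

# Run the suite programmatically (as a CI step would) instead of unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A suite like this runs in milliseconds, which is what makes it cheap enough to run on every commit.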

Integration and API testing are increasingly relevant because so much can happen beyond the user interface that quality and risk levels can get out of hand.

Finally, end-to-end regression testing is essential.

Companies that have been doing this for a while suggest performing these tests at the process or transaction level. However, we know this task can be challenging, as speed is very important.

However, maintaining high levels of automation is a priority; this will only be achieved as more tests are automated.

All this without losing sight of delivery speed and costs. That is when Agile + DevOps wins you over and works its magic.


  2. Lead time for changes

This is vital because, to put it in sporting terms, slow delivery gets penalized.

This metric helps us know how long a commit takes to reach production.

Here we will see the efficiency of the development process, the team’s capacity, and the code’s complexity. This metric is ideal for delivering the software quickly.
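As a sketch of how this metric can be computed from commit and deployment timestamps (the data below is made up for illustration):

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs for recent changes.
changes = [
    ("2023-01-02T09:00", "2023-01-02T15:00"),  # same-day deploy
    ("2023-01-03T10:00", "2023-01-04T10:00"),  # a full day
    ("2023-01-05T08:00", "2023-01-05T09:30"),  # 90 minutes
]

def lead_time_hours(commit, deploy):
    """Hours elapsed from commit to production deployment."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(deploy, fmt) - datetime.strptime(commit, fmt)
    return delta.total_seconds() / 3600

lead_times = [lead_time_hours(c, d) for c, d in changes]
median_lead_time = median(lead_times)  # a robust summary for a dashboard
```

The median is usually preferred over the mean here, so one unusually slow change doesn't distort the trend.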

  3. Automation

End-to-end automation is a business differentiator.

Companies that rely on manual testing tend to view testing as a bottleneck as opposed to those that automate.

Automating software quality control streamlines what can be very slow manual processes.

In addition, most companies that follow Agile+DevOps best practices consider the automation of their QA processes to be a key differentiator.


  4. Risk measurement

We have been talking about the importance of automating to improve the speed and frequency of releases significantly.

However, accurately measuring and tracking quality throughout the software development lifecycle is vital.


Automating the delivery process could increase the risk of delivering defects in production.

So, if organizations cannot accurately measure business risk, automating development and testing practices can seriously harm the enterprise.

In that sense, what the companies with the longest track record in the market say is:

Quality and speed are on different scales than risk: most companies still need to establish the connection between speed, quality, and risk.

In general, the importance of risk in software lags behind the established development goals of quality, on time, and on-budget.

Because of this, it is important to measure business risk accurately, and we know this is not as easy as it sounds.

We now know the importance of having good quality control and testing processes.

And this has led us to realize that connecting risk, quality, and speed is complex.


  5. Workflow Efficiency

This metric measures the ratio between the time the team spends adding value and the time it does not, while the software is being built.

It will provide the number of active times (when the team is actively working to reach the objective) and waiting times (when the team has had to: prioritize other things, has excess work, etc.) within the total time spent to finish each process.

For this to be measurable, it is important to put tools in place that clearly record the active and waiting times.
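Once active and waiting times are recorded, the ratio itself is simple to compute. The numbers below are hypothetical:

```python
# Flow efficiency = active time / (active time + waiting time),
# using hypothetical hours recorded per work item.
items = [
    {"active": 4.0, "waiting": 12.0},
    {"active": 6.0, "waiting": 6.0},
    {"active": 2.0, "waiting": 10.0},
]

total_active = sum(i["active"] for i in items)
total_waiting = sum(i["waiting"] for i in items)
flow_efficiency = total_active / (total_active + total_waiting)
# 12 active hours out of 40 total hours -> 0.30, i.e. 30% of elapsed
# time was spent actively moving work toward the objective.
```

A low ratio usually points to queues and hand-offs, not slow people, which is why tracking waiting time explicitly matters.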

These are some of the metrics we use with DevOps and Agile that have been working for our clients.


If you need help implementing these practices or tracking any of these metrics, write to us now.


Finally, we invite you to look at Forrester’s in-depth survey of companies worldwide using DevOps and Agile.


Amazon Web Services (AWS) will hold the highly anticipated re:Invent 2022 conference from November 28 to December 2.

Why is it so expected?

Because, as you all know, this conference is like a Coldplay concert: one of the most anticipated events for AWS customers and partners dedicated to everything related to advances, news, and trends in the cloud.

In addition, it will be an ideal time to form alliances and earn certifications. We will have the opportunity to ask questions of these great personalities, attend more than 1,500 technical sessions, and, of course, have a good time.

Are you wondering whether it’s virtual or in person?

Well, it is a hybrid event, since you have both options.

If you decide to attend in person, we will tell you that it will be held in Las Vegas, and the logistics are very well designed so that you do not waste any time sorting out accommodation, how to get to each talk and/or activity, transportation options, etc.

But we know that anyone who has decided to make the trip to Las Vegas planned this investment well in advance, because we are not talking about 20 USD.

We are talking about 1,799 USD.

Meanwhile, if you decide to attend virtually, you have free access to all the master classes and talks by AWS leaders.

AWS seeks to provide the best possible experience and if you are not fluent in English, you can use the simultaneous translation that will be available for certain languages.

What kind of talks will you find from AWS leaders?

This conference to be held in Las Vegas will feature the heads of AWS starting with Adam Selipsky – AWS CEO, followed by:

  1. Peter DeSantis – Senior Vice President of AWS Utility Computing
  2. Swami Sivasubramanian – Vice President of Data and Machine Learning at AWS
  3. Ruba Borno – Vice President of AWS Worldwide Channels and Alliances
  4. Werner Vogels – Vice President and CTO of Amazon.com

In addition, we will find everything from very technical talks, such as those by Barry Cooks – Vice President of Kubernetes for AWS, and Yasser Alsaied – Vice President of IoT, to somewhat more controversial topics, such as those by Candi Castleberry – Vice President of Diversity, Equity, and Inclusion (DEI) at Amazon, on AI, and Howard Gefen – General Manager of Energy and Utilities, on energy and sustainability.

What else does this mega conference offer?

Well, it’s not all work, work, work, as Rihanna would say =). There is also after-hours time, and the offering is quite broad. You will find:

Sports activities:


5K race: entry costs 45 USD and includes drinks, snacks, and a shirt. You will also be running for a noble cause, since all the proceeds go to the Fred Hutchinson Cancer Center.


Ping Pong Competition: this competition is not taken lightly; it is serious business, with its own playoffs and finals.

In addition, you will have recreational activities such as:

  • Comment wall, photos.
  • Dazzling digital art space.

And if that seems like little and you still have adrenaline to spare, you can go to the party with a DJ and guest artists, to the games room, or enjoy the food section, because “full belly, happy heart.”

In short, as we told you at the beginning, AWS does not mess around when it decides to hold an event; it goes all in to meet the demands of every attendee, because it knows we are a demanding audience 😉

If you want to know more, please click here.

We will be participating; we hope to see you there and get to know each other.


It is customary in any market to talk about trends at this time of year, and software development and emerging technologies are no exception.

For this reason, today we will be explaining what specialists say will happen to DevOps by 2023. What is expected?

Let’s begin by recognizing the growth that DevOps technology has had in the last few years. 

According to some surveys, the DevOps market will grow at a compound annual growth rate (CAGR) of 24.7% between 2019 and 2026, flirting with a value of 20.01 billion dollars… Not bad at all! 
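For the curious, the compound-growth arithmetic behind a figure like that is easy to check:

```python
# Compound annual growth: end = start * (1 + cagr) ** years.
cagr = 0.247
years = 2026 - 2019            # 7 years in the forecast window
growth_factor = (1 + cagr) ** years
# At 24.7% per year, the market grows roughly 4.7x over the period.
```

In other words, a market growing at that rate nearly quintuples between the start and end of the forecast.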

This sends a clear message to us: this technology, which has a unique potential, keeps getting stronger and is revolutionizing the industry of software development. That is why we are specialists in this 😉.

But, why do we bet on DevOps?

We do it because DevOps stimulates better and more frequent communication, integration, collaboration, and teamwork between developers (Dev) and IT operations (Ops), without leaving aside clear communication with the client and the chance to give them a better experience, considering that the client can visualize the project more clearly.

That is why businesses like HSBC decided, back in 2021, to sign agreements with CloudBees and its DevOps platform to standardize software delivery worldwide for more than 23,000 developers.

Having said all of this, let’s get to know what is expected of DevOps for 2023.

Some portals, like Solution Analysts, point out that there are 10 trends in the operations-development (DevOps) market.

However, we will single out what we consider to be the 5 most important ones, and we will explain why that is. 

Let’s begin! 

  1. Low Code growth: 

This one is growing at the speed of sound.

Low-code platforms are a great tool to extend the benefits of Agile and DevOps.

But why low code?

Businesses prefer low code to develop and deploy applications through the DevOps process quickly, since not everyone has a team of specialists at hand, and many want to do things as fast and as cheaply as possible.

Moreover, creating a piece of software is a job as delicate as creating a work of art, considering that the program has to work optimally not only for the user but also for the developer. Additionally, applications are constantly changing.

Many programs use similar patterns, and sometimes creating them from scratch for each project can be a huge investment of time and resources.

That is where low code comes in to solve some of these problems.

Also, Gartner’s analysts estimate that the low-code market will grow to almost 30 billion USD between 2021 and 2025.

In addition, Gartner foresees that low code will represent 65% of all application development activity by 2024.

You may be wondering: who is Gartner?

Well, they are a group of experts who, as they themselves put it, help you with their tools to… “see a clear way to make decisions about people, processes, and technology.”

Basically, they are the best at studying processes, businesses, and technology.

In conclusion, low code will give you agility and help you keep up as an active player in the competitive software market. Mixed with DevOps, it ends up being like chocolate and passion fruit (a perfect combination).


     2.  AI just around the corner

AI is more and more present, and in the future it will go hand in hand with DevOps.

Why is that? 

Because AI will take over from humans as a vital tool for computing and analysis, considering that humans are not as effective at managing the enormous amounts of data and computation that daily operations will require.

AI will join software to improve its functionality.

This will allow DevOps teams to…:

  • Code
  • Test
  • Supervise
  • Launch

… the different programs they are building, in a more effective way.

    3. Improved security, one of DevOps’ goals for 2023

Nowadays, having the right security is one of DevOps teams’ greatest challenges.

More than 50% of developers are responsible for the safety of their organizations.

That is why obtaining the right security is such a big deal.

For that reason, the practice of DevSecOps (development, security, and operations) is one of the biggest trends in software development for 2023, because it integrates security at every stage, right up to the successful delivery of the developed solution.

This is carried out through DevOps, and it allows development teams to detect and address security problems as they appear, at the speed of DevOps.


    4. Infrastructure as Code: another big trend for 2023

Infrastructure as Code, or IaC as it is known by its acronym, is estimated to be one of DevOps’ biggest trends for 2023.

But, why is that?

That is because it allows the infrastructure to be managed and provisioned automatically, rather than manually as had been done until now.

Continuous monitoring, version control of the code that drives the deployment, virtualization testing, and the administration of DevOps infrastructure will all improve with IaC.

Furthermore, it will allow infrastructure and development teams to work more closely “face to face,” which is vital for DevOps.

    5. Serverless companies.

If you are thinking about taking the big leap, this is one of the important challenges you will have to face: “serverless computing”.

This concept, somewhat abstract compared to what we knew before, represents outsourcing the infrastructure and its tasks to external providers.

Running without servers as companies had known them before will change their IT operations and let them adopt the DevOps approach much more effectively.

Moreover, it will allow teams to eliminate the risks and problems related to pipeline management, and to focus more on development and deployment.

We believe these will be the 5 biggest DevOps trends for 2023. And you, what do you think?

If you want to know about our services, just click here


Before you begin, confirm that you have the following tools ready to go:

  • AWS CLI.
  • Session Manager Plugin.

If you don’t have any of these, I’ve left the corresponding installation links below.




We must create a role, which in this case we’ll call “ecsTaskExecutionRole”; it allows ECS to execute tasks and commands against other AWS services.

  • Go to the IAM console, select “Roles”, then select “Create Role”.

  • In our case, the role we are creating is for ECS to use, so it will be for an AWS service; to set that up, follow the steps shown below:

  • Then we need to add a Policy to our Role, which allows ECS to perform tasks. The one we are looking for is “AmazonECSTaskExecutionRolePolicy”; you can filter by the word “ECS” to find it easily.

  • Now, we have to name and describe it:

It seems to be ready, but we still need to add another Policy, and this one we’re going to create ourselves.

  • Let’s go then, go to the roles section, look up the one you just created and select it.

  • Click on “Add permissions” and select “Attach policies”.

  • Then select “Create Policy”.

  • Click on “JSON”. Here we’re going to see a file like the next one:

  • We need to write our second policy; this one allows the Session Manager, through ECS, to execute commands inside our containers. So, delete the current content of the file; below, you’ll find the one we need to insert.
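For reference, the permissions this policy needs are the four ssmmessages actions that let ECS Exec open Session Manager channels into the containers; a minimal sketch looks like this (you can scope the Resource down if you prefer):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }
  ]
}
```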

  • Once it’s ready, we can continue; select “Next”.

  • Now, we have to name and describe it:

  • Now we’re brought back to the “Attach policy” section; refresh the page, search for the policy we just created, select it, and attach it.

ECR – Elastic Container Registry:

Let’s create a repository on ECR to store our container images.

  • Go to the ECR console, and once there, select “Get Started”.

  • Choose a name for your repository; in my case, I’m naming it “demo”. Leave the rest of the options on their default configuration, and click on “Create repository”.

  • That’s all; now we’re ready to push our custom Docker images to the repository using its URI.


  • The following are useful commands to log in to ECR, build an image from a Dockerfile, tag and push an image, etc.

Create an image from a Dockerfile:
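A sketch of that command, assuming the Dockerfile is in the current directory and using the repository name “demo” from earlier:

```shell
# Build a local image named "demo" from the Dockerfile in the current directory
docker build -t demo .
```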

ECR Login:
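A sketch, with a hypothetical account ID (123456789012) and region (us-east-1); replace both with your own:

```shell
# Retrieve a temporary password for ECR and pipe it into the Docker login
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
```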

Tag an image:
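Again with placeholder account/region values; the target is the repository URI shown in the ECR console:

```shell
# Tag the local image with the full ECR repository URI
docker tag demo:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest
```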

Push an image:
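Using the same placeholder URI:

```shell
# Push the tagged image to the ECR repository
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest
```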

ALB – Application Load Balancer:

So, it’s time to create a Load Balancer.

  • Go to the EC2 console, scroll down until the end, and select the “Load Balancer” option that you can find on the left side.

  • Now, click on “Create Load Balancer”.

  • In this case, we’re going to choose the “Application Load Balancer”:

  • We need to name our Load Balancer, then leave the rest of the options on their default configuration.
  • Select the correct VPC and choose at least two subnets.
  • You also have to select the Security Group.

  • The next step is to choose the listener port of our Load Balancer; the most common ones are HTTP/80 and HTTPS/443. We’re going to work with the HTTP/80 listener.

Right below, we’re asked to select a Target Group; a target group is what tells the Load Balancer where to send the traffic received on the listener port.

So we’re going to create one. Even though in our case we don’t yet have an instance/container/app to receive requests, this step needs to be done to finish the Load Balancer creation.

Don’t worry, later we’ll be doing this configuration in a way that works for us.

  • Then, click on “Create target group”; it will take you to another window.

Here, select the “IP addresses” option, name your Target Group, then leave the rest on their default configuration and click on “Next”.

  • Now, choose the correct VPC (the same one that we chose before for the Load Balancer). Then, click on “Remove” to delete the suggested IPv4 address and, finally, click on “Create target group”.

Once done, we can close the current window and continue working with the Load Balancer creation.

  • So, now we’re ready to select a target group. To see the one we just created among the options, click the refresh button, then expand the options and select the correct one.
  • Then leave the rest as default and click on “Create Load Balancer”.

ECS – Elastic Container Service:

  • Go to the ECS console, once there, on the left side of the page click on “Cluster”, then select “Create cluster”.

  • Give your cluster a name, and choose the correct VPC and subnets; as we’ll be using the ALB we created before, please be careful to select the same ones for the cluster.
  • Keep the Infrastructure, Monitoring, and Tags settings without changes. Finally, click on “Create”.

  • Once the cluster is created, select the “Task Definitions” option on the left side of the page.

Here, the role we created earlier will be assigned, and we’ll also indicate the image we want deployed in our container.

  • Name your Task Definition; it helps to include the name of the image it will deploy, so its function is easier to identify in the future. The same applies to containers, services, etc.
  • Then, name the container. For the Image URI, go to your ECR, find the image you want to deploy in the repo, and copy its “URI”, just as you can see below in step “2”.
  • Once done, choose the correct port for the container and click on “Next”.

  • Leave the Environment as default, choose the size of the container, and select the role we created before in both places, Task Role and Task Execution Role. Then specify a size for the Ephemeral storage; the minimum is 21 GB.
  • The last item is Monitoring and Logging; it’s optional, as you can see, but be aware that enabling one or more of the options carries a cost. Once done, click on “Next”.

  • Review all the configurations and click on “Create”.

Now that we have the Task Definition, we can create a Task or a Service from it. There are many differences between the two; one, for example, is:

A Task creates one or more containers running our apps, depending on the configuration we set; if one of the containers goes down, it stays down.

A Service gives you more tools to avoid that problem, because a service can run multiple tasks, and you can even set a desired number of tasks to keep running; if one of them goes down, the service takes care of bringing up another.

  • In our case, we’re going to create a Service, so select “Deploy” and then click on “Create Service”.

  • In the Environment section, just select the cluster we created earlier and keep the rest without changes.


  • In Deployment Configuration, choose “Service” and give your service a name.


  • In Networking, select the same VPC and subnets you chose when you created the cluster, choose the Security Group, and be sure to turn on the “Public IP” option.


  • In Load Balancing, select “Application Load Balancer”; we’re going to use the ALB and the Listener (80:HTTP) that we created before.

The time has come to create our useful Target Group, so name it and choose the HTTP “Protocol” (same as the listener).

Then, the Path Pattern can be a “/”, in which case requests will match everything after alb-dns.com/; but if your idea is to deploy many apps, it will be useful to identify them and redirect requests to the specific path associated with their names.

In my case, I’m using /demoapp/*; please take note of the “*”, it always needs to be at the end of the path for requests to match without errors. Also, the Health Check Path needs to be the same as the Path Pattern but without the “/*” at the end.

Finally, choose the Health Check Grace Period and click on “Deploy”.

  • That’s all! Inside your cluster, you’re going to see your Service and the status of the Task it deployed.
  • Also, if you click on the Service’s name, you can see multiple useful details, such as the status of the Health Checks, the Task ID, etc.
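Once the Task is healthy, a quick way to sanity-check the deployment from a terminal is to hit the ALB on the path pattern you configured (the DNS name below is a placeholder; copy the real one from your Load Balancer’s details page):

```shell
# Expect an HTTP 200 from your app if the target is healthy
curl -i http://demo-alb-1234567890.us-east-1.elb.amazonaws.com/demoapp/
```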

In case you need to get into a container to do troubleshooting, maintenance tasks, etc., I’m leaving a couple of steps below to achieve that.

  1. Enable “execute-command” in the Task Definition.
  • For that, you’ll need to know the names of the Cluster, Task Definition, and Service.
  • “Number of Revisions” refers to the version of the Task Definition.
  • “Desired Count” refers to the number of tasks you intend to keep up and running at all times; this was defined when you created the Service.
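From the AWS CLI, enabling it might look like this (the cluster, service, and task definition names are placeholders; use the ones you created above):

```shell
# Enable ECS Exec on the service; this forces a new deployment of its tasks
aws ecs update-service \
  --cluster demo-cluster \
  --service demo-service \
  --task-definition demo-task:1 \
  --enable-execute-command \
  --desired-count 1 \
  --force-new-deployment
```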

  2. Verify that “execute-command” is enabled.
  • In this case, you’ll need the Cluster’s name and the Task ID.
  • If “execute-command” still appears disabled, you’ll have to “Stop” the Task; once it’s up again, “execute-command” will be enabled for sure.
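A sketch of the verification (placeholder names again; substitute your actual Task ID):

```shell
# Look for "enableExecuteCommand": true in the task description
aws ecs describe-tasks \
  --cluster demo-cluster \
  --tasks <task-id> | grep enableExecuteCommand
```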

  3. Get into the container:
  • Here, you’ll need the Cluster’s and Container’s names, and the Task ID.
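A sketch of the final command (this is the step that needs the Session Manager Plugin from the prerequisites; names are placeholders):

```shell
# Open an interactive shell inside the running container
aws ecs execute-command \
  --cluster demo-cluster \
  --task <task-id> \
  --container demo-container \
  --interactive \
  --command "/bin/sh"
```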


Kubernetes is a platform that continues to be renewed. It was born in 2014, and its latest version, released in 2022, is called Combiner.

But what is the magic behind Kubernetes that still keeps it current?

For us, Kubernetes is the way to work with multiple containers in a friendly, optimal way, through a platform that helps simplify your life; and we say this without even going into depth.

Looking back, we could say that Borg and Omega were the systems that paved the way for Kubernetes to exist.

In other words, the world adopted what Linux systems used in the 1980s and Google used in the 2000s: working with containers.

This today has been replicated on a large scale by a wide range of companies due to the growing adoption of cloud-based solutions, infrastructures, and systems.

Before going to the point, we want to tell you what Kubernetes is in case you still don’t know:


It is a platform that allows us to build an ecosystem of components and tools to ease the use, scaling, and management of container-based applications.

You won’t usually see it by its full name but as K8s, and the essence of this open-source system is a bit like “tidying up something very messy”.

To explain this sentence, we are going to use NFL teams as an analogy, in which several units are created and work independently (offense, defense, special teams).

These units act as if they were a single team, that is, a distributed system.

In addition, these components can run on different platforms connected through a network without interrupting their operation as a single whole.

Using the previous analogy, the offense in the NFL trains differently from the defense, and the network would be the football field.

Who created Kubernetes?

The one that gave birth to this platform was Google.

Google needed to put some order into and simplify what it had already been doing with its management systems (Borg and Omega), which is why it created K8s back in 2014.

However, Google no longer owns Kubernetes.

For whatever reason, Google decided to donate and release Kubernetes to the Cloud Native Computing Foundation (which in turn is part of the Linux Foundation) back in 2015, while the project was still in its infancy.

Perhaps this is one of the reasons why K8s is so widely used today.


Photo by ThisIsEngineering: https://www.pexels.com/es-es/foto/mujer-codificacion-en-computadora-3861958/

So far we have talked about who created it, why, and what it is for… but what is the magic behind Kubernetes that still keeps it current?


Well, following the inference that we have been working on, we invite you to imagine the following:

Imagine that a single person was in charge of:

  1. Selecting (creating)
  2. Training
  3. Making each of these teams work, manually
  4. Staying vigilant so that none of them stops working (providing service)

And suppose that this same person also handles the administrative side, health, legal aspects, marketing, investors, and everything else an NFL team needs… Wow! Just writing it was overwhelming.

Kubernetes will help us orchestrate each of these containers, as it will:


  • Automate scheduling.
  • Handle deployment.
  • Scale easily, both horizontally and vertically.
  • Balance loads.
  • Manage container availability and networking.
  • Minimize maintenance for the person in charge of administration.
  • Automatically recover a container if it goes down.
  • Integrate with different platforms and cloud providers.
  • Balance loads intelligently between different nodes.
  • Stay independent of the application architecture, since it supports complex applications regardless of the type of architecture used.
  • Allow you to write your own controllers using its APIs from a command-line tool.
  • Allow developers to maintain sets of replicas, so there is no need to duplicate the entire program. This results in a project with greater responsiveness and resilience.
  • It is a platform that has been tested many times and, thanks to this, there are many success stories, for example:

          Pokemon Go, Tinder, Airbnb, and New York Times, among others.

  • Their efficiency and success attest to how useful K8s can be in DevOps.


As you can see, it has many aspects that allow it to stay current, and for businesses like ours, it is an excellent option.

If you want to use DevOps in your project, don’t hesitate to write to us



Ubiquitously connected mobile devices bring together broadband mobile Internet access, open technologies (open data formats, open identity, open reputation, and portable roaming identities), and intelligent Web technologies: semantic standards such as OWL, RDF, SPARQL, and SWRL, automated reasoning, and natural language processing. Powerful mobile devices with Internet connectivity, embedded intelligence, autonomous identities, and embedded encrypted wallets are all part of the next generation of the Internet: Web 3.0. Web 3.0 is expected to become the new paradigm of the Internet and a continuation of Web 2.0.

To this day, there is still a huge debate about the existence of Web 3.0. There is no precise definition of what Web 3.0 is or could be, as it is still under development. That debate is somewhat academic, though, and nowhere near as popular as the prospect of a decentralized Web 3.0.

It is touted as the next iteration of the Internet, after Web 1.0 and Web 2.0, one that will decentralize it. Web 2.0 is the current version of the Internet that we are all familiar with, and Web 3.0 represents its next phase, which will be decentralized, open, and more useful. Web 3.0 is a collection of next-generation Web applications that use new technologies such as blockchain, artificial intelligence, the Internet of Things, and augmented and virtual reality (AR/VR) as part of their core technology stack. These new technologies will shape the way users interact with next-generation networks.

Decentralization, immersive experience, and intelligence (also known as AI, or knowledge) are rapidly gaining traction, and we know that all three will play a central role in the next generation of the Internet. In this article, we’ll look at the evolution of the Internet infrastructure and how the advent of Web 3.0 is affecting existing business models. We are so addicted to the Internet that we barely noticed how it went from the first static pages to fully interactive websites, and now to decentralized services based on artificial intelligence.

When Web 1.0 first hit the scene in 1989, it was only used to exchange static content over the Internet: people created static websites, and hosting one was expensive. Then, in 2005, Web 2.0 came along and changed the way we use the Internet. The move to Web 2.0, dubbed the social web, was heralded by advances in mobile technology such as the Apple App Store and by social networking applications such as Facebook and YouTube, which unleashed our ability to interact socially in the digital realm. Blockchain technology has since opened an exciting new direction for Web 3.0 applications.

In this Web 2.0 era, the Internet is dominated by content creation and social interaction on Big Tech platforms. Blockchain has entered the digital transformation of the web, and its influence will only increase. As Web 3.0 technology continues to develop, blockchain will remain a vital component of the online infrastructure. Advances in technologies such as distributed ledgers and blockchain archiving will enable data to be decentralized and create a transparent, secure environment, overcoming the centralization, surveillance, and ad exploitation of Web 2.0.

The decentralized blockchain protocols of Web 3.0 will allow people to connect to an Internet where they can own, and be properly rewarded for, their time and data, eclipsing the exploitative and unfair web where giant centralized repositories are the only ones who own and profit from it. With blockchain, web computing will be decentralized through peer-to-peer information exchange between people, companies, and machines.

With Web 3.0, data generated by diverse and increasingly powerful computing resources, such as mobile phones, desktops, household appliances, vehicles, and sensors, will be traded by users through a decentralized data network, ensuring that users retain control of their property. Web 3.0 will enable data to be connected in a decentralized manner, an improvement over Web 2.0, which traditionally centralizes and silos data. This is why many industry leaders see a symbiotic relationship between Web 3.0, blockchain, and cryptocurrency.

Additionally, at the Singapore FinTech Festival sessions, executives came together to discuss these elements and what Web 3.0’s decentralized structure might mean for corporate hierarchies. After addressing the institutional acceptance of digital currency, the panelists moved on to defining Web 3.0, beginning the discussion with some of the shortcomings of Web 2.0 platforms.

Victor describes Web 3.0 as a broad classification of distributed technologies and tools that provide a blockchain-based peer-to-peer Internet. Other technologies, such as open APIs, data formats, and open-source software, can also be used in developing Web 3.0 applications. Finally, the modern developer can rapidly deploy applications integrated with these Web 3.0 components using technology platforms such as IBM Cloud and the IBM Blockchain Platform.

In fact, the new technologies that make up the components of the prototypical Web 3.0 application are already an integral part of the applications we use today. Simply put, Web 3.0 inherits what we use today and adds the power of 5G, smart devices and sensors, AI/ML, AR/VR, and blockchain, providing a complete solution that blurs the boundaries between the digital and the physical. We are slowly seeing Web 3.0 technology emerge, and some Web 3.0 applications are already available, but until a complete paradigm shift takes place, we will not realize its true potential. One thing is certain, though: Web 3.0 will change our online life, making it easier and more convenient to search for any content on the Internet while keeping our sensitive data secure.

The fragility of the Internet is a problem that will require not only innovative development but also a radically new way of thinking. Web 3.0 entrepreneurs and decentralized application (dApp) developers will face software development challenges, such as user authentication and data storage/querying, differently than in the Web 2.0 era. This gradual shift towards what could be a truly free and transparent Internet is undoubtedly an exciting prospect for many people, but it may take a steep learning curve for developers to move from building Web 2.0 applications to exploring the paths required to build decentralized applications.

As a result, Web 3.0 applications will run on decentralized blockchains, peer-to-peer networks, or a combination of the two; such decentralized applications are called dApps. Web 3.0 is the third generation of Internet services for websites and applications, focused on leveraging machine understanding of data to deliver a data-driven Semantic Web. Web 3.0 is based on the fundamental concepts of decentralization, openness, and greater user utility, and it is about building a decentralized infrastructure that protects individual property and privacy.

Overall, Web 3.0 is the next stage in the evolution of the Internet, one that will enable machines to process information with human-like intelligence using technologies such as big data and machine learning. Core Web 3.0 features such as decentralization and token-based reward systems will also give users much better control over their personal data.