If you could not attend the re:Invent conference at the end of last year, here is a rundown of the novelties AWS has prepared for 2023.

So let’s get started:

AWS Supply Chain:

Created to help address the problems that plague the global supply chain.

We know that Amazon has been in the market for roughly 30 years, and over that period it has continuously refined the logistics network and supply chain behind every product it sells.

And, as they say, no one can take away what you have already danced! Amazon has put those decades to good use, and that experience has been captured in AWS Supply Chain for all AWS customers.

It offers your users the ability to:

  • View stock levels.
  • Get machine-learning-based demand forecasts.
  • Project risks and reduce costs.
  • Receive alerts for products that are about to go out of stock.
  • Put the parties involved in contact so they can resolve the problem.

It is initially available in preview in the Northern Virginia, Oregon, and Frankfurt regions.

AWS Clean Rooms

As we know, it is no longer enough to analyze our own data; increasingly we need to analyze it together with partners without handing it over.

That is why AWS is focusing on secure data collaboration.

Hence the creation of AWS Clean Rooms. It works under the concept of a clean room, a controlled space where nothing goes in or out without authorization, and it is something that had been brewing for a while before coming to light this past year.

AWS Clean Rooms lets you share and analyze collective data sets with your partners without duplicating or sharing the underlying data itself with third parties.

All of this is intended to take care of and protect the information involved.

Amazon Security Lake

Security will always be important for businesses.

That is why AWS has created Amazon Security Lake, which centralizes and strengthens security controls around the information that different companies handle.

This product promises to aggregate, manage, and analyze the logs and events generated around data access throughout its entire life cycle.

In this way, threat detection is enabled, and incidents can be investigated and acted on more quickly.

 

Amazon Omics

As we have said, AWS has focused on database management over the years. Raising the bar, it has now decided to tackle the biological data of human beings, which is why Amazon Omics has emerged.

It is designed for large-scale research and analysis of human genome data from entire populations, and it will also store and analyze that data.

It targets companies such as Pfizer and Moderna, among others, that need to work with the genomic data of the people participating in their studies and of those who will join these research projects in the future.

 

AWS Glue Data Quality

Along the same lines of data storage and analysis, AWS created this new service to improve the quality of the data managed across multiple data sources and data lakes.

AWS Glue Data Quality’s core offering is to automatically measure, monitor, and manage data quality.

It can also be scheduled to run periodically as the stored data changes, simplifying the work of keeping the processed information at a consistent level of quality.

 

Amazon Athena for Apache Spark and Amazon DocumentDB Elastic Clusters

AWS builds these products for data storage and analysis.

In the case of Amazon Athena, the new capability adds support for the well-known open-source Apache Spark framework.

The benefit is that Athena provisions the necessary resources automatically, speeding up execution time and the delivery of query results on that platform.

In the case of Amazon DocumentDB Elastic Cluster, it’s able to support millions of writes per second and store up to 2 petabytes of data, for those who need to manage JSON data at scale.

Amazon DocumentDB Elastic Cluster achieves this by using a distributed storage system, which automatically partitions data sets across multiple nodes without the need for administrators to manage these complex processes.
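To make the sharding model concrete, here is a rough, assumption-based sketch of what creating such a cluster looks like from the AWS CLI; the cluster name, credentials, and shard sizing below are placeholders to adapt to your own account:

```bash
# Hypothetical sketch: a DocumentDB elastic cluster with 2 shards of 2 vCPUs each.
# All names, credentials, and sizes are placeholders.
aws docdb-elastic create-cluster \
  --cluster-name demo-elastic-cluster \
  --admin-user-name demoadmin \
  --admin-user-password 'ChangeMe123!' \
  --auth-type PLAIN_TEXT \
  --shard-capacity 2 \
  --shard-count 2
```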

AWS SimSpace Weaver

In another order of ideas, AWS creates SimSpace Weaver which is a service to run massive spatial simulations without the need to manage the infrastructure.

What does this mean?

This means that with AWS SimSpace Weaver you will be able to build quite complex 3D simulations and models. It also comes with an SDK toolkit so you can assemble things yourself, plus the possibility of incorporating digital twins.

As if you were Bob the builder.

Amazon SageMaker

AWS has improved the popular machine learning service.

It gains eight new capabilities to optimize, train, and deploy machine learning models, in step with AI's exponential growth.

Among these new capabilities are:

  • Access control and permissions.
  • Documentation and review of model information throughout the machine learning life cycle.
  • A single interface that unifies model control and monitoring.
  • Improved data preparation to increase data quality.
  • Real-time collaboration for data scientists and automated model validation using real-time inference requests.

And if this were not enough, it will also support geospatial data.

What is this for?

Scientists can keep finding ways to model and monitor weather, agriculture, and other natural domains where humans intend to keep intervening, now with the help of AI.

Tell us what you think of this AWS news. Which of these would you be interested in implementing in your business or venture?

A few days ago, we discussed several aspects of cloud migration: why migrate, what to consider, and other topics. We even talked about servers.

But… What is a multi-server environment? What benefits does it bring us? Why implement it? Does it apply to everyone?

We are going to work on these questions for you in this post. We will try to delve into which is the most beneficial option for your business.

The first thing is to know or recognize what a multiserver environment is.

 

A multi-server environment is a type of server infrastructure that uses multiple physical servers to provide users with access to different services and applications.

In other words, multiple servers can work better than a single server, because you get better performance, reliability, and availability.

However, this proposal does not apply to everyone, hence the importance of answering the following questions:

·       Does your company need a multiserver environment?

Once you’ve answered those questions with members of the IT team and the executives who lead the most important areas of your business, you can move on with the process.

If your business is small, it’s important to consider what your core business is and what that infrastructure would look like.

If you have already migrated to the cloud, why are you considering this?

Additionally, it is important to understand, in cost/benefit terms, how this investment pays off.

Now, let’s look at the benefits of a multi-server environment, that is, of dividing resources across several servers.

·       Raise the security level

One of the main concerns of enterprises is online security. As technology, access to it, and digitization all grow, so does the exposure of your data and resources.

Therefore, a multi-server environment brings an increase in security.

Why?

Because you don’t have all your eggs in one basket.

By using multiple servers to allocate resources, the data is separated from the central servers, just like the resources themselves. As a result, there is less chance of all your resources or data being compromised at once, because they live in different places.

For companies that handle confidential data, this is non-negotiable: keeping the databases separated from the application servers is a priority.

·       Cost-effectiveness: improves with a multi-server environment.

This seems like a contradiction. We will try to explain why it is not.

For example: if you are having problems with server performance, or with resources such as CPU or RAM, a multi-server environment will be more cost-effective.

Separating your server resources lets you run each kind of task on its own server, which is one of the most effective ways to get the most out of your environment.

In addition, improved reliability helps you reduce costs. Because database and application servers have different needs and functions, separating them improves both.

Another example of cost optimization is assigning dedicated roles to each server, which offers better performance at a lower price than traditional single-server setups.

·       Resources under effective supervision

A multi-server environment will allow for more effective monitoring of resources because it is directly related to your server’s configuration.

For a company, tracking functions and resources is essential to ensure greater efficiency and obtain better performance. A careful eye often helps to maximize uptime and the wisest allocation of resources. This can be done thanks to multiple servers.

If your business is riding a wave of rapid growth, effective resource monitoring is essential.

·       A multi-server environment reduces dependency

If you liked playing Monopoly, bad news: a multi-server environment is the opposite of a monopoly.

In a multi-server environment, each server is responsible for its own functions and tasks.

Each one fends for itself, which allows more connections to be handled.

Also, this will make your company or online business more secure and reliable.

And if one server in a multi-server environment goes down, the other servers can continue providing access to the services and applications that users need.

In other words, there will be less downtime and fewer interruptions for users.

·       Increased scalability and resilience

This is another benefit: the load is spread across multiple servers, allowing more traffic and reducing the chance of bottlenecks.

 

Multi-server environments are an excellent option if, Monopoly aside, diversification matters to you as an entrepreneur and your business has reached the point where it needs this change.

 

Call us if you want to know more.

In another post, we talked about why to migrate to the cloud. This time we will talk about how to know which is the best strategy to migrate to the cloud.

 

The first thing that is recommended to do before migrating to the cloud is:

 

  • Select the right cloud infrastructure for your needs

This means it will need the processing power and the storage space to handle the workloads you have today.

Not to mention the capacity and power to accommodate your growth projections.

 

  • Vendor Price Assessment

Once the infrastructure is defined, we move on to see the prices of the cloud provider.

Evaluate the options and choose the one that best fits your budget and meets the system requirements defined in the previous step.

 

  • Security

Decisions need to be made about cloud security and compliance, including cloud provider metering policies, where the data will reside (public cloud or private cloud), permission settings, encryption options, and much more. 

 

And now, what should we do?

Once everything suggested above is resolved, we are ready to put together a project plan for migrating to the cloud.

It’s important to note that with a clear, well-understood plan or strategy, the migration will be done much faster and cost much less than without one.

 

  1. What is the best strategy or plan for cloud migration?

Some providers, such as Liquid Web, consider that the best plan or strategy is to:

  • Make a map detailing each part of the current infrastructure.
  • Know what will be migrated and where.
  • Decide what will happen to existing assets that will no longer be needed.
  • Communicate and share this information with the parties involved, early enough to learn about their concerns. Based on that, we can make whatever changes are needed.

Other industry references, such as AWS, instead recommend the “6 migration strategies”, better known as “the 6 R strategies”.

What’s this about?

They are the most used in the market and are detailed below:

  1. Rehosting – lift and shift

It consists of moving applications as they are, without making any changes. It is typical of large-scale legacy migrations, where organizations want to move quickly to meet their business goals.

These applications are usually rehosted, and the process can be automated with some cloud tools.

However, some prefer to do it manually, because they find it easier to optimize or re-architect applications once they are already running in the cloud.

Because they consider that the most complex part (migrating the application, data, and traffic) has already been done.

 

   2. Replatforming – sometimes called “lift, tinker, and shift”

This plan or strategy refers to making “some optimizations” in the cloud.

For what?

To see the fruits of our labor tangibly.

This will allow the company to maintain the core application architecture and save money that would be spent on issues like licenses, electricity, etc.

   3. Refactoring = Re-architecting   

In this strategy it would be good to ask yourself:

Do I want to migrate from a monolithic to a service-oriented architecture? 

What is a business need I have that would be difficult to achieve in the current infrastructure?

This strategy tends to be the most expensive, but it can also be the most beneficial if you have a well-positioned product in the market.

 

  4. Repurchasing

Migrate from perpetual licenses to a software-as-a-service model. Moving a CRM to Salesforce.com, an HR system to Workday, a CMS to Drupal, and so on.

  5. Retire

In this case, it would be good to ask yourself:

Which applications are no longer needed? What should I remove?
Once you have the answers, you need to know who the people affected or in charge of that area are.

According to AWS, “it is estimated that between 10% and 20% of enterprise IT portfolios are no longer useful”.

This strategy can reduce the number of applications to protect and can be used as an engine to drive the business forward.

 

  6. Retain = Revisit

Maintain business-critical applications that require major refactoring before they can be migrated. You can revisit all the apps that fall under this category later.

 

Amazon Web Services – AWS Migration White Paper

 

As you can see, it is important to assess what is good for your business before implementing it. That way, you will be optimizing time, money, and resources.

If you need to migrate or switch to the AWS cloud, at HADO, we can help you.

Contact us for more information.

If you have not yet decided to migrate your business to the cloud, then you are in the right place.

Because today we are going to give you several reasons that answer the famous question: why migrate to the cloud?

Before we jump right into the why we want to explain:

What is migrating to the cloud?

Well, migrating to the cloud is a move.

That’s right, you’re moving, but instead of taking your stuff with you physically, you’re moving digitally.

It means taking your IT resources, services, applications, databases, and digital assets, either partially or completely, to the CLOUD. In other words, they will no longer live on the hardware and software solutions you keep in your company; they will be fully digital.

Note that this term also applies when we move from one cloud to another.

We understand that nowadays it is rare to find business owners and/or entrepreneurs who have not yet at least partially migrated their companies or ventures to the cloud.

However, you may have already gone through this process and still not be clear on why it is so important, or you may only be starting to look into it.

Hence, we are launching this post to give you new nuances on this topic, and here we go.

Why migrate to the cloud?

Migrating to the cloud is important because:

  1. You will get rid of obsolete software, hardware, and unreliable firewalls that can cause problems in the short or medium term and that stop supporting you the moment work expands beyond the company. Example: the home-office modality, which today is one of the standard ways of working.

 

  2. Reduction of operating costs: this is the spearhead for many companies, especially in Latin American countries.

This happens because you only pay for what you use.

You do not have to run data centers, keep computer equipment dedicated to their maintenance, or own machines that work as servers (which are not cheap at all).

Nor will you have to pay high electricity bills, and you will have a reduction in labor liabilities since the DevOps team and system administrators will not spend their time doing backups or hardware maintenance.

At HADO, we help you reduce these costs and we will also optimize your investment during the migration process.

 

 

  3. Security: just like when we move physically, it is very important to feel that we are in a safe place.

That is why reliable cloud providers have taken care to build their respective clouds from scratch, and their services follow the latest industry standards to reduce the risk of cyber-attacks.

Within the cloud, you can also have maintenance and disaster recovery handled as part of day-to-day operations, which makes the IT team’s workload much lighter and easier to manage.

 

  4. Scalability: this point has a lot to do with business movements.

 What does this mean?

It means that whether the business is going through fat or lean times, it will be able to adapt quickly, either by reducing or expanding its capacity.

This can be done automatically and will not require the use of obsolete applications and/or technologies, nor a lot of time, money, and effort.

You will be able to adapt to the market with new, advanced technology, quickly and elegantly, and you will feel, in a tangible way, that the growth of your company is in your hands.

One example is Yedpay. This payment platform decided to migrate to the cloud after experiencing a problem in its data center.

By no longer needing to invest heavily in IT or in personnel to maintain physical infrastructure, they obtained a 40% reduction in their costs.

 

  5. Availability: imagine having access 24/7, 365 days a year; this is feasible when you are in the cloud.

Accessing your data and applications will no longer depend on whether or not you are in the office or the business, you can do it without any problem in any country and place as long as you have an internet connection with good speed, valid access credentials and the willingness to do so.

A clear example of this benefit: you work remotely, an unexpected meeting comes up while you are on vacation in Bali, and you need access to certain confidential information.

These are some of the reasons we can tell you that answer the initial question of why migrate to the cloud.

However, this is only the tip of the iceberg.

In these times it is also important to ask:  when to migrate? And how to do it?

If you need advice do not hesitate to contact us now, our highly qualified team will be able to clarify your concerns and accompany you in your migration process.

 

What is the positive impact of adopting DevOps and Agile on your project? And what are the metrics that show it?

Suppose we used a sports analogy to describe DevOps and Agile. In that case, they would be a duo like Jordan and Pippen, Clemens and Jeter, or, in soccer terms, Messi and Suárez (or Julián Álvarez); in other words, you can’t go wrong when they are together.

DevOps makes the difference when the two main pillars of a business ecosystem, development and operations, work together in an integrated way.

Why do we say this?

Because DevOps cannot be efficient without an Agile setup.

There is no point in speeding up delivery and automating infrastructure and code deployment if the development team releases versions at enormous intervals.

Nor does it make any sense to optimize development and accelerate build processes if the new code does not reach users until the next release.

So, what is Agile’s role in all of this?

Agile enables continuous, on-time delivery while you are using DevOps, which is why the two work so well together, something like Pippen making the pass and Jordan scoring =P

Now, are there metrics that let you measure the effectiveness of this duo?

The answer is yes.

Not only because it is useful for spotting what would be worth improving, but because it is an important, almost fundamental requirement.

Why?

Because more and more companies provide DevOps services from Latin America; therefore, competition is greater and the chances of a client switching service providers keep increasing.

We say this taking Fortune’s statistics as an example, which indicate that the LATAM countries where these companies have grown most significantly are:

– Brazil

– Mexico

– Argentina / Colombia

– Chile

– Peru

Therefore, it is increasingly easy for a client to make a change if they feel that the experience, the response speed, or the cost is not aligned with their expectations or business paradigms.

So, back to the earlier question:

How can the effectiveness of DevOps and Agile be measured?

Below, we cover some of the most widely used DevOps + Agile metrics:

 

  1. Quality through multiple types of testing

This is not just a metric for DevOps and Agile; you could say it applies to everything in life.

But in this context it is vital, because it is a great differentiator and can even be your Harry Potter philosopher’s stone.

For the simple reason that if you do not build quality into your automations, and they are not managed properly, you can make a real mess and endanger the entire project and its security.

Hence the importance of measuring quality throughout development and deployment testing.

Note that a recent survey by Forrester indicates that:

72% of companies state that software testing is a fundamental part of the DevOps + Agile life cycle and of continuous delivery, and we completely agree.

These companies focus on developing their skills while budgeting and testing enough to implement Agile and DevOps practices across the entire organization.

For example, some recommended practices are:

– Implement continuous testing in response to the demand for much faster releases.

– Automate functional tests from end to end.

– Make testers part of the delivery team.

– Start testing early in the development life cycle (shift-left testing).

However, not all companies do all of this; instead, they alternate between these kinds of testing, simply because they do not agree with all of them.

But what if we look at them individually?

For example:

Imagine you have the chance to make a half-court play to win the game, but you did not do the proper drills in training and you do not know the smallest wrist or footwork movement you need to score cleanly.

Well, that is exactly what unit tests are like: at first glance they may seem like a waste of time.

However, they can be key as the code base evolves, because they also give developers and testers information, and if they are prioritized by risk they become far more useful.

As for integration and API tests, we consider them increasingly relevant, because so much can happen beyond the user interface that quality and risk levels can slip out of our hands.

Finally, end-to-end regression tests are very important.

Companies that have been doing this for a while suggest running these tests at the process or transaction level, although we know this task is not easy, since speed matters so much.

Even so, keeping high levels of automation is a priority, and we will only achieve that by automating more and more tests.

And all of this without setting aside delivery speed and costs, which is where Agile and DevOps win you over and work their magic.

 

  2. Lead time for changes

This is vital because, to put it in sports terms, delivery speed can take us to a penalty shoot-out.

This metric helps us know how long a commit takes to reach production.

It reflects the efficiency of the development process, the team’s capacity, and the complexity of the code, which is why this metric is ideal when you want to deliver software quickly.
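As a minimal, hypothetical sketch of how to compute it, the lead time for a single change is just the difference between the moment the commit was made and the moment it reached production; the commit hash and deploy timestamp below are placeholders you would take from your own pipeline:

```bash
# Hypothetical sketch: lead time for one change = production deploy time - commit time.
# "abc1234" and the deploy timestamp are placeholders from your own pipeline.
COMMIT_TS=$(git log -1 --format=%ct abc1234)      # commit time, in epoch seconds
DEPLOY_TS=$(date -d "2023-01-15 14:30" +%s)       # when that commit reached production (GNU date)
echo "Lead time: $(( (DEPLOY_TS - COMMIT_TS) / 3600 )) hours"
```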

  3. Another metric: automation

End-to-end automation is a business differentiator.

Companies that rely on manual testing tend to see testing as a bottleneck, unlike those that automate it.

Automating software quality speeds up what can otherwise be very slow manual processes.

In addition, most companies that follow Agile + DevOps best practices consider the automation of their QA processes a fundamental differentiator.

 

  4. Risk measurement, another important metric when we have DevOps and Agile

We have been talking about the importance of automation to considerably improve release speed and frequency.

Now, it is vital to have an accurate way to measure and track quality throughout the software development life cycle.

Why?

Because automating the delivery process could increase the risk of shipping defects to production.

So, if organizations cannot accurately measure business risk, automating development and testing practices can become a serious danger to the company.

In that sense, what the companies with the longest track record in the market say has been working for them is:

Quality and speed sit on different scales from risk: most companies have not yet made the connection between speed, quality, and risk.

In general, the importance of risk in customer-facing software lags behind the established development goals of delivering quality on time and within budget.

That is why it is important to measure business risk accurately, and we know this is no walk in the park.

We now know how important good quality-control and testing processes are.

And this has led us to realize that connecting risk, quality, and speed is complex.

 

  5. Workflow efficiency

This metric measures the ratio between the time the team spends adding value and the time it does not while the software is being built.

In other words, it shows how much active time (when the team is actively working toward the goal) and waiting time (when the team has had to prioritize other things, is overloaded, etc.) make up the total time spent completing each process.

For this to be measurable, it is important to find tools that let you clearly define active time and waiting time.

These are some of the DevOps and Agile metrics that we use and that have been working well for our clients.

 

If you need help implementing or tracking these metrics, or any of them, write to us now.

 

Finally, we invite you to take a look at Forrester’s in-depth survey of companies worldwide that use DevOps and Agile.

 

Amazon Web Services (AWS) will hold the highly anticipated re:Invent 2022 conference from November 28 to December 2.

Why is it so anticipated?

Because, as you all know, this conference is like going to a Coldplay concert: it is one of the most anticipated events for all the AWS customers and partners dedicated to everything related to cloud advances, news, and trends.

In addition, it will be an ideal time to form alliances and earn certifications. We will have the opportunity to ask these great personalities questions, attend more than 1,500 technical sessions, and, of course, have a good time, because we are going for it all.

Are you wondering whether it is virtual or in person?

Well, it is a hybrid event, since you have both options.

If you decide to attend in person, it will be held in Las Vegas, and the logistics are very well designed so that you do not waste any time figuring out accommodation, how to get to each talk and/or activity, transportation options, and so on.

But we know that anyone who has decided to take the plunge and go to Las Vegas made this investment well in advance, because we are not talking about 20 USD.

We are talking about 1,799 USD.

If, on the other hand, you decide to attend virtually, you have free access to all the master classes and talks by AWS leaders.

AWS seeks to provide the best possible experience and if you are not fluent in English, you can use the simultaneous translation that will be available for certain languages.

What kind of talks will you find from AWS leaders?

This conference to be held in Las Vegas will feature the heads of AWS starting with Adam Selipsky – AWS CEO, followed by:

  1. Peter DeSantis – Senior Vice President of AWS Utility Computing
  2. Swami Sivasubramanian – Vice President of Data and Machine Learning at AWS
  3. Ruba Borno – Vice President of AWS Worldwide Channels and Alliances
  4. Werner Vogels – Vice President and CTO of Amazon.com

In addition, we will find everything from very technical talks, such as those given by Barry Cooks – Vice President of Kubernetes for AWS, and Yasser Alsaied – Vice President of IoT, to slightly more controversial topics, such as the talk on AI by Candi Castleberry – Vice President of Diversity, Equity, and Inclusion (DEI) at Amazon, and the one on energy and sustainability by Howard Gefen – General Manager of Energy and Utilities.

What else does this mega conference offer?

Well, it is not all work, work, work, as Rihanna would say =). There is also an after-hours side, and the lineup is quite broad. You will find:

Sports activities:

 

5K race: entry costs 45 USD and you will receive drinks, snacks, and a shirt. You will also be running for a noble cause, since all the proceeds go to the Fred Hutchinson Cancer Center organization.

 

Ping-pong competition: this is not something taken lightly; it is serious business, with its own playoffs and finals.

In addition, you will have recreational activities such as:

  • Comment wall, photos.
  • Dazzling digital art space.

And if this seems like too little and you still have adrenaline to spare, you can go to the party with a DJ and guest artists, head to the game room, or enjoy the food area, because a full belly makes a happy heart.

In short, as we told you at the beginning, AWS does not mess around when it decides to hold an event. It goes all in to meet the demands of every attendee, because it knows we are a demanding audience 😉

If you want to know more, please click here.

We will be participating; we hope to see you there and get to know each other.

 

It is customary in any market to talk about trends at this time of year, and software development and emerging technologies are no exception.

For this reason, today we will be explaining what specialists say will happen to DevOps by 2023. What is expected?

Let’s begin by recognizing the growth that DevOps technology has had in the last few years. 

According to some surveys, the compound annual growth rate (CAGR) for DevOps will be 24.7% between 2019 and 2026, with the market flirting with a value of 20.01 billion dollars… Not bad at all!

This sends a clear message to us: this technology, which has a unique potential, keeps getting stronger and is revolutionizing the industry of software development. That is why we are specialists in this 😉.

But, why do we bet on DevOps?

We do it because DevOps stimulates better and more frequent communication, integration, collaboration, and teamwork between developers (Dev) and IT operations (Ops), without leaving aside clear communication with the client and the possibility of a better client experience, considering that the client can see the project more clearly.

That is why businesses like HSBC have had agreements with CloudBees and its DevOps platform since 2021, to standardize software delivery worldwide for more than 23,000 developers.

Having said all of this, let’s get to know what is expected of DevOps for 2023.

Some portals, like solutionanalysts, point out that there are 10 trends in the development-operations (DevOps) market.

However, we will single out what we consider to be the 5 most important ones, and we will explain why that is. 

Let’s begin! 

  1. Low Code growth: 

This one is coming on as fast as sound.

Low-code tools are a great way to extend the benefits of Agile and DevOps.

But why low code?

Businesses prefer low code to develop and deploy applications through the DevOps process quickly, given that not everyone has a team of specialists at hand, and many want to do things in the fastest and least expensive way possible.

Moreover, creating a piece of software is as delicate a job as creating a piece of art, considering that the program has to work optimally not only for the user but also for the developer. Additionally, applications are constantly changing.

Many programs use similar patterns, and building them from scratch for each project can be a huge investment of resources and time.

That is where low code has the opportunity to solve some of these problems.

Also, Gartner’s analysts estimate that the low-code market will grow to almost 30 billion USD between 2021 and 2025.

In addition, Gartner foresees that low code will account for 65% of all application development activity by 2024.

You may be wondering: who is Gartner?

Well, they are a group of experts who, as they themselves put it, help you with their tools to “see a clear way to make decisions about people, processes, and technology.”

Basically, they are the best at studying processes, businesses, and technology.

In conclusion, low code will give you agility and help you keep up as an active player in the competitive software market. Mixed with DevOps, it ends up being like chocolate and passion fruit (a perfect combination).

 

     2. AI just around the corner

AI is more and more present, and going forward it goes hand in hand with DevOps.

Why is that?

Because AI will take over from humans as the vital tool for computing and analysis, since humans are not as effective at managing the enormous amounts of data and computation that daily operations will require.

AI will be built into software to improve its functionality.

This will allow DevOps teams to…:

  • Code
  • Test
  • Supervise
  • Launch

… the different software they are building, in a more effective way.

    3. Improved security, one of DevOps’ goals for 2023

Nowadays, having the right security in place is one of DevOps teams’ greatest challenges.

More than 50% of developers are responsible for the security of their organizations.

That is why getting security right is such a big deal.

For that reason, the practice of DevSecOps (development, security, and operations) is one of the biggest trends in software development for 2023, because it integrates security at every stage, right up to the successful delivery of the developed solution.

It is carried out through DevOps and will allow development teams to detect and address security problems as they happen, at the speed of DevOps.

 

    4. Infrastructure as code: another big trend for 2023

Infrastructure as code, or IaC, is expected to be one of DevOps’ biggest trends for 2023.

But why is that?

Because it allows infrastructure to be managed and provisioned automatically, rather than manually as has been done until now.

Continuous monitoring, version control of the code that drives deployments, virtualized testing, and the administration of DevOps infrastructure all improve with IaC.

Furthermore, it allows infrastructure and development teams to work much more closely together, which is vital for DevOps; a brief sketch of the idea follows.
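To make the idea concrete, here is a minimal, hedged sketch of IaC on AWS: a version-controlled template deployed from the command line instead of being clicked together in a console (the template file and stack name are placeholders):

```bash
# Hypothetical sketch: infrastructure described in a version-controlled template
# and provisioned automatically instead of by hand.
# "infra/network.yml" and "demo-network" are placeholder names.
git add infra/network.yml
git commit -m "Describe the network as code"

aws cloudformation deploy \
  --template-file infra/network.yml \
  --stack-name demo-network \
  --capabilities CAPABILITY_NAMED_IAM
```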

    5. Serverless companies.

If you are thinking about taking the big leap, this is one of the important challenges you will have to face: “serverless computing”.

This concept, a little abstract compared to what we were used to, means outsourcing infrastructure and its tasks to external providers.

Running without servers as they used to be known will change companies’ IT operations and let them adopt the DevOps approach far more effectively.

Moreover, it will allow teams to eliminate the risks and problems related to pipeline management and to focus more on development and deployment.

We believe these will be the 5 biggest DevOps trends in 2023. And you, what do you think?

If you want to know about our service, just click here

 

Before you begin, confirm that you have the following tools, that we’ll need, ready to go:

  • AWS CLI.
  • Session Manager Plugin.

In case you don’t have some of these, I’ve left the corresponding links to install them below.

https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html

IAM ROLE:

We must create a role, which in this case we’ll call “ecsTaskExecutionRole”; it allows ECS to execute tasks and run commands against other AWS services.

  • Go to the IAM console, select “Roles”, then select “Create Role”.

  • In our case, the role we are creating is for ECS to use, so it will be for an AWS service; to set that up, follow the steps you can see below:

  • Then we need to add a Policy to our Role that allows ECS to perform tasks. The one we are looking for is “AmazonECSTaskExecutionRolePolicy”; you can filter by the word “ECS” to find it easily.

  • Now, we have to name and describe it:

It seems to be ready, but we still need to add another Policy, and this one we are going to create ourselves.

  • Let’s go then, go to the roles section, look up the one you just created and select it.

  • Click on “Add permissions” and select “Attach policies”.

  • Then select “Create Policy”.

  • Click on “JSON”. Here we’re going to see a file like the next one:

  • We need to write our second policy; it will allow ECS to use Session Manager to execute commands inside our containers. So, delete the current content of the file and insert the policy shown below.
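For reference, the statement commonly used to let ECS tasks open Session Manager channels looks like the following; treat it as a starting point and tighten the Resource to your own needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }
  ]
}
```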

  • Once it’s ready, we’re able to continue, select “next”.

  • Now, we have to name and describe it:

  • Now we’re back in the “Attach policy” section; refresh the page, search for the policy we just created, select it, and attach it.
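If you prefer the command line, the same role setup can be sketched roughly as follows; the trust policy and policy file names are assumptions, and the console flow above achieves the same result:

```bash
# Hypothetical CLI equivalent of the console steps above.
# trust-policy.json must allow ecs-tasks.amazonaws.com to assume the role;
# ecs-exec-policy.json is the Session Manager policy shown earlier. File names are placeholders.
aws iam create-role \
  --role-name ecsTaskExecutionRole \
  --assume-role-policy-document file://trust-policy.json

# Attach the AWS-managed execution policy.
aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy

# Attach our custom Session Manager policy as an inline policy.
aws iam put-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-name ecs-exec-ssm \
  --policy-document file://ecs-exec-policy.json
```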

ECR – Elastic Container Registry:

Let’s create a repository on ECR to store our container images.

  • Go to the ECR console, and once there, select “Get Started”.

  • Choose a name for your repository, in my case I’m naming it “demo”. Leave the rest of the options on their default configuration, and click on “Create repository”.

  • That’s all, now we’re ready to push our docker custom images to the repository using its URI.

 

  • The following are useful commands to use to log into the ECR, create an image from a Dockerfile, tag and push an image, etc.

Create an image from a Dockerfile:

ECR Login:

Tag an image:

Push an image:
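A typical sequence for these steps looks roughly like the following; the account ID, the region, and the "demo" repository name are placeholders for your own values:

```bash
# Placeholders: 123456789012 (account ID), us-east-1 (region), demo (repository name).

# Create an image from a Dockerfile in the current directory:
docker build -t demo:latest .

# ECR login:
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the image with the repository URI:
docker tag demo:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest

# Push the image:
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest

# (Optional) the repository itself can also be created from the CLI:
aws ecr create-repository --repository-name demo
```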

ALB – Application Load Balancer:

So, time to create a Load Balancer

  • Go to the EC2 console, scroll down until the end, and select the “Load Balancer” option that you can find on the left side.

  • Now, click on “Create Load Balancer”.

  • In this case, we’re gonna choose the “Application Load Balancer”:

  • We need to name our Load Balancer, then leave the rest of the options at their default configuration.
  • Select the correct VPC and choose at least two subnets.
  • You also have to select the Security Group.

  • The next step is to choose the listener port for our Load Balancer; the most common ones are HTTP/80 and HTTPS/443. We’re gonna work with the HTTP/80 listener.

Right below, we’re asked to select a Target Group, which is what tells the Load Balancer where to send the traffic received on the listener port.

So, we’re gonna create one. Even though in our case we don’t yet have an instance/container/app to be the target of the requests, we need this step done in order to create the Load Balancer.

Don’t worry, later we’ll be doing this configuration in a way that works for us.

  • Then, click on “Create target group”, it will take you to another window.

Here select the “IP addresses” option, name your Target Group, then leave the rest in their default configuration and click on “Next”.

  • Now, choose the correct VPC (the same one that we chose before for the Load Balancer). Then, click on “Remove” to delete the suggested IPv4 address and, finally, click on “Create target group”.

Once done, we can close the current window and continue working with the Load Balancer creation.

  • So, now we are ready to select a target group. To see the one we just created among the options, click the refresh button, then expand the list and select the correct one.
  • Then leave the rest as default and click on “Create Load Balancer”.
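For reference, the same ALB, target group, and listener can also be sketched with the CLI; every ID and name below is a placeholder, and the console steps above are the path this guide follows:

```bash
# Hypothetical CLI sketch of the console steps above. All IDs and names are placeholders.

# Application Load Balancer in two subnets with a security group:
aws elbv2 create-load-balancer \
  --name demo-alb \
  --type application \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0

# Target group of type "ip" (the type used for Fargate tasks), with a health check path:
aws elbv2 create-target-group \
  --name demo-tg \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip \
  --health-check-path /demoapp

# HTTP/80 listener that forwards to the target group:
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```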

ECS – Elastic Container Service:

  • Go to the ECS console, once there, on the left side of the page click on “Cluster”, then select “Create cluster”.

  • Give your cluster a name and choose the correct VPC and subnets. Since we’ll be using the ALB we created before, please be careful to select the same ones for the cluster.
  • Keep the Infrastructure, Monitoring, and Tags settings without changes. Finally, click on “Create”.

  • Once the cluster is created, select the “Task Definitions” option from the left side of the page.

Here we’ll assign the role we created earlier and also indicate the image we want deployed in our container.

  • Name your Task Definition; it’s useful to include the name of the image it will deploy, so that its function is easier to identify in the future. The same applies to containers, services, etc.
  • Then, name the container. For the Image URI, you’ll have to go to your ECR, search in the repo for the image you want to deploy, and copy the “URI” just as you can see below in step “2”.
  • Once done, choose the correct port for the container, and click on “Next”.

  • Leave the Environment as default, choose the size of the container, and select the role we created before in both places, Task Role and Task Execution Role. Then specify a size for the ephemeral storage; the minimum is 21 GB.
  • The last item is Monitoring and Logging; it’s optional, as you can see, just be aware that enabling one or more of the options carries a cost. Once it’s done, click on “Next”.

  • Review all the configurations and click on “Create”.

Now that we have the Task Definition, we can create a Task or a Service from it. There are several differences between the two; one example:

A Task creates one or more containers running our apps, depending on the configuration we set; if one of those containers goes down, it stays down.

A Service gives you more tools to avoid that problem, because a service can run multiple tasks and you can even set the desired number of tasks to keep running; if one of them goes down, the service takes care of bringing up another.

  • In our case we’re gonna create a Service, so select “Deploy” and then click on “Create Service”.

  • In the Environment section, just select the cluster we created earlier and keep the rest unchanged.

 

  • In Deployment Configuration, choose “Service” and give your service a name.

 

  • In Networking, select the same VPC and subnets you chose earlier, pick the Security Group, and make sure to turn on “Public IP”.

 

  • In Load Balancing, select “Application Load Balancer”; we’re gonna use the ALB and the listener (80:HTTP) that we created before.

The time to create our useful Target Group has come, so, name it and choose HTTP “Protocol” (same as the listener).

Then, the Path Pattern can be “/”, in which case requests will match everything after alb-dns.com/. But if your idea is to deploy many apps, it will be useful to identify them and route requests to the specific path associated with their names.

In my case, I’m using /demoapp/*; please note the *, it always needs to be at the end of the path for requests to match without errors. Also, the Health Check Path needs to match the Path Pattern, but without the wildcard at the end.

Finally, choose the Health Check Grace Period and click on “Deploy”.

  • That’s all; inside your cluster you’re gonna see your Service and the status of the Task it deployed.
  • Also, if you click on the Service’s name, you can see multiple useful details, such as the status of the health checks, the Task ID, etc.
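The cluster and service creation can likewise be sketched from the CLI; treat the following as an assumption-heavy outline rather than a copy-paste recipe, since every name, ARN, subnet, and security group is a placeholder:

```bash
# Hypothetical CLI sketch of the cluster/service setup above.
# Everything in <> or named "demo-*" is a placeholder.

aws ecs create-cluster --cluster-name demo-cluster

# Register the task definition from a JSON file (image URI, CPU/memory, roles, etc.):
aws ecs register-task-definition --cli-input-json file://demo-taskdef.json

# Create a Fargate service behind the ALB target group, with a public IP:
aws ecs create-service \
  --cluster demo-cluster \
  --service-name demo-service \
  --task-definition demo-taskdef \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111,subnet-bbbb2222],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
  --load-balancers targetGroupArn=<target-group-arn>,containerName=demo,containerPort=80
```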

In case you need to get into a container to do troubleshooting, maintenance tasks, and so on:

Below are the steps to do that (with a command sketch after the list).

  1. Enable “execute-command” for the Service.
  • For that, you’ll need to know the names of the Cluster, Task Definition, and Service.
  • “Number of Revisions” refers to the version of the Task Definition.
  • “Desired Count” refers to the number of tasks you intend to keep up and running at all times; this was defined when you created the Service.

  2. Verify that “execute-command” is enabled.
  • In this case, you’ll need the Cluster’s name and the Task ID.
  • If “execute-command” still appears disabled, you’ll have to stop the Task; once it’s up again, “execute-command” will be enabled.

  3. Get into the container:
  • Here, you’ll need the Cluster’s and Container’s names, and the Task ID.
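A command-line sketch of those three steps, assuming the names used throughout this guide (demo-cluster, demo-service, a container named demo) and a placeholder task ID:

```bash
# 1. Enable execute-command on the service (forces a new deployment so new tasks pick it up):
aws ecs update-service \
  --cluster demo-cluster \
  --service demo-service \
  --task-definition demo-taskdef \
  --enable-execute-command \
  --force-new-deployment

# 2. Verify that the running task has it enabled (look for "enableExecuteCommand": true):
aws ecs describe-tasks \
  --cluster demo-cluster \
  --tasks <task-id>

# 3. Open an interactive shell inside the container:
aws ecs execute-command \
  --cluster demo-cluster \
  --task <task-id> \
  --container demo \
  --interactive \
  --command "/bin/sh"
```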

 

 

 

 

 

 

 

Kubernetes is a platform that keeps being renewed. It was born in 2014, and its latest version, released in 2022, is called Combiner.

But what is the magic behind Kubernetes that still keeps it current?

For us, Kubernetes is the way to work with multiple containers in a friendly, optimal way and through a platform that helps you simplify life, and we say this without going into depth.

Looking back, we could say that Borg and Omega were the ones who paved the way for Kubernetes to exist.

In other words, the world adopted what Unix systems were already doing in the 1980s and what Google was doing in the 2000s: working with containers on their systems.

This today has been replicated on a large scale by a wide range of companies due to the growing adoption of cloud-based solutions, infrastructures, and systems.

Before going to the point, we want to tell you what Kubernetes is in case you still don’t know:

 

It is a platform that lets us build an ecosystem of components and tools that ease the use, scaling, and management of container-based applications.

You won’t usually see it referred to by its full name but as K8s, and the essence of this open-source system is a bit like “tidying up something very messy”.

To explain this sentence, we are going to use NFL teams as an analogy.

In the NFL, several units are created and used that work independently (offense, defense, special teams).

They act as if they were a single team, that is, a distributed system.

In addition, these machines can run on different platforms connected through a network without interrupting their operation as a single system.

Following the analogy, the offensive unit trains differently from the defensive one, and the network would be the football field.

Who created Kubernetes?

The one that gave birth to this platform was Google.

Google needed to put some order and simplify what they had already been doing with their management systems (Borg and Omega), which is why they decided to create K8s back in 2014 when it was just born.

However, Google no longer owns Kubernetes.

For whatever reason, Google decided to donate and release Kubernetes to the Cloud Native Computing Foundation (which in turn is part of the Linux Foundation) back in 2015, while it was still in its infancy.

Perhaps this is one of the reasons why K8s is so widely used today.

 


So far we have talked about who created it, why, and what it is for… but what is the magic behind Kubernetes that still keeps it current?

 

Well, following the inference that we have been working on, we invite you to imagine the following:

Imagine that a single person was in charge of:

  1. Selecting (creating)
  2. Training
  3. Making each of these pieces of equipment work manually
  4. Being vigilant so that none of them stop working (providing service)

And suppose that this same person also handles the administrative side, health, legal matters, marketing, investors, and everything else an NFL team needs… Wow! Just writing it was overwhelming.

Kubernetes will help us orchestrate each of these containers, as it will:

 

  • Automate scheduling.
  • Perform deployments.
  • Scale simply, both horizontally and vertically.
  • Balance loads.
  • Manage availability and container networking.
  • Minimize the maintenance required from the person in charge of administration.
  • Automatically recover a container if a failure occurs.
  • Integrate with different platforms and cloud providers.
  • Intelligently balance loads between different nodes.
  • Stay independent of the application architecture, since it supports complex applications regardless of the type of architecture used.
  • Let you write your own controllers using your own APIs from a command-line tool.
  • Let developers maintain sets of clones (replicas), so there is no need to replicate the entire program; this results in a project with greater responsiveness and resilience.
  • Stand on a platform that has been tested many times over, which is why there are so many success stories, for example:

          Pokémon Go, Tinder, Airbnb, and The New York Times, among others.

  • Their efficiency and success attest to how useful K8s can be in DevOps (a short kubectl sketch follows this list).
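Here is a tiny kubectl sketch of two of the points above (maintaining a set of clones and automatic recovery), using a hypothetical "demo" deployment with a public nginx image:

```bash
# Hypothetical example: "demo" and the nginx image are placeholders.

# Maintain a set of 3 identical clones (replicas) of the same container:
kubectl create deployment demo --image=nginx --replicas=3

# Scale horizontally with a single command:
kubectl scale deployment demo --replicas=5

# If a pod dies, the Deployment automatically brings up a replacement;
# simulate a failure by deleting one of the pods and watching the count recover:
kubectl delete pod <one-of-the-demo-pods>
kubectl get pods -l app=demo --watch
```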

 

As you can see, it has many aspects that allow it to stay current, and for businesses like ours, it is an excellent option.

If you want to use DevOps in your project, don’t hesitate to write to us.

 

 

The omnipresent connectivity of mobile devices embraces open mobile broadband Internet access, open data formats, open identity, open reputation, portable and roaming identity, intelligent web technology, and semantic technologies such as OWL, RDF, SPARQL, and SWRL, along with ongoing work on automated reasoning and natural language. Powerful mobile devices with Internet connectivity, built-in intelligence, autonomous identities, and integrated encrypted wallets are part of the next generation of the Internet: Web 3.0. Web 3.0 is expected to become the new Internet paradigm and a continuation of Web 2.0.

To this day, there is still a great deal of debate about the existence of Web 3.0. There is no precise definition of what Web 3.0 is or could be, since it is still under development. That debate is somewhat academic and not as popular as the prospect of decentralizing the web that Web 3.0 represents.

It is promoted as the next iteration of the Internet, after Web 1.0 and Web 2.0, aimed at decentralizing it. Web 2.0 is the current version of the Internet we are all familiar with, and Web 3.0 represents its next phase, which will be decentralized, open, and more useful. Web 3.0 is a collection of next-generation web applications that use new technologies such as blockchain, artificial intelligence, the Internet of Things, and augmented and virtual reality (AR/VR) as part of their core technology stack. These new technologies will shape the way users interact with next-generation networks.

Decentralization, immersive experiences, and intelligence (also known as AI or knowledge) are gaining ground quickly, and we know they will all play a central role in the next generation of the Internet. In this article, we will look at the evolution of Internet infrastructure and how the arrival of Web 3.0 is affecting existing business models. We are so hooked on the Internet that we barely noticed how it went from the first static pages to fully interactive websites, and now to decentralized services powered by artificial intelligence.

Web 1.0 first appeared in 1989 and was only used to exchange static content over the Internet; people built static websites and hosting one was more expensive. Then, in 2005, Web 2.0 arrived and changed the way we use the Internet. The move to Web 2.0, dubbed the social web, was heralded by advances in mobile technology such as Apple's App Store and by a wave of social media applications like Facebook and YouTube that unleashed our ability to interact socially in digital form. Blockchain technology has since opened an exciting new direction for Web 3.0 applications.

In this Web 2.0 era, the Internet is dominated by content creation and social interaction built on Big Tech platforms. Blockchain has entered the transformation of the digital network, and its influence will only grow. As Web 3.0 technology continues to develop, blockchain will remain a vital component of online infrastructure. Advances in technologies such as distributed ledgers and blockchain storage will allow data to be decentralized and create a transparent, secure environment, overcoming the centralization, surveillance, and advertising-driven exploitation of Web 2.0.

Web 3.0's decentralized blockchain protocols will let people connect to an Internet where they can own, and be fairly rewarded for, their time and data, eclipsing an exploitative and unfair web in which giant centralized repositories are the only ones who own it and profit from it. With blockchain, web computing will be decentralized through the exchange of information among people, companies, and machines.

With Web 3.0, the data generated by diverse and increasingly powerful computing resources, such as mobile phones, desktops, home appliances, vehicles, and sensors, will be traded by users through a decentralized data network, ensuring that users retain control over what they own. Web 3.0 will allow data to be connected in a decentralized way, an improvement over Web 2.0, which traditionally centralizes and silos data. That is why many industry leaders see a symbiotic relationship between Web 3.0, blockchain, and cryptocurrencies.

In addition, at the Singapore FinTech Festival sessions, executives met to discuss these elements and what Web 3.0's decentralized structure could mean for corporate hierarchies. After addressing institutional acceptance of digital currency, the panelists went on to define Web 3.0 and to explore the shortcomings of Web 2.0 platforms, which is where the discussion began.

Victor describes Web 3.0 as a broad classification of distributed technologies and tools that provide a blockchain-based, peer-to-peer Internet. Other technologies, such as open APIs, open data formats, and open-source software, can also be used to build Web 3.0 applications. Finally, the modern developer can quickly deploy applications that integrate these Web 3.0 components using technology platforms such as IBM Cloud and the IBM Blockchain Platform.

In fact, the new technologies that make up the components of the prototypical Web 3.0 application are already an integral part of the applications we use today. Put simply, Web 3.0 inherits what we use today and adds the power of 5G, smart devices and sensors, AI/ML, AR/VR, and blockchain, delivering a complete solution that blurs the boundary between the digital and the physical. Little by little we are seeing Web 3.0 technology emerge; some Web 3.0 applications are already available, but until a full paradigm shift takes place, we will not be able to tap its true potential. One thing is certain, though: Web 3.0 will change our online lives, making it easier and more convenient to find any content on the Internet while keeping our sensitive data secure.

The fragility of the Internet is a problem that will require not only innovative development but also a radically new way of thinking. Web 3.0 entrepreneurs and decentralized application (dApp) developers will face software development challenges, such as user authentication and data storage and querying, differently than in the Web 2.0 era. This gradual shift toward what may become a truly free and transparent Internet is undoubtedly an exciting prospect for many people, but it may involve a steep learning curve for developers moving from building Web 2.0 applications to exploring the paths needed to create decentralized ones.

As a result, Web 3.0 applications will run on decentralized blockchains, peer-to-peer networks, or a combination of the two; these decentralized applications are called dApps. Web 3.0 is the third generation of Internet services for websites and applications, focused on leveraging machine understanding of data to deliver a data-driven Semantic Web. It rests on the fundamental concepts of decentralization, openness, and greater ease of use, and it is about building a decentralized infrastructure that protects individual ownership and privacy.

Overall, Web 3.0 is the next stage in the evolution of the Internet, one that will allow information to be processed with near-human intelligence through technologies such as big data and machine learning. Web 3.0's main characteristics, such as decentralization and permissionless systems, will also give users far better control over their personal data.