As an application grows in complexity, one way to manage that complexity is to break the application up according to its responsibilities or concerns. This approach follows the separation of concerns principle and helps keep a growing code base organized, so that developers can easily find where particular functionality is implemented. Layered architecture offers a number of other advantages as well.

By organizing code into layers, common low-level functionality can be reused throughout the application. This reuse is beneficial because it means less code needs to be written and the application can standardize on a single implementation, following the "Don't Repeat Yourself" (DRY) principle.
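As a minimal sketch of this kind of reuse (in TypeScript, with hypothetical helper and function names chosen purely for illustration), a low-level helper can be defined once and called from more than one layer instead of being duplicated:

```typescript
// Hypothetical shared helper, defined once in a common module.
export function normalizeEmail(raw: string): string {
  return raw.trim().toLowerCase();
}

// The business logic layer reuses it when validating input...
export function isValidSignupEmail(email: string): boolean {
  return normalizeEmail(email).includes("@");
}

// ...and the UI layer reuses the same helper when rendering the value,
// so there is exactly one normalization implementation in the code base.
export function renderEmailLabel(email: string): string {
  return `Contact: ${normalizeEmail(email)}`;
}
```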

With a layered architecture, applications can enforce restrictions on which layers communicate with which other layers. This helps achieve encapsulation. When a layer is changed or replaced, only the layers that work directly with it should be affected. By limiting which layers depend on which other layers, the impact of changes can be contained so that a single change does not ripple through the entire application.

Using layers (and encapsulation) makes it much easier to replace functionality within the application. For example, an application might initially use its own SQL Server database for persistence, but could later switch to a cloud-based persistence strategy or one behind a web API. If the application has properly encapsulated its persistence implementation within a logical layer, that SQL Server-specific layer can be replaced by a new one that implements the same public interface.
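As a rough sketch of this idea (in TypeScript, with hypothetical names such as OrderRepository, SqlOrderRepository, and WebApiOrderRepository chosen for illustration; this is not taken from any particular code base), the business logic depends only on the interface, so the persistence implementation can be swapped without changing its callers:

```typescript
// Hypothetical entity and persistence abstraction the business logic depends on.
interface Order {
  id: string;
  total: number;
}

interface OrderRepository {
  getById(id: string): Promise<Order | undefined>;
  save(order: Order): Promise<void>;
}

// Original implementation backed by a relational database (queries elided).
class SqlOrderRepository implements OrderRepository {
  async getById(id: string): Promise<Order | undefined> {
    // ...execute a SELECT against the database...
    return undefined;
  }
  async save(order: Order): Promise<void> {
    // ...issue INSERT/UPDATE statements...
  }
}

// Later replacement backed by a remote web API; same public interface.
class WebApiOrderRepository implements OrderRepository {
  constructor(private baseUrl: string) {}

  async getById(id: string): Promise<Order | undefined> {
    const res = await fetch(`${this.baseUrl}/orders/${id}`);
    return res.ok ? ((await res.json()) as Order) : undefined;
  }
  async save(order: Order): Promise<void> {
    await fetch(`${this.baseUrl}/orders`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(order),
    });
  }
}

// Business logic written against the interface, not a concrete class.
class OrderService {
  constructor(private orders: OrderRepository) {}

  async applyDiscount(id: string, percent: number): Promise<void> {
    const order = await this.orders.getById(id);
    if (!order) throw new Error(`Order ${id} not found`);
    order.total = order.total * (1 - percent / 100);
    await this.orders.save(order);
  }
}
```

Because OrderService only sees OrderRepository, switching from SqlOrderRepository to WebApiOrderRepository is a matter of changing which implementation is constructed and passed in, not of changing the business logic itself.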

In addition to making it possible to swap out implementations in response to future changes in requirements, application layers also make it easy to swap out implementations for testing purposes. Instead of writing tests that operate against the real data layer or user interface layer of the application, those layers can be replaced at test time with fake implementations that return known responses to requests. This typically makes tests much easier to write and much faster to run than testing against the application's real infrastructure.
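Continuing the hypothetical sketch above (and reusing its OrderRepository, Order, and OrderService types), a fake in-memory repository lets the business logic be tested without any database or external service; the plain assertion is just for illustration, and any test framework could be used instead:

```typescript
// Fake repository that returns known responses, so tests need no database.
class InMemoryOrderRepository implements OrderRepository {
  private store = new Map<string, Order>();

  async getById(id: string): Promise<Order | undefined> {
    return this.store.get(id);
  }
  async save(order: Order): Promise<void> {
    this.store.set(order.id, order);
  }
}

// Test that exercises the business logic against the fake implementation.
async function testApplyDiscount(): Promise<void> {
  const repo = new InMemoryOrderRepository();
  await repo.save({ id: "42", total: 100 });

  const service = new OrderService(repo);
  await service.applyDiscount("42", 10);

  const updated = await repo.getById("42");
  if (updated?.total !== 90) {
    throw new Error("discount was not applied");
  }
}
```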

Layers represent logical separation within the application. If application logic is physically distributed to separate servers or processes, these separate physical deployment targets are referred to as tiers. It is possible, and quite common, to deploy an N-layer application to a single tier.

Traditional applications with an N-layer architecture
Typically, an application defines user interface, business logic, and data access layers. In this architecture, users make requests through the UI layer, which interacts only with the business logic layer. The business logic layer, in turn, can call the data access layer to fulfill requests. The UI layer should not make requests to the data access layer directly, nor should it interact with persistence in any other way. Likewise, the business logic layer should only interact with persistence by going through the data access layer. In this way, each layer has its own well-defined responsibility.
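A minimal sketch of this structure (again in TypeScript, with hypothetical class names; a real N-layer application would split these into separate projects) might look like the following, where each layer only calls into the layer directly below it:

```typescript
// Data access layer: the only code that talks to persistent storage.
class CustomerDataAccess {
  findByName(name: string): { id: number; name: string }[] {
    // ...query the database; stubbed here for brevity...
    return [{ id: 1, name }];
  }
}

// Business logic layer: calls the data access layer, never the UI.
class CustomerBusinessLogic {
  // Note the direct, compile-time dependency on the concrete DAL class.
  private dal = new CustomerDataAccess();

  searchCustomers(term: string) {
    if (term.trim().length === 0) {
      throw new Error("Search term must not be empty");
    }
    return this.dal.findByName(term.trim());
  }
}

// User interface layer: talks only to the business logic layer, never
// to the data access layer or the database directly.
class CustomerController {
  private logic = new CustomerBusinessLogic();

  handleSearchRequest(term: string): string {
    const results = this.logic.searchCustomers(term);
    return JSON.stringify(results);
  }
}
```

The hard-coded new CustomerDataAccess() call inside the business logic layer also illustrates the coupling problem described next.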

One disadvantage of this traditional layered approach is that compile-time dependencies run from the top layer down. That is, the user interface layer depends on the business logic layer, which in turn depends on the data access layer. As a result, the business logic layer, which usually holds the most important logic in the application, is dependent on data access implementation details (and often on the existence of a database). Testing business logic in such an architecture is often difficult and typically requires a test database.

Although this application uses several projects for organizational purposes, it is still deployed as a single unit, and its clients interact with it as a single web application. This allows for a very simple deployment process.

Breaking the application's source code up into multiple projects based on responsibility improves the maintainability of the application.

Such a unit can be scaled up or out to take advantage of cloud-based on-demand scalability. Scaling up means adding more CPU, memory, disk space, or other resources to the server(s) hosting the application. Scaling out means adding additional instances of such servers, whether they are physical servers, virtual machines, or containers. When the application is hosted across multiple instances, a load balancer is used to distribute requests among the application instances.