Securing modern API- and microservices-based apps by design - part 2

Authorization across microservices, AuthN and AuthZ protocols, and advanced security policies

Originally posted by Farshad Abasi of Forward Security on the IBM Developer blog on May 31st, 2019.

Part 1 of this two-part series discussed what services and microservices are, the role of APIs and API gateways in modern application architectures, the importance of user-level security context, and end-to-end (E2E) trust. Now part 2 covers authorization and different ways of handling it across microservices, what authentication (AuthN) and authorization (AuthZ) protocols to use, and what to do when an API is invoked by applications and services outside its trust boundary. It also covers considerations for additional security policies beyond AuthN and AuthZ, logging and monitoring, and how group policies can help build a more secure app that is based on APIs and microservices.


Some exposure to and previous knowledge of APIs and microservices-based architectures will help you better grasp the security aspects discussed, but they are not required. For a better understanding, make sure to read Part 1 of this series.

Estimated time

Take about 30 to 45 minutes to read both parts of the series. Part 2 should take about 15-20 minutes.

The need for AuthZ

Part 1 discussed authenticating users and enforcing the required security policies across the microservices of an application at the API gateway. In addition, each microservice might have specific AuthZ requirements. For example, in an online banking application, a microservice for bank accounts should allow users with a basic authentication context to perform only read operations on a checking account, but allow both read and write operations for users in a two-factor authentication (2FA) context.

The security token service evaluates the information about the service or resource that is requested (typically the HTTP URL location of the target service or resource), the scope or type of request, and each user’s security context. Then it issues a security token that represents the AuthZ issued, including valid claims that are made. The set of claims can include the end-user and issuer’s identities, identities of specific consumers, expiration time, and more.
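
To make the claim set concrete, the following sketch mints such a token as an HS256-signed JWT using only the Python standard library. The issuer name, claim values, and five-minute lifetime are illustrative assumptions, not details from any particular product:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(signing_key: bytes, user: str, resource: str, scope: str) -> str:
    """Mint a compact HS256 JWT that represents the AuthZ decision."""
    header = {"alg": "HS256", "typ": "JWT"}
    claims = {
        "iss": "sts.bank.example",      # issuer: the security token service
        "sub": user,                    # authenticated end user
        "aud": resource,                # intended consumer of the token
        "scope": scope,                 # authorized request types, e.g. "read"
        "exp": int(time.time()) + 300,  # short expiry limits replay
    }
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, claims)
    )
    sig = hmac.new(signing_key, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = issue_token(b"demo-secret", "alice", "accounts-service", "read")
print(token.count("."))  # 2: header.payload.signature
```

In practice the signing key would live in the token service's secret store, and an asymmetric algorithm such as RS256 lets microservices verify tokens with only the public key.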

Consider an architecture where a token-exchange service obtains a security token for each request, as shown in the following flow diagram. The service can handle AuthZ aspects, such as whether a request type (for example, READ) in a specific user-level security context is allowed for the requested service or resource. It can add this information as valid claims and scopes in the token.


The microservice that receives the security token with the service request verifies the authenticity of the token to ensure it is issued by a trusted service. Then, it provides the functionality requested by the sender, based on the valid authorized claims and scope. If you need additional microservices to complete the invocation, use the token-exchange service to obtain a new security token (which the downstream microservice can use with the appropriate protocol, claims, and scope).
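
A receiving microservice's checks can be sketched as follows. The minimal JWT-style minting helper is included only so the example runs end to end, and the key and claim names are illustrative:

```python
import base64, hashlib, hmac, json, time

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64d(seg: str) -> bytes:
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def mint(key: bytes, claims: dict) -> str:
    # minimal HS256 JWT, standing in for the token-issuing service
    head = _b64(json.dumps({"alg": "HS256"}).encode())
    body = _b64(json.dumps(claims).encode())
    sig = hmac.new(key, f"{head}.{body}".encode(), hashlib.sha256).digest()
    return f"{head}.{body}.{_b64(sig)}"

def verify(key: bytes, token: str, required_scope: str) -> dict:
    """Reject the request unless the token is authentic, fresh, and in scope."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64(expected), sig):
        raise PermissionError("bad signature: not issued by a trusted service")
    claims = json.loads(_b64d(signing_input.split(".")[1]))
    if claims["exp"] < time.time():
        raise PermissionError("token expired")
    if required_scope not in claims.get("scope", "").split():
        raise PermissionError(f"scope '{required_scope}' not granted")
    return claims

key = b"demo-secret"
tok = mint(key, {"sub": "alice", "scope": "read", "exp": int(time.time()) + 300})
claims = verify(key, tok, "read")   # succeeds: read was authorized
try:
    verify(key, tok, "write")       # fails: write was never granted
except PermissionError as err:
    print(err)
```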

If the API gateway issues one E2E trust token for the entire journey across one or more microservices and no token-exchange service is used downstream, each of the microservices across the call chain must handle all AuthZ related aspects. In this situation, make sure that each microservice is configured with the appropriate AuthZ policy and that the policies are enforced correctly. With this approach you don’t need to get a new security token each time. However, you get more flexibility and better access control (as discussed in part 1) with central handling of AuthZ and the token-exchange end-point.

What AuthN and AuthZ protocols can I use?

OpenID Connect and OAuth 2.0 are the two most popular protocols used to secure modern APIs, because they align with common requirements of HTTP-based applications. OpenID was originally designed to only handle AuthN. OAuth was designed to handle AuthZ and long-term access delegation from users to services acting on their behalf. Because AuthN and AuthZ typically go hand-in-hand, OpenID Connect added an AuthN layer (using an ID token) to OAuth 2.0, and replaced the OpenID protocol (now deprecated). Because OpenID Connect uses HTTP, JavaScript Object Notation (JSON), and JSON Web Tokens (JWT), it is popular with today’s API-based web and mobile applications, which require lighter tokens and protocols.
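
As an illustration of the AuthN layer, these are the core checks a client performs on a decoded OpenID Connect ID token after verifying its signature. The issuer URL, client ID, and nonce below are placeholder values:

```python
import time

def validate_id_token_claims(claims: dict, expected_issuer: str,
                             client_id: str, expected_nonce: str) -> None:
    """Core OpenID Connect ID-token checks, applied after signature verification."""
    if claims["iss"] != expected_issuer:
        raise ValueError("unexpected issuer")
    aud = claims["aud"]  # "aud" may be a single value or a list
    if client_id not in (aud if isinstance(aud, list) else [aud]):
        raise ValueError("token not intended for this client")
    if claims["exp"] <= time.time():
        raise ValueError("ID token expired")
    if claims.get("nonce") != expected_nonce:
        raise ValueError("nonce mismatch (possible replay)")

claims = {"iss": "https://idp.example", "aud": "my-client",
          "sub": "alice", "exp": int(time.time()) + 600, "nonce": "n-1"}
validate_id_token_claims(claims, "https://idp.example", "my-client", "n-1")
print("ID token claims accepted")
```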

You can also use the SAML protocol for API security, and it supports both AuthN and AuthZ. SAML gained popularity with the rise in adoption of web services inside organizations, and it became the standard of choice for service-oriented architectures. However, due to its heavy nature and requirements for XML and SOAP, teams working with external REST APIs that are based on HTTP opted for the lighter OAuth 2.0 and OpenID Connect protocols.

Invocation by external applications and services

In some cases, an application provides functionality to the user that might require invoking a microservice provided by another application. For example, a mash-up “account balance” microservice in an online banking application presents one view of all of a user’s account balances, including checking and mortgage accounts. The microservice needs to obtain this information from separate core banking and mortgage systems.

The mortgage application’s microservice is invoked by the account balance microservice that resides in another application trust zone (in this case the online banking application) where the user authentication step is already completed. The mortgage application might not use the same identity provider (IdP) as the online banking application, and the goal is to not require the user to go through “heavy” authentication again. Therefore, there needs to be a mechanism to pass the user’s security context and valid claims in a way that the requested microservice in the mortgage application can verify and use it.

In addition, the security token issued by the online banking application’s security-token service proves that the user successfully met the conditions required by the security policies of the online banking application. However, the mortgage application might have different security policy requirements that need to be satisfied.

In this situation, you need to establish some form of trust between the trust zones for the two applications to verify both the authenticity of the claims and the user’s security context that is presented by the security token in the request. You can use the security-token service of the called application to verify the token’s signature, and you can issue a new token that can be verified by the microservices of that application. Also, the requested microservice’s API gateway can enforce any additional security policies that are required before the request is passed downstream.

Without central handling of these activities by an API gateway and a security token service, each microservice would need a trust relationship with external consumers, causing a very complex and unmaintainable architecture.

The API gateway of the called application should enforce all required security policies, unless it is specifically indicated that a security policy was satisfied by the trusted caller. You cannot always trust that external applications performed the required security checks, so follow a defense-in-depth strategy that applies security at all layers.

Should I care about other security issues?

So far, this article primarily focused on enforcing security policies related to AuthN and AuthZ concerns. However, the API gateway, in addition to acting as a central point of such enforcement, should also apply other security policies. Consider the following options:

  • Rate limiting prevents a specific consumer from calling an API more than the allowed number of times in a given period.

  • JSON threat protection protects against content-level attacks that attempt to use structures that overwhelm JSON parsers, resulting in app-level denial of service.

  • XML threat protection protects against attacks related to unusual inflation of elements, attributes, and nesting levels (similar to JSON threat protection).

  • Other custom policies centrally address applicable application security threats.
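
A JSON threat-protection policy of this kind typically bounds payload size, nesting depth, and key count before the body is handed to downstream parsers. The limits below are illustrative, not values from any specific gateway product:

```python
import json

MAX_BYTES, MAX_DEPTH, MAX_KEYS = 64 * 1024, 20, 1000  # illustrative limits

def check_json_threats(raw: bytes):
    """Reject payloads that could overwhelm a downstream JSON parser."""
    if len(raw) > MAX_BYTES:
        raise ValueError("payload too large")
    doc = json.loads(raw)

    def walk(node, depth=1, keys=0):
        if depth > MAX_DEPTH:
            raise ValueError("nesting too deep")
        if isinstance(node, dict):
            keys += len(node)
            if keys > MAX_KEYS:
                raise ValueError("too many keys")
            for value in node.values():
                keys = walk(value, depth + 1, keys)
        elif isinstance(node, list):
            for value in node:
                keys = walk(value, depth + 1, keys)
        return keys

    walk(doc)
    return doc

check_json_threats(b'{"account": {"id": 1}}')   # passes
deep = b"[" * 50 + b"]" * 50                    # 50 levels of nesting
try:
    check_json_threats(deep)
except ValueError as err:
    print(err)  # nesting too deep
```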

Rather than each microservice development team creating its own policies, the policies should be provided centrally, so teams can configure and apply what is needed at the API gateway. This approach eases the development process and ensures consistent enforcement. You still might need to develop custom policies (if they don’t already exist) to meet specific needs.
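
The rate-limiting policy mentioned above can be sketched as a per-consumer fixed-window counter. Real gateways offer more sophisticated variants (sliding windows, token buckets), and the limits here are arbitrary:

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Allow at most `limit` calls per consumer in each `window`-second window."""

    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        # consumer -> [window_start, calls_in_window]
        self.counters = defaultdict(lambda: [0.0, 0])

    def allow(self, consumer, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.counters[consumer]
        if now - start >= self.window:        # window elapsed: start fresh
            self.counters[consumer] = [now, 1]
            return True
        if count < self.limit:
            self.counters[consumer][1] = count + 1
            return True
        return False                          # over quota: reply with HTTP 429

limiter = FixedWindowRateLimiter(limit=3, window=60.0)
results = [limiter.allow("app-1", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```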

Don’t forget to log, monitor, and detect

As with all security architectures, detective controls such as logging and monitoring play an important role as well. Each API needs to log all important events including security-related ones and send the data to a central system for further correlation, analysis, and detection of potential security concerns.

Record security related events, such as a success or failure of compliance to a specific policy, for further analysis and threat detection. Collect information, like how many times end-points are invoked and response times, to assist your team in both preventing denial of service and in better profiling the application and system.
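
For example, emitting each security event as one JSON line makes central correlation straightforward. The field names below are illustrative, not a standard schema:

```python
import json, logging, sys

# one JSON object per line, easy for a central log pipeline to ingest
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(message)s"))
audit = logging.getLogger("api.audit")
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def log_security_event(event, user, endpoint, outcome, **extra):
    """Record one security event as a structured, machine-parseable line."""
    record = {"event": event, "user": user, "endpoint": endpoint,
              "outcome": outcome, **extra}
    audit.info(json.dumps(record))
    return record

rec = log_security_event("authz_check", "alice", "/accounts/123",
                         "denied", required_scope="write")
```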

To help detect fraud, APIs can act as instruments to provide data about the specific device that accessed the API (and its location). Use this information to detect scenarios that match a specific fraudulent access pattern.

Apply group policy

Grouping related objects into one unit and applying configuration values or policies across the group is not a new concept. This approach helps you consistently apply those values and policies to the specific set of objects in that group. The same principle applies to the microservices of an application. Group together the microservices and their APIs that address a particular business need. Present them to developers with a service plan and set of policies that apply to that group.

For example, consider a set of APIs that provide data in XML format, used for building an internal application for account servicing by staff. These APIs require an authentication policy that integrates with the corporate Active Directory (AD) service, XML threat protection, and a rate-limiting policy that focuses on inside-initiated access.

Consider another set of APIs with a JSON data format, used for building an external application for customers to perform Internet banking. These APIs require an authentication policy that uses the customer data repository (for example, LDAP), enforcement of JSON threat protection, and rate-limiting designed for external use cases.

After all security policies applicable to the group are successfully enforced, access is provided and limited only to specific microservices or resources within the group, which is indicated by the security token. When individual microservices have additional security policy requirements over and above the group policies, handle the custom policies either at the API gateway with the security token service, at the token exchange service, or at the microservice itself.


Modern applications based on APIs and microservices are often distributed and communicate over networks. But you need the same level of security assurance as you do in monolithic applications. API gateways help with consistent enforcement of security policies across the microservices of an individual application and can assist with handling aspects related to authorization.

It is also important to have user-level security context and E2E trust across the entire journey, in addition to service level trust among the microservices of an application. You can use protocols such as OpenID Connect, OAuth 2.0, and SAML to facilitate AuthN and AuthZ, and aid in designing a system that handles security at the right place and the right time and guarantees end-to-end trust across the entire journey.

This article also covered why you should apply other security policies beyond AuthN and AuthZ (for example JSON threat protection and rate limiting), the importance of appropriate logging and monitoring, and using policies at the group level to build more secure applications that are based on APIs and microservices.

In summary, consider the following security concepts when you design and implement microservices and API-based applications or services:

  • Maintain user-level E2E trust across the entire journey.

  • Ensure AuthZ is enforced at the right place with the right level of granularity.

  • Group your APIs and use an API gateway to apply configurable security policies consistently.

  • Don’t forget to log, monitor and detect.

  • Follow a defense-in-depth strategy, and add security at all layers.

Securing modern API- and microservices-based apps by design

A high-level security blueprint for modern apps based on APIs and microservices - part 1

Combine existing security concepts and best practices to build more secure distributed applications

Originally posted by Farshad Abasi of Forward Security on the IBM Developer blog on May 31st, 2019.

A common approach to modernizing applications is to decompose them into smaller units that typically live in containers and to expose them through APIs. Modernizing applications involves many concepts and technologies that are not always well understood, leading to poor application security postures. In addition, solution architects and developers who create the applications often lack the knowledge and expertise to select and apply the required security controls.

This two-part series brings together existing ideas, principles, and concepts such as end-to-end trust, authentication, authorization, and API gateways, to provide a high-level blueprint for modern API and microservices-based application security.

Read both parts of this series to learn the following skills:

  • Gain a high-level understanding of modern API-, services-, and microservices-based application architectures.

  • Become aware of key security concerns with these application architectures.

  • Understand how to best secure application microservices and their APIs.

This series is for you if you are a solution architect, a software developer, or an application security professional who is faced with securing APIs and their microservices.


Some exposure to and previous knowledge of APIs and microservices-based architectures will help you better grasp the security aspects discussed, but they are not required.

Estimated time

Take about 30 to 45 minutes to read both parts of the series. Part 1 should take about 20-25 minutes.

What are services and microservices?

According to Wikipedia, “Microservices are a software development technique – a variant of the service-oriented architecture (SOA) architectural style that structures an application as a collection of loosely coupled services. In a microservices architecture, services are fine-grained, and the protocols are lightweight.”

For decades, monolithic applications were the order of the day. An application contained a set of several different services in order to deliver business functionality. The monolithic application’s services and functions had the following characteristics and challenges:

  • They were tightly coupled together.

  • They did not scale well (especially when different components had different resource requirements).

  • Monolithic applications were often large and too complex to be understood by a lone developer.

  • They slowed down development and deployment.

  • They did not protect components from the impact of issues from other components.

  • The applications were difficult to rewrite if you needed to adopt new frameworks.

Consider the following illustration comparing the architectures of monolithic applications and microservices-based applications.


A movement that focused on service-oriented architecture (SOA) began in the late 1990s. Companies, particularly larger enterprises, started analyzing systems deployed across their organizations, looking for redundant services within different application systems. They then consolidated those services into a unique set and exposed legacy systems as services that could be integrated to deliver applications through new or existing channels.

Part of this process involved decomposing existing application systems into a set of reusable modules and facilitating development of various applications by combining those services. Each service in turn contained several tightly coupled functions related to a particular business area exposed through application programming interfaces (APIs), most commonly as SOAP and XML-based web services that communicated over an enterprise service bus. The following illustration shows an enterprise service bus connecting SOA and microservices-based applications and services:


The SOA movement was further propelled into popularity with the rise of web services in the early 2000s. (See the Service-Oriented Architecture Scenario article published by Gartner in 2003.)

Over the past decade, another movement started that further breaks these service “monoliths” into yet smaller microservices by decoupling the tightly connected functions of each service. Each microservice typically resides in its own container and focuses on a specific functional area that was previously provided by the service and tightly coupled with other functions. The individual microservices typically have their own datastore to provide further independence from other components of the application system. However, data consistency needs to be maintained across the system, which brings new challenges.

APIs are still used to expose the functionalities provided by microservices. However, lighter protocols and formats such as REST and JSON are used instead of the heavier SOAP, XML, and web services. This further decomposition does not necessarily lead to additional APIs exposed to service consumers, if the overall set of functions provided by the service or application remain the same. In addition, some microservices might only provide internal application functionality to the other microservices that are part of a specific application’s group of APIs. They might not expose any APIs for consumption by other applications or services.

Microservices offer a few advantages over monolithic architectures and lead to more flexible and independently deployable systems. In the past, when several developers worked on different functions that made up an (often) large service or application, an issue with one function prevented successful compilation and roll-out of the entire service or application. By decoupling those functions into microservices, the dependency on other functions of the application system is removed. Issues with development and roll-out in one area no longer affect the others. This approach also lends itself well to DevOps and agile models, shifting from a need to design the entire service at one time and allowing for continuous design and deployment.

Using microservices addresses the issues related to monolithic applications described earlier, but this architecture brings about a new set of challenges. Examples include the complexities of distributed systems development (such as inter-service communication failure and requests involving multiple services), maintenance of data consistency across the system, deployment, and increased memory consumption (due to the added overhead of runtimes and containers).

What about APIs?

Let’s visit Wikipedia again to understand an application programming interface, or an API: “In general terms, it is a set of clearly defined methods of communication among various components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer.”

APIs are a set of clearly defined communication methods between software components. APIs have been around for quite a while and have made it easy for independent teams to build software components that can work together without knowledge of the inner working of other components. Each component only needs to know what functions are exposed as APIs by the other component it interacts with. This architecture creates a nice separation layer and promotes modularity. Each component in a system can focus on a certain set of functionalities and expose the useful features to the outside world for consumption, while hiding the complexities inside, as shown in the following illustration:


The focus of this article is on modern web APIs provided by many of today’s web-enabled applications. RESTful APIs, or those that conform to Representational State Transfer (REST) constraints, are the most common. These APIs communicate using a set of HTTP messages, typically using JavaScript Object Notation (JSON), or in some cases Extensible Markup Language (XML), to define the structure of the message. Additionally, the shift in recent years to REST web resources and resource-oriented architecture (ROA) gave rise to the popularity of RESTful APIs.

The API gateway and the post-monolithic world

In the world of monolithic applications, all functions reside in the same walled garden, protected by a single point of entry, which is typically the login functionality. After end users authenticate and pass this point, access is provided to the application and all of its functionality, without further authentication. This approach works because the application architecture tightly couples all functions inside a trust zone, and they cannot be invoked by outsiders to the application. In this scenario, each function inside the application can be designed to either perform further authorization (AuthZ) checks or not, depending on the requirements and the granularity of the entitlements scheme.

When you break these monoliths into smaller components such as distributed microservices, you no longer have the single point of entry that you had with the monolithic application. Now each microservice can be accessed independently through its own API, and it needs a mechanism to ensure that each request is authenticated and authorized to access the set of functions requested.

However, if each microservice performs this authentication individually, the full set of a user’s credentials is required each time, increasing the likelihood of exposure of long-term credentials and reducing usability. In addition, each microservice is required to enforce the security policies that are applicable across all functions of the application that the microservice belongs to (such as JSON threat protection for a Node.js application).

This is where the API Gateway comes in, acting as a central enforcement point for various security policies including end-user authentication and authorization. Consider the following diagram:


After the users go through a “heavy” verification process involving their full set of long-term credentials (such as user name, password, and two-factor authentication), an access token is issued that can be used for “light” authentication and further interaction with downstream APIs from the API gateway. This process minimizes the exposure of users’ IDs and passwords, which are long-term credentials, to the system components. The credentials are only used once and exchanged for a token that has a shorter lifespan.

The API gateway typically uses an identity and access management (IAM) service to handle verifying end-user identity, issuing access tokens, and other token-related activities (such as token exchange, which is described later). The API gateway acts as a guard, restricting access to the microservices’ APIs. It ensures that a valid access token is present and that all policies are met before granting access downstream, creating a virtual walled garden.

In addition to the previously described security benefits, the API gateway has the following non-security features:

  • It can expose different APIs for each client.

  • The gateway routes requests from different channels (for example desktop vs. mobile) to the appropriate microservice for that channel.

  • It allows for creation of “mash-ups” using multiple microservices that might be too granular to access individually.

  • The API gateway abstracts service instances and their location.

  • It hides service partitioning from clients (which can change over time).

  • It provides protocol translation.

The importance of user-level security context and end-to-end trust

In many multi-tiered systems, the end-users authenticate through an agent (such as a browser or a mobile app) to an external-facing service endpoint. The other downstream components in turn enforce mutual authentication either by using service accounts over Transport Layer Security (TLS), or mutual TLS, to establish service-level trust.

A major problem with this design is that the service provider gives access to all data provided by the set of functions that the service account is permitted to use. It does not consider the authenticated user’s security context. This approach is too permissive and is against the principle of least privilege.

For example, in a system where user account details are provided by a user account service, the service consumer could ask for and gain access to any user’s account details by simply authenticating to the provider using a service account. An attacker with an agent capable of connecting to a service provider (using compromised service account credentials, or accessing through a compromised system component that established service trust with the provider) can connect and access data belonging to any user. Another problem is that this design does not allow for comprehensive auditing of actions.

These problems highlight the importance of establishing an authenticated user’s security context before allowing access to data provided by a service. An access token should be required by the service provider to determine the user’s security context before servicing the request. These tokens must be signed so your system can verify both the authenticity of the issuer and the integrity of the token data. If there is a confidentiality concern related to data in the token (for example, account numbers), make sure that the token is encrypted.

In addition, an API invocation might involve calls across several microservices downstream from the API gateway. End-to-end (E2E) trust means communicating the authenticated user’s security context to all the involved parties across the entire journey and allowing each party to take appropriate action. As noted at the start of this section, the user’s security context often ends at the external-facing service endpoint, and all downstream components rely on service-level trust. There are two ways that you can establish E2E trust across all microservices belonging to a given API group or application. One uses a token-exchange service, and the other relies on E2E trust tokens.

When establishing E2E trust with a token-exchange service, downstream components use that service to exchange a token for another token that has the required scope and protocol. At the API gateway, the user’s security token is verified, security policies are enforced, and the token is sent to the token-exchange service to receive a token that can be used by the immediate downstream microservice. If there are additional microservices across a journey, each one verifies the access token that it receives and uses the token-exchange service to swap it for one that can be used by the next downstream service provider. The token-exchange service ensures that the new token has the scope required by the downstream service, both to limit the amount of access granted and to use the appropriate protocol. (For example, a JWT can be exchanged for a SAML token.) Token exchange thus allows a system of heterogeneous protocols, where some microservices use OAuth 2.0 and others use SAML. Each microservice validates the token received and enforces further AuthZ, if required, based on the claims provided. More details about AuthZ are discussed in Part 2.
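
In essence, the token-exchange service re-scopes an already-verified set of claims for the next hop. This sketch works on decoded claim sets and omits signature handling and protocol translation; all names and lifetimes are assumptions:

```python
import time

def exchange_token(incoming: dict, downstream_audience: str,
                   allowed_scopes: set) -> dict:
    """Swap a verified token's claims for a narrower, short-lived hop token."""
    if incoming["exp"] <= time.time():
        raise PermissionError("incoming token expired")
    # grant only the intersection of what the caller holds and what the
    # downstream service permits
    narrowed = sorted(set(incoming["scope"].split()) & allowed_scopes)
    if not narrowed:
        raise PermissionError("no scope permitted for the downstream service")
    return {
        "iss": "token-exchange.example",
        "sub": incoming["sub"],          # user security context carried forward
        "aud": downstream_audience,
        "scope": " ".join(narrowed),
        "exp": int(time.time()) + 120,   # short lifetime for one hop
    }

gateway_claims = {"sub": "alice", "scope": "read write",
                  "exp": int(time.time()) + 600}
hop = exchange_token(gateway_claims, "mortgage-service", {"read"})
print(hop["scope"])  # read
```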

Another way to achieve end-to-end trust is to use one E2E trust token issued at the gateway across the entire journey. In this scenario, the gateway verifies the user’s token and performs the security policy checks required by the application or API group (as in the previous approach). Then, it also issues an E2E trust token after all validations are successful. With this approach, all microservices involved must have the same scope and protocol requirements. Upon receiving the E2E trust token, a microservice verifies the signature (and performs decryption if required) and processes the request based on the claims provided. If there are other microservices in the journey, the token is passed on, and the same verification process takes place at each downstream microservice. This model removes the round trip to the token-exchange service for each call across the journey, but it does not provide the same level of flexibility and access control as using a token-exchange service.

With both of these models, E2E trust is established between microservices that belong to the same trust zone (for example, they require the enforcement of the same set of security policies) and that use the same identity provider (IdP). If you need to access microservices in another trust zone that have a different IdP, the path must go through the gateway where the API for the microservice is presented, which enforces the required set of security policies for that trust zone. E2E trust tokens that are issued for one trust zone should not be accepted by microservices in another.

Although these mechanisms provide user-level security context across the journey, they do not remove the need for service-level trust. Apply security at all layers and follow the defense-in-depth model. Lack of service-level trust could lead to an attacker using a compromised token (or one generated and signed with a compromised token-signing key) from agents that are not authorized to access the microservice components that make up a specific application. Modern API systems support secure overlay networking, which can make establishing service-level trust between the microservice components easier than using service accounts or mutual TLS.


Today’s modern service-oriented applications are often decomposed into microservices exposed by APIs. It is important to understand what microservices and APIs are and the role they play in the application system. When adopting an architecture based on APIs and microservices, use API gateways where possible to provide one point of security policy enforcement and to establish end-to-end user-level trust on top of service-level trust. This approach ensures security is applied at multiple layers.

In Part 2 you learn about how to use protocols such as OpenID Connect, OAuth 2.0, and SAML to facilitate authentication (AuthN) and authorization (AuthZ). These protocols aid in designing a system that handles security at the right place and the right time, guaranteeing end-to-end trust across the entire journey. Part 2 covers when and why AuthZ is needed and what to do when applications and services outside a trust boundary invoke APIs. You also learn about additional security policies to consider beyond AuthN and AuthZ (for example JSON threat protection and rate limiting), logging and monitoring considerations, and how group policies can help you build a more secure application based on APIs and microservices.
