
Tenable Blog


The Azure Metadata Protection You Didn’t Know Was There

Tenable Cloud Security

Some Azure services have an additional, not widely known, protection mechanism against session token exfiltration.

Azure services such as App Service, Functions and Logic Apps use a runtime that implements a significant protection layer against exfiltration of session tokens from the machines they run on. This protection strengthens the rationale for avoiding sending VMs to do a managed service job.

Credential exfiltration from the metadata services of cloud provider instances is a very popular vector malicious actors use to gain initial access to cloud environments.

It’s easy to understand why. The metadata service holds and serves session tokens for a proxy identity that has privileges to access resources in the cloud environment. Applications can retrieve these credentials to impersonate that identity and access cloud resources. (See Figure 1 for a visual comparison of these implementations.)

Apps usually access the metadata service via HTTP calls. If a malicious actor can manipulate an app or its configuration into retrieving a session token, and no other control prevents the token’s use outside the machine, the actor can impersonate the identity the machine itself is meant to use.

Figure 1: Comparison of metadata services in cloud provider computing services

We’ve covered this risk and its mitigation extensively in the past, including in a blog on AWS EC2 instances, a session at the recent fwd:cloudSec conference on the different implementations of the metadata service in AWS, Azure and GCP, and, most recently, a post on how such credential exfiltration combined with poor defaults can have dire implications in GCP.

In this blog, we highlight a pleasant surprise we uncovered while researching the topic.

The unexpected gatekeeper

While taking a deep look at some managed services available in Azure, such as App Service, Functions, Automation accounts and Logic Apps, we found that these services use a metadata protection mechanism we hadn’t seen in the other CSPs.

We saw that these Azure services share a similar runtime – perhaps even the same one. They also share the same orchestrator and service for delivering the session tokens of the managed identity attached to the workload (the managed identity being the proxy identity Azure services use to access resources in the customer’s directory).

We found that, to access the metadata endpoint in these Azure services, a request must include the values of two environment variables: IDENTITY_ENDPOINT and IDENTITY_HEADER.

So, instead of the usual HTTP request to a VM metadata service that looks like this …
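
(A minimal sketch in Python of what such a call typically looks like, per Azure’s documented VM Instance Metadata Service; the api-version and resource values here are illustrative:)

    import requests

    # Azure VM IMDS: a link-local endpoint that requires no secret,
    # only the static "Metadata: true" header.
    resp = requests.get(
        "http://169.254.169.254/metadata/identity/oauth2/token",
        headers={"Metadata": "true"},
        params={
            "api-version": "2018-02-01",
            "resource": "https://management.azure.com/",
        },
    )
    print(resp.json()["access_token"])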

… an app makes a request that looks like this …
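
(Again a sketch in Python, following Microsoft’s documented managed identity endpoint for these services; the api-version and resource values are illustrative:)

    import os

    import requests

    # Both the endpoint URL and the shared secret exist only as environment
    # variables inside the workload itself.
    identity_endpoint = os.environ["IDENTITY_ENDPOINT"]
    identity_header = os.environ["IDENTITY_HEADER"]

    resp = requests.get(
        identity_endpoint,
        headers={"X-IDENTITY-HEADER": identity_header},
        params={
            "api-version": "2019-08-01",
            "resource": "https://management.azure.com/",
        },
    )
    print(resp.json()["access_token"])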

… with IDENTITY_ENDPOINT and IDENTITY_HEADER being the environment variables we mentioned before.

Requiring these environment variables can be seen as an additional authentication mechanism for the workload's metadata service - and a significant one! It would be extremely difficult to obtain their values without running code on the machine (and if a malicious actor can already do that, they can exfiltrate the credentials either way - and probably do plenty of other nasty things too).

This kind of protection layer makes the implications of vulnerabilities that allow an attacker to manipulate applications running on web servers – such as server-side request forgery (SSRF) – less dire (although it goes without saying that you should still do your best to avoid them).
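
To make that concrete, here is a hypothetical SSRF-style handler sketched in Python (Flask and the /fetch route are ours, purely for illustration). Even if an attacker coaxes it into requesting the workload's identity endpoint, the forged request carries no X-IDENTITY-HEADER value - and the attacker doesn't know the IDENTITY_ENDPOINT URL either - so no token should be issued:

    import requests
    from flask import Flask, request

    app = Flask(__name__)

    # Hypothetical SSRF-vulnerable route: it fetches whatever URL the caller supplies.
    @app.route("/fetch")
    def fetch():
        url = request.args["url"]
        # Even if url points at the workload's identity endpoint, this outbound
        # request carries no X-IDENTITY-HEADER secret, so it gets no token back.
        return requests.get(url).text

    if __name__ == "__main__":
        app.run()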

If we were to sum this up visually:

We note that in the corresponding services in AWS (Elastic Beanstalk) and GCP (App Engine) we did not see a similar mechanism. In fact, by running code on these managed services that simply makes the regular HTTP request to the Instance Metadata Service (IMDS) – simulating an attacker using SSRF, for example – we were able to extract the credentials. In AWS’s favor, we wish to point out that, when creating a new Elastic Beanstalk application, the default configuration the console offers disables IMDSv1 on the EC2 instance on which it would be deployed – a setting that matters a great deal for the instance’s security posture.
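
For comparison, here is a sketch (in Python) of the kind of plain metadata requests we mean; the paths follow AWS’s and GCP’s public IMDS documentation, and the role name is hypothetical:

    import requests

    # AWS IMDSv1: no secret header at all; "my-app-role" is a hypothetical role name.
    aws_creds = requests.get(
        "http://169.254.169.254/latest/meta-data/iam/security-credentials/my-app-role"
    ).json()

    # GCP: only the static "Metadata-Flavor: Google" header is required.
    gcp_token = requests.get(
        "http://metadata.google.internal/computeMetadata/v1/instance/"
        "service-accounts/default/token",
        headers={"Metadata-Flavor": "Google"},
    ).json()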

Ok, but people use managed services, don’t they?

Manipulating the metadata service as a beachhead into cloud environments is a prominent attack vector. The additional protection of requiring callers of the metadata API to know the values of these environment variables is, for appropriate applications, significant motivation to use a managed service such as App Service instead of a VM. This is especially true because such deployments, being aimed at serving just about anyone, are by design often publicly available through their network configuration. Yet in a survey of real-life environments, in which we reviewed the configuration of over 10,000 VM instances, we found that almost 18% were configured to be publicly accessible on ports used by HTTP or HTTPS applications (80, 8080, 443, 8443).

This finding is interesting for many reasons. One is that it may indicate how many services are deployed on VMs rather than on an appropriate managed service. If that is true for even half of these cases (~9%), many companies are missing out not only on the convenience of deploying applications through a managed service but also on its built-in security.

So remember: Unless you have a really good reason for taking a different approach, do your best to avoid sending a VM to do a job offered by a managed service. The latter isn’t just easier to use – it could be surprisingly more secure.
