Azure Blob storage with NGINX proxy

Create an NGINX proxy and put it in front of your Azure Blob storage so you can use CrowdStrike RTR to its full potential, bypassing restrictive file size limits and artificial bandwidth limitations.

Background/Problem

CrowdStrike Real Time Response (RTR) is a remote response capability of the Falcon platform, an industry-leading Endpoint Detection and Response (EDR) tool which can be installed on workstations/servers running Windows, Linux, and macOS. One of the core capabilities of RTR is the ability to dump the memory of running processes, either individually or as an entire dump (in raw format). Another is the ability to remotely acquire files using the 'get' command.
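For example, within an RTR session the workflow might look like this (the PID and file path are illustrative, and exact command syntax varies by platform):

memdump 4242                        (dump the memory of PID 4242)
get C:\Windows\Temp\triage.zip      (retrieve a file from the host)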

However, there is a critical issue when it comes to actually transferring the files from the endpoint. RTR only supports a maximum file size of 4GB for Windows, and 2GB for Linux/macOS.

This is not appropriate given the potential size of files being acquired from a host during IR engagements. It presents a major hurdle in the preservation and acquisition of volatile data from remote hosts which may have been isolated or contained.

The second issue with RTR is acquisition speed. Testing (on residential and enterprise connections, both on and off VPN) indicates there is an artificial limit when it comes to bandwidth. Whether this is by design or a limitation of CrowdStrike's AWS resources, I'm not sure.

The third issue is that RTR does not support whitelisting remote hosts by subdomain/FQDN, only by IP address. When a device is contained/isolated on the network, it is restricted from communicating with anything other than CrowdStrike's RTR console for administration purposes. This restricts the ability to acquire files and transfer them from remote hosts, and the only option is to use native RTR commands such as 'get'. Remote IP addresses can be whitelisted; however, if you're using object storage such as S3 buckets or Azure Blob storage, you cannot (at the time of writing) assign a static public IP address to that resource. That effectively means you cannot whitelist your destination bucket.

I’ll highlight how this can be overcome by using NGINX as a reverse proxy, coupled with high performance Azure Blob storage.

We'll need to create two resources: a virtual machine (this can be hosted anywhere) and Azure Blob storage. For ease of management, both can be in the same resource group. For the purpose of this guide, I'll spin up a host in BinaryLane. You'll also need a domain and SSH/console access to your VM.

Summary of what we need to do:

  1. Create a subdomain/domain for our proxy host

  2. Provision a new Azure Blob storage account

  3. Provision a virtual machine as our reverse proxy

  4. Set up NGINX as a reverse proxy

  5. Configure NGINX to pass requests to our upstream storage account

  6. Configure our storage account with our custom subdomain

  7. Reconfigure NGINX to allow large file uploads

  8. Restrict Azure Blob service access to the IP address of our NGINX proxy

  9. Test

1. Create a subdomain

blob.iblue.team – this is the subdomain we'll use with AzCopy and RTR.

This will be our front door: NGINX, acting as a proxy, will receive requests here from public IP addresses.

2. Provision a new Azure Blob storage account

ixibluevault – the name of our Azure Blob storage account (this is set up to only accept connections that come via blob.iblue.team).

3. Provision a virtual machine as our reverse proxy

I've chosen a virtual machine with BinaryLane: 1 vCPU, 2GB RAM, 40GB of disk space (for $8/mo). This is sufficient for testing, but make sure you have enough resources to proxy requests at the scale you expect. If you want to monitor resource consumption, consider installing Copilot or Zabbix.

The temporary public IP of our reverse proxy is 43.224.183.238
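Point the subdomain from step 1 at this address with an A record at your DNS provider; something like the following (the TTL is illustrative, and if you're on Cloudflare leave the record unproxied so Certbot validation works):

blob.iblue.team.    300    IN    A    43.224.183.238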

4. Set up NGINX as a reverse proxy

Update the host and install NGINX.

$ sudo apt update; sudo apt install -y nginx

Define our base site

$ sudo mkdir -p /var/www/blob/html
$ sudo chown -R $USER:$USER /var/www/blob/html
$ sudo chmod -R 755 /var/www/blob
$ echo "blob.iblue.team" > /var/www/blob/html/index.html
$ sudo nano /etc/nginx/sites-available/blob

server {
        listen 80;
        root /var/www/blob/html;
        index index.html index.htm index.nginx-debian.html;
        server_name     blob.iblue.team;
        location / {
                try_files $uri $uri/ =404;
        }
}
$ sudo ln -s /etc/nginx/sites-available/blob /etc/nginx/sites-enabled/

Test your NGINX configuration, and then reload/restart the NGINX service.

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

$ sudo systemctl restart nginx

Browse to your subdomain; you should see the placeholder page we created, served over plain HTTP for now.

Install certbot so we can deploy Let's Encrypt certificates, then generate a cert for our subdomain (this record isn't proxied through Cloudflare).

$ sudo apt install -y certbot python3-certbot-nginx

$ sudo certbot --nginx -d blob.iblue.team

(accept prompts)

NGINX should have been restarted already by Certbot. Refresh the page and you should see it's now using HTTPS/TLS.

5. Configure NGINX to pass requests to our upstream storage account

So we have a basic NGINX site set up, and an existing Azure Blob storage account (ixibluevault.blob.core.windows.net).

Edit your site definition so that proxy_pass points at the storage account endpoint, as sketched below.
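A minimal sketch of the updated server block (the Certbot-managed TLS directives are omitted, and the Host header handling is my assumption based on the custom domain requirement in step 6):

server {
        listen 443 ssl;
        server_name     blob.iblue.team;

        # ssl_certificate/ssl_certificate_key lines managed by Certbot

        location / {
                # Forward all requests to the upstream storage account
                proxy_pass https://ixibluevault.blob.core.windows.net;
                # Send SNI upstream so the TLS handshake with Azure succeeds
                proxy_ssl_server_name on;
                # Preserve the client-facing hostname; Azure will accept it
                # once the custom domain is verified (step 6)
                proxy_set_header Host $host;
        }
}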

Something I discovered during testing is that NGINX uses HTTP/1.0 by default for proxied connections, while Azure Blob storage only supports REST API calls over HTTP/1.1.

If you see errors similar to the following (these will appear on the host from which you're running azcopy), you need to modify your nginx.conf file to force HTTP/1.1.

PUT https://blob.iblue.team/yourbucket/somefile/[REDACTED]
   Accept: application/xml
   Content-Length: 8388608
   Content-Type: application/octet-stream
   User-Agent: AzCopy/10.27.1 azsdk-go-azblob/v1.4.0 (go1.23.1; linux)
   X-Ms-Client-Request-Id: [snip]
   x-ms-version: 2023-08-03
   --------------------------------------------------------------------------------
   RESPONSE Status: 505 The HTTP version specified is not supported for this operation by the server.
   Connection: keep-alive
   Content-Length: 297
   Content-Type: application/xml
   Date: Tue, 14 Jan 2025 09:33:32 GMT
   Server: nginx/1.18.0 (Ubuntu)
   X-Ms-Client-Request-Id: [snip]
   X-Ms-Error-Code: UnsupportedHttpVersion
   X-Ms-Request-Id: [snip]
   X-Ms-Version: 2023-08-03

$ sudo nano /etc/nginx/nginx.conf

Add the following line before the end of the http block.

proxy_http_version 1.1;

It should look something like this (the surrounding settings are abridged);
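http {

        ##
        # Basic Settings
        ##
        [snip]

        proxy_http_version 1.1;
}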

6. Reconfigure storage account with custom subdomain

Azure Blob storage accounts support custom domains/subdomains.

Go to your storage account > Security + networking > Networking > Custom domain

Go to your DNS provider (Cloudflare in this case) and enter the CNAME record for your subdomain (the dig output below shows the 'asverify' verification record Azure uses). Verify propagation with online tools such as https://www.whatsmydns.net/ or https://dnschecker.org/, or with dig:

$ dig cname asverify.blob.iblue.team @1.1.1.1

; <<>> DiG 9.19.19-1-Debian <<>> cname asverify.blob.iblue.team @1.1.1.1
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18364
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;asverify.blob.iblue.team.      IN      CNAME

;; ANSWER SECTION:
asverify.blob.iblue.team. 60    IN      CNAME   asverify.ixibluevault.blob.core.windows.net.

This step is critical. If you skip this step, you will see 'Invalid URI' when you try and upload files using the SAS URLs you generate.

7. Reconfigure NGINX to allow large file uploads

$ sudo nano /etc/nginx/nginx.conf

Add the following line before the end of the http block;

http {

        ##
        # Basic Settings
        ##
        [snip]

        client_max_body_size 0;
}

$ sudo nginx -t
$ sudo systemctl reload nginx

This value can be changed to whatever you deem appropriate. It's not best practice to blindly accept unauthenticated requests from the internet without limiting the maximum body size; a value of 0 skips the check entirely. The risk of exposing this to the wider internet can be mitigated by limiting connections to known egress IPs (such as offices or enterprise connections), or by requiring authorisation headers. That's outside the scope of this guide.

8. Restrict Azure Blob service access to the IP address of our NGINX proxy

When your storage account was provisioned, you had the option of selecting a 'Public network access' value. This is not the same as enabling anonymous blob access. During testing, add the public IP address of your NGINX proxy, as well as the public IP address from which you'll be managing the account. If you're using Microsoft Azure Storage Explorer, you must enter your public IP address here to ensure you can administer the bucket and generate SAS URLs/keys.
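If you'd rather script this than use the portal, the equivalent Azure CLI calls should look roughly like this (a sketch; 'myRG' is a hypothetical resource group name, and the IP is our proxy's):

$ az storage account update --resource-group myRG --name ixibluevault --default-action Deny
$ az storage account network-rule add --resource-group myRG --account-name ixibluevault --ip-address 43.224.183.238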

9. Generate bucket, SAS, and use AzCopy

Download Microsoft Azure Storage Explorer, sign into your Azure account, and browse to your Azure subscription and storage account.

If you don't have one already, right click 'Blob Containers' and create a new one. We're using the 'iblue' bucket in this example.

Right-click your bucket and click 'Get Shared Access Signature' (SAS).

You'll be presented with a range of options to create a signature. To upload files and create objects/folders, you'll need the Read/Add/Create/Write permissions; select these. If you know the public IP of the host you'll be uploading from (or its egress IP if it's on a VPN or behind a corporate proxy), add it here as an additional control. Setting an expiry is also appropriate, given these SAS signatures/keys aren't one-time use.

More best practices are listed here: https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview#best-practices-when-using-sas

Click create.
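If you prefer the CLI to Storage Explorer, a container SAS can be generated along these lines (a sketch assuming account-key auth; the expiry date is illustrative):

$ az storage container generate-sas --account-name ixibluevault --name iblue --permissions racw --expiry 2025-02-01T00:00Z --https-only --output tsv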

Remember when we configured a custom domain in Azure and verified our ownership of blob.iblue.team? If you didn't perform those steps, you won't be able to replace your storage account name (ixibluevault.blob.core.windows.net) with your custom domain name.
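In practice you take the SAS URL Storage Explorer produced and swap only the host, keeping the path and token intact (the token is truncated here for brevity):

https://ixibluevault.blob.core.windows.net/iblue?sv=...&sig=...
https://blob.iblue.team/iblue?sv=...&sig=...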

If you try to use this URL from a host that isn't whitelisted, you won't be authorised and it'll throw an error. Likewise, if you try to use your custom domain without verifying it, you'll see an error similar to 'HTTP/1.1 400 Value for one of the query parameters specified in the request URI is invalid.'

Generate a 64MB file

$ dd if=/dev/urandom of=64MB.bin bs=64M count=1 iflag=fullblock

Use our SAS URL and azcopy, and send it. A sketch of the command (SAS token truncated):
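$ azcopy copy 64MB.bin "https://ixibluevault.blob.core.windows.net/iblue/64MB.bin?sv=..."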

Success. That works; now let's try our custom subdomain with the same SAS signature (everything after the ?sv=). This time AzCopy complains, because it can't infer the source/destination types from a custom domain.

That's fine; just add '--from-to LocalBlob' after 'copy' but before the file. It should look like this;
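$ azcopy copy --from-to LocalBlob 64MB.bin "https://blob.iblue.team/iblue/64MB.bin?sv=..."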

Remove your public IP from the Azure Blob network settings we configured earlier. We want to verify that all access is being sent through the proxy.

After you remove your IP (not the proxy's public/egress IP), you should see an authorisation error in Azure Storage Explorer.

This is fine, but it means we can't generate any SAS URLs using the console. We can't do it via the Azure Portal either, since our IP isn't authorised for access. However, we only want to test the upload functionality of AzCopy with these new restrictions.

You'll now need to go into your CrowdStrike containment policy and add the public IP address of your NGINX proxy which you'll use with AzCopy.

Running some side-by-side comparisons between the above method and the native RTR 'get' command showed incredible improvements. Using 'get' to acquire a ~500MB triage collection from a server on an enterprise-grade NBN connection took hours.

Transfer speeds are now limited only by the host's resources: memory, disk performance, and available bandwidth. I used AzCopy to transfer 32GB and 64GB files in around five minutes. This is not currently possible with RTR.

To further enhance your workflow, you can mount your storage account on a VM to access uploaded files entirely within the cloud. You can also generate additional keys (or define access policies) to share resources with external stakeholders, or simply to download files internally and share them amongst teams.
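For example, Microsoft's blobfuse2 can mount a container as a local filesystem; a rough sketch (the mount point and config file paths are placeholders):

$ blobfuse2 mount /mnt/iblue --config-file=./blobfuse2.yaml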

10. Summary

We've created an NGINX proxy which receives requests from clients with the CrowdStrike agent installed. AzCopy uses pre-signed, signature-validated URLs to upload files.

Upload requests sent via the NGINX proxy will be forwarded to our Azure Blob storage account using our custom domain. The Blob storage account will only accept requests from our proxy, and not directly from the internet.

Further reading

https://www.f5.com/company/blog/nginx/avoiding-top-10-nginx-configuration-mistakes

https://jrdeveloper.hashnode.dev/reverse-proxy-a-request-to-azure-blob-storage-urls-with-nginx
