<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Ciaran Doherty's SecOps Blog]]></title><description><![CDATA[Ciaran Doherty's SecOps Blog]]></description><link>https://blog.cdoherty.co.uk</link><generator>RSS for Node</generator><lastBuildDate>Fri, 10 Apr 2026 14:08:24 GMT</lastBuildDate><atom:link href="https://blog.cdoherty.co.uk/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Immich Deployment on Windows Server Using Docker, WSL, and SMB-Based Persistent Storage]]></title><description><![CDATA[Introduction.
Self-hosting applications on Windows servers can be deceptively complex when persistence, security, and reliability are required. This becomes especially true when deploying Linux-first ]]></description><link>https://blog.cdoherty.co.uk/immich-deployment-on-windows-server-using-docker-wsl-and-smb-based-persistent-storage</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/immich-deployment-on-windows-server-using-docker-wsl-and-smb-based-persistent-storage</guid><category><![CDATA[Immich]]></category><category><![CDATA[Windows]]></category><category><![CDATA[windows server]]></category><category><![CDATA[windows 11]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Docker]]></category><category><![CDATA[docker images]]></category><category><![CDATA[hypervisor]]></category><category><![CDATA[hyper-v]]></category><category><![CDATA[WSL]]></category><category><![CDATA[wsl2]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Sun, 22 Feb 2026 11:38:45 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/d55acc79-9a76-4b5f-a373-6ca6a2da605b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Introduction.</h1>
<p>Self-hosting applications on Windows servers can be deceptively complex when persistence, security, and reliability are required. This becomes especially true when deploying Linux-first platforms such as Immich in a domain-managed environment.</p>
<p>In this article, I document how I deployed Immich on a Windows Server using Docker Desktop and WSL, with media stored on a network file server and the database configured for long-term persistence.</p>
<p>The goal was simple: ensure the service survives reboots, upgrades, and redeployment without losing data.</p>
<hr />
<h1>What Is Immich?</h1>
<p>Immich is an open-source, self-hosted photo and video management platform designed as a privacy-focused alternative to services such as Google Photos.</p>
<p>It consists of multiple containerised services, including a web interface, machine learning components, PostgreSQL for metadata storage, and a Redis-compatible cache. It is designed primarily for Linux-based container environments.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/6191da83-901b-4f24-8dad-e344c84d3eab.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h1>Project Objectives.</h1>
<p>Before starting, the deployment had several clear goals:</p>
<ul>
<li><p>Host Immich on a Windows Server platform.</p>
</li>
<li><p>Use Docker containers for portability.</p>
</li>
<li><p>Store uploaded media on a central file server.</p>
</li>
<li><p>Prevent database resets after reboots.</p>
</li>
<li><p>Allow redeployment on another endpoint.</p>
</li>
<li><p>Support both LAN and secure WAN access.</p>
</li>
<li><p>Maintain security and domain integration.</p>
</li>
</ul>
<p>Most importantly, the solution had to be repeatable.</p>
<hr />
<h1>Platform Overview.</h1>
<h3>Host System:</h3>
<ul>
<li><p>Windows Server (Application Server).</p>
</li>
<li><p>Joined to Active Directory (AD) domain.</p>
</li>
<li><p>Docker Desktop installed.</p>
</li>
<li><p>WSL2 enabled with Ubuntu.</p>
</li>
</ul>
<hr />
<h1>Supporting Infrastructure.</h1>
<ul>
<li><p>Domain Controller.</p>
</li>
<li><p>Central File Server (SMB share).</p>
</li>
<li><p>Internal LAN network.</p>
</li>
<li><p>Cloudflare Zero Trust Tunnel for remote access.</p>
</li>
</ul>
<hr />
<h1>Software Stack.</h1>
<ul>
<li><p>Docker Desktop (WSL backend).</p>
</li>
<li><p>Ubuntu 22.04 (WSL2).</p>
</li>
<li><p>Immich (Docker containers).</p>
</li>
<li><p>PostgreSQL (containerised).</p>
</li>
<li><p>Redis compatible cache.</p>
</li>
<li><p>Cloudflare Tunnel.</p>
</li>
</ul>
<hr />
<h1>Prerequisites.</h1>
<ul>
<li><p><a href="https://www.docker.com/products/docker-desktop/">Docker Desktop.</a></p>
</li>
<li><p><a href="https://apps.microsoft.com/detail/9pn20msr04dw?launch=true&amp;mode=full&amp;hl=en-us&amp;gl=gb&amp;ocid=bingwebsearch">Ubuntu 22.04.5 LTS</a>, or another WSL distro.</p>
</li>
<li><p>Nested virtualisation support (if you're using Hyper-V like me).</p>
</li>
</ul>
<pre><code class="language-powershell">Set-VMProcessor -VMName &lt;VMName&gt; -ExposeVirtualizationExtensions $true
</code></pre>
<ul>
<li><p>Recommended system resources:</p>
<ul>
<li><p>Memory: Minimum 6GB, recommended 8GB.</p>
</li>
<li><p>Processor: Minimum 2 cores, recommended 4 cores.</p>
</li>
<li><p>Storage: Recommended Unix-compatible filesystem (EXT4, XFS, ZFS, etc.) with support for user/group ownership and permissions.</p>
</li>
</ul>
</li>
</ul>
<blockquote>
<p><a href="https://docs.immich.app/install/requirements/">Requirements | Immich</a></p>
</blockquote>
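<p>If resources need pinning, WSL2 limits can be set from the Windows side in a <code>.wslconfig</code> file in your user profile. A minimal sketch mirroring the recommended figures above (adjust the values to your host):</p>
<pre><code class="language-ini"># %UserProfile%\.wslconfig - caps for the WSL2 utility VM.
[wsl2]
memory=8GB
processors=4
</code></pre>
<p>Run <code>wsl --shutdown</code> afterwards so the new limits take effect.</p>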
<hr />
<h1>Why WSL and Docker Desktop.</h1>
<p>Immich is designed for Linux. Rather than running it inside a traditional VM, Docker Desktop with WSL2 offers:</p>
<ul>
<li><p>A native Linux kernel.</p>
</li>
<li><p>Direct container integration.</p>
</li>
<li><p>Simplified networking.</p>
</li>
<li><p>Reduced resource overhead.</p>
</li>
</ul>
<p>With WSL integration enabled, Docker runs its engine inside Linux while remaining manageable from Windows.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/0b27faa7-35f6-4a5d-b72d-0c782879ea63.png" alt="" style="display:block;margin:0 auto" />

<p>Once enabled, Docker commands are available inside Ubuntu, confirming WSL can communicate with the Docker engine:</p>
<pre><code class="language-shell">docker version
</code></pre>
<hr />
<h1>Preparing Network Storage.</h1>
<p>To ensure data persistence, a shared folder was created on a file server:</p>
<pre><code class="language-plaintext">\\FileServer\Immich
</code></pre>
<p>With subfolders:</p>
<pre><code class="language-yaml">library
upload
postgres
redis
</code></pre>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/c2bade02-fe5a-4ade-a428-057c9b3e6894.png" alt="" style="display:block;margin:0 auto" />

<p>SMB and NTFS permissions were restricted to authorised domain users, following least privilege principles.</p>
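<p>For reference, the same layout can be created in one command from a Linux shell using brace expansion. A sketch — the <code>IMMICH_ROOT</code> variable is an illustrative stand-in for the share path, not from the deployment itself:</p>
<pre><code class="language-shell"># Create the Immich folder layout in one step.
# IMMICH_ROOT is an illustrative stand-in for the share path.
IMMICH_ROOT="${IMMICH_ROOT:-/tmp/immich-demo}"
mkdir -p "$IMMICH_ROOT"/{library,upload,postgres,redis}
ls "$IMMICH_ROOT"
</code></pre>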
<hr />
<h1>Mounting The File Share In WSL.</h1>
<p>Windows drive mappings are not visible inside WSL; Linux requires its own SMB (CIFS) mount.</p>
<p>A directory was created to act as the mount target:</p>
<pre><code class="language-shell">sudo mkdir -p /mnt/immich
</code></pre>
<p>At this stage, nothing is mounted. It is just an empty directory waiting to have something attached to it.</p>
<p>SMB mounting requires CIFS utilities as WSL doesn't include them by default:</p>
<pre><code class="language-shell">sudo apt update
sudo apt install -y cifs-utils
</code></pre>
<p>With the mount point ready, I'll try mounting the share directly using my domain credentials:</p>
<pre><code class="language-shell">sudo mount -t cifs //FileServer/Immich /mnt/immich \
-o username=DOMAIN\\ADUsername,vers=3.0,uid=1000,gid=1000
</code></pre>
<p>It failed instantly:</p>
<pre><code class="language-plaintext">mount error(13): Permission denied
</code></pre>
<p>That error is vague, and does not tell you whether it is networking, DNS, firewall, or authentication.</p>
<p>So I'll check the kernel logs:</p>
<pre><code class="language-shell">dmesg | tail -n 30
</code></pre>
<pre><code class="language-plaintext">STATUS_LOGON_FAILURE
</code></pre>
<p>This tells us everything. The share is reachable, and the server responded, but authentication failed.</p>
<p>The problem is how the credentials are being passed. The corrected format is:</p>
<pre><code class="language-shell">sudo mount -t cifs //FileServer/Immich /mnt/immich \
-o user=username,domain=ADDomain,vers=3.0,uid=1000,gid=1000,dir_mode=0770,file_mode=0660
</code></pre>
<blockquote>
<p>Instead of embedding the domain in the username string, I'll split them properly.</p>
<ul>
<li><p><code>user=</code> instead of <code>username=DOMAIN\user</code> .</p>
</li>
<li><p><code>domain=ADDomain</code> passed separately.</p>
</li>
</ul>
</blockquote>
<p>Verification:</p>
<pre><code class="language-shell">ls -la /mnt/immich
</code></pre>
<p>And the expected folders were there.</p>
<hr />
<h1>Securing Credentials.</h1>
<p>Rather than embedding Active Directory account credentials in scripts, I'll create a protected credentials file:</p>
<pre><code class="language-shell">sudo nano /etc/cifs-immich.creds
</code></pre>
<p>Contents:</p>
<pre><code class="language-plaintext">username=ADUsername
password=ADPassword
</code></pre>
<blockquote>
<p>You don't need to put <strong>DOMAIN\</strong> before the username, as we'll pass the domain separately in the mount options. It can work, but it's easy to mess up in Linux because backslashes are escape characters in some contexts... Also if the file is created with a heredoc or printf with the wrong quoting, you can end up with invisible characters that break auth. The kernel then returns a logon failure.</p>
</blockquote>
<p>Permissions are restricted to ensure only root could read the file:</p>
<pre><code class="language-shell">sudo chmod 600 /etc/cifs-immich.creds
sudo chown root:root /etc/cifs-immich.creds
</code></pre>
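<p>To sidestep the invisible-character pitfalls mentioned above, the file can also be written non-interactively with <code>printf</code> and inspected with <code>cat -A</code>, which makes any stray carriage returns visible. A sketch — the scratch path and placeholder values are illustrative; in practice the target is <code>/etc/cifs-immich.creds</code> owned by root, written via <code>sudo tee</code>:</p>
<pre><code class="language-shell"># Write the credentials file with printf; CREDS, ADUsername and
# ADPassword are placeholders.
CREDS="${CREDS:-/tmp/cifs-immich.creds}"
# tee writes the file and echoes the content back for a quick sanity check.
printf 'username=%s\npassword=%s\n' 'ADUsername' 'ADPassword' | tee "$CREDS"
chmod 600 "$CREDS"
# cat -A makes hidden characters visible: each line should end in a bare $
# with no ^M (carriage return) before it.
cat -A "$CREDS"
</code></pre>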
<hr />
<h1>Automounting the Share.</h1>
<p><code>/etc/fstab</code> is a plain text configuration that defines which disks and network shares Linux mounts automatically at startup. Adding the SMB share here ensures it is reconnected every time WSL starts, without requiring manual intervention.</p>
<p>In WSL2 environments, fstab processing is controlled by <code>/etc/wsl.conf</code>. If automounting is disabled, the file exists but is ignored.</p>
<p>In my case, only systemd needed to be enabled and fstab entries were processed automatically. However, if mounts do not persist, explicitly enabling automounting resolves most issues:</p>
<pre><code class="language-shell">sudo nano /etc/wsl.conf
</code></pre>
<p>Example:</p>
<pre><code class="language-ini">[boot]
systemd=true

[automount]
enabled = true
mountFsTab = true
</code></pre>
<p>To configure automatic mounting, edit <code>/etc/fstab</code>:</p>
<pre><code class="language-shell">sudo nano /etc/fstab
</code></pre>
<p>Entry:</p>
<pre><code class="language-plaintext">//FileServer/Immich /mnt/immich cifs credentials=/etc/cifs-immich.creds,domain=ADDomainName,vers=3.0,uid=1000,gid=1000,dir_mode=0770,file_mode=0660,nofail 0 0
</code></pre>
<blockquote>
<p>The <code>nofail</code> option prevents boot delays if the file server is temporarily unavailable.</p>
</blockquote>
<p>Verify the configuration:</p>
<pre><code class="language-shell">sudo mount -a
mount | grep immich
</code></pre>
<blockquote>
<p>Expected output: <code>//FileServer/Immich on /mnt/immich type cifs (...)</code></p>
</blockquote>
<p>By adding the SMB share to this file, the network storage is reconnected after every reboot, ensuring Docker and Immich always have access to persistent data.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/1afc55c8-2d58-4c41-a05c-7032c38ea7c3.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h1>Creating The Immich Project.</h1>
<p>I'll create a dedicated project directory called <code>immich</code>:</p>
<pre><code class="language-shell">mkdir -p ~/immich
cd ~/immich
</code></pre>
<h3>Environment File.</h3>
<p>The <code>.env</code> file defines storage and credentials:</p>
<pre><code class="language-yaml">UPLOAD_LOCATION=/mnt/immich/upload
DB_DATA_LOCATION=DBLocation
TZ=Europe/London
IMMICH_VERSION=v2
DB_PASSWORD=StrongPassword
DB_USERNAME=DBUsername
DB_DATABASE_NAME=immich
</code></pre>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/1ba04a98-f67d-4c2c-8915-3c9e12a85e38.png" alt="" style="display:block;margin:0 auto" />

<p>Key design decision:</p>
<ul>
<li><p>Media on SMB.</p>
</li>
<li><p>Database stored locally in WSL.</p>
</li>
</ul>
<blockquote>
<p>PostgreSQL data directories should not be hosted on SMB shares. Network file systems introduce latency, unreliable file locking semantics, and caching behaviour that can cause corruption and transaction failures. Storing the database locally ensures correct POSIX locking and preserves data integrity.</p>
</blockquote>
<p>A local database folder was created:</p>
<pre><code class="language-shell">mkdir -p ~/immich/postgres
</code></pre>
<hr />
<h1>Docker Compose Configuration.</h1>
<p>The official Immich compose file was used with volume overrides, binding persistent storage into containers:</p>
<pre><code class="language-yaml">- ${DB_DATA_LOCATION}:/var/lib/postgresql/data
- ${UPLOAD_LOCATION}:/usr/src/app/upload
</code></pre>
<p>Each service is configured with a restart policy:</p>
<pre><code class="language-yaml">restart: always
</code></pre>
<blockquote>
<p>In a Docker Compose file, this tells the Docker daemon to automatically restart the container if it exits or crashes, and to bring it back up when the Docker service starts (for example after a host reboot). The main exception is a manual stop: a manually stopped container stays down until you start it again or the daemon itself restarts. If a manual stop should persist across daemon restarts, use <code>unless-stopped</code> instead.</p>
</blockquote>
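<p>For reference, these are the four restart policies Compose accepts, with their behaviour summarised in the comments:</p>
<pre><code class="language-yaml"># Restart policies (one per service):
#   "no"            never restart automatically (the default)
#   on-failure      restart only after a non-zero exit code
#   always          restart on crash, and on Docker daemon start
#   unless-stopped  like always, but a manual stop is remembered
restart: always
</code></pre>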
<hr />
<h1>Deploying The Stack.</h1>
<p>Containers are pulled and started:</p>
<pre><code class="language-shell">docker compose pull
docker compose up -d
docker compose ps
</code></pre>
<p>All services report healthy status:</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/c62d056b-81f0-405b-a782-3c2717250852.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h1>Resolving Docker Credential Issues.</h1>
<p>When pulling images, Docker failed with:</p>
<pre><code class="language-plaintext">docker-credential-desktop.exe: exec format error
</code></pre>
<p>This occurs because WSL is attempting to use Windows credential helpers, so I'll remove them, forcing Docker to operate without incompatible helpers:</p>
<pre><code class="language-shell">nano ~/.docker/config.json
</code></pre>
<p>Replace the contents with (or leave blank):</p>
<pre><code class="language-json">{
  "auths": {}
}
</code></pre>
<hr />
<h1>Verifying Persistence.</h1>
<p>The most important step is confirming that the database is not ephemeral.</p>
<p>Inside the Postgres container, run:</p>
<pre><code class="language-shell">docker exec immich_postgres ls /var/lib/postgresql/data
</code></pre>
<p>The presence of <code>PG_VERSION</code> confirms initialisation.</p>
<p>Mount verification:</p>
<pre><code class="language-shell">mount | grep /mnt/immich
</code></pre>
<p>Mount inspection:</p>
<pre><code class="language-shell">docker inspect immich_postgres
</code></pre>
<p>This shows that the data directory is bound to persistent storage.</p>
<hr />
<h1>Networking and Access.</h1>
<p>Docker publishes port 2283 to the host:</p>
<pre><code class="language-plaintext">0.0.0.0:2283 → 2283
</code></pre>
<p>Let's create a Windows TCP port-forwarding rule using the built-in IP Helper service.</p>
<p>Retrieve the WSL instance IP address:</p>
<pre><code class="language-powershell">wsl hostname -I
</code></pre>
<pre><code class="language-powershell">$winIp = "&lt;WINDOWS_SERVER_IP&gt;"
$wslIp = "&lt;WSL_IP&gt;"

netsh interface portproxy add v4tov4 listenaddress=$winIp listenport=2283 connectaddress=$wslIp connectport=2283
</code></pre>
<p>Portproxy rules are stored in the registry under:</p>
<p><code>HKLM\SYSTEM\CurrentControlSet\Services\PortProxy\v4tov4\tcp</code></p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/4507dce6-1910-4aae-ad83-dd6fa37c1fad.png" alt="" style="display:block;margin:0 auto" />

<p>We can also confirm that this rule is present with:</p>
<pre><code class="language-plaintext">C:\Windows\System32&gt;netsh interface portproxy show all

Listen on ipv4:             Connect to ipv4:

Address         Port        Address         Port
--------------- ----------  --------------- ----------
10.0.0.XXX      2283        172.XX.XXX.XX   2283
</code></pre>
<p>I now need to create a Windows firewall rule to allow TCP port 2283, which can be achieved with the following PowerShell command:</p>
<pre><code class="language-powershell">New-NetFirewallRule -DisplayName "Immich TCP 2283" `
    -Direction Inbound -Protocol TCP -LocalPort 2283 -Action Allow
</code></pre>
<p>We can also use the following commands to confirm that netstat shows a listener on <code>10.0.0.&lt;your-server-ip&gt;:2283</code> or <code>0.0.0.0:2283</code>, i.e. that the service is actually exposed to the LAN.</p>
<pre><code class="language-powershell">Test-NetConnection -ComputerName 10.0.0.X -Port 2283
netstat -ano | findstr :2283
</code></pre>
<pre><code class="language-plaintext">PS C:\WINDOWS\system32&gt; Test-NetConnection -ComputerName 10.0.0.0 -Port 2283

ComputerName     : 10.0.0.0
RemoteAddress    : 10.0.0.0
RemotePort       : 2283
InterfaceAlias   : Ethernet 3
SourceAddress    : 10.0.0.0
TcpTestSucceeded : True

PS C:\WINDOWS\system32&gt; netstat -ano | findstr :2283
  TCP    0.0.0.0:2283           0.0.0.0:0              LISTENING       12276
  TCP    10.0.0.0:53504         10.0.0.0:2283          TIME_WAIT       0
  TCP    10.0.0.0:60657         10.0.0.0:2283          TIME_WAIT       0
  TCP    [::]:2283              [::]:0                 LISTENING       12276
  TCP    [::1]:2283             [::]:0                 LISTENING       4672
</code></pre>
<p>Immich then becomes accessible on:</p>
<pre><code class="language-plaintext">http://server-ip:2283
</code></pre>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/1084e64b-118d-4829-ade1-64c0ffedd4d2.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h1>WSL Networking Architecture.</h1>
<p>WSL operates behind a Hyper-V NAT interface:</p>
<ul>
<li><p>Internal subnet: 172.x.x.x.</p>
</li>
<li><p>Host bridges traffic.</p>
</li>
<li><p>Docker Desktop handles forwarding.</p>
</li>
</ul>
<p>In this deployment, manual portproxy rules were required because Docker Desktop was not exposing container ports directly to the Windows host network.</p>
<p>Although the server was configured with only a single Hyper-V virtual network adapter connected to the external vSwitch, the system presents multiple logical network interfaces once WSL and Docker Desktop are installed.</p>
<p>From a Hyper-V perspective, the virtual machine has one primary adapter attached to the external switch, which provides normal LAN connectivity. This adapter is assigned an address via DHCP from the gateway, and handles domain access, DNS resolution, internet access, and communication with other servers on the network.</p>
<p>Alongside this, a second virtual adapter appears: <code>vEthernet (WSL (Hyper-V firewall))</code>, with an address in the 172.28.128.0/20 range.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/eba71b9d-114e-412e-84cd-f9adfefdf8d3.png" alt="" style="display:block;margin:0 auto" />

<p>This adapter is not connected to the external vSwitch and is not manually created. It is automatically generated by WSL2 when the Linux subsystem is enabled. Internally, WSL runs inside a lightweight Hyper-V virtual machine. To allow Windows and Linux to communicate, Hyper-V creates a private NAT network between the host and the WSL VM.</p>
<p>This network operates as follows:</p>
<ul>
<li><p>WSL runs on an isolated internal subnet, typically in the 172.x.x.x private address range.</p>
</li>
<li><p>Windows acts as the gateway and NAT device for this subnet.</p>
</li>
<li><p>Traffic from WSL is translated and forwarded through the main 10.0.0.0/24 interface.</p>
</li>
<li><p>Inbound traffic must be explicitly forwarded.</p>
</li>
</ul>
<p>The 172.28.128.1 address therefore belongs to the Windows host side of the WSL virtual switch. The Linux environment sits behind it on the same subnet. This is why no default gateway is shown on that interface. It is not intended for external routing.</p>
<p>Importantly, this network is created and managed entirely by WSL and Hyper-V. It does not appear as a separate vSwitch in Hyper-V Manager because it is implemented as an internal virtual switch owned by the WSL platform.</p>
<p>Docker Desktop builds on top of this architecture. Containers run inside the WSL VM and inherit this private NAT network. As a result, container services are bound to the 172.x.x.x subnet by default, not directly to the server’s 10.0.0.0/24 interface.</p>
<p>In theory, Docker Desktop should automatically publish container ports to the Windows host and forward them to the external network. In practice, this forwarding was unreliable in this environment. Services were reachable from inside WSL but not consistently accessible from the LAN.</p>
<p>Because of this, manual portproxy rules were implemented on Windows to bridge traffic between the external interface and the WSL subnet. These rules explicitly forward connections from 10.0.0.XX to the relevant 172.28.x.x container addresses.</p>
<p>This design means that, despite appearing to have multiple “network adapters”, the system is still built around a single physical path to the network. The 10.0.0.0/24 adapter provides real connectivity, while the 172.28.0.0/20 adapter exists purely to support WSL and container isolation.</p>
<p>Understanding this separation was essential when diagnosing connectivity issues, firewall behaviour, and port publishing failures during deployment.</p>
<hr />
<h2>Secure WAN Access with Cloudflare Tunnel &amp; Conditional Access.</h2>
<p>For remote access, a Cloudflare Zero Trust Tunnel was optionally configured.</p>
<p>High-level architecture:</p>
<pre><code class="language-plaintext">Internet → Cloudflare → Tunnel → App Server → Immich
</code></pre>
<p>Advantages:</p>
<ul>
<li><p>No open inbound ports.</p>
</li>
<li><p>Zero Trust authentication.</p>
</li>
<li><p>TLS encryption.</p>
</li>
<li><p>Single Sign-On (SSO).</p>
</li>
<li><p>DDoS protection.</p>
</li>
</ul>
<p>This provided secure external access without exposing the server.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/20067197-7038-4071-8cd8-617ba785bad5.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/6802a0d80e92b43b83647d7c/93ea47c8-cc78-45c3-aa41-7bd2918eeae3.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>We can also create Conditional Access policies to govern access to Cloudflare's Zero Trust network, such as enforcing compliance requirements, setting browser controls, sign-in frequency, and token persistence, and blocking access based on risk signals during sign-in.</p>
</blockquote>
<hr />
<h1>Best Practices Applied.</h1>
<h3>Security.</h3>
<ul>
<li><p>Domain authentication.</p>
</li>
<li><p>Restricted SMB permissions.</p>
</li>
<li><p>Protected credentials file (root-only, mode 600).</p>
</li>
<li><p>No anonymous access.</p>
</li>
<li><p>Firewall hardening.</p>
</li>
</ul>
<h3>Reliability.</h3>
<ul>
<li><p>Local database storage.</p>
</li>
<li><p>Automatic restarts.</p>
</li>
<li><p>Persistent mounts.</p>
</li>
<li><p>Health checks.</p>
</li>
</ul>
<h3>Maintainability.</h3>
<ul>
<li><p>Versioned configuration.</p>
</li>
<li><p>Centralised storage.</p>
</li>
<li><p>Reproducible structure.</p>
</li>
<li><p>Documented procedures.</p>
</li>
</ul>
<hr />
<h1>Known Limitations.</h1>
<p>Despite the success, several limitations remain:</p>
<ol>
<li><p>WSL mounts depend on network availability.</p>
</li>
<li><p>Docker Desktop is a dependency.</p>
</li>
</ol>
<blockquote>
<p>In a production environment, you would install Docker Engine in a Linux VM directly, rather than using Docker Desktop.</p>
</blockquote>
<ol>
<li><p>Postgres backups are manual by default.</p>
</li>
<li><p>Requires domain connectivity.</p>
</li>
<li><p>SMB latency can affect performance.</p>
</li>
</ol>
<p>These are acceptable trade-offs in my environment :)</p>
<hr />
<h1>Backup Strategy.</h1>
<p>A simple database backup:</p>
<pre><code class="language-bash">docker exec immich_postgres pg_dump -U postgres immich &gt; /mnt/immich/backups/db.sql
</code></pre>
<p>Restore example:</p>
<pre><code class="language-shell">docker exec -i immich_postgres psql -U postgres immich &lt; /mnt/immich/backups/db.sql
</code></pre>
<p>Verify backup integrity:</p>
<pre><code class="language-shell">ls -lh /mnt/immich/backups/db.sql
</code></pre>
<ul>
<li><p>Media is already stored on the file server.</p>
</li>
<li><p>Backups can be automated with cron.</p>
</li>
</ul>
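<p>As a sketch of what that automation could look like, the script below date-stamps each dump and prunes old ones. The paths, retention count, and the stubbed <code>DUMP_CMD</code> are illustrative; in production <code>DUMP_CMD</code> would be the <code>docker exec immich_postgres pg_dump -U postgres immich</code> command shown above:</p>
<pre><code class="language-shell"># Nightly backup sketch: date-stamped dumps plus simple retention.
# DUMP_CMD is stubbed so the surrounding logic is runnable anywhere.
BACKUP_DIR="${BACKUP_DIR:-/tmp/immich-backups}"
DUMP_CMD="${DUMP_CMD:-echo -- demo dump --}"
STAMP=$(date +%Y-%m-%d)
mkdir -p "$BACKUP_DIR"
# tee writes the dump to its date-stamped file.
$DUMP_CMD | tee "$BACKUP_DIR/db-$STAMP.sql"
# Keep only the 14 newest dumps; xargs -r does nothing when the list is empty.
ls -1t "$BACKUP_DIR"/db-*.sql | tail -n +15 | xargs -r rm -f
</code></pre>
<p>A crontab entry such as <code>0 2 * * * /usr/local/bin/backup-immich-db.sh</code> would then run it nightly at 02:00.</p>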
<hr />
<h1>Verification Commands.</h1>
<p>These commands can be used at any time to validate the deployment.</p>
<p>Mount status:</p>
<pre><code class="language-bash">mount | grep immich
</code></pre>
<p>Credentials:</p>
<pre><code class="language-bash">sudo ls -l /etc/cifs-immich.creds
</code></pre>
<p>fstab:</p>
<pre><code class="language-bash">grep immich /etc/fstab
</code></pre>
<p>Containers:</p>
<pre><code class="language-bash">docker compose ps
</code></pre>
<p>Database integrity:</p>
<pre><code class="language-bash">docker exec immich_postgres test -f /var/lib/postgresql/data/PG_VERSION &amp;&amp; echo OK
</code></pre>
<p>Firewall:</p>
<pre><code class="language-powershell">Get-NetFirewallRule | Where DisplayName -Like "*Immich*"
</code></pre>
<hr />
<h1>Failover Testing.</h1>
<p>To validate resilience, the following failure scenarios should be tested:</p>
<ol>
<li>Restart WSL:</li>
</ol>
<pre><code class="language-powershell">wsl --shutdown
</code></pre>
<ol>
<li>Restart Docker:</li>
</ol>
<pre><code class="language-powershell">Restart-Service com.docker.service
</code></pre>
<ol>
<li><p>Reboot the host server.</p>
</li>
<li><p>Post recovery validation:</p>
</li>
</ol>
<pre><code class="language-shell">docker compose ps
mount | grep immich
</code></pre>
<p>All services and mounts should reinitialise automatically.</p>
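<p>Transient startup delays after a reboot can make these checks fail spuriously, so it helps to wrap them in a small retry loop. A generic sketch — the helper name and the attempt/delay defaults are illustrative, not part of the deployment:</p>
<pre><code class="language-shell"># retry CMD...: re-run a command until it succeeds or attempts run out.
retry() {
  local attempts="${ATTEMPTS:-10}" delay="${DELAY:-3}" i
  for i in $(seq "$attempts"); do
    if "$@"; then
      return 0
    fi
    sleep "$delay"
  done
  return 1
}

# Example usage after a reboot:
#   retry mountpoint -q /mnt/immich
#   retry docker compose ps
retry true
</code></pre>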
<hr />
<h1>Conclusion.</h1>
<p>This deployment demonstrates that it is entirely possible to run a reliable, persistent, Linux-first application on Windows Server using Docker and WSL.</p>
<p>By combining:</p>
<ul>
<li><p>Domain authentication.</p>
</li>
<li><p>SMB-backed media storage.</p>
</li>
<li><p>Localised database persistence.</p>
</li>
<li><p>Container orchestration.</p>
</li>
<li><p>Secure networking.</p>
</li>
</ul>
<p>It is possible to achieve an enterprise-grade deployment using largely open-source tooling.</p>
<p>The key lesson is that persistence is not automatic. It must be deliberately designed. Once storage, authentication, and mounts are correct, containers become highly resilient and portable.</p>
<p>This approach provides a strong foundation for self-hosted services in mixed Windows and Linux environments.</p>
<hr />
<h1>docker-compose.yml</h1>
<pre><code class="language-yaml">#
# WARNING: To install Immich, follow our guide: https://docs.immich.app/install/docker-compose
#
# Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.

name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.transcoding.yml
    #   service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    volumes:
      # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
      - ${UPLOAD_LOCATION}:/data
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - '2283:2283'
    depends_on:
      - redis
      - database
    restart: always
    healthcheck:
      disable: false

  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends: # uncomment this section for hardware acceleration - see https://docs.immich.app/features/ml-hardware-acceleration
    #   file: hwaccel.ml.yml
    #   service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    healthcheck:
      disable: false

  redis:
    container_name: immich_redis
    image: docker.io/valkey/valkey:9@sha256:546304417feac0874c3dd576e0952c6bb8f06bb4093ea0c9ca303c73cf458f63
    healthcheck:
      test: redis-cli ping || exit 1
    restart: always

  database:
    container_name: immich_postgres
    image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:bcf63357191b76a916ae5eb93464d65c07511da41e3bf7a8416db519b40b1c23
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
      # Uncomment the DB_STORAGE_TYPE: 'HDD' var if your database isn't stored on SSDs
      # DB_STORAGE_TYPE: 'HDD'
    volumes:
      # Do not edit the next line. If you want to change the database storage location on your system, edit the value of DB_DATA_LOCATION in the .env file
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    shm_size: 128mb
    restart: always
    healthcheck:
      disable: false

volumes:
  model-cache:
</code></pre>
]]></content:encoded></item><item><title><![CDATA[ClickFix Forensics: Proving Execution Beyond the Browser]]></title><description><![CDATA[The Attack Explained.
ClickFix, sometimes described as “fake CAPTCHA” execution, is a social engineering technique where the attacker deliberately moves the critical step of the intrusion onto the user.
Instead of exploiting a vulnerability automatic...]]></description><link>https://blog.cdoherty.co.uk/clickfix-forensics-proving-execution-beyond-the-browser</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/clickfix-forensics-proving-execution-beyond-the-browser</guid><category><![CDATA[Clickfix]]></category><category><![CDATA[Scam]]></category><category><![CDATA[browser]]></category><category><![CDATA[Browsers]]></category><category><![CDATA[phishing]]></category><category><![CDATA[Malware]]></category><category><![CDATA[malware analysis]]></category><category><![CDATA[cybersecurity]]></category><category><![CDATA[#cybersecurity]]></category><category><![CDATA[CyberSec]]></category><category><![CDATA[captcha]]></category><category><![CDATA[recaptcha]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Sun, 21 Dec 2025 19:44:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766344514620/cd7f2922-62f3-4796-b7fe-fc3aa2338eda.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-the-attack-explained">The Attack Explained.</h1>
<p>ClickFix, sometimes described as “fake CAPTCHA” execution, is a social engineering technique where the attacker deliberately moves the critical step of the intrusion onto the user.</p>
<p>Instead of exploiting a vulnerability automatically, the attacker convinces the victim to run the payload themselves, most commonly by opening the Windows Run dialog and pasting a command presented as a “verification” step.</p>
<p>This user driven execution is a large part of why ClickFix has been effective against organisations with otherwise mature controls.</p>
<h2 id="heading-what-clickfix-actually-is">What ClickFix actually is.</h2>
<p>At a high level, ClickFix is an interaction pattern, not a single malware family. The attacker’s objective is to get a user to perform a sequence of actions that results in code execution under the user’s context. The lure is usually presented as a familiar web experience such as a CAPTCHA prompt, an “I am not a robot” verification, or a fake error banner that claims a quick manual fix is required.</p>
<p>ClickFix is described as an attack chain that commonly begins with phishing, malvertising, or a compromised website, then leads the user to a visual lure that instructs them to run a command, specifically because that user interaction can bypass some conventional automated detections.</p>
<p><img src="https://media.discordapp.net/attachments/1386818582721069066/1452373072211349535/image.png?ex=69499354&amp;is=694841d4&amp;hm=cc460db80691523ccd1b02154188a82067840046f7d7846e7278d23244ce03c6&amp;=&amp;format=webp&amp;quality=lossless&amp;width=750&amp;height=616" alt="Image" /></p>
<h2 id="heading-why-it-works-so-well">Why it works so well.</h2>
<p>ClickFix succeeds because it exploits normal problem solving behaviour. The page looks like a legitimate friction point that users are conditioned to complete quickly, and the instructions are presented in a “helpdesk style” voice that implies compliance is routine.</p>
<p>Technically, it also shifts the attacker’s risk. If the user runs the command, the execution often happens in a way that looks like legitimate administrative activity, especially in environments where PowerShell is widely used and script execution is not tightly controlled.</p>
<hr />
<h1 id="heading-the-clickfix-execution-pathway-in-windows">The ClickFix Execution Pathway In Windows.</h1>
<p>Although the exact payload varies, many ClickFix chains have a similar structure:</p>
<p>The site presents a prompt that encourages a copy and paste action. The copied content is a command, or a small “stager” that pulls the real payload from the internet. This is frequently designed to blend in with normal Windows tooling, using legitimate signed binaries that can execute script content.</p>
<p>In real incidents, you’ll often see execution via PowerShell, cmd, rundll32, or mshta. mshta is particularly notable because it is a Windows native binary designed to execute HTA script content and has been repeatedly abused as an execution proxy.</p>
<p>Once the initial command runs, typical next steps depend on the operator and payload, but commonly include downloading a second stage, establishing persistence, credential access, browser data harvesting, and then lateral movement or cloud pivoting where possible.</p>
<hr />
<h1 id="heading-detection">Detection.</h1>
<h2 id="heading-runmru-most-recently-used">RunMRU (Most Recently Used).</h2>
<p><em>RunMRU (Most Recently Used) is a Windows registry key that stores, on a per-user basis, the last 26 commands entered in the Run dialog (Win + R). The order in which those commands were entered is recorded in the key’s “MRUList” value, with the leftmost letter being the most recent entry, hence the name MRU.</em></p>
<p>To confirm whether code was actually entered and executed into Run, you can check Run history in the registry:</p>
<p><code>NTUSER.dat\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\RunMRU</code></p>
<p>Review the most recent entries. If the user followed the prompt, you will typically see something like PowerShell, cmd, mshta, rundll32, or a base64 encoded one liner recorded there. If you are logged in as the user:</p>
<p><code>HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU</code></p>
<p>If you are inspecting another profile, load NTUSER.dat and check:</p>
<p><code>HKU\&lt;SID&gt;\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU</code></p>
<p><img src="https://media.discordapp.net/attachments/1386818582721069066/1452373071896903832/image.png?ex=69499354&amp;is=694841d4&amp;hm=729266b431868027cbe417caae8567154a2ed4366ac0dc5a1902dfb81036b208&amp;=&amp;format=webp&amp;quality=lossless&amp;width=1555&amp;height=424" alt="Image" /></p>
<p>This is useful because it shows what was entered, not just what was clicked in the browser. The <code>MRUList</code> (REG_SZ) string tells you the order in which the commands were entered.</p>
<p>If you do see a malicious record, treat this as one input, not the conclusion. You still need to tie it to process execution, network activity, and post execution actions to understand impact.</p>
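<p>As a quick sketch, the RunMRU entries can be dumped in order with PowerShell. This assumes you are running in the context of the affected user; adjust the hive path if you are working from a loaded NTUSER.dat instead.</p>
<pre><code class="lang-powershell"># Dump RunMRU for the current user, most recent first.
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU'
if (Test-Path $key) {
    $mru = Get-ItemProperty -Path $key
    foreach ($letter in $mru.MRUList.ToCharArray()) {
        # Entries are stored as "command\1"; trim the trailing "\1" marker.
        '{0}: {1}' -f $letter, ($mru.("$letter") -replace '\\1$', '')
    }
}
</code></pre>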
<hr />
<h1 id="heading-what-to-expect-after-execution">What To Expect After Execution.</h1>
<p>If you are investigating a suspected ClickFix event, it helps to think in terms of what an operator needs immediately after they land:</p>
<p>They need reliability, so they often use short stagers and then pull the rest from the network.</p>
<p>They need stealth, so they often use built in tooling and reduce their on disk footprint.</p>
<p>They need access, so they will frequently attempt credential theft, token theft, or session hijacking, especially if the user has privileged access.</p>
<p>This is why a ClickFix investigation should not stop at “did something run”. The deeper question is what access was gained, what was touched, and what capability was established afterwards.</p>
<hr />
<h1 id="heading-investigation">Investigation.</h1>
<h3 id="heading-step-1-establish-scope-and-time-boundaries">Step 1: Establish scope and time boundaries.</h3>
<p>Start with time anchoring. Determine when the user saw the prompt, when they took the action, and what device they used. If you have EDR, this becomes an initial timeline exercise: browser process, clipboard related activity if available, then the first suspicious child process and any outbound connections that follow.</p>
<p>Where possible, treat this as an incident until you can prove otherwise. ClickFix is designed to create uncertainty and urgency.</p>
<h3 id="heading-step-2-identify-the-first-execution-and-the-parent-chain">Step 2: Identify the first execution and the parent chain.</h3>
<p>The most useful early signal is the process tree. You are looking for an unusual transition from a user initiated action into a scripting or execution host.</p>
<p>In many environments, you will see one of two patterns:</p>
<ul>
<li><p>A browser process that leads quickly into script execution via a Windows binary.</p>
</li>
<li><p>A user opened execution path such as the Run dialog followed by a scripting host.</p>
</li>
</ul>
<p>At this stage, capture command line arguments, parent process, network connections, and any spawned child processes. This is where you often find the “stager” that reveals the next infrastructure and payload stage.</p>
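<p>If the host has no EDR, a rough equivalent can be sketched from Security event 4688, assuming process creation auditing with command line capture is enabled. The parent filter on explorer.exe reflects the Run dialog pathway and is an assumption you should tune for your estate.</p>
<pre><code class="lang-powershell"># Hunt 4688 events for script hosts spawned directly by explorer.exe.
$scriptHosts = 'powershell.exe','cmd.exe','mshta.exe','rundll32.exe','wscript.exe','cscript.exe'
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4688; StartTime = (Get-Date).AddHours(-24) } |
ForEach-Object {
    $xml  = [xml]$_.ToXml()
    $data = @{}
    $xml.Event.EventData.Data | ForEach-Object { $data[$_.Name] = $_.'#text' }
    if (($scriptHosts -contains (Split-Path $data['NewProcessName'] -Leaf)) -and
        ((Split-Path $data['ParentProcessName'] -Leaf) -eq 'explorer.exe')) {
        [pscustomobject]@{
            Time    = $_.TimeCreated
            Parent  = $data['ParentProcessName']
            Process = $data['NewProcessName']
            CmdLine = $data['CommandLine']   # Only populated if command line auditing is on.
        }
    }
}
</code></pre>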
<h3 id="heading-step-3-collect-endpoint-evidence-that-reflects-user-driven-execution">Step 3: Collect endpoint evidence that reflects user driven execution.</h3>
<p>This is where Windows artefacts matter. You want artefacts that record what the user actually typed or pasted, rather than what the browser displayed.</p>
<h3 id="heading-step-4-hunt-for-follow-on-behaviours-not-just-the-first-command">Step 4: Hunt for follow on behaviours, not just the first command.</h3>
<p>Once you identify the first execution, pivot into “what changed” and “what persisted”.</p>
<p>Look for new scheduled tasks, new services, unusual registry autoruns, newly dropped executables in user writeable locations, unusual PowerShell usage patterns, and outbound connections to new domains or IPs. If your EDR supports it, hunt for unusual child processes of common scripting hosts.</p>
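<p>A first pass over those follow on behaviours can be sketched with built in cmdlets. Treat this as a triage aid run elevated on the suspect host, not a replacement for EDR telemetry.</p>
<pre><code class="lang-powershell">$since = (Get-Date).AddDays(-1)

# Scheduled tasks registered recently (Date comes from the task XML and can be empty).
Get-ScheduledTask | Where-Object { $_.Date -and [datetime]$_.Date -gt $since } |
    Select-Object TaskName, TaskPath, Date

# New service installations (System event 7045, "A service was installed in the system").
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 7045; StartTime = $since } |
    Select-Object TimeCreated, Message

# Run key autoruns for manual review.
'HKCU:\Software\Microsoft\Windows\CurrentVersion\Run',
'HKLM:\Software\Microsoft\Windows\CurrentVersion\Run' |
    ForEach-Object { Get-ItemProperty -Path $_ -ErrorAction SilentlyContinue }
</code></pre>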
<p>If mshta is present in the chain, treat it as higher risk because it can execute script content embedded in HTML and is frequently used as a proxy for code execution.</p>
<h3 id="heading-step-5-expand-to-identity-and-cloud-where-applicable">Step 5: Expand to identity and cloud where applicable.</h3>
<p>ClickFix is often the first step, not the final objective. If the compromised user has access to Microsoft 365 or other SaaS platforms, you should treat identity as in scope immediately.</p>
<p>This typically means reviewing sign in activity, risky sign ins if you have them, unusual OAuth consent grants, mailbox rules, forwarding changes, unusual file sharing activity, and new devices or tokens. The exact telemetry depends on your stack, but the principle is consistent: verify whether the attacker converted endpoint execution into durable access elsewhere.</p>
<hr />
<h1 id="heading-responding-to-an-attack">Responding to an attack.</h1>
<p>A solid ClickFix response usually needs to cover containment, credential and session hygiene, eradication, and resilience.</p>
<p><strong>Containment should focus on isolating the affected endpoint and limiting further outbound access</strong> where feasible, especially if you have evidence of a live second stage.</p>
<p>Credential and session hygiene should assume the user context is compromised. Password reset alone may not be sufficient if tokens were captured, so you should revoke sessions where your identity platform allows it and ensure privileged access is reviewed.</p>
<p>Eradication should remove persistence mechanisms, remediate dropped binaries, and validate that the initial execution vector is no longer present.</p>
<p>Resilience should include closing the specific execution pathways and tightening controls that make ClickFix succeed in the first place.</p>
<hr />
<h1 id="heading-prevention-controls-that-meaningfully-reduce-clickfix-risk">Prevention Controls That Meaningfully Reduce ClickFix Risk.</h1>
<p>ClickFix relies heavily on built in tools being available for arbitrary execution. The most effective prevention work therefore tends to reduce easy execution paths and strengthen detection where you cannot remove them.</p>
<p>Microsoft Defender for Endpoint Attack Surface Reduction rules are a strong baseline in many Windows estates, especially where Office based initial access is common. These rules are documented and can block common child process creation patterns that attackers rely on.</p>
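<p>As an illustration, two ASR rules that bear directly on this pattern can be staged with Defender’s PowerShell module. Verify the GUIDs below against Microsoft’s published ASR rule reference before deploying, and start in audit mode so you can measure impact.</p>
<pre><code class="lang-powershell"># Stage two ClickFix-relevant ASR rules in audit mode (run elevated).
# 5BEB7EFE... = Block execution of potentially obfuscated scripts.
# D3E037E1... = Block JavaScript or VBScript from launching downloaded executable content.
$ruleIds = '5BEB7EFE-FD9A-4556-801D-275E5FFC04CC',
           'D3E037E1-3EB8-44C8-A917-57927947596D'
foreach ($id in $ruleIds) {
    Add-MpPreference -AttackSurfaceReductionRules_Ids $id -AttackSurfaceReductionRules_Actions AuditMode
}
# Review the audit events, then switch the action to Enabled once you are happy.
</code></pre>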
<p>Beyond ASR, consider application control approaches such as allow listing for scripting hosts, restrictions on LOLBins that are rarely used in your organisation, and tighter PowerShell governance through logging, constrained execution, and script signing where it is practical. Pair this with web filtering, stronger browser controls, and user education that explicitly calls out the “copy and paste a verification command” pattern.</p>
<p>The important point is that ClickFix is not beaten by a single setting. It is reduced by layered friction: fewer execution paths, stronger visibility, and faster containment when the tactic slips through.</p>
<p>You could also consider blocking the Run Dialog for end users, if it isn’t needed.</p>
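<p>For reference, hiding the Run dialog comes down to the NoRun Explorer policy. The sketch below sets it for the current user; in practice you would deliver this through Group Policy or an Intune policy rather than per host.</p>
<pre><code class="lang-powershell"># Disable the Run dialog for the current user via the NoRun policy value.
$path = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name 'NoRun' -Value 1 -PropertyType DWord -Force | Out-Null
# Takes effect at the next logon, or after explorer.exe restarts.
</code></pre>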
<hr />
<h1 id="heading-closing-thoughts">Closing Thoughts.</h1>
<p>ClickFix is a reminder that some of the most damaging intrusions still begin with persuasion, not exploitation. If your investigation approach centres on process trees, endpoint artefacts, and follow on behaviours, you can move from uncertainty to clarity quickly, and you can shape controls that make the tactic far less reliable for attackers.</p>
]]></content:encoded></item><item><title><![CDATA[LAPS: The Local Administrator Password Solution For Windows Devices In Entra ID.]]></title><description><![CDATA[Overview.
Prerequisites.
Join types.
LAPS is only supported on:

Microsoft Entra joined devices.

Microsoft Entra hybrid joined devices.


Microsoft Entra registered devices are not supported.
License requirements.
LAPS is available to all customers ...]]></description><link>https://blog.cdoherty.co.uk/laps-the-local-administrator-password-solution-for-windows-devices-in-entra-id</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/laps-the-local-administrator-password-solution-for-windows-devices-in-entra-id</guid><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Sun, 30 Nov 2025 18:40:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764427695226/3f7bcf5b-5c62-4df5-9a5e-70bd563f0ef9.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-overview">Overview.</h1>
<h1 id="heading-prerequisites">Prerequisites.</h1>
<h2 id="heading-join-types">Join types.</h2>
<p>LAPS is only supported on:</p>
<ul>
<li><p>Microsoft Entra joined devices.</p>
</li>
<li><p>Microsoft Entra hybrid joined devices.</p>
</li>
</ul>
<p>Microsoft Entra registered devices are not supported.</p>
<h2 id="heading-license-requirements">License requirements.</h2>
<p>LAPS is available to all customers with Microsoft Entra ID Free or higher licenses. Other related features like administrative units, custom roles, Conditional Access, and Intune have other licensing requirements.</p>
<h2 id="heading-required-roles-and-permissions">Required roles and permissions.</h2>
<p>Beyond the built-in Microsoft Entra roles such as Cloud Device Administrator and Intune Administrator, which are permitted to read device local credentials (the <code>microsoft.directory/deviceLocalCredentials/password/read</code> action), you can use Microsoft Entra custom roles or administrative units to authorise local administrator password recovery.</p>
<hr />
<h1 id="heading-enabling-windows-laps-within-microsoft-entra-id">Enabling Windows LAPS Within Microsoft Entra ID.</h1>
<p>To enable Windows LAPS with Microsoft Entra ID, you must take actions in both Microsoft Entra ID and on the devices you wish to manage. We recommend organizations <a target="_blank" href="https://learn.microsoft.com/en-us/mem/intune/protect/windows-laps-policy">manage Windows LAPS using Microsoft Intune</a>. If your devices are Microsoft Entra joined but aren't using, or can't use, Microsoft Intune, you can deploy Windows LAPS for Microsoft Entra ID manually. For more information, see the article <a target="_blank" href="https://learn.microsoft.com/en-us/windows-server/identity/laps/laps-management-policy-settings">Configure Windows LAPS policy settings</a>.</p>
<ol>
<li><p>Sign in to the <a target="_blank" href="https://entra.microsoft.com/">Microsoft Entra admin center</a> as at least a <a target="_blank" href="https://learn.microsoft.com/en-gb/entra/identity/role-based-access-control/permissions-reference#cloud-device-administrator">Cloud Device Administrator</a>.</p>
</li>
<li><p>Navigate to Entra ID &gt; Devices &gt; Overview &gt; Device Settings.</p>
</li>
<li><p>Select <strong>Yes</strong> for the <strong>Enable Local Administrator Password Solution (LAPS)</strong> setting, then select <strong>Save</strong>. You might also use the Microsoft Graph API <a target="_blank" href="https://learn.microsoft.com/en-us/graph/api/deviceregistrationpolicy-update?view=graph-rest-beta&amp;preserve-view=true">Update deviceRegistrationPolicy</a> to complete this task.</p>
</li>
</ol>
<h1 id="heading-deploying-windows-laps-using-microsoft-intune">Deploying Windows LAPS Using Microsoft Intune.</h1>
<blockquote>
<p>Ensure the prerequisites for Intune to support Windows LAPS in your tenant are met before creating policies. Intune's LAPS policies don't create new accounts or passwords. Instead, they manage an account that's already on the device.</p>
</blockquote>
<ol>
<li><p>Sign in to the Microsoft Intune admin center and go to Endpoint security &gt; Account protection, and then select <code>Create policy</code>.</p>
</li>
<li><p>Set the <em>Platform</em> to <strong>Windows</strong>, <em>Profile</em> to <strong>Local admin password solution (Windows LAPS)</strong>, and then select <strong>Create</strong>.</p>
</li>
</ol>
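<p>Once the policy applies and a device rotates its password, recovery can be sketched with the Windows LAPS PowerShell module. The device name below is a placeholder, and the call assumes your signed in account holds a role authorised to read local credentials.</p>
<pre><code class="lang-powershell"># Retrieve the current LAPS-managed password for one device via Microsoft Graph.
Connect-MgGraph -Scopes 'DeviceLocalCredential.Read.All'
Get-LapsAADPassword -DeviceIds 'CONTOSO-LAPTOP-01' -IncludePasswords -AsPlainText
</code></pre>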
<hr />
<h1 id="heading-monitoring-laps-password-access-via-microsoft-sentineldefender-xdr">Monitoring LAPS Password Access Via Microsoft Sentinel/Defender XDR.</h1>
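<p>Every password recovery lands in the Microsoft Entra audit log, which both Sentinel and Defender XDR can consume. As a rough sketch using Graph PowerShell, you can pull recent recovery events directly; the activity name below is an assumption, so confirm the exact string in your tenant’s audit log before building alerting on it.</p>
<pre><code class="lang-powershell"># List recent LAPS password recovery events from the Entra audit log.
# NOTE: the activityDisplayName string is an assumption; verify it in your tenant.
Connect-MgGraph -Scopes 'AuditLog.Read.All'
Get-MgAuditLogDirectoryAudit -Filter "activityDisplayName eq 'Recover device local administrator password'" |
    Select-Object ActivityDateTime,
                  @{ n = 'Actor'; e = { $_.InitiatedBy.User.UserPrincipalName } },
                  Result
</code></pre>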
<hr />
<h1 id="heading-references">References.</h1>
<p><a target="_blank" href="https://learn.microsoft.com/en-gb/entra/identity/devices/howto-manage-local-admin-passwords">Use Windows Local Administrator Password Solution (LAPS) with Microsoft Entra ID - Microsoft Entra ID | Microsoft Learn</a></p>
<p><a target="_blank" href="https://learn.microsoft.com/en-us/intune/intune-service/protect/windows-laps-policy">Deploy Intune policies to manage Windows LAPS - Microsoft Intune | Microsoft Learn</a></p>
]]></content:encoded></item><item><title><![CDATA[SOC Casefile: Microsoft 365 Account Compromise Investigation.]]></title><description><![CDATA[Introduction.
A SOC investigation following a compromised user typically takes between 45 minutes and 3 hours, depending on the volume of user activity, the quality of available audit data, the complexity of the attacker’s behaviour and whether any s...]]></description><link>https://blog.cdoherty.co.uk/soc-casefile-microsoft-365-account-compromise-investigation</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/soc-casefile-microsoft-365-account-compromise-investigation</guid><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Sun, 30 Nov 2025 18:37:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764526600825/d29db30b-7e2f-44ce-a0b3-6feaac39579c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction.</h1>
<p>A SOC investigation following a compromised user typically takes between 45 minutes and 3 hours, depending on the volume of user activity, the quality of available audit data, the complexity of the attacker’s behaviour and whether any signs of exfiltration or mailbox manipulation require deeper analysis.</p>
<p>Straightforward cases involving a single suspicious sign-in and minimal follow-on activity usually sit at the lower end of that range, while incidents involving large-scale data access, unusual file-sharing behaviour or signs of targeted exfiltration take longer because they raise concerns around compliance, data sensitivity, regulatory obligations and potential security impact.</p>
<hr />
<h1 id="heading-initial-alert-amp-triage">Initial Alert &amp; Triage.</h1>
<hr />
<h1 id="heading-containment-and-identity-recovery">Containment and Identity Recovery.</h1>
<hr />
<h1 id="heading-sign-in-timeline-reconstruction">Sign-In Timeline Reconstruction.</h1>
<hr />
<h1 id="heading-mailbox-and-exchange-online-forensics">Mailbox and Exchange Online Forensics.</h1>
<hr />
<h1 id="heading-sharepoint-onedrive-and-exfiltration-overview">SharePoint, OneDrive, and Exfiltration Overview.</h1>
<hr />
<h1 id="heading-oauth-application-and-token-misuse-assessment">OAuth Application and Token Misuse Assessment.</h1>
<hr />
<h1 id="heading-defender-signals-and-endpoint-correlation">Defender Signals and Endpoint Correlation.</h1>
<hr />
<h1 id="heading-impact-assessment-and-compliance-review">Impact Assessment and Compliance Review.</h1>
<hr />
<h1 id="heading-remediation-hardening-and-closure">Remediation, Hardening and Closure.</h1>
<hr />
]]></content:encoded></item><item><title><![CDATA[Deploying Level RMM Using A Microsoft Intune Platform Script.]]></title><description><![CDATA[Overview.
Level is a lightweight remote monitoring and management platform for Windows devices. It can be deployed manually, although most organisations prefer to automate installation through device management tooling.
This article explains how to d...]]></description><link>https://blog.cdoherty.co.uk/deploying-level-rmm-using-a-microsoft-intune-platform-script</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/deploying-level-rmm-using-a-microsoft-intune-platform-script</guid><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Sun, 30 Nov 2025 16:20:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764517476916/43ba0909-0cf7-45d2-8f9e-f83b39a32757.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-overview">Overview.</h1>
<p>Level is a lightweight remote monitoring and management platform for Windows devices. It can be deployed manually, although most organisations prefer to automate installation through device management tooling.</p>
<p>This article explains how to deploy the Level agent using a Microsoft Intune platform script. This approach provides a reliable and centrally managed installation path and ensures that devices are onboarded in a predictable way.</p>
<p>The method described here is suitable for both small scale and full fleet rollouts and is one of the simplest ways to ensure that the Level agent installs silently without user interaction.</p>
<hr />
<h1 id="heading-understanding-the-level-silent-installer">Understanding The Level Silent Installer.</h1>
<p>Before creating the script, it is important to understand what the Level silent installer does. Level generates an API key which is included in the installation arguments. This key does not provide administrative access to the Level platform. It is used only to enroll the device during installation.</p>
<p>Once the device is enrolled, the key has no further active purpose unless it is reused elsewhere.</p>
<p>The installer file is downloaded from Level directly and is executed by the script on the target device. Intune then reports the success or failure of the script to assist with troubleshooting.</p>
<p>For example:</p>
<pre><code class="lang-powershell">$args = "LEVEL_API_KEY=XXXXXXXXXXXXXXXXXXXXXXXX";
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12;
$tempFile = Join-Path ([System.IO.Path]::GetTempPath()) "level.msi";
$ProgressPreference = 'SilentlyContinue';
Invoke-WebRequest -Uri "https://downloads.level.io/level.msi" -OutFile $tempFile;
$ProgressPreference = 'Continue';
Start-Process msiexec.exe -Wait -ArgumentList "/i `"$tempFile`" $args /qn";
</code></pre>
<p>This script instructs the device to download the Level MSI installer from the Level content delivery network using TLS version 1.2 in order to ensure a secure connection. The file is written to the system’s temporary directory and then executed through msiexec as a completely silent installation using the <code>/qn</code> switch.</p>
<hr />
<h1 id="heading-creating-the-platform-script">Creating The Platform Script.</h1>
<ol>
<li><p>Sign in to the <a target="_blank" href="https://app.level.io/devices">Level RMM platform</a> and copy the silent install command for windows. This will look something like the following:</p>
<ol>
<li><pre><code class="lang-powershell">   <span class="hljs-variable">$args</span> = <span class="hljs-string">"LEVEL_API_KEY=XXXXXXXXXXXXXXXXXXXXXXXX"</span>; [<span class="hljs-type">Net.ServicePointManager</span>]::SecurityProtocol = [<span class="hljs-type">Net.SecurityProtocolType</span>]::Tls12; <span class="hljs-variable">$tempFile</span> = <span class="hljs-built_in">Join-Path</span> ([<span class="hljs-type">System.IO.Path</span>]::GetTempPath()) <span class="hljs-string">"level.msi"</span>; <span class="hljs-variable">$ProgressPreference</span> = <span class="hljs-string">'SilentlyContinue'</span>; <span class="hljs-built_in">Invoke-WebRequest</span> <span class="hljs-literal">-Uri</span> <span class="hljs-string">"https://downloads.level.io/level.msi"</span> <span class="hljs-literal">-OutFile</span> <span class="hljs-variable">$tempFile</span>; <span class="hljs-variable">$ProgressPreference</span> = <span class="hljs-string">'Continue'</span>; <span class="hljs-built_in">Start-Process</span> msiexec.exe <span class="hljs-literal">-Wait</span> <span class="hljs-literal">-ArgumentList</span> <span class="hljs-string">"/i `"<span class="hljs-variable">$tempFile</span>`" <span class="hljs-variable">$args</span> /qn"</span>;
</code></pre>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764519183113/4c080fa9-f36a-4403-8aa6-ecfa2136fcd5.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
</li>
<li><p>Create a <code>.ps1</code> file, paste the silent install command obtained in step 1, and save the file. <em>You will need to upload this file to Microsoft Intune in step 7.</em></p>
</li>
<li><p>Sign in to the Microsoft Intune admin center and go to Devices &gt; Scripts and remediations &gt; Platform scripts.</p>
</li>
<li><p>Select <code>Add</code>, then <code>Windows 10 and later</code>.</p>
</li>
<li><p>Provide a name and description for the script, and choose Next.</p>
</li>
<li><p>On the Script settings blade;</p>
<ol>
<li><p>Set <code>Run this script using the logged on credentials</code> to <code>No</code>.</p>
</li>
<li><p>Set <code>Enforce script signature check</code> to <code>No</code>.</p>
</li>
<li><p>Set <code>Run script in 64 bit PowerShell Host</code> to <code>No</code>.</p>
</li>
</ol>
</li>
<li><p>Upload the Level RMM PowerShell script.</p>
</li>
<li><p>Select Next.</p>
</li>
<li><p>Specify the assignments and choose Next. <em>It is recommended to create a dedicated device group for any devices that will be onboarded to Level so the Level agent can be deployed in a controlled and predictable manner.</em></p>
</li>
<li><p>Select Create.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764518417264/629d7712-b68d-42f9-b915-33177ed19086.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-post-deployment-verification">Post Deployment Verification.</h1>
<p>Once the script has been assigned and devices begin to check in, Intune will report whether the script executed successfully. You can review results within the Platform script view in the Intune portal.</p>
<p>You should also sign in to the Level dashboard to confirm that newly onboarded devices appear as expected. After a few minutes each device should report its status, policies should apply and the agent should begin sending telemetry.</p>
<p>If you want a higher level of assurance, you can also create a remediation script within Intune that checks for the presence of the Level service and confirms that the installation has not been removed or disrupted. This can help detect devices where users or third party software have interfered with the agent.</p>
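<p>A minimal detection of that kind might look like the following. The service name pattern is an assumption, so confirm the real name with Get-Service on a healthy onboarded device first; the exit codes are what Intune remediations use to decide whether the paired remediation script should run.</p>
<pre><code class="lang-powershell"># Detection script: confirm the Level agent service exists and is running.
$svc = Get-Service -Name 'Level*' -ErrorAction SilentlyContinue
if ($svc -and $svc.Status -eq 'Running') {
    Write-Output 'Level agent present and running.'
    exit 0    # Compliant, no remediation needed.
}
Write-Output 'Level agent missing or stopped.'
exit 1        # Triggers the paired remediation script.
</code></pre>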
<hr />
]]></content:encoded></item><item><title><![CDATA[Attack Simulation Training - Microsoft Defender for Office 365 (MDO)]]></title><description><![CDATA[Overview.
Attack Simulation Training is a feature of Microsoft Defender for Office 365 (available with a Defender for Office Plan 2 license) that.

Launching An Attack Simulation.
Prerequisites.

Microsoft Defender for Office 365 (MDO) Plan 2 license...]]></description><link>https://blog.cdoherty.co.uk/attack-simulation-training-microsoft-defender-for-office-365-mdo</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/attack-simulation-training-microsoft-defender-for-office-365-mdo</guid><category><![CDATA[Defender for Endpoint]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[Microsoft Defender]]></category><category><![CDATA[microsoft security]]></category><category><![CDATA[phishing]]></category><category><![CDATA[simulation]]></category><category><![CDATA[Attack Simulation]]></category><category><![CDATA[Phishing training]]></category><category><![CDATA[Security]]></category><category><![CDATA[cyber security]]></category><category><![CDATA[#cybersecurity]]></category><category><![CDATA[Microsoft Defender for Office 365]]></category><category><![CDATA[Microsoft Defender for Identity]]></category><category><![CDATA[MicrosoftDefenderforAzure]]></category><category><![CDATA[MDE]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Sun, 16 Nov 2025 20:12:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763324092809/fdff5b00-bbda-404f-b937-47d710598401.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-overview">Overview.</h1>
<p>Attack Simulation Training is a feature of Microsoft Defender for Office 365 (available with a Defender for Office Plan 2 license) that lets you run realistic but benign phishing simulations against your own users, measure who is susceptible, and assign targeted follow-up training.</p>
<hr />
<h1 id="heading-launching-an-attack-simulation">Launching An Attack Simulation.</h1>
<h2 id="heading-prerequisites">Prerequisites.</h2>
<ul>
<li>Microsoft Defender for Office 365 (MDO) Plan 2 license.</li>
</ul>
<p>To launch a simulated phishing attack, perform the following steps:</p>
<ol>
<li><p><a target="_blank" href="https://learn.microsoft.com/en-us/defender-office-365/attack-simulation-training-training-campaigns">Navigate to the Microsoft Defender</a> portal <em>→ Email &amp; collaboration → Attack simulation training → Simulations.</em></p>
</li>
<li><p>Select <code>Launch a simulation</code>.</p>
</li>
<li><p>Choose the technique you’d like to use.</p>
</li>
</ol>
<p>The available options are:</p>
<ul>
<li><p>Credential harvest.</p>
</li>
<li><p>Malware Attachment.</p>
</li>
<li><p>Link in Attachment.</p>
</li>
<li><p>Link to Malware.</p>
</li>
<li><p>Drive-by URL.</p>
</li>
<li><p>OAuth Consent Grant.</p>
</li>
<li><p>How-to Guide.</p>
</li>
</ul>
<p>In this example, I am going to use the technique <code>Malware Attachment</code>. With this technique, a malicious actor sends a message with an attachment. When the target opens the attachment, arbitrary code, typically a macro, executes to help the attacker install additional code on the target's device or entrench themselves further.</p>
<blockquote>
<p>MITRE Reference: <a target="_blank" href="https://attack.mitre.org/techniques/T1566/001/">Phishing: Spearphishing Attachment, Sub-technique T1566.001 - Enterprise | MITRE ATT&amp;CK®</a></p>
</blockquote>
<p>Once you have chosen the technique you’d like to use:</p>
<ol>
<li><p>Enter a simulation name. In this example, I will name it <code>AST_AllUsers_MalwareAttachment</code>.</p>
</li>
<li><p>Enter a description (optional).</p>
</li>
<li><p>Select Next.</p>
</li>
<li><p>Choose the payload name. I will use <code>DocuSign Shared Document</code>, which has a compromise rate of 27.23%, but it’s recommended that you choose a payload that you believe will have the highest compromise rate.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763315619454/798510ca-0987-469d-99d9-f613bb65fe12.png" alt class="image--center mx-auto" /></p>
<ol start="5">
<li>Send a test email (recommended). Selecting this option will save and send the selected payload to the currently logged in user for formatting validation. It will not be included in any simulation reporting and will not work as part of an end to end simulation scenario.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763315878262/76b41eba-bb79-4331-9179-745cb0b8272a.png" alt class="image--center mx-auto" /></p>
<ol start="6">
<li><p>Select Next.</p>
</li>
<li><p>On the “Target users” blade, add existing users and groups or import a list of email addresses. In this example, I will only select my own user object.</p>
</li>
<li><p>Select Next.</p>
</li>
<li><p>Choose whether you’d like to exclude users or groups from this campaign, and select Next.</p>
</li>
<li><p>Select training preferences, assignment, and customise a landing page for this simulation. In this example, I will use <code>Assign training for me (Recommended)</code>, and <code>30 days after Simulation ends</code> as the training due date.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763316066358/fe7aa6d1-e225-410c-8ae1-10dcd9ccbb01.png" alt class="image--center mx-auto" /></p>
<ol start="11">
<li><p>Select Next.</p>
</li>
<li><p>Choose a Phish landing page. A landing page provides a learning moment to the user after getting phished. In this example, I will use <code>Microsoft Landing Page Template 4</code>, which looks like the following:</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763316709929/badc78cb-ecbe-4125-9ca1-ec4260d2c7ab.png" alt class="image--center mx-auto" /></p>
<ol start="13">
<li><p>Choose Next.</p>
</li>
<li><p>Select end user notification preferences for this campaign. I will use <code>Microsoft default notifications (recommended)</code>, but you may wish to either not deliver notifications or use custom end user notifications. In this example, I will set the delivery preference to <code>Deliver during simulation</code>, <mark>however it may be more appropriate to deliver this notification after the simulation has completed, as doing so reduces the likelihood of users alerting one another while it is in progress.</mark> I will also set the training reminder notification to <code>Twice a week</code>.</p>
</li>
<li><p>Select Next.</p>
</li>
<li><p>On the “Launch details” blade, specify when the simulation should start and whether the payloads should be removed from user inboxes. You can schedule the launch up to a maximum of 14 days in advance, and set the simulation to end after any period between 2 and 30 days. In this example, I will <code>Launch this simulation as soon as I’m done</code>, and configure the simulation to end after <code>21 days</code>.</p>
</li>
<li><p>Specify whether you’d like to enable <code>Region aware timezone delivery</code>.</p>
</li>
</ol>
<blockquote>
<p><strong>Region aware delivery</strong> uses the mailbox time zone of each targeted user to decide when the message should be delivered. Delivery can vary by about one hour either side depending on the user’s time zone.</p>
<p>For example:</p>
<p>At 07:00 GMT, an administrator creates and schedules a campaign to start at 09:00 GMT on the same day.<br />User A’s mailbox is set to UTC+3.<br />User B’s mailbox is set to GMT (UTC+0).</p>
<p>At 09:00 GMT, the simulation message is delivered to User B. With region aware delivery enabled, the message is not sent to User A at this time, because 09:00 GMT is 12:00 in User A’s time zone. Instead, the message is sent to User A at 09:00 in their own time zone on the following day.</p>
<p>This means the campaign may initially look as if it has only delivered to users in one region. As time progresses and users in other time zones reach the scheduled delivery time, the number of targeted users increases.</p>
<p>If region aware delivery is disabled, the campaign follows the organiser’s time zone and all users receive the message at the same GMT time.</p>
</blockquote>
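<p>The behaviour described in the note above can be modelled in a few lines. The following Python sketch is an illustrative approximation only, not Microsoft’s implementation; the function name and the choice of <code>Asia/Riyadh</code> as an example UTC+3 zone are my own assumptions.</p>

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def region_aware_delivery(launch_utc, user_tz):
    """Illustrative model: deliver at the launch wall-clock time
    (09:00 in the example) in each user's own time zone, rolling to
    the next day if that local time has already passed at launch."""
    tz = ZoneInfo(user_tz)
    local_at_launch = launch_utc.astimezone(tz)
    target = local_at_launch.replace(hour=launch_utc.hour, minute=launch_utc.minute,
                                     second=0, microsecond=0)
    if local_at_launch > target:
        target += timedelta(days=1)  # local delivery time already passed today
    return target.astimezone(ZoneInfo("UTC"))

launch = datetime(2025, 11, 17, 9, 0, tzinfo=ZoneInfo("UTC"))
print(region_aware_delivery(launch, "UTC"))          # delivered at launch time
print(region_aware_delivery(launch, "Asia/Riyadh"))  # 09:00 local, next day
```

<p>Run against the example above, the UTC+0 user receives the message at launch, while the UTC+3 user receives it at 09:00 local time on the following day.</p>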
<ol start="18">
<li><p>Select Next.</p>
</li>
<li><p>Finally, review your Simulation information before you launch it. If you followed along with me, your simulation review should look like the below:</p>
</li>
</ol>
<pre><code class="lang-plaintext">Delivery Platform
Email

Technique
Malware Attachment

Name
AST_AllUsers_MalwareAttachment

Payloads
DocuSign Shared Document

Target Users
1 targeted users or groups
0 users or groups excluded
</code></pre>
<ol start="20">
<li><p>Select Submit.</p>
</li>
<li><p>Next, <a target="_blank" href="https://security.microsoft.com/attacksimulator?viewid=overview">go to the Attack simulation training overview</a>, or <a target="_blank" href="https://security.microsoft.com/attacksimulator?viewid=contentlibrary&amp;tabId=payload">view all payloads</a>.</p>
</li>
</ol>
<blockquote>
<p>Note: this simulation launched near instantly. I scheduled the simulation at 18:33:34 and received the email in my inbox at 18:35:00, from the sender <a target="_blank" href="mailto:notificationsrelyadmin@bankmenia.org">notificationsrelyadmin@bankmenia.org</a> (DocuSign Info).</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763318796530/e6eb276d-2698-494b-a944-c2832527a7b3.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-result">The Result:</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763319012734/5d977865-40e6-4112-8f7c-6a04e83b019c.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-attack-simulation-training">Attack Simulation Training.</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763318438052/e7c56c99-090b-488c-b0e8-b97e2989b6e1.png" alt class="image--center mx-auto" /></p>
<p>Attack Simulation Training helps you assess phishing risk, train users, and evaluate their progress through an intelligent phishing risk-reduction tool.</p>
<p>The security trainings are best-in-class modules made available by Microsoft to educate your people on security and compliance within your organisation, and to improve their resilience to common attack techniques.</p>
<p>Training campaigns can be run to train your employees on topics including security, compliance, privacy, or social engineering risks.</p>
<p>To create a training campaign:</p>
<ol>
<li><p>Navigate to the <a target="_blank" href="https://security.microsoft.com/attacksimulator?viewid=trainingcampaign">Microsoft Defender portal</a> <em>→ Email &amp; collaboration → Attack simulation training → Training.</em></p>
</li>
<li><p>Select <code>Create new</code>.</p>
</li>
<li><p>Enter a training name. In this example, I will name it <code>AST_AllUsers_AllTraining</code>.</p>
</li>
<li><p>Enter a description (optional).</p>
</li>
<li><p>Select Next.</p>
</li>
<li><p>Add existing users and groups or import a list of email addresses.</p>
</li>
<li><p>Select Next.</p>
</li>
<li><p>Choose users or groups to be excluded from this campaign.</p>
</li>
<li><p>Select Next.</p>
</li>
<li><p>Select training modules.</p>
</li>
</ol>
<p>Below is the list of all available trainings that you can use to run a campaign or a simulation:</p>
<h3 id="heading-social-engineering-and-phishing"><strong>Social Engineering and Phishing.</strong></h3>
<ul>
<li><p>Social engineering via email</p>
</li>
<li><p>Insider Threats</p>
</li>
<li><p>Identity theft – Example of an attack</p>
</li>
<li><p>Real or not real? How deep is the fake?</p>
</li>
<li><p>Computer Compromise</p>
</li>
<li><p>C-Level email impersonation</p>
</li>
<li><p>AI: Helpful tool or threat?</p>
</li>
<li><p>Credential Theft</p>
</li>
<li><p>Browser-in-the-browser attack</p>
</li>
<li><p>Application account compromise</p>
</li>
<li><p>Unintentional Insider Threat</p>
</li>
<li><p>Sharing an organisation’s computer</p>
</li>
<li><p>Risky USB</p>
</li>
<li><p>Phishing by Phone</p>
</li>
<li><p>Handling Unidentified Individuals</p>
</li>
<li><p>Friend or Foe?</p>
</li>
<li><p>Working Remotely</p>
</li>
<li><p>Travelling Securely</p>
</li>
<li><p>The Clean Desk Principle</p>
</li>
<li><p>Mobile Devices</p>
</li>
<li><p>Secure sharing of sensitive information</p>
</li>
<li><p>Spear Phishing</p>
</li>
<li><p>Mass-marketing phishing</p>
</li>
<li><p>Business Email Compromise (BEC)</p>
</li>
<li><p>Phishing websites</p>
</li>
<li><p>How to report suspicious messages</p>
</li>
<li><p>Spoofing: how to avoid falling victim</p>
</li>
</ul>
<h3 id="heading-malware-ransomware-and-technical-threats"><strong>Malware, Ransomware and Technical Threats.</strong></h3>
<ul>
<li><p>Ransomware</p>
</li>
<li><p>Malicious Software</p>
</li>
<li><p>Cyber Fraud</p>
</li>
<li><p>Cyber Attack Detection</p>
</li>
<li><p>Malicious digital QR codes</p>
</li>
<li><p>Malicious printed QR codes</p>
</li>
<li><p>Risks associated with file transfer</p>
</li>
</ul>
<h3 id="heading-data-protection-privacy-and-compliance"><strong>Data Protection, Privacy and Compliance</strong></h3>
<ul>
<li><p>Privacy Awareness</p>
</li>
<li><p>GDPR Essentials</p>
</li>
<li><p>CCPA Essentials</p>
</li>
<li><p>Personally Identifiable Information (PII)</p>
</li>
<li><p>Protected Health Information (PHI)</p>
</li>
<li><p>HIPAA/HITECH</p>
</li>
<li><p>Protecting your information from Wi-Fi security risks</p>
</li>
<li><p>Protecting Sensitive Information – Information Handling</p>
</li>
<li><p>Preventing Security Breaches</p>
</li>
<li><p>Countering Misinformation</p>
</li>
<li><p>Identity Theft</p>
</li>
<li><p>Financial Data Exposure</p>
</li>
<li><p>Employee Data Breach</p>
</li>
<li><p>Importance of Security Culture in the Organisation</p>
</li>
<li><p>Personal Information Protection and Electronic Documents Act</p>
</li>
<li><p>PCI DSS for Call Centres</p>
</li>
<li><p>PCI DSS for Retailers</p>
</li>
<li><p>PCI DSS Awareness</p>
</li>
<li><p>AI risks and best practices</p>
</li>
<li><p>Access control</p>
</li>
<li><p>Understanding app consent requests</p>
</li>
<li><p>OAuth Consent Grant</p>
</li>
</ul>
<h3 id="heading-physical-security-and-behavioural-security"><strong>Physical Security and Behavioural Security</strong></h3>
<ul>
<li><p>How to be security aware</p>
</li>
<li><p>Applying the Clean Desk Principle</p>
</li>
<li><p>Access Control</p>
</li>
<li><p>Responsible Use of the Internet</p>
</li>
<li><p>Protecting Payment Card Data</p>
</li>
<li><p>Physical Security</p>
</li>
<li><p>Smartphones</p>
</li>
<li><p>Intellectual Property</p>
</li>
<li><p>Information Lifecycle</p>
</li>
<li><p>Information Classification</p>
</li>
<li><p>Incident Reporting</p>
</li>
</ul>
<h3 id="heading-cloud-internet-and-it-usage"><strong>Cloud, Internet and IT Usage.</strong></h3>
<ul>
<li><p>Cloud Services</p>
</li>
<li><p>Cloud Computing</p>
</li>
<li><p>Bring Your Own Device (BYOD)</p>
</li>
<li><p>Open Wi-Fi Risks</p>
</li>
<li><p>Open Web Application Security Project (OWASP) Top 10</p>
</li>
</ul>
<h3 id="heading-other-awareness-topics"><strong>Other Awareness Topics</strong></h3>
<ul>
<li><p>Privacy</p>
</li>
<li><p>Passwords</p>
</li>
</ul>
<ol start="11">
<li><p>Select Next.</p>
</li>
<li><p>Select end user notification preferences for this campaign. This can be either Microsoft curated end-user notifications (recommended), or customised end-user notifications. In this example, I will choose <code>Microsoft default notification (recommended)</code>, and I will also set the delivery preference of <code>Microsoft default training only campaign-training reminder notification</code> to <code>Twice a week</code>.</p>
</li>
<li><p>Select Next.</p>
</li>
<li><p>Select the launch and end date and time for your training campaign. You can schedule it up to 14 days in advance, and the end date can be configured up to 30 days from the launch date. In this example, I will launch the training as soon as I’m done, and end the campaign 30 days later.</p>
</li>
<li><p>Select Next.</p>
</li>
<li><p>Review your training campaign information before you launch it.</p>
</li>
</ol>
<p>For example:</p>
<pre><code class="lang-plaintext">Name
AST_AllUsers_AllTraining

Description

Selected users
1 users selected

Training Campaign Content
108 trainings
910 mins 0 sec total duration

Introduction to Information Security
Business Email Compromise
Email
Identity Theft
Malware
Phishing
Ransomware
Social Engineering
Anatomy of a Spear Phishing Attack
Phishing websites
Ransomware
...

Schedule campaign
Scheduled to launch after submission
Scheduled to end on 15/12/2025, 20:00:00
</code></pre>
<ol start="17">
<li>Select Submit.</li>
</ol>
<p>End users can select <a target="_blank" href="https://security.microsoft.com/trainingassignments"><code>Go to training</code></a> from within the email to start their training.</p>
<hr />
<h1 id="heading-third-party-phishing-simulations">Third Party Phishing Simulations.</h1>
<p>Phishing simulations are attacks orchestrated by your security team and used for training and learning. Simulations can help identify vulnerable users and lessen the impact of malicious attacks on your organisation.</p>
<p>Third-party phishing simulations require at least one Sending domain entry [source domain or DKIM] AND at least one Sending IP entry to ensure message delivery. URLs present in the email message body will also be automatically allowed at time of click as a part of this phishing simulation system allow.</p>
<p>Note: The Simulation URLs to allow field is optional and available for non-email based phishing simulation campaign scenarios. Specifying URLs in this field ensures that these URLs aren't blocked at time of click for phishing simulation scenarios that use Microsoft Teams and Office apps (Word, Excel, etc).</p>
<p>To configure the advanced delivery of IP addresses, sender domains and URLs that are used as part of your 3rd party phishing simulation:</p>
<ul>
<li>Navigate to Microsoft Defender <em>→</em> Email &amp; collaboration <em>→</em> Policies &amp; rules <em>→</em> Threat policies <em>→</em> Advanced delivery <em>→</em> Phishing simulation.</li>
</ul>
<hr />
<h1 id="heading-final-thoughts-on-attack-simulation-training-within-microsoft-defender-for-office-365-mdo">Final Thoughts On Attack Simulation Training Within Microsoft Defender for Office 365 (MDO).</h1>
<p>The training component within Microsoft Defender for Office 365 is one of the strongest elements of the platform. The content is clear, structured and delivered at an appropriate pace, which makes it suitable for a broad range of end users.</p>
<p>The videos are concise, focused on real behaviours, and avoid unnecessary detail while still offering technically accurate guidance. They reinforce the essential principles of identifying suspicious messages, reporting them correctly and understanding how common attack techniques work in practice.</p>
<p>End users cannot skip ahead or jump past sections they have not viewed. They may pause and rewind, but they can only return to points they have already watched.</p>
<p>For organisations seeking measurable improvements in user behaviour, this structured delivery model aligns well with security awareness best practice recommended by Microsoft and the National Cyber Security Centre.</p>
<p>Overall, the training within MDO complements attack simulations effectively. It reinforces learning rather than relying solely on pass or fail outcomes, and it encourages users to understand the reasoning behind safe email handling rather than memorising steps. When used consistently, it provides a measurable uplift in user readiness and reduces the likelihood of successful social engineering attacks.</p>
<p>If your organisation uses Microsoft 365 heavily and has Defender for Office 365 Plan 2 available, I would highly recommend giving Attack Simulation Training a try!</p>
<hr />
<h1 id="heading-referenceshttpslearnmicrosoftcomen-usdefender-office-365attack-simulation-training-training-campaigns">References.</h1>
<blockquote>
<p>Reference: <a target="_blank" href="https://learn.microsoft.com/en-us/defender-office-365/attack-simulation-training-simulations">Simulate a phishing attack with Attack simulation training - Microsoft Defender for Office 365 | Microsoft Learn</a></p>
<p>Reference: <a target="_blank" href="https://learn.microsoft.com/en-us/defender-office-365/attack-simulation-training-landing-pages?view=o365-worldwide">Landing pages in Attack simulation training - Microsoft Defender for Office 365 | Microsoft Learn</a>.</p>
<p>Reference: <a target="_blank" href="https://youtu.be/zB_O-6bwZbc?si=CcjvRDaJ6N7AcpI7">Attack Simulation Training With Microsoft | YouTube</a>.</p>
<p>Reference: <a target="_blank" href="https://learn.microsoft.com/en-us/defender-office-365/attack-simulation-training-training-campaigns">Training campaigns in Attack simulation training - Microsoft Defender for Office 365 | Microsoft Learn</a></p>
<p>Refer to this if you’d like to assign training to users without putting them through a simulation.</p>
<p>Reference: <a target="_blank" href="https://learn.microsoft.com/en-us/defender-office-365/attack-simulation-training-get-started?view=o365-worldwide#what-do-you-need-to-know-before-you-begin">Get started using Attack simulation training - Microsoft Defender for Office 365 | Microsoft Learn</a></p>
<p>Refer to this for creating training campaigns.</p>
<p>Reference: <a target="_blank" href="https://learn.microsoft.com/en-us/defender-office-365/advanced-delivery-policy-configure?view=o365-worldwide#use-the-microsoft-365-defender-portal-to-configure-third-party-phishing-simulations-in-the-advanced-delivery-policy">Configure the advanced delivery policy for non-Microsoft phishing simulations and email delivery to SecOps mailboxes - Microsoft Defender for Office 365 | Microsoft Learn</a></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Bypassing Microsoft Intune Compliant Device Conditional Access Requirements Using TokenSmith]]></title><description><![CDATA[This article is coming 20/10/2025. Keep an eye out, and subscribe to the blog to be announced.]]></description><link>https://blog.cdoherty.co.uk/bypassing-microsoft-intune-compliant-device-conditional-access-requirements-using-tokensmith</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/bypassing-microsoft-intune-compliant-device-conditional-access-requirements-using-tokensmith</guid><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Sat, 18 Oct 2025 23:14:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760829316097/9d1f7ce9-8bc6-4f05-b665-c40fb99f53b9.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This article is coming 20/10/2025. Keep an eye out, and subscribe to the blog to be announced.</p>
]]></content:encoded></item><item><title><![CDATA[How Adversaries Execute AiTM Phishing Campaigns (A Practical Demonstration Using Evilginx3)]]></title><description><![CDATA[This article is coming 20/10/2025. Keep an eye out, and subscribe to the blog to be announced.

Enjoyed this article? Consider reading: Bypassing Microsoft Intune Compliant Device Conditional Access Requirements Using TokenSmith.]]></description><link>https://blog.cdoherty.co.uk/how-adversaries-execute-aitm-phishing-campaigns-a-practical-demonstration-using-evilginx3</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/how-adversaries-execute-aitm-phishing-campaigns-a-practical-demonstration-using-evilginx3</guid><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Sat, 18 Oct 2025 23:10:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760829017311/00b70b16-2b2c-4369-8dc2-59ec895b4f88.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This article is coming 20/10/2025. Keep an eye out, and subscribe to the blog to be announced.</p>
<hr />
<p>Enjoyed this article? Consider reading: <a target="_blank" href="https://blog.cdoherty.co.uk/bypassing-microsoft-intune-compliant-device-conditional-access-requirements-using-tokensmith">Bypassing Microsoft Intune Compliant Device Conditional Access Requirements Using TokenSmith</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Auditing Microsoft Entra Authentication Methods with Microsoft Sentinel and Azure Logic Apps]]></title><description><![CDATA[Overview.
When managing identity security in Microsoft Entra ID, having clear visibility into which multi-factor authentication (MFA) methods users have registered is essential.
It underpins compliance, user assurance, and incident response.
Entra ID...]]></description><link>https://blog.cdoherty.co.uk/auditing-microsoft-entra-authentication-methods-with-microsoft-sentinel-and-azure-logic-apps</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/auditing-microsoft-entra-authentication-methods-with-microsoft-sentinel-and-azure-logic-apps</guid><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Thu, 16 Oct 2025 21:58:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760643233216/4ed6162f-1ea7-46d7-87fe-995530ae8765.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-overview">Overview.</h1>
<p>When managing identity security in Microsoft Entra ID, having clear visibility into which multi-factor authentication (MFA) methods users have registered is essential.</p>
<p>It underpins compliance, user assurance, and incident response.</p>
<p>Entra ID’s portal presents a convenient, current-state view of a user’s authentication methods. It is useful for spot checks but it is not designed for tenant-wide assurance, forensic context, or durable compliance evidence.</p>
<p>Microsoft Sentinel’s native identity tables focus on events such as sign-ins and administrative changes, not on the underlying configuration state of which methods each user has registered. The authoritative source for that configuration is Microsoft Graph’s authentication methods API.</p>
<p>The most reliable pattern is to collect from Graph on a schedule, normalise the results, and write into a custom Sentinel table that you can query with KQL for audits and investigations.</p>
<p>Microsoft Sentinel can ingest valuable identity telemetry (SignInLogs, AuditLogs, and risk detections), but these sources do not reveal which MFA methods each account has configured.</p>
<hr />
<h1 id="heading-why-you-need-real-mfa-visibility">Why You Need Real MFA Visibility.</h1>
<p>Having visibility of which authentication methods users have registered in Microsoft Entra ID is invaluable for identity assurance, compliance, and operational monitoring. It is not enough to know that multi-factor authentication (MFA) is enforced. You need to know which authentication methods each user has actually registered.</p>
<p>Entra’s native tools, such as the Authentication Methods dashboard and the <code>IdentityInfo</code> table in Microsoft Sentinel give only a partial picture. The <code>IdentityInfo</code> table provides a Boolean flag (“registered / not registered”), and while <code>AuditLogs</code> can record when an MFA method is added or removed, neither provides the full configuration-level detail of which methods are actually present on each account.</p>
<p>That limitation makes it impossible to answer questions such as:</p>
<ul>
<li><p>Which users still rely solely on passwords?</p>
</li>
<li><p>How many FIDO2 keys or Authenticator apps are deployed?</p>
</li>
<li><p>Are any users using outdated or duplicate devices?</p>
</li>
<li><p>Can we prove our MFA rollout for audit or compliance evidence?</p>
</li>
</ul>
<p>To obtain a complete, historical, and queryable record of authentication methods, we must pull data directly from the Microsoft Graph API and store it where it can be queried and correlated: Microsoft Sentinel.</p>
<hr />
<h1 id="heading-where-entra-falls-short">Where Entra Falls Short.</h1>
<h3 id="heading-native-ui-authentication-methods-reporting">Native UI: Authentication Methods Reporting</h3>
<p>The <strong>User Registration Details</strong> view under <em>Identity → Authentication methods</em> in the Entra Admin Centre provides useful at-a-glance information but is limited:</p>
<ul>
<li><p>It displays only <em>current state</em>, with no history.</p>
</li>
<li><p>Phone numbers and device names are <strong>masked</strong> for privacy.</p>
</li>
<li><p>There is no export or filtering beyond basic search.</p>
</li>
<li><p>It cannot answer questions like <em>“Which users have registered both Authenticator and FIDO2?”</em></p>
</li>
</ul>
<h3 id="heading-sentinels-built-in-tables">Sentinel’s Built-in Tables</h3>
<p>When diagnostic settings are connected, Sentinel receives the following data streams from Entra ID:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Table</td><td>Example Content</td><td>Limitation</td></tr>
</thead>
<tbody>
<tr>
<td><strong>SignInLogs</strong></td><td>MFA required / satisfied, Conditional Access results</td><td>Event data only — no configuration.</td></tr>
<tr>
<td><strong>AuditLogs</strong></td><td>Method added or removed, policy changes</td><td>Captures change events, not current state.</td></tr>
<tr>
<td><strong>IdentityInfo</strong></td><td>MFA registered = true/false</td><td>Lacks method-level detail.</td></tr>
</tbody>
</table>
</div><p>These tables are designed for behavioural analysis, not configuration auditing. To audit MFA properly, you need to extract configuration from the source of truth, Microsoft Graph.</p>
<hr />
<h1 id="heading-the-authoritative-source-microsoft-graph">The Authoritative Source: Microsoft Graph.</h1>
<h2 id="heading-the-endpoint">The Endpoint.</h2>
<p>The Microsoft Graph endpoint <code>/users/{id}/authentication/methods</code> returns the full list of authentication methods registered for any given user:</p>
<pre><code class="lang-powershell">GET https://graph.microsoft.com/v1.<span class="hljs-number">0</span>/users/{id or UPN}/authentication/methods
</code></pre>
<p>Example output:</p>
<pre><code class="lang-powershell">{
  <span class="hljs-string">"value"</span>: [
    {
      <span class="hljs-string">"@odata.type"</span>: <span class="hljs-string">"#microsoft.graph.passwordAuthenticationMethod"</span>
    },
    {
      <span class="hljs-string">"@odata.type"</span>: <span class="hljs-string">"#microsoft.graph.microsoftAuthenticatorAuthenticationMethod"</span>,
      <span class="hljs-string">"displayName"</span>: <span class="hljs-string">"iPhone 17 Pro"</span>,
      <span class="hljs-string">"deviceTag"</span>: <span class="hljs-string">"SoftwareTokenActivated"</span>,
      <span class="hljs-string">"phoneAppVersion"</span>: <span class="hljs-string">"6.8.30"</span>
    },
    {
      <span class="hljs-string">"@odata.type"</span>: <span class="hljs-string">"#microsoft.graph.fido2AuthenticationMethod"</span>,
      <span class="hljs-string">"model"</span>: <span class="hljs-string">"YubiKey 5 NFC"</span>
    }
  ]
}
</code></pre>
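<p>Before wiring this into a Logic App, it can help to prototype how the response will be flattened. The following Python sketch is my own illustration rather than part of any Microsoft tooling; it takes the sample payload above and strips the <code>#microsoft.graph.</code> namespace prefix from each entry.</p>

```python
import json

# Sample payload matching the Graph /authentication/methods response above.
sample = json.loads("""
{
  "value": [
    {"@odata.type": "#microsoft.graph.passwordAuthenticationMethod"},
    {"@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethod",
     "displayName": "iPhone 17 Pro"},
    {"@odata.type": "#microsoft.graph.fido2AuthenticationMethod",
     "model": "YubiKey 5 NFC"}
  ]
}
""")

def method_types(payload):
    """Strip the '#microsoft.graph.' namespace prefix from each method type."""
    prefix = "#microsoft.graph."
    return [m["@odata.type"].removeprefix(prefix) for m in payload["value"]]

print(method_types(sample))
# ['passwordAuthenticationMethod', 'microsoftAuthenticatorAuthenticationMethod',
#  'fido2AuthenticationMethod']
```

<p>The same flattening can later be reproduced in KQL once the raw response has landed in Sentinel.</p>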
<p>Each <code>@odata.type</code> identifies the authentication method type:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Method Type</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><code>#microsoft.graph.passwordAuthenticationMethod</code></td><td>Password credential.</td></tr>
<tr>
<td><code>#microsoft.graph.microsoftAuthenticatorAuthenticationMethod</code></td><td>Microsoft Authenticator app registration.</td></tr>
<tr>
<td><code>#microsoft.graph.windowsHelloForBusinessAuthenticationMethod</code></td><td>Windows Hello for Business key.</td></tr>
<tr>
<td><code>#microsoft.graph.phoneAuthenticationMethod</code></td><td>Phone number used for SMS or voice MFA.</td></tr>
<tr>
<td><code>#microsoft.graph.fido2AuthenticationMethod</code></td><td>FIDO2 hardware key (e.g. YubiKey, Feitian).</td></tr>
</tbody>
</table>
</div><p>This data is far richer than anything available in the Entra GUI or tables and is the same source that Entra itself uses internally.</p>
<blockquote>
<p><em><mark>NOTE: The </mark></em> <code>#microsoft.graph.</code> <em><mark> prefix comes directly from the </mark></em> <code>@odata.type</code> <em><mark> field in the Microsoft Graph API response. It represents the fully qualified object type within the Graph schema (namespace </mark></em> <code>microsoft.graph</code><em><mark>).</mark></em></p>
<p><em><mark>For readability, use the simplified form (for example, </mark></em> <code>fido2AuthenticationMethod</code> <em><mark> instead of </mark></em> <code>#microsoft.graph.fido2AuthenticationMethod</code><em><mark>), but the full prefix is what you’ll see in the raw API data and Sentinel logs.</mark></em></p>
</blockquote>
<hr />
<h1 id="heading-solution-overview">Solution Overview.</h1>
<p>The goal is to automate collection of authentication method data using Azure Logic Apps and Microsoft Graph, then ingest the results into a custom Sentinel table (for example <code>userAuthMethods_CL</code>, as used in the queries below).</p>
<p>This gives you a historical, queryable record of MFA configurations across your tenant, perfect for compliance reporting, audit evidence, or security investigations.</p>
<h2 id="heading-examples">Examples.</h2>
<h3 id="heading-generate-a-readable-mfa-configuration-summary-per-user">Generate a readable MFA configuration summary per user:</h3>
<pre><code class="lang-sql">userAuthMethods_CL
| extend UserPrincipalName = url_decode(extract(@"users\('(.+)'\)/authentication/", 1, _odata_context_s))
| mv-expand Auth = todynamic(value_s)
| extend MethodType = <span class="hljs-keyword">replace</span>(@<span class="hljs-string">"#microsoft\.graph\."</span>, <span class="hljs-string">""</span>, tostring(Auth[<span class="hljs-string">'@odata.type'</span>]))
| extend DeviceName = tostring(Auth.displayName)
| extend PhoneNumber = tostring(Auth.phoneNumber)
| extend DeviceTag = tostring(Auth.deviceTag)
| extend AppVersion = tostring(Auth.phoneAppVersion)
| extend Created = todatetime(Auth.createdDateTime)
| extend MethodDetail = <span class="hljs-keyword">case</span>(
        MethodType == <span class="hljs-string">"microsoftAuthenticatorAuthenticationMethod"</span>, strcat(<span class="hljs-string">"Authenticator: "</span>, DeviceName, <span class="hljs-string">" ("</span>, DeviceTag, <span class="hljs-string">", v"</span>, AppVersion, <span class="hljs-string">")"</span>),
        MethodType == <span class="hljs-string">"phoneAuthenticationMethod"</span>, strcat(<span class="hljs-string">"Phone: "</span>, PhoneNumber),
        MethodType == <span class="hljs-string">"windowsHelloForBusinessAuthenticationMethod"</span>, strcat(<span class="hljs-string">"Windows Hello: "</span>, DeviceName),
        MethodType == <span class="hljs-string">"fido2AuthenticationMethod"</span>, strcat(<span class="hljs-string">"FIDO2 Key: "</span>, DeviceName),
        MethodType == <span class="hljs-string">"passwordAuthenticationMethod"</span>, <span class="hljs-string">"Password only"</span>,
        strcat(MethodType, <span class="hljs-string">": "</span>, <span class="hljs-keyword">coalesce</span>(DeviceName, PhoneNumber, DeviceTag))
    )
| summarize AuthSummary = strcat_array(<span class="hljs-keyword">make_set</span>(MethodDetail), <span class="hljs-string">" | "</span>) <span class="hljs-keyword">by</span> UserPrincipalName
| <span class="hljs-keyword">project</span> UserPrincipalName, AuthSummary
| <span class="hljs-keyword">order</span> <span class="hljs-keyword">by</span> UserPrincipalName <span class="hljs-keyword">asc</span>
| <span class="hljs-keyword">where</span> UserPrincipalName contains <span class="hljs-string">"me"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760647218089/492e5949-aaf9-4bf1-85d0-c930784a53f4.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-summarize-the-number-of-users-with-each-type-of-authentication-method">Summarize the number of users with each type of authentication method:</h3>
<pre><code class="lang-sql">userAuthMethods_CL 
| extend
    username=url_decode(extract(@"users\('(.+)'\)/authentication/", 1, _odata_context_s))
| summarize arg_max(TimeGenerated, *) by username
| mv-expand todynamic(value_s)
| extend method=tostring(value_s["@odata.type"])
| summarize count() by method
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760647393898/9d514fa5-5534-4ebc-a3ff-4c6ee17d6dc3.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-enrich-phone-based-mfa-records-with-country-information">Enrich Phone-Based MFA Records with Country Information.</h3>
<pre><code class="lang-sql">let dialingCodes=externaldata(callingCode:string, countryName:string)
    [@'https://raw.githubusercontent.com/karani-gk/Country-Calling-Codes/refs/heads/main/Countries.csv'] <span class="hljs-keyword">with</span>(<span class="hljs-keyword">format</span>=<span class="hljs-string">"csv"</span>);
userAuthMethods_CL 
| extend
    username=url_decode(extract(@"users\('(.+)'\)/authentication/", 1, _odata_context_s))
| summarize arg_max(TimeGenerated, *) by username
| mv-expand todynamic(value_s)
| extend method=tostring(value_s["@odata.type"])
| where method == <span class="hljs-string">"#microsoft.graph.phoneAuthenticationMethod"</span>
| extend phoneNumber_ = tostring(value_s.phoneNumber)
| extend countryCode=substring(phoneNumber_, 0, 3)
| lookup (dialingCodes) on $left.countryCode==$right.callingCode
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760647759415/1901e7e6-f0f4-46b8-8822-52a01f65a588.png" alt class="image--center mx-auto" /></p>
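<p>One caveat: the KQL takes a fixed <code>substring(phoneNumber_, 0, 3)</code>, which assumes a two-digit calling code such as <code>+44</code>; real calling codes run from one to four digits. A longest-prefix match avoids that. The sketch below is illustrative only, with a tiny hand-picked subset of codes rather than the full CSV:</p>

```python
# Longest-prefix match of E.164 numbers against calling codes.
# A fixed three-character substring only works for two-digit codes like +44;
# matching the longest known prefix handles +1 and +353 alike.
# DIALING_CODES is a tiny illustrative subset, not the full lookup table.
DIALING_CODES = {
    "+1": "United States/Canada",
    "+44": "United Kingdom",
    "+353": "Ireland",
    "+971": "United Arab Emirates",
}

def country_for(phone_number: str) -> str:
    # Try the longest prefixes first so "+353" wins over a shorter match.
    for length in range(5, 1, -1):
        country = DIALING_CODES.get(phone_number[:length])
        if country:
            return country
    return "Unknown"

print(country_for("+353851234567"))  # prints "Ireland"
```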
<hr />
<h1 id="heading-scraping-the-data">Scraping The Data.</h1>
<p>The Microsoft Graph API exposes an endpoint that provides detailed information about each user’s registered authentication methods: exactly the data we need.</p>
<p>The Azure Logic App in this implementation automates that process by:</p>
<ul>
<li><p>Retrieving a complete list of users from the tenant.</p>
</li>
<li><p>Querying the Microsoft Graph API for each user’s <code>/authentication/methods</code> endpoint.</p>
</li>
<li><p>Forwarding the collected data into a custom Log Analytics table (<code>userAuthMethods_CL</code>) within Microsoft Sentinel for analysis and compliance reporting.</p>
</li>
</ul>
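<p>The same loop can be sketched in Python. One thing worth noting: <code>/users</code> is paged, and Graph returns an <code>@odata.nextLink</code> when more results remain, so larger tenants need to follow that link (the Logic App’s single HTTP call does not). In this sketch <code>fetch_json</code> stands in for any authenticated HTTP GET that returns parsed JSON; it is injected so the logic is testable without a tenant:</p>

```python
# Sketch of the collection loop: page through /users, skip guest accounts
# (UPNs containing "#EXT#", as the Logic App condition does), and fetch each
# remaining user's /authentication/methods.
# fetch_json: any callable mapping a URL to a parsed JSON dict, e.g. an
# authenticated requests.get(...).json() wrapper.
GRAPH = "https://graph.microsoft.com/v1.0"

def iter_users(fetch_json, url=GRAPH + "/users"):
    """Yield user objects, following @odata.nextLink pagination."""
    while url:
        page = fetch_json(url)
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")  # absent on the last page

def collect_auth_methods(fetch_json):
    """Return {userPrincipalName: list of authentication method objects}."""
    results = {}
    for user in iter_users(fetch_json):
        upn = user["userPrincipalName"]
        if "#EXT#" in upn:  # skip external/guest accounts
            continue
        methods = fetch_json(f"{GRAPH}/users/{upn}/authentication/methods")
        results[upn] = methods.get("value", [])
    return results
```

<p>A real script would first obtain a client-credentials token from <code>https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token</code> and send it as a <code>Bearer</code> header; the Logic App handles that step for you via <code>ActiveDirectoryOAuth</code>.</p>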
<p>To allow the Logic App to query Microsoft Graph, it must be granted permission to access directory data. There are two ways to achieve this:</p>
<ol>
<li><p>System-assigned managed identity:<br /> The Logic App can authenticate to Microsoft Graph using a managed identity, but this approach requires assigning high-privilege Entra ID roles (such as <em>Directory Reader</em> or <em>Reports Reader</em>).</p>
</li>
<li><p>App registration with OAuth:<br /> A more flexible and secure method is to create a dedicated Entra ID app registration and configure the Logic App to authenticate via OAuth 2.0.</p>
</li>
</ol>
<p>The app registration only requires the following Microsoft Graph API permissions:</p>
<ul>
<li><p><code>UserAuthenticationMethod.Read.All</code></p>
</li>
<li><p><code>User.Read.All</code></p>
</li>
</ul>
<hr />
<h1 id="heading-designing-the-collection-pipeline">Designing The Collection Pipeline.</h1>
<p>To continuously capture this data and make it available for analysis, you can automate the process using Azure Logic Apps.</p>
<p>A Logic App executes the following sequence every 24 hours:</p>
<ol>
<li><p>List all users in the tenant using the <code>/users</code> Graph endpoint.</p>
</li>
<li><p>Query each user’s authentication methods using <code>/users/{id}/authentication/methods</code>.</p>
</li>
<li><p>Send the results to Sentinel through the Azure Log Analytics Data Collector connector.</p>
</li>
</ol>
<p>This creates a custom table, <code>userAuthMethods_CL</code>, in your Sentinel workspace containing timestamped records of each user’s registered MFA methods.</p>
<p>Once the data is in Sentinel, you can query, summarise, or alert on MFA coverage with KQL.</p>
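<p>Under the hood, the Data Collector connector posts to the Azure Monitor HTTP Data Collector API, which authenticates every request with an HMAC-SHA256 signature over a canonical string built from the method, body length, date, and resource path. A minimal sketch of that header construction (workspace ID and shared key are placeholders you would take from the Log Analytics workspace):</p>

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id: str, shared_key: str,
                    body: bytes, date_rfc1123: str) -> str:
    """Build the SharedKey Authorization header for the Azure Monitor
    HTTP Data Collector API (the API behind the Sentinel connector)."""
    string_to_sign = (
        "POST\n"
        f"{len(body)}\n"
        "application/json\n"
        f"x-ms-date:{date_rfc1123}\n"
        "/api/logs"
    )
    digest = hmac.new(
        base64.b64decode(shared_key),      # workspace primary key (base64)
        string_to_sign.encode("utf-8"),
        hashlib.sha256,
    ).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"
```

<p>The request also carries <code>x-ms-date</code>, <code>Content-Type: application/json</code>, and a <code>Log-Type: userAuthMethods</code> header, which is why the records land in <code>userAuthMethods_CL</code> (the <code>_CL</code> suffix is appended automatically for custom logs).</p>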
<hr />
<h1 id="heading-step-13-enterprise-app-registration">Step 1/3: Enterprise App Registration.</h1>
<p>To authenticate against Microsoft Graph securely without assigning broad system identities:</p>
<ol>
<li><p>In the Entra ID admin center, navigate to App registrations.</p>
</li>
<li><p>Select New registration.</p>
</li>
<li><p>Name the application.</p>
</li>
<li><p>Choose <code>Accounts in this organizational directory only (Single tenant)</code></p>
</li>
<li><p>Select Register.</p>
</li>
<li><p>Select API Permissions → Add a permission → Microsoft Graph → Application permissions.</p>
</li>
<li><p>Add the following permissions:</p>
<ol>
<li><p><code>UserAuthenticationMethod.Read.All</code></p>
</li>
<li><p><code>User.Read.All</code></p>
</li>
</ol>
</li>
<li><p>Choose Grant admin consent for your tenant.</p>
</li>
<li><p>Navigate to Certificates &amp; Secrets → New client secret.</p>
</li>
<li><p>Choose an expiration date (180 days / 6 months is recommended) → Add.</p>
</li>
<li><p><mark>Make a note of the client secret </mark> <code>Value</code> <mark> and </mark> <code>Secret ID</code><mark>, as you will need these later.</mark></p>
</li>
</ol>
<hr />
<h1 id="heading-step-23-creating-the-azure-logic-app">Step 2/3: Creating The Azure Logic App.</h1>
<ol>
<li><p>In the Azure portal, search Deploy a custom template.</p>
</li>
<li><p>Choose Build your own template in the editor.</p>
</li>
<li><p>Paste in the deployment template below → Save.</p>
</li>
</ol>
<h2 id="heading-deployment-template">Deployment Template:</h2>
<pre><code class="lang-json">{
    <span class="hljs-attr">"$schema"</span>: <span class="hljs-string">"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#"</span>,
    <span class="hljs-attr">"contentVersion"</span>: <span class="hljs-string">"1.0.0.0"</span>,
    <span class="hljs-attr">"parameters"</span>: {
        <span class="hljs-attr">"workflows_Get_Entra_AuthMethods_Logic_App_name"</span>: {
            <span class="hljs-attr">"defaultValue"</span>: <span class="hljs-string">"Get-Entra-AuthMethods-Logic-App"</span>,
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"String"</span>
        },
        <span class="hljs-attr">"connections_azureloganalyticsdatacollector_1_externalid"</span>: {
            <span class="hljs-attr">"defaultValue"</span>: <span class="hljs-string">"/subscriptions/fb47e429-e603-41ff-86a1-36b85336ac4a/resourceGroups/soc-cdoherty-prod-rg/providers/Microsoft.Web/connections/azureloganalyticsdatacollector-1"</span>,
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"String"</span>
        }
    },
    <span class="hljs-attr">"variables"</span>: {},
    <span class="hljs-attr">"resources"</span>: [
        {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Microsoft.Logic/workflows"</span>,
            <span class="hljs-attr">"apiVersion"</span>: <span class="hljs-string">"2017-07-01"</span>,
            <span class="hljs-attr">"name"</span>: <span class="hljs-string">"[parameters('workflows_Get_Entra_AuthMethods_Logic_App_name')]"</span>,
            <span class="hljs-attr">"location"</span>: <span class="hljs-string">"uksouth"</span>,
            <span class="hljs-attr">"identity"</span>: {
                <span class="hljs-attr">"type"</span>: <span class="hljs-string">"SystemAssigned"</span>
            },
            <span class="hljs-attr">"properties"</span>: {
                <span class="hljs-attr">"state"</span>: <span class="hljs-string">"Enabled"</span>,
                <span class="hljs-attr">"definition"</span>: {
                    <span class="hljs-attr">"$schema"</span>: <span class="hljs-string">"https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#"</span>,
                    <span class="hljs-attr">"contentVersion"</span>: <span class="hljs-string">"1.0.0.0"</span>,
                    <span class="hljs-attr">"parameters"</span>: {
                        <span class="hljs-attr">"$connections"</span>: {
                            <span class="hljs-attr">"defaultValue"</span>: {},
                            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Object"</span>
                        }
                    },
                    <span class="hljs-attr">"triggers"</span>: {
                        <span class="hljs-attr">"Recurrence"</span>: {
                            <span class="hljs-attr">"recurrence"</span>: {
                                <span class="hljs-attr">"interval"</span>: <span class="hljs-number">24</span>,
                                <span class="hljs-attr">"frequency"</span>: <span class="hljs-string">"Hour"</span>,
                                <span class="hljs-attr">"timeZone"</span>: <span class="hljs-string">"GMT Standard Time"</span>
                            },
                            <span class="hljs-attr">"evaluatedRecurrence"</span>: {
                                <span class="hljs-attr">"interval"</span>: <span class="hljs-number">24</span>,
                                <span class="hljs-attr">"frequency"</span>: <span class="hljs-string">"Hour"</span>,
                                <span class="hljs-attr">"timeZone"</span>: <span class="hljs-string">"GMT Standard Time"</span>
                            },
                            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Recurrence"</span>
                        }
                    },
                    <span class="hljs-attr">"actions"</span>: {
                        <span class="hljs-attr">"Get_all_users_from_Graph_API"</span>: {
                            <span class="hljs-attr">"runAfter"</span>: {},
                            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Http"</span>,
                            <span class="hljs-attr">"inputs"</span>: {
                                <span class="hljs-attr">"uri"</span>: <span class="hljs-string">"https://graph.microsoft.com/v1.0/users"</span>,
                                <span class="hljs-attr">"method"</span>: <span class="hljs-string">"GET"</span>,
                                <span class="hljs-attr">"authentication"</span>: {
                                    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"ActiveDirectoryOAuth"</span>,
                                    <span class="hljs-attr">"authority"</span>: <span class="hljs-string">"https://login.microsoftonline.com"</span>,
                                    <span class="hljs-attr">"tenant"</span>: <span class="hljs-string">"PutYourTenantIDHere"</span>,
                                    <span class="hljs-attr">"audience"</span>: <span class="hljs-string">"https://graph.microsoft.com"</span>,
                                    <span class="hljs-attr">"clientId"</span>: <span class="hljs-string">"PutYourClientIDHere"</span>,
                                    <span class="hljs-attr">"secret"</span>: <span class="hljs-string">"PutYourClientSecretHere"</span>
                                }
                            }
                        },
                        <span class="hljs-attr">"Parse_Users_JSON"</span>: {
                            <span class="hljs-attr">"runAfter"</span>: {
                                <span class="hljs-attr">"Get_all_users_from_Graph_API"</span>: [
                                    <span class="hljs-string">"Succeeded"</span>
                                ]
                            },
                            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"ParseJson"</span>,
                            <span class="hljs-attr">"inputs"</span>: {
                                <span class="hljs-attr">"content"</span>: <span class="hljs-string">"@body('Get_all_users_from_Graph_API')"</span>,
                                <span class="hljs-attr">"schema"</span>: {
                                    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
                                    <span class="hljs-attr">"properties"</span>: {
                                        <span class="hljs-attr">"@@odata.context"</span>: {
                                            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span>
                                        },
                                        <span class="hljs-attr">"value"</span>: {
                                            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"array"</span>,
                                            <span class="hljs-attr">"items"</span>: {
                                                <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
                                                <span class="hljs-attr">"properties"</span>: {
                                                    <span class="hljs-attr">"businessPhones"</span>: {
                                                        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"array"</span>
                                                    },
                                                    <span class="hljs-attr">"displayName"</span>: {
                                                        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span>
                                                    },
                                                    <span class="hljs-attr">"givenName"</span>: {},
                                                    <span class="hljs-attr">"jobTitle"</span>: {},
                                                    <span class="hljs-attr">"mail"</span>: {
                                                        <span class="hljs-attr">"type"</span>: [
                                                            <span class="hljs-string">"string"</span>,
                                                            <span class="hljs-string">"null"</span>
                                                        ]
                                                    },
                                                    <span class="hljs-attr">"mobilePhone"</span>: {},
                                                    <span class="hljs-attr">"officeLocation"</span>: {},
                                                    <span class="hljs-attr">"preferredLanguage"</span>: {},
                                                    <span class="hljs-attr">"surname"</span>: {},
                                                    <span class="hljs-attr">"userPrincipalName"</span>: {
                                                        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span>
                                                    },
                                                    <span class="hljs-attr">"id"</span>: {
                                                        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span>
                                                    }
                                                },
                                                <span class="hljs-attr">"required"</span>: [
                                                    <span class="hljs-string">"userPrincipalName"</span>,
                                                    <span class="hljs-string">"id"</span>
                                                ]
                                            }
                                        }
                                    }
                                }
                            }
                        },
                        <span class="hljs-attr">"For_each_User"</span>: {
                            <span class="hljs-attr">"foreach"</span>: <span class="hljs-string">"@outputs('Parse_Users_JSON')?['body']?['value']"</span>,
                            <span class="hljs-attr">"actions"</span>: {
                                <span class="hljs-attr">"If_user_is_not_external"</span>: {
                                    <span class="hljs-attr">"actions"</span>: {
                                        <span class="hljs-attr">"Get_User_Auth_Methods"</span>: {
                                            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Http"</span>,
                                            <span class="hljs-attr">"inputs"</span>: {
                                                <span class="hljs-attr">"uri"</span>: <span class="hljs-string">"https://graph.microsoft.com/v1.0/users/@{item()?['userPrincipalName']}/authentication/methods"</span>,
                                                <span class="hljs-attr">"method"</span>: <span class="hljs-string">"GET"</span>,
                                                <span class="hljs-attr">"authentication"</span>: {
                                                    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"ActiveDirectoryOAuth"</span>,
                                                    <span class="hljs-attr">"authority"</span>: <span class="hljs-string">"https://login.microsoftonline.com"</span>,
                                                    <span class="hljs-attr">"tenant"</span>: <span class="hljs-string">"PutYourTenantIDHere"</span>,
                                                    <span class="hljs-attr">"audience"</span>: <span class="hljs-string">"https://graph.microsoft.com"</span>,
                                                    <span class="hljs-attr">"clientId"</span>: <span class="hljs-string">"PutYourClientIDHere"</span>,
                                                    <span class="hljs-attr">"secret"</span>: <span class="hljs-string">"PutYourClientSecretHere"</span>
                                                }
                                            }
                                        },
                                        <span class="hljs-attr">"Send_Data_to_Sentinel"</span>: {
                                            <span class="hljs-attr">"runAfter"</span>: {
                                                <span class="hljs-attr">"Get_User_Auth_Methods"</span>: [
                                                    <span class="hljs-string">"Succeeded"</span>
                                                ]
                                            },
                                            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"ApiConnection"</span>,
                                            <span class="hljs-attr">"inputs"</span>: {
                                                <span class="hljs-attr">"host"</span>: {
                                                    <span class="hljs-attr">"connection"</span>: {
                                                        <span class="hljs-attr">"name"</span>: <span class="hljs-string">"@parameters('$connections')['azureloganalyticsdatacollector-1']['connectionId']"</span>
                                                    }
                                                },
                                                <span class="hljs-attr">"method"</span>: <span class="hljs-string">"post"</span>,
                                                <span class="hljs-attr">"body"</span>: <span class="hljs-string">"@{body('Get_User_Auth_Methods')}"</span>,
                                                <span class="hljs-attr">"headers"</span>: {
                                                    <span class="hljs-attr">"Log-Type"</span>: <span class="hljs-string">"userAuthMethods"</span>
                                                },
                                                <span class="hljs-attr">"path"</span>: <span class="hljs-string">"/api/logs"</span>
                                            }
                                        }
                                    },
                                    <span class="hljs-attr">"else"</span>: {
                                        <span class="hljs-attr">"actions"</span>: {}
                                    },
                                    <span class="hljs-attr">"expression"</span>: {
                                        <span class="hljs-attr">"and"</span>: [
                                            {
                                                <span class="hljs-attr">"not"</span>: {
                                                    <span class="hljs-attr">"contains"</span>: [
                                                        <span class="hljs-string">"@item()?['userPrincipalName']"</span>,
                                                        <span class="hljs-string">"#EXT#"</span>
                                                    ]
                                                }
                                            }
                                        ]
                                    },
                                    <span class="hljs-attr">"type"</span>: <span class="hljs-string">"If"</span>
                                }
                            },
                            <span class="hljs-attr">"runAfter"</span>: {
                                <span class="hljs-attr">"Parse_Users_JSON"</span>: [
                                    <span class="hljs-string">"Succeeded"</span>
                                ]
                            },
                            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Foreach"</span>
                        }
                    },
                    <span class="hljs-attr">"outputs"</span>: {}
                },
                <span class="hljs-attr">"parameters"</span>: {
                    <span class="hljs-attr">"$connections"</span>: {
                        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Object"</span>,
                        <span class="hljs-attr">"value"</span>: {
                            <span class="hljs-attr">"azureloganalyticsdatacollector-1"</span>: {
                                <span class="hljs-attr">"id"</span>: <span class="hljs-string">"/subscriptions/fb47e429-e603-41ff-86a1-36b85336ac4a/providers/Microsoft.Web/locations/uksouth/managedApis/azureloganalyticsdatacollector"</span>,
                                <span class="hljs-attr">"connectionId"</span>: <span class="hljs-string">"[parameters('connections_azureloganalyticsdatacollector_1_externalid')]"</span>,
                                <span class="hljs-attr">"connectionName"</span>: <span class="hljs-string">"azureloganalyticsdatacollector-1"</span>
                            }
                        }
                    }
                }
            }
        }
    ]
}
</code></pre>
<h2 id="heading-azure-logic-app-design-amp-flow">Azure Logic App Design &amp; Flow.</h2>
<p>Below is the production Logic App definition used for this project.</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"definition"</span>: {
    <span class="hljs-attr">"$schema"</span>: <span class="hljs-string">"https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#"</span>,
    <span class="hljs-attr">"contentVersion"</span>: <span class="hljs-string">"1.0.0.0"</span>,
    <span class="hljs-attr">"triggers"</span>: {
      <span class="hljs-attr">"Recurrence"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Recurrence"</span>,
        <span class="hljs-attr">"recurrence"</span>: {
          <span class="hljs-attr">"interval"</span>: <span class="hljs-number">24</span>,
          <span class="hljs-attr">"frequency"</span>: <span class="hljs-string">"Hour"</span>,
          <span class="hljs-attr">"timeZone"</span>: <span class="hljs-string">"GMT Standard Time"</span>
        }
      }
    },
    <span class="hljs-attr">"actions"</span>: {
      <span class="hljs-attr">"Get_all_users_from_Graph_API"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Http"</span>,
        <span class="hljs-attr">"inputs"</span>: {
          <span class="hljs-attr">"uri"</span>: <span class="hljs-string">"https://graph.microsoft.com/v1.0/users"</span>,
          <span class="hljs-attr">"method"</span>: <span class="hljs-string">"GET"</span>,
          <span class="hljs-attr">"authentication"</span>: {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"ActiveDirectoryOAuth"</span>,
            <span class="hljs-attr">"authority"</span>: <span class="hljs-string">"https://login.microsoftonline.com"</span>,
            <span class="hljs-attr">"tenant"</span>: <span class="hljs-string">"&lt;tenant-id&gt;"</span>,
            <span class="hljs-attr">"audience"</span>: <span class="hljs-string">"https://graph.microsoft.com"</span>,
            <span class="hljs-attr">"clientId"</span>: <span class="hljs-string">"&lt;client-id&gt;"</span>,
            <span class="hljs-attr">"secret"</span>: <span class="hljs-string">"&lt;client-secret&gt;"</span>
          }
        }
      },
      <span class="hljs-attr">"Parse_Users_JSON"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"ParseJson"</span>,
        <span class="hljs-attr">"inputs"</span>: {
          <span class="hljs-attr">"content"</span>: <span class="hljs-string">"@body('Get_all_users_from_Graph_API')"</span>,
          <span class="hljs-attr">"schema"</span>: {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
            <span class="hljs-attr">"properties"</span>: {
              <span class="hljs-attr">"value"</span>: {
                <span class="hljs-attr">"type"</span>: <span class="hljs-string">"array"</span>,
                <span class="hljs-attr">"items"</span>: {
                  <span class="hljs-attr">"type"</span>: <span class="hljs-string">"object"</span>,
                  <span class="hljs-attr">"properties"</span>: {
                    <span class="hljs-attr">"userPrincipalName"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span> },
                    <span class="hljs-attr">"id"</span>: { <span class="hljs-attr">"type"</span>: <span class="hljs-string">"string"</span> }
                  },
                  <span class="hljs-attr">"required"</span>: [ <span class="hljs-string">"userPrincipalName"</span>, <span class="hljs-string">"id"</span> ]
                }
              }
            }
          }
        },
        <span class="hljs-attr">"runAfter"</span>: { <span class="hljs-attr">"Get_all_users_from_Graph_API"</span>: [ <span class="hljs-string">"Succeeded"</span> ] }
      },
      <span class="hljs-attr">"For_each_User"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Foreach"</span>,
        <span class="hljs-attr">"foreach"</span>: <span class="hljs-string">"@outputs('Parse_Users_JSON')?['body']?['value']"</span>,
        <span class="hljs-attr">"actions"</span>: {
          <span class="hljs-attr">"If_user_is_not_external"</span>: {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"If"</span>,
            <span class="hljs-attr">"expression"</span>: {
              <span class="hljs-attr">"not"</span>: {
                <span class="hljs-attr">"contains"</span>: [
                  <span class="hljs-string">"@item()?['userPrincipalName']"</span>,
                  <span class="hljs-string">"#EXT#"</span>
                ]
              }
            },
            <span class="hljs-attr">"actions"</span>: {
              <span class="hljs-attr">"Get_User_Auth_Methods"</span>: {
                <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Http"</span>,
                <span class="hljs-attr">"inputs"</span>: {
                  <span class="hljs-attr">"uri"</span>: <span class="hljs-string">"https://graph.microsoft.com/v1.0/users/@{item()?['userPrincipalName']}/authentication/methods"</span>,
                  <span class="hljs-attr">"method"</span>: <span class="hljs-string">"GET"</span>,
                  <span class="hljs-attr">"authentication"</span>: <span class="hljs-string">"@parameters('GraphOAuth')"</span>
                }
              },
              <span class="hljs-attr">"Send_Data_to_Sentinel"</span>: {
                <span class="hljs-attr">"type"</span>: <span class="hljs-string">"ApiConnection"</span>,
                <span class="hljs-attr">"inputs"</span>: {
                  <span class="hljs-attr">"host"</span>: {
                    <span class="hljs-attr">"connection"</span>: {
                      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"@parameters('$connections')['azureloganalyticsdatacollector-1']['connectionId']"</span>
                    }
                  },
                  <span class="hljs-attr">"method"</span>: <span class="hljs-string">"post"</span>,
                  <span class="hljs-attr">"body"</span>: <span class="hljs-string">"@{body('Get_User_Auth_Methods')}"</span>,
                  <span class="hljs-attr">"headers"</span>: { <span class="hljs-attr">"Log-Type"</span>: <span class="hljs-string">"userAuthMethods"</span> },
                  <span class="hljs-attr">"path"</span>: <span class="hljs-string">"/api/logs"</span>
                },
                <span class="hljs-attr">"runAfter"</span>: { <span class="hljs-attr">"Get_User_Auth_Methods"</span>: [ <span class="hljs-string">"Succeeded"</span> ] }
              }
            }
          }
        },
        <span class="hljs-attr">"runAfter"</span>: { <span class="hljs-attr">"Parse_Users_JSON"</span>: [ <span class="hljs-string">"Succeeded"</span> ] }
      }
    }
  }
}
</code></pre>
<hr />
<h1 id="heading-querying-data-in-sentinel">Querying Data in Sentinel.</h1>
<p>Once ingested, your data lives in <code>userAuthMethods_CL</code> and can be queried using KQL.</p>
<h3 id="heading-inventory-of-fido2-models">Inventory of FIDO2 models:</h3>
<pre><code class="lang-sql">userAuthMethods_CL
| mv-expand todynamic(value_s)
| where value_s["@odata.type"] == <span class="hljs-string">"#microsoft.graph.fido2AuthenticationMethod"</span>
| summarize count() by Model = tostring(value_s.model)
</code></pre>
<h3 id="heading-summarise-users-by-authentication-method">Summarise users by authentication method:</h3>
<pre><code class="lang-sql">userAuthMethods_CL
| mv-expand todynamic(value_s)
| extend Method = tostring(value_s["@odata.type"])
| summarize Users = count() by Method
| order by Users desc
</code></pre>
<hr />
<h1 id="heading-benefits-of-the-logic-app-approach">Benefits of the Logic App Approach.</h1>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Entra Portal</td><td>AuditLogs / IdentityInfo</td><td>Azure Logic App + Graph</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Detail</strong></td><td>Masked summary</td><td>Event-based only</td><td>Full, unmasked data</td></tr>
<tr>
<td><strong>Historical Tracking</strong></td><td>None</td><td>Partial (events only)</td><td>Continuous timeline</td></tr>
<tr>
<td><strong>Scalability</strong></td><td>Manual</td><td>Limited</td><td>Automated, scalable</td></tr>
<tr>
<td><strong>Integration</strong></td><td>GUI only</td><td>Internal</td><td>Sentinel-native</td></tr>
<tr>
<td><strong>Security</strong></td><td>Role-bound</td><td>Internal</td><td>App-scoped Graph permissions</td></tr>
</tbody>
</table>
</div><p>This approach provides a complete and auditable dataset suitable for SOC analytics, MFA coverage validation, and compliance evidence for standards such as ISO 27001, Cyber Essentials Plus, and NCSC Cloud Security Principles.</p>
<hr />
<h1 id="heading-final-thoughts">Final Thoughts.</h1>
<p>Native Entra reporting is management-level: useful for checking individual users, but not for enterprise-scale auditing. By querying Graph through an automated Logic App and storing the data in Microsoft Sentinel, you create a long-term, analytical record of MFA registrations across your organisation.</p>
<p>This approach turns transient configuration data into an auditable evidence trail, exactly what compliance frameworks and SOC teams need to demonstrate effective MFA enforcement.</p>
]]></content:encoded></item><item><title><![CDATA[Automating Azure NSG Inbound Rules for Dynamic IPs Using DDNS and PowerShell Runbooks]]></title><description><![CDATA[Overview.
Managing inbound access to Azure resources through Network Security Groups (NSGs) is a standard best practice. NSGs define which IP addresses or ranges are permitted to communicate with your services, providing an essential layer of control...]]></description><link>https://blog.cdoherty.co.uk/automating-azure-nsg-inbound-rules-for-dynamic-ips-using-ddns-and-powershell-runbooks</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/automating-azure-nsg-inbound-rules-for-dynamic-ips-using-ddns-and-powershell-runbooks</guid><category><![CDATA[Azure]]></category><category><![CDATA[azure runbook]]></category><category><![CDATA[runbooks]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[#microsoft-azure]]></category><category><![CDATA[dns]]></category><category><![CDATA[dynamic dns]]></category><category><![CDATA[Powershell]]></category><category><![CDATA[ip address]]></category><category><![CDATA[Network Security Group]]></category><category><![CDATA[NSG]]></category><category><![CDATA[Azure NSG]]></category><category><![CDATA[automation]]></category><category><![CDATA[automation testing ]]></category><category><![CDATA[vpn]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Thu, 16 Oct 2025 17:51:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760829837139/181807e5-a53e-4880-9ab5-b957ceffb499.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-overview">Overview.</h1>
<p>Managing inbound access to Azure resources through Network Security Groups (NSGs) is a standard best practice. NSGs define which IP addresses or ranges are permitted to communicate with your services, providing an essential layer of control for environments hosting management interfaces, firewalls, or Azure Virtual Desktop infrastructure.</p>
<p>The challenge arises when the authorised source WAN IP address changes frequently. This scenario is common for environments using DDNS, dynamic VPN gateways, partner networks, or externally managed endpoints.</p>
<p>Manually updating NSG rules to reflect a new source IP is both inefficient and prone to error.</p>
<p>In this article, I demonstrate how to automate NSG inbound security rule updates dynamically using a DNS hostname (for example, from Dynamic DNS), PowerShell, and Azure Automation. The result is a secure, credential-free automation that keeps your NSG rules continuously aligned with the current source IP address.</p>
<hr />
<h1 id="heading-understanding-the-problem">Understanding The Problem.</h1>
<p>NSGs in Microsoft Azure operate at Layer 3 and Layer 4 of the network stack, enforcing allow or deny rules based on parameters such as source, destination, port, and protocol.</p>
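<p>Conceptually, this evaluation is a first-match check in ascending priority order. The sketch below illustrates the idea in Python; the rule shape and field names are this sketch's own simplification, not Azure's API:</p>
<pre><code class="lang-python"># Illustrative sketch of NSG-style rule evaluation: the lowest
# priority number is checked first, and the first match wins.
def evaluate(rules, source_ip, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        source_ok = rule["source"] in ("*", source_ip)
        port_ok = rule["port"] in ("*", port)
        if source_ok and port_ok:
            return rule["access"]
    return "Deny"  # implicit deny when no rule matches

rules = [
    {"priority": 100, "source": "203.0.113.10", "port": 443, "access": "Allow"},
    {"priority": 4096, "source": "*", "port": "*", "access": "Deny"},
]
</code></pre>
<p>Once the first rule is scoped to a single trusted IP, any change to that IP silently breaks access — which is precisely the operational problem this article automates away.</p>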
<p>When the source address of a trusted endpoint changes, the corresponding rule must be updated to maintain connectivity.</p>
<p>For example, a VPN concentrator or external network may obtain a new public IP every few hours. Leaving the NSG source set to <em>Any</em> is insecure, yet manually updating the rule every time the IP changes is impractical.</p>
<p>As shown below, Azure NSG inbound security rules do not support Fully Qualified Domain Names (FQDNs) or hostnames. Only IP addresses or CIDR-based ranges can be used when defining the source or destination of a rule.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760632555237/ad4ab02f-9d8b-4ba0-b9c8-1b6090f600ed.png" alt class="image--center mx-auto" /></p>
<p>To handle frequently changing source addresses, Azure Automation can be used to periodically resolve a DNS record representing the authorised source and automatically update the NSG rule with the resolved IP.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<p>Before implementing this solution, ensure the following requirements are met:</p>
<ol>
<li><p>Azure Subscription: Sufficient permissions to create resources and assign roles.</p>
</li>
<li><p>Existing Network Security Group (NSG): Containing an inbound security rule that requires dynamic source IP management.</p>
</li>
<li><p>DNS Hostname: A resolvable DNS record (for example, a DDNS hostname) that maps to the current public IP of the authorised network. I use a service called <a target="_blank" href="https://www.noip.com/">No-IP</a> for this.</p>
</li>
</ol>
<p>We will create:</p>
<ul>
<li><p>An Azure Automation account.</p>
</li>
<li><p>A system-assigned managed identity.</p>
</li>
<li><p>A runbook, linked to a schedule.</p>
</li>
</ul>
<hr />
<h1 id="heading-solution-overview">Solution Overview.</h1>
<p>The solution uses an Azure Automation Runbook to maintain NSG rules dynamically.</p>
<p>Your DNS hostname (for example, <a target="_blank" href="http://cdoherty.ddns.net"><code>mynetwork.ddns.net</code></a>) resolves to the current public IP of the trusted source. This could represent an office or home network that you access over VPN (for instance OpenVPN).</p>
<p>The Runbook executes a PowerShell script that:</p>
<ul>
<li><p>Resolves the hostname to its current IP address.</p>
</li>
<li><p>Retrieves the NSG configuration.</p>
</li>
<li><p>Compares the existing rule source to the resolved IP.</p>
</li>
<li><p>Updates the rule only when a change is detected.</p>
</li>
</ul>
<p>The Runbook authenticates to Azure using its Managed Identity, eliminating the need for credentials or secrets. This ensures that NSG rules remain accurate, secure, and automatically synchronised with DNS resolution.</p>
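<p>The runbook's core decision — resolve, compare, update only on change — can be sketched as follows. This is a minimal Python illustration of the logic only; the production implementation is the PowerShell script in Step 3:</p>
<pre><code class="lang-python">import socket

def current_ddns_ip(hostname):
    # Resolve the DDNS hostname to its current IPv4 address.
    return socket.gethostbyname(hostname)

def plan_update(current_source, resolved_ip):
    # Return the new source only when it differs from the rule's
    # current value, so the NSG is rewritten only on real changes.
    if current_source == resolved_ip:
        return None
    return resolved_ip
</code></pre>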
<hr />
<h1 id="heading-step-15-creating-the-azure-automation-account">Step 1/5: Creating The Azure Automation Account.</h1>
<ol>
<li><p>In the <a target="_blank" href="https://portal.azure.com/#browse/Microsoft.Automation%2FAutomationAccounts">Azure Portal</a>, navigate to Automation Accounts → Create.</p>
</li>
<li><p>On the Basics tab:</p>
<ol>
<li><p>Specify an Azure Subscription, and the resource group containing the NSG.</p>
</li>
<li><p>Enter a name for the Automation Account.</p>
</li>
<li><p>Choose the same region as your NSG.</p>
</li>
</ol>
</li>
<li><p>On the Advanced tab:</p>
<ol>
<li>Keep the managed identity selection as System assigned.</li>
</ol>
</li>
<li><p>On the Networking tab:</p>
<ol>
<li>Keep the connectivity configuration as Public access.</li>
</ol>
</li>
<li><p>Select Review + Create → Create.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760633972218/2e578f62-e6af-4f7d-95d2-6340d09a77d7.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-step-25-configuring-permissions">Step 2/5: Configuring Permissions.</h1>
<ol>
<li><p>Once the Automation Account has been created:</p>
</li>
<li><p>Navigate to your NSG.</p>
<ol>
<li>In the Azure Portal, select the search bar and search for <code>Network security groups</code> and select your NSG.</li>
</ol>
</li>
<li><p>Select Access control (IAM) → Add → Add role assignment.</p>
</li>
<li><p>On the Role tab:</p>
<ol>
<li>Search for and select <code>Network Contributor</code> → Next.</li>
</ol>
</li>
<li><p>On the Members tab:</p>
<ol>
<li><p>Choose Managed Identity → Select members → Automation Account → and select the name of the automation account that you created in step 1.</p>
</li>
<li><p>Select Review + Assign to complete the assignment.</p>
</li>
</ol>
</li>
</ol>
<p>This grants the Automation Account permission to modify NSG rules using least-privilege access.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760634118549/34362462-17aa-4f56-bb8e-cbb12492ebf8.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-step-35-creating-the-powershell-runbook">Step 3/5: Creating The PowerShell Runbook.</h1>
<p>Below is the production-ready PowerShell script that performs the dynamic update.</p>
<p>It authenticates using a Managed Identity, resolves the DNS hostname, retrieves the NSG inbound security rule, and applies changes only when the source IP differs.</p>
<pre><code class="lang-powershell"><span class="hljs-keyword">param</span>(
    [<span class="hljs-built_in">string</span>]<span class="hljs-variable">$ResourceGroupName</span> = <span class="hljs-string">"PutYourResourceGroupNameHere"</span>,
    [<span class="hljs-built_in">string</span>]<span class="hljs-variable">$NsgName</span> = <span class="hljs-string">"PutYourNetworkSecurityGroupNameHere"</span>,
    [<span class="hljs-built_in">string</span>]<span class="hljs-variable">$RuleName</span> = <span class="hljs-string">"PutYourRuleNameHere"</span>,
    [<span class="hljs-built_in">string</span>]<span class="hljs-variable">$DnsName</span> = <span class="hljs-string">"PutYourDDNSorFQDNHere"</span>
)

<span class="hljs-comment"># Log start</span>
<span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Starting NSG update process for <span class="hljs-variable">$DnsName</span>"</span>

<span class="hljs-comment"># Authenticate with the system-assigned managed identity</span>
<span class="hljs-built_in">Connect-AzAccount</span> <span class="hljs-literal">-Identity</span>

<span class="hljs-comment"># Resolve DDNS hostname to a single IP (take the first A record if several are returned)</span>
<span class="hljs-keyword">try</span> {
    <span class="hljs-variable">$resolvedIP</span> = (<span class="hljs-built_in">Resolve-DnsName</span> <span class="hljs-literal">-Name</span> <span class="hljs-variable">$DnsName</span> <span class="hljs-literal">-Type</span> A).IPAddress | <span class="hljs-built_in">Select-Object</span> <span class="hljs-literal">-First</span> <span class="hljs-number">1</span>
    <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Resolved IP: <span class="hljs-variable">$resolvedIP</span>"</span>
} <span class="hljs-keyword">catch</span> {
    <span class="hljs-built_in">Write-Error</span> <span class="hljs-string">"Failed to resolve <span class="hljs-variable">$DnsName</span>"</span>
    <span class="hljs-keyword">exit</span> <span class="hljs-number">1</span>
}

<span class="hljs-comment"># Retrieve the NSG</span>
<span class="hljs-keyword">try</span> {
    <span class="hljs-variable">$nsg</span> = <span class="hljs-built_in">Get-AzNetworkSecurityGroup</span> <span class="hljs-literal">-Name</span> <span class="hljs-variable">$NsgName</span> <span class="hljs-literal">-ResourceGroupName</span> <span class="hljs-variable">$ResourceGroupName</span>
} <span class="hljs-keyword">catch</span> {
    <span class="hljs-built_in">Write-Error</span> <span class="hljs-string">"Failed to get NSG <span class="hljs-variable">$NsgName</span> from <span class="hljs-variable">$ResourceGroupName</span>"</span>
    <span class="hljs-keyword">exit</span> <span class="hljs-number">1</span>
}

<span class="hljs-comment"># Retrieve the rule</span>
<span class="hljs-variable">$rule</span> = <span class="hljs-variable">$nsg</span>.SecurityRules | <span class="hljs-built_in">Where-Object</span> { <span class="hljs-variable">$_</span>.Name <span class="hljs-operator">-eq</span> <span class="hljs-variable">$RuleName</span> }

<span class="hljs-keyword">if</span> (<span class="hljs-operator">-not</span> <span class="hljs-variable">$rule</span>) {
    <span class="hljs-built_in">Write-Error</span> <span class="hljs-string">"Rule '<span class="hljs-variable">$RuleName</span>' not found in NSG '<span class="hljs-variable">$NsgName</span>'"</span>
    <span class="hljs-keyword">exit</span> <span class="hljs-number">1</span>
}

<span class="hljs-comment"># Get current source</span>
<span class="hljs-variable">$currentSource</span> = <span class="hljs-keyword">if</span> (<span class="hljs-variable">$rule</span>.SourceAddressPrefix <span class="hljs-operator">-is</span> [<span class="hljs-type">System.Array</span>]) {
    <span class="hljs-variable">$rule</span>.SourceAddressPrefix <span class="hljs-operator">-join</span> <span class="hljs-string">","</span>
} <span class="hljs-keyword">else</span> {
    <span class="hljs-variable">$rule</span>.SourceAddressPrefix
}

<span class="hljs-comment"># Compare and update if required</span>
<span class="hljs-keyword">if</span> (<span class="hljs-variable">$currentSource</span> <span class="hljs-operator">-ne</span> <span class="hljs-variable">$resolvedIP</span>) {
    <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Updating rule '<span class="hljs-variable">$RuleName</span>' SourceAddressPrefix from '<span class="hljs-variable">$currentSource</span>' to '<span class="hljs-variable">$resolvedIP</span>'"</span>

    <span class="hljs-keyword">try</span> {
        <span class="hljs-comment"># Properly modify the NSG rule</span>
        <span class="hljs-variable">$nsg</span> | <span class="hljs-built_in">Set-AzNetworkSecurityRuleConfig</span> `
            <span class="hljs-literal">-Name</span> <span class="hljs-variable">$RuleName</span> `
            <span class="hljs-literal">-Protocol</span> <span class="hljs-variable">$rule</span>.Protocol `
            <span class="hljs-literal">-Direction</span> <span class="hljs-variable">$rule</span>.Direction `
            <span class="hljs-literal">-Priority</span> <span class="hljs-variable">$rule</span>.Priority `
            <span class="hljs-literal">-SourceAddressPrefix</span> <span class="hljs-variable">$resolvedIP</span> `
            <span class="hljs-literal">-SourcePortRange</span> <span class="hljs-variable">$rule</span>.SourcePortRange `
            <span class="hljs-literal">-DestinationAddressPrefix</span> <span class="hljs-variable">$rule</span>.DestinationAddressPrefix `
            <span class="hljs-literal">-DestinationPortRange</span> <span class="hljs-variable">$rule</span>.DestinationPortRange `
            <span class="hljs-literal">-Access</span> <span class="hljs-variable">$rule</span>.Access

        <span class="hljs-comment"># Commit the update to Azure</span>
        <span class="hljs-built_in">Set-AzNetworkSecurityGroup</span> <span class="hljs-literal">-NetworkSecurityGroup</span> <span class="hljs-variable">$nsg</span>
        <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"Update complete. Rule '<span class="hljs-variable">$RuleName</span>' now allows only <span class="hljs-variable">$resolvedIP</span>."</span>
    } <span class="hljs-keyword">catch</span> {
        <span class="hljs-built_in">Write-Error</span> <span class="hljs-string">"Failed to update rule '<span class="hljs-variable">$RuleName</span>': <span class="hljs-variable">$_</span>"</span>
        <span class="hljs-keyword">exit</span> <span class="hljs-number">1</span>
    }
} <span class="hljs-keyword">else</span> {
    <span class="hljs-built_in">Write-Output</span> <span class="hljs-string">"No change detected. Current IP already matches."</span>
}
</code></pre>
<blockquote>
<p><strong><mark>Ensure you replace the placeholder variables (</mark></strong><code>$ResourceGroupName</code><strong><mark>, </mark></strong> <code>$NsgName</code><strong><mark>, </mark></strong> <code>$RuleName</code><strong><mark>, </mark></strong> <code>$DnsName</code><strong><mark>) at the top of the script.</mark></strong></p>
</blockquote>
<hr />
<h1 id="heading-step-45-publishing-the-automation-runbook">Step 4/5: Publishing The Automation Runbook.</h1>
<ol>
<li><p>In the Azure portal, navigate to Automation Accounts → Process Automation → Runbooks.</p>
</li>
<li><p>Select Create a Runbook.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760634344873/0444337d-3068-4dc3-87cd-ae002595b93e.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Provide a name, and choose PowerShell as the Runbook type.</p>
</li>
<li><p>Choose 7.2 (or later) as the Runtime version.</p>
</li>
<li><p>Select Review + Create → Create.</p>
</li>
<li><p>Then, open the runbook, and select Edit → Edit in portal.</p>
</li>
<li><p>Paste the above PowerShell code, and change the variables as highlighted previously.</p>
</li>
<li><p>Choose Save, and then select Test in pane.</p>
</li>
</ol>
<p>If your inbound security rule currently has its source set to <strong><em>Any</em></strong>, a successful test should replace it with the resolved IP address.</p>
<p>Example result:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760634813128/28266e36-b6b4-4b5e-9413-c8f123a37a68.png" alt class="image--center mx-auto" /></p>
<p>Example output:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760635649568/62c39ca9-d286-4833-ad20-de8cacbd4025.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-powershell">Completed
Starting NSG update <span class="hljs-keyword">process</span> <span class="hljs-keyword">for</span> [<span class="hljs-type">REDACTED</span>].ddns.net

Environments                                                                                           Context
------------                                                                                           -------
{[<span class="hljs-type">AzureCloud</span>, <span class="hljs-type">AzureCloud</span>], [<span class="hljs-type">AzureUSGovernment</span>, <span class="hljs-type">AzureUSGovernment</span>], [<span class="hljs-type">AzureChinaCloud</span>, <span class="hljs-type">AzureChinaCloud</span>]} Microsoft.Azure.…

Resolved IP: [<span class="hljs-type">REDACTED</span>]
Resolved IP: [<span class="hljs-type">REDACTED</span>]
No change detected. Current IP already matches.
</code></pre>
<p>Once verified, publish the Runbook.</p>
<hr />
<h1 id="heading-step-55-scheduling-the-runbook">Step 5/5: Scheduling The Runbook.</h1>
<p>Finally, schedule the Runbook to ensure it runs automatically. In this example, I will schedule the runbook to run every hour.</p>
<ol>
<li>Ensure that your runbook is in a published state:</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760635111179/af146818-d5e5-4f6d-a52f-574a5752cff9.png" alt class="image--center mx-auto" /></p>
<ol start="2">
<li><p>Select the runbook → Link to schedule → Schedule (Link a schedule to your runbook).</p>
</li>
<li><p>Choose <code>Add a schedule</code>.</p>
</li>
<li><p>Name the schedule.</p>
</li>
<li><p>Set the start date.</p>
</li>
<li><p>Select <code>Recurring</code>.</p>
</li>
<li><p>Set the recurrence to every 1 hour.</p>
</li>
<li><p>Set the expiration (if needed).</p>
</li>
<li><p>Select Create.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760635390557/e572c31d-4bba-41fc-8c21-3c74f507d6b1.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-verification">Verification.</h1>
<p>After the first scheduled runbook execution, check the NSG in the Azure Portal under Inbound security rules. You should see that the rule’s <code>Source</code> field reflects the current IP resolved from your DNS hostname.</p>
<p>You will also see the scheduled run in the Overview page, under recent jobs.</p>
<hr />
<h1 id="heading-example-scenario">Example Scenario.</h1>
<p>To illustrate this automation in practice, consider the following real-world use case.</p>
<p>I have deployed a Sophos Firewall (SFOS) virtual appliance in Microsoft Azure. This appliance provides a management interface, a User Portal, and a VPN Portal, typically accessible on TCP ports 443, 4443, and 4444 by default. While these portals can be essential for administrative and remote user access, exposing them directly to the public internet is not recommended.</p>
<p>From a security and compliance standpoint, unrestricted WAN exposure conflicts with:</p>
<ul>
<li><p>Cyber Essentials requirements for minimising externally exposed management interfaces.</p>
</li>
<li><p>NCSC Cloud Security Principles, which advise strict control over administrative interfaces.</p>
</li>
<li><p>NIST SP 800-41 and 800-125 guidelines, which recommend that VPN and firewall management endpoints are not accessible from untrusted networks.</p>
</li>
</ul>
<p>In this scenario, I want to securely manage and access the Sophos Firewall without exposing the User or VPN portals to the WAN.</p>
<p>However, my office network connection uses a dynamic public IP address provided by my ISP. This means the IP changes periodically, breaking any static NSG rule that restricts inbound access.</p>
<p>To address this, I use a Dynamic DNS (DDNS) hostname, for example <a target="_blank" href="http://cdoherty.ddns.net"><code>vpn.ddns.net</code></a>, which always points to my office network’s current IP address. I can connect to my network at any time using OpenVPN, but since Azure Network Security Groups do not support FQDNs or DDNS resolution natively, the NSG cannot reference <a target="_blank" href="http://cdoherty.ddns.net"><code>vpn.ddns.net</code></a> directly.</p>
<p>Instead, the Azure Automation Runbook dynamically resolves the DDNS hostname and updates the inbound NSG rule for the Sophos firewall. This ensures that:</p>
<ul>
<li><p>Only my current, trusted office IP address is permitted inbound access to the VPN and User Portals.</p>
</li>
<li><p>The firewall remains effectively non-public, in compliance with Cyber Essentials, NCSC, and NIST best practices.</p>
</li>
<li><p>The automation continuously maintains secure access without manual intervention each time my IP changes.</p>
</li>
</ul>
<p>This approach combines least-privilege network access with automation and identity-based security, removing both administrative overhead and risk exposure.</p>
<hr />
<h1 id="heading-conclusion">Conclusion.</h1>
<p>Automating NSG rule updates with Azure Automation and PowerShell provides an elegant and secure solution to managing dynamic inbound access.</p>
<p>By resolving a DNS hostname and applying the corresponding IP address to your Network Security Group rule, you eliminate the need for manual intervention while maintaining the principle of least privilege.</p>
<p>This method is lightweight, cost-effective, and entirely managed within the Azure platform. It scales easily to multiple environments, supports credential-free authentication via Managed Identity, and can be extended with monitoring, notifications, or audit logging for enterprise-grade governance.</p>
<p>Whether you are managing Azure Virtual Desktop session hosts, VPN gateways, or administrative endpoints, this approach ensures continuous and secure access control without compromising operational efficiency.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Azure Permissions: A Practical Guide]]></title><description><![CDATA[Introduction.
Understanding who can do what in Azure is one of the most common pain points for businesses adopting or scaling their cloud infrastructure. Assigning permissions might appear straightforward—until a user with “Contributor” access cannot...]]></description><link>https://blog.cdoherty.co.uk/understanding-azure-permissions-a-practical-guide</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/understanding-azure-permissions-a-practical-guide</guid><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[SecOps]]></category><category><![CDATA[zero-trust]]></category><category><![CDATA[rbac]]></category><category><![CDATA[permissions]]></category><category><![CDATA[role-based-access-control]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[#microsoft-azure]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Thu, 31 Jul 2025 21:40:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753996696212/13cac5e7-71ea-42b7-990a-71b250c78dea.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction"><strong>Introduction.</strong></h1>
<p>Understanding who can do what in Azure is one of the most common pain points for businesses adopting or scaling their cloud infrastructure. Assigning permissions might appear straightforward—until a user with “Contributor” access cannot assign a role, deploy a service, or view cost data.</p>
<p>These types of roadblocks are especially common when developers, DevOps engineers, or IT support teams are expected to build or manage infrastructure but lack the necessary access.</p>
<p>This guide was written to remove the guesswork. It covers the fundamentals of Azure Role-Based Access Control (RBAC), explains the purpose and limitations of common built-in roles, and maps those roles to specific operational tasks.</p>
<p>Whether you're managing SQL resources, deploying Function Apps, configuring managed identities, or setting policies, this article will help you understand the exact permissions needed—and the best way to assign them securely.</p>
<p>By the end of this guide, you should be able to:</p>
<ul>
<li><p>Confidently determine which Azure role is required for a given task</p>
</li>
<li><p>Avoid the pitfalls of over- or under-provisioning access</p>
</li>
<li><p>Support technical users with the correct access model from the outset</p>
</li>
</ul>
<p>This article is ideal for IT service providers, internal support teams, and cloud engineers who want to align autonomy with governance in Azure.</p>
<hr />
<h1 id="heading-azure-rbac-explained">Azure RBAC Explained.</h1>
<p>Azure Role-Based Access Control (RBAC) is Microsoft’s native system for managing access to Azure resources. It allows organisations to grant users only the permissions they need to perform specific tasks, and nothing more—supporting the principle of least privilege.</p>
<p>Each role assignment in Azure consists of three parts:</p>
<ul>
<li><p><strong>Security Principal</strong>: The user, group, or service principal being granted access.</p>
</li>
<li><p><strong>Role Definition</strong>: A predefined or custom set of permissions (e.g. Reader, Contributor, Owner).</p>
</li>
<li><p><strong>Scope</strong>: The level at which the permissions apply—this can be a subscription, resource group, or specific resource.</p>
</li>
</ul>
<p>Understanding this model is essential to managing Azure securely and effectively. A user might have the right role but at the wrong scope, leading to frustrating permission errors.</p>
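<p>The three-part model, and why scope matters, can be sketched as a toy Python model. The principal, role, and scope values here are made up for illustration, and the scope strings are simplified from real Azure resource IDs:</p>
<pre><code class="lang-python">def covers(assignment_scope, resource_id):
    # An assignment applies at its own scope and everything beneath it;
    # the trailing "/" prevents "/subscriptions/0000" accidentally
    # matching "/subscriptions/00001".
    return (resource_id == assignment_scope
            or resource_id.startswith(assignment_scope + "/"))

assignment = {
    "principal": "dev-team@example.com",  # security principal (illustrative)
    "role": "Contributor",                # role definition
    "scope": "/subscriptions/0000",       # scope
}
resource_group = "/subscriptions/0000/resourceGroups/prod-rg"
</code></pre>
<p>A role assigned at the subscription scope covers every resource group and resource beneath it, while the reverse is not true — which is why a user can hold the right role at the wrong scope and still hit permission errors.</p>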
<hr />
<h1 id="heading-key-built-in-roles-and-what-they-actually-do">Key Built-in Roles (and What They Actually Do).</h1>
<p>Microsoft Azure includes a long list of <a target="_blank" href="https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles">built-in roles</a>, but for most practical cases, the following are the most relevant:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Role</td><td>What it allows</td><td>Can assign roles?</td></tr>
</thead>
<tbody>
<tr>
<td>Reader</td><td>View resources only</td><td>No</td></tr>
<tr>
<td>Contributor</td><td>Create and manage resources (excluding IAM)</td><td>No</td></tr>
<tr>
<td>Owner</td><td>Full access to all resources, including IAM</td><td>Yes</td></tr>
<tr>
<td>User Access Administrator</td><td>Assign RBAC roles to others</td><td>Yes</td></tr>
<tr>
<td>Key Vault Administrator</td><td>Manage secrets, keys, certificates, and access policies</td><td>No</td></tr>
<tr>
<td>Application Administrator</td><td>Manage app registrations and enterprise apps</td><td>No</td></tr>
<tr>
<td>Managed Identity Contributor</td><td>Create and assign managed identities to resources</td><td>No</td></tr>
<tr>
<td>Cost Management Contributor</td><td>View and manage budget and cost alerts</td><td>No</td></tr>
<tr>
<td>Policy Contributor</td><td>Create and assign Azure Policies</td><td>No</td></tr>
</tbody>
</table>
</div><blockquote>
<p><mark>Note: Assigning a role at a </mark> <strong><mark>higher scope</mark></strong> <mark> (e.g. subscription) gives access to everything underneath it (e.g. all resource groups and resources).</mark></p>
</blockquote>
<hr />
<h1 id="heading-common-microsoft-azure-tasks-amp-the-roles-required">Common Microsoft Azure Tasks &amp; The Roles Required.</h1>
<p>The following sections explain what roles are needed for common Azure administration tasks, including those frequently encountered in service desk escalations and infrastructure deployments.</p>
<h2 id="heading-11-creating-and-managing-resource-groups">1.1 Creating and managing resource groups.</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Task</strong></td><td><strong>Required role</strong></td><td><strong>Scope</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Create, rename, or delete resource groups</td><td>Contributor</td><td>Subscription</td></tr>
<tr>
<td>Deploy resources into a group</td><td>Contributor</td><td>Resource Group</td></tr>
<tr>
<td>Assign permissions within a group</td><td>Contributor + User Access Administrator <em>or</em> Owner</td><td>Resource Group</td></tr>
</tbody>
</table>
</div><blockquote>
<p><mark>A common mistake is assuming Contributor alone allows IAM changes. It does not.</mark></p>
</blockquote>
<h2 id="heading-12-creating-and-managing-azure-subscriptions">1.2 Creating and managing Azure subscriptions.</h2>
<p>Azure subscription management takes place largely outside the Azure portal, often via the Microsoft 365 admin centre or Azure EA/partner portals.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Task</strong></td><td><strong>Required role</strong></td><td><strong>Where Set</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Create or rename a subscription</td><td>Global Administrator or Billing Administrator</td><td>Microsoft 365 Admin Portal</td></tr>
<tr>
<td>View and manage resources inside a subscription</td><td>Contributor or Owner</td><td>Azure Portal</td></tr>
<tr>
<td>Assign roles within the subscription</td><td>Owner or User Access Admin</td><td>Azure Portal</td></tr>
</tbody>
</table>
</div><blockquote>
<p>You cannot create new Azure subscriptions directly from the Azure portal without billing permissions in Microsoft 365.</p>
</blockquote>
<hr />
<h2 id="heading-13-budget-and-cost-monitoring">1.3 Budget and Cost Monitoring</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Task</strong></td><td><strong>Required role</strong></td><td><strong>Scope</strong></td></tr>
</thead>
<tbody>
<tr>
<td>View cost reports</td><td>Reader or Cost Management Reader</td><td>Subscription</td></tr>
<tr>
<td>Create budgets and alerts</td><td>Cost Management Contributor</td><td>Subscription</td></tr>
<tr>
<td>Manage cost recommendations and advisor settings</td><td>Cost Management Contributor</td><td>Subscription</td></tr>
</tbody>
</table>
</div><blockquote>
<p>The Contributor role alone does not allow access to cost tools unless explicitly granted.</p>
</blockquote>
<hr />
<h2 id="heading-14-managing-app-registrations-and-service-principals">1.4 Managing App Registrations and Service Principals</h2>
<p>Microsoft Entra ID (formerly Azure Active Directory) governs identity-related actions. Azure RBAC roles assigned in the Azure portal alone are insufficient for app registration control.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Task</strong></td><td><strong>Required role</strong></td><td><strong>Scope</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Create and manage app registrations</td><td>Application Administrator</td><td>Entra ID</td></tr>
<tr>
<td>Grant permissions to apps</td><td>Application Administrator</td><td>Entra ID</td></tr>
<tr>
<td>Manage Enterprise Applications</td><td>Application Administrator</td><td>Entra ID</td></tr>
<tr>
<td>Assign roles to service principals in Azure</td><td>User Access Administrator or Owner</td><td>Azure Resource Scope</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-15-deploying-secure-workloads-with-managed-identities">1.5 Deploying Secure Workloads with Managed Identities</h2>
<p>Managed Identities allow services to authenticate securely without passwords or connection strings.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Task</strong></td><td><strong>Required role</strong></td><td><strong>Scope</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Enable managed identity on resources</td><td>Contributor</td><td>Resource</td></tr>
<tr>
<td>Assign roles to managed identities</td><td>User Access Administrator</td><td>Target resource</td></tr>
<tr>
<td>Allow function apps or VMs to use managed identity</td><td>Contributor</td><td>Function App or VM</td></tr>
<tr>
<td>Create or manage identity role bindings (e.g. Key Vault)</td><td>Managed Identity Contributor</td><td>Vault or resource</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-16-azure-sql-servers-and-databases">1.6 Azure SQL Servers and Databases</h2>
<p>Azure SQL requires both RBAC and SQL-specific permissions. Some access must be configured within SQL itself.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Task</strong></td><td><strong>Required role</strong></td><td><strong>Scope</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Deploy SQL servers and DBs</td><td>Contributor</td><td>Resource Group</td></tr>
<tr>
<td>Configure networking, firewall rules</td><td>Contributor</td><td>SQL Server</td></tr>
<tr>
<td>Enable Azure AD authentication</td><td>SQL Active Directory Admin</td><td>SQL Server</td></tr>
<tr>
<td>Grant database access via Entra ID</td><td>SQL Admin (internal) + Entra roles</td><td>SQL Server / Entra ID</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-17-key-vaults-and-secrets">1.7 Key Vaults and Secrets</h2>
<p>Key Vault is highly secured. Access must be explicitly granted, regardless of Contributor rights.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Task</strong></td><td><strong>Required role</strong></td><td><strong>Scope</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Create and delete Key Vaults</td><td>Contributor</td><td>Resource Group</td></tr>
<tr>
<td>View or modify secrets</td><td>Key Vault Secrets User (read values) or Key Vault Administrator</td><td>Key Vault</td></tr>
<tr>
<td>Assign access to identities</td><td>Key Vault Administrator</td><td>Key Vault</td></tr>
<tr>
<td>Link Vaults with apps securely</td><td>Contributor (on app) + Role assignment on Vault</td><td>App and Key Vault</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-18-azure-function-apps-and-automation-workloads">1.8 Azure Function Apps and Automation Workloads</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Task</strong></td><td><strong>Required role</strong></td><td><strong>Scope</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Create or delete Function Apps</td><td>Contributor</td><td>Resource Group</td></tr>
<tr>
<td>Assign identity for automation</td><td>Contributor</td><td>Function App</td></tr>
<tr>
<td>Connect app to Key Vault securely</td><td>Contributor + User Access Admin (on Vault)</td><td>App + Vault</td></tr>
<tr>
<td>Deploy from GitHub using managed identity</td><td>Application Admin (Entra) + Contributor</td><td>Azure and Entra</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-19-creating-and-applying-azure-policy">1.9 Creating and Applying Azure Policy</h2>
<p>Policy allows organisations to enforce compliance controls across resources.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Task</strong></td><td><strong>Required role</strong></td><td><strong>Scope</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Create custom policies</td><td>Policy Contributor</td><td>Subscription or Management Group</td></tr>
<tr>
<td>Assign initiatives (policy sets)</td><td>Policy Contributor or Owner</td><td>Subscription</td></tr>
<tr>
<td>View policy compliance</td><td>Reader or Policy Reader</td><td>Subscription or below</td></tr>
</tbody>
</table>
</div><blockquote>
<p><mark>A Contributor cannot create or assign policies unless explicitly granted the Policy Contributor role.</mark></p>
</blockquote>
<hr />
<h1 id="heading-summary-suggested-role-set-for-a-cloud-engineer">Summary: Suggested Role Set For A Cloud Engineer.</h1>
<p>For someone building and managing Azure infrastructure autonomously (e.g. a DevOps or Lead Software Engineer), the following roles should be considered:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Scope</strong></td><td><strong>Recommended roles</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Subscription</strong></td><td>Owner <em>or</em> Contributor + User Access Administrator</td></tr>
<tr>
<td><strong>Entra ID</strong></td><td>Application Administrator</td></tr>
<tr>
<td><strong>Key Vaults</strong></td><td>Key Vault Administrator</td></tr>
<tr>
<td><strong>Budgets &amp; Costs</strong></td><td>Cost Management Contributor</td></tr>
<tr>
<td><strong>SQL Servers</strong></td><td>Contributor + SQL AD Admin (set in SQL)</td></tr>
<tr>
<td><strong>Function Apps</strong></td><td>Contributor</td></tr>
<tr>
<td><strong>Automation &amp; Identity</strong></td><td>Managed Identity Contributor</td></tr>
<tr>
<td><strong>Policy Management</strong></td><td>Policy Contributor</td></tr>
</tbody>
</table>
</div><p>If autonomy must be balanced with oversight, use Privileged Identity Management (PIM) to grant time-limited access to elevated roles.</p>
<hr />
<h1 id="heading-custom-roles-amp-security-groups-in-azure">Custom Roles &amp; Security Groups In Azure.</h1>
<p>In some scenarios, the built-in roles provided by Microsoft do not precisely match the requirements of your organisation. For example, you may want to grant a user the ability to manage storage accounts but not virtual networks, or allow access to Cost Management tools without exposing billing settings.</p>
<h2 id="heading-11-custom-roles">1.1 Custom Roles.</h2>
<p>Custom roles allow you to define a permission set tailored to your organisation’s needs. They are created using Azure Resource Manager and consist of a list of allowed actions, denied actions, and the scope to which they apply.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Use case</td><td>Example</td></tr>
</thead>
<tbody>
<tr>
<td>A developer needs to manage Function Apps but not delete them</td><td>Create a custom role without <code>Microsoft.Web/sites/delete</code></td></tr>
<tr>
<td>An auditor needs read-only access to all resources plus cost data</td><td>Base the role on Reader + add <code>Microsoft.CostManagement/*/read</code></td></tr>
<tr>
<td>A third party should manage storage but not modify networking or IAM</td><td>Grant storage actions only, and deny network/IAM actions</td></tr>
</tbody>
</table>
</div><blockquote>
<p><mark>Custom roles require planning and testing but can significantly reduce over-permissioning.<br />Learn more: </mark> <a target="_blank" href="https://learn.microsoft.com/en-us/azure/role-based-access-control/custom-roles-portal"><mark>Create or update Azure custom roles</mark></a></p>
</blockquote>
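<p>As a concrete sketch of the first row in the table above: a custom role is just a JSON document listing allowed actions, excluded actions, and assignable scopes. The role name, action list, and subscription ID below are illustrative placeholders, not a tested production role; saved to a file, a definition like this can be created with <code>az role definition create --role-definition &lt;file&gt;</code>.</p>

```python
import json

# Illustrative custom role: manage Function Apps but never delete them.
# The name, action list, and subscription ID are placeholders.
custom_role = {
    "Name": "Function App Operator (No Delete)",
    "IsCustom": True,
    "Description": "Manage Function Apps without permission to delete them.",
    "Actions": [
        "Microsoft.Web/sites/read",
        "Microsoft.Web/sites/write",
        "Microsoft.Web/sites/restart/action",
    ],
    "NotActions": [
        "Microsoft.Web/sites/delete",  # the one action explicitly withheld
    ],
    "AssignableScopes": [
        "/subscriptions/00000000-0000-0000-0000-000000000000",
    ],
}

print(json.dumps(custom_role, indent=2))
```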
<h2 id="heading-12-security-groups-for-role-assignments">1.2 Security Groups for Role Assignments.</h2>
<p>Rather than assigning roles directly to individuals, it is best practice to assign RBAC roles to Microsoft Entra security groups. This offers several advantages:</p>
<ul>
<li><p>Scalability: Assign a role once to the group, and it applies to all members.</p>
</li>
<li><p>Auditability: Easily see who inherits access via group membership.</p>
</li>
<li><p>Lifecycle management: Use Entra dynamic groups or HR-connected provisioning to manage group population.</p>
</li>
</ul>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Group type</td><td>Use case</td></tr>
</thead>
<tbody>
<tr>
<td>Static security group</td><td>“Cloud Engineers - UK” with direct membership</td></tr>
<tr>
<td>Dynamic group</td><td>“All users with job title = ‘Software Engineer’”</td></tr>
<tr>
<td>Nested group</td><td>For complex structures (e.g. team leads, contractors)</td></tr>
</tbody>
</table>
</div><p>Security groups can be used both in Azure RBAC and Entra roles (e.g. Application Administrator), making them a key part of scalable access control.</p>
<blockquote>
<p><mark>Tip: Assigning roles to groups makes permissions easier to audit and rotate when staff change roles.</mark></p>
</blockquote>
<hr />
<h1 id="heading-final-thoughts">Final Thoughts.</h1>
<p>Azure permissions are powerful, but when misconfigured, either by selecting the wrong role or applying it at the wrong scope, they can introduce serious barriers to progress.</p>
<p>Poorly assigned access can delay deployments, create security gaps, undermine governance, and prevent critical infrastructure from being managed effectively.</p>
<p>Avoid defaulting to "Contributor" or "Owner" for all users. Instead, use targeted combinations of roles like <strong>Contributor + User Access Administrator</strong>, <strong>Application Administrator</strong>, and <strong>Key Vault Administrator</strong> to provide operational access without compromising governance.</p>
<p>As teams scale, consider formalising a <strong>permissions baseline</strong> per role (e.g. Developer, DevOps, Analyst), and use Azure PIM to support just-in-time access for high-risk actions.</p>
]]></content:encoded></item><item><title><![CDATA[Monitoring DNS Record Changes in Microsoft Sentinel]]></title><description><![CDATA[Overview.
DNS record manipulation is a subtle yet powerful technique often used by threat actors to intercept communication, reroute authentication flows, or disrupt services. Despite its impact, DNS change monitoring is often overlooked in security ...]]></description><link>https://blog.cdoherty.co.uk/monitoring-dns-record-changes-in-microsoft-sentinel</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/monitoring-dns-record-changes-in-microsoft-sentinel</guid><category><![CDATA[email spoofing]]></category><category><![CDATA[dns]]></category><category><![CDATA[Domain Name System]]></category><category><![CDATA[Microsoft Sentinel]]></category><category><![CDATA[scattered spider]]></category><category><![CDATA[DNS over https]]></category><category><![CDATA[email security]]></category><category><![CDATA[Azure Logic Apps]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[#microsoft-azure]]></category><category><![CDATA[DOH]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Fri, 11 Jul 2025 12:48:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752181946736/4e856200-023b-4439-8cbc-cb11338ae5c8.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-overview">Overview.</h1>
<p>DNS record manipulation is a subtle yet powerful technique often used by threat actors to intercept communication, reroute authentication flows, or disrupt services. Despite its impact, DNS change monitoring is often overlooked in security monitoring strategies.</p>
<p>A recent report on <a target="_blank" href="https://risky.biz/RB793/">Risky Business</a> covered attacks by a group exhibiting TTPs similar to Scattered Spider's, in which MX records for a target's domain were altered to redirect inbound email. Controlling a victim's inbound email makes it easy to compromise other platforms and accounts (for example, via email-based password resets), and it inhibits recovery because the victim can no longer receive their own emails.</p>
<p>This article demonstrates how to implement DNS record change detection in Microsoft Sentinel using Azure Logic Apps and DNS-over-HTTPS (DoH). The approach enables visibility into unauthorised or accidental changes to DNS records, such as MX, A, or TXT entries, which could have significant security implications.</p>
<p>The solution requires no third-party agents or premium tools—only Azure-native components and a free DoH service. A sample Logic App template and KQL detection rule are provided to help you deploy the solution in minutes.</p>
<hr />
<h1 id="heading-why-monitor-dns-records">Why Monitor DNS Records?</h1>
<p>Attackers can exploit DNS by:</p>
<ul>
<li><p>Changing MX records to hijack emails.</p>
</li>
<li><p>Altering SPF or TXT records to spoof emails.</p>
</li>
<li><p>Modifying A records to redirect services.</p>
</li>
</ul>
<p>Unless you're actively monitoring DNS, these changes can go unnoticed until it's too late. Microsoft Sentinel doesn't provide DNS record change monitoring natively, so we're building a custom detection method.</p>
<p>Some monitoring platforms (such as <a target="_blank" href="https://www.paessler.com/manuals/prtg/dns_v2_sensor">PRTG</a>) allow for monitoring of specific DNS records and alerting on changes. However, for organisations without this functionality available, a dedicated Azure-based alternative is useful.</p>
<p>This monitoring solution is based on an Azure Logic App which sends details of DNS records into Sentinel where an analytics rule can monitor for changes.</p>
<p>If possible I'd also suggest taking audit logs from the DNS hosting platform to look for these kinds of changes, but some platforms don't provide such functionality.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites.</h1>
<ul>
<li><p>Microsoft Sentinel connected to a Log Analytics workspace.</p>
</li>
<li><p>Permissions to create Logic Apps and deploy templates.</p>
</li>
<li><p>A public DoH resolver like Cloudflare.</p>
</li>
<li><p>At least one domain name you want to monitor.</p>
</li>
</ul>
<hr />
<h1 id="heading-deploy-the-azure-logic-app-to-collect-dns-records">Deploy The Azure Logic App To Collect DNS Records.</h1>
<p>This Logic App periodically queries DNS records for your domain via DNS-over-HTTPS (DoH) using Cloudflare's free service, then forwards the data to Microsoft Sentinel via the Log Analytics Data Collector API.</p>
<h2 id="heading-create-the-logic-app">Create the Logic App.</h2>
<ol>
<li><p>In the Azure portal, search for Logic Apps and create a new Standard Logic App.</p>
</li>
<li><p>Assign it to the same resource group as your Sentinel workspace for ease of management.</p>
</li>
<li><p>Choose the Region closest to your resources, preferably matching the Sentinel workspace location to minimise latency.</p>
</li>
</ol>
<h2 id="heading-configure-variables">Configure Variables.</h2>
<ol>
<li><p>Add an <strong>Initialise Variable</strong> action named <code>recordTypes</code> (type: Array).</p>
<ol>
<li>Set the value to <code>["A", "MX", "TXT"]</code> or include any additional record types you wish to track.</li>
</ol>
</li>
<li><p>Optionally, create another variable named <code>domains</code> with your domain list, for example: <code>["domain.co.uk"]</code>.</p>
</li>
</ol>
<h2 id="heading-add-a-recurrence-trigger">Add a Recurrence Trigger.</h2>
<ol>
<li>Configure the Logic App to run on a defined schedule, for example, every <strong>5 minutes</strong>, depending on how frequently you wish to monitor changes.</li>
</ol>
<h2 id="heading-loop-through-record-types">Loop Through Record Types.</h2>
<p>Within the Logic App, create a loop to iterate through each DNS record type stored in the <code>recordTypes</code> array.</p>
<p>For every type, the Logic App will query Cloudflare’s DNS-over-HTTPS endpoint, interpret the results, and forward the findings to Microsoft Sentinel.</p>
<p>The loop begins by making an HTTP GET request to Cloudflare’s API using the following structure:</p>
<pre><code class="lang-powershell">https://cloudflare<span class="hljs-literal">-dns</span>.com/dns<span class="hljs-literal">-query</span>?name=domain.co.uk&amp;<span class="hljs-built_in">type</span>=<span class="hljs-selector-tag">@</span>{items(<span class="hljs-string">'For_each_recordType'</span>)}
</code></pre>
<p>The request header must specify that the response should be in DNS JSON format:</p>
<pre><code class="lang-powershell">accept: application/dns<span class="hljs-literal">-json</span>
</code></pre>
<p>Once the request completes, the response can be parsed. In most Logic App environments, the HTTP connector automatically returns JSON data, but if your configuration encodes it in base64, you can decode it using a Compose action with this expression:</p>
<pre><code class="lang-powershell">@json(base64ToString(body(<span class="hljs-string">'DoH_Request'</span>)?[<span class="hljs-string">'$content'</span>]))
</code></pre>
<p>The output from this stage contains an array named <code>Answer</code>, representing the DNS responses. The next loop iterates through this array so that each record—whether it’s an A, MX, or TXT entry—is processed individually.</p>
<p>For each record, the Logic App then sends structured data to Microsoft Sentinel through the <strong>Azure Log Analytics Data Collector</strong> connector. The payload includes key details such as the domain, record type, name, TTL, and record data, along with a timestamp for correlation. A typical payload looks like this:</p>
<pre><code class="lang-powershell">{
  <span class="hljs-string">"TimeGenerated"</span>: <span class="hljs-string">"@{utcNow()}"</span>,
  <span class="hljs-string">"QueriedDomain"</span>: <span class="hljs-string">"domain.co.uk"</span>,
  <span class="hljs-string">"RecordName"</span>: <span class="hljs-string">"@{item()?['name']}"</span>,
  <span class="hljs-string">"RecordType"</span>: <span class="hljs-string">"@{items('For_each_recordType')}"</span>,
  <span class="hljs-string">"TTL"</span>: <span class="hljs-string">"@{item()?['TTL']}"</span>,
  <span class="hljs-string">"RecordData"</span>: <span class="hljs-string">"@{item()?['data']}"</span>
}
</code></pre>
<p>Make sure the request headers include:</p>
<pre><code class="lang-powershell">Log<span class="hljs-literal">-Type</span>: dnsmonitor
Content<span class="hljs-literal">-Type</span>: application/json
</code></pre>
<p>Once ingested, Sentinel automatically creates a custom log table named <strong>dnsmonitor_CL</strong>, where each row represents a single DNS record snapshot. These entries can then be queried in KQL to detect any changes or anomalies over time.</p>
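<p>To make the shape of the data concrete, the snippet below (a standalone Python sketch with a hand-written sample response, not a live Cloudflare reply) parses a DoH JSON answer and builds one payload per record, mirroring the loop above. Note that string fields sent via the Data Collector API surface in the custom table with an <code>_s</code> suffix, e.g. <code>RecordData_s</code>.</p>

```python
import json
from datetime import datetime, timezone

# Hand-written sample shaped like a Cloudflare application/dns-json response.
sample_response = json.loads("""
{
  "Status": 0,
  "Question": [{"name": "domain.co.uk", "type": 15}],
  "Answer": [
    {"name": "domain.co.uk", "type": 15, "TTL": 300, "data": "10 mail.domain.co.uk."}
  ]
}
""")

def build_payloads(doh_response, queried_domain, record_type):
    """Build one Sentinel payload per DNS answer, mirroring the Logic App loop."""
    payloads = []
    for answer in doh_response.get("Answer", []):
        payloads.append({
            "TimeGenerated": datetime.now(timezone.utc).isoformat(),
            "QueriedDomain": queried_domain,
            "RecordName": answer.get("name"),
            "RecordType": record_type,
            "TTL": answer.get("TTL"),
            "RecordData": answer.get("data"),
        })
    return payloads

payloads = build_payloads(sample_response, "domain.co.uk", "MX")
print(payloads)
```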
<hr />
<h1 id="heading-logic-app-json-definition">Logic App JSON Definition.</h1>
<p>Below is the complete Logic App JSON Definition:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"definition"</span>: {
    <span class="hljs-attr">"$schema"</span>: <span class="hljs-string">"https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#"</span>,
    <span class="hljs-attr">"contentVersion"</span>: <span class="hljs-string">"1.0.0.0"</span>,
    <span class="hljs-attr">"triggers"</span>: {
      <span class="hljs-attr">"5_Min_Timer"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Recurrence"</span>,
        <span class="hljs-attr">"recurrence"</span>: {
          <span class="hljs-attr">"interval"</span>: <span class="hljs-number">5</span>,
          <span class="hljs-attr">"frequency"</span>: <span class="hljs-string">"Minute"</span>
        }
      }
    },
    <span class="hljs-attr">"actions"</span>: {
      <span class="hljs-attr">"Set_recordtypes_variable"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"InitializeVariable"</span>,
        <span class="hljs-attr">"inputs"</span>: {
          <span class="hljs-attr">"variables"</span>: [
            {
              <span class="hljs-attr">"name"</span>: <span class="hljs-string">"recordTypes"</span>,
              <span class="hljs-attr">"type"</span>: <span class="hljs-string">"array"</span>,
              <span class="hljs-attr">"value"</span>: [
                <span class="hljs-string">"A"</span>,
                <span class="hljs-string">"MX"</span>,
                <span class="hljs-string">"TXT"</span>
              ]
            }
          ]
        },
        <span class="hljs-attr">"runAfter"</span>: {}
      },
      <span class="hljs-attr">"For_each_recordType"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Foreach"</span>,
        <span class="hljs-attr">"foreach"</span>: <span class="hljs-string">"@variables('recordTypes')"</span>,
        <span class="hljs-attr">"actions"</span>: {
          <span class="hljs-attr">"DoH_Request"</span>: {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Http"</span>,
            <span class="hljs-attr">"inputs"</span>: {
              <span class="hljs-attr">"uri"</span>: <span class="hljs-string">"https://cloudflare-dns.com/dns-query?name=yourdomain.co.uk&amp;type=@{items('For_each_recordType')}"</span>,
              <span class="hljs-attr">"method"</span>: <span class="hljs-string">"GET"</span>,
              <span class="hljs-attr">"headers"</span>: {
                <span class="hljs-attr">"accept"</span>: <span class="hljs-string">"application/dns-json"</span>
              }
            }
          },
          <span class="hljs-attr">"Decode_Response"</span>: {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Compose"</span>,
            <span class="hljs-attr">"inputs"</span>: <span class="hljs-string">"@json(base64ToString(body('DoH_Request')?['$content']))"</span>,
            <span class="hljs-attr">"runAfter"</span>: {
              <span class="hljs-attr">"DoH_Request"</span>: [
                <span class="hljs-string">"Succeeded"</span>
              ]
            }
          },
          <span class="hljs-attr">"For_each_Answer"</span>: {
            <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Foreach"</span>,
            <span class="hljs-attr">"foreach"</span>: <span class="hljs-string">"@outputs('Decode_Response')?['Answer']"</span>,
            <span class="hljs-attr">"actions"</span>: {
              <span class="hljs-attr">"Send_to_Sentinel"</span>: {
                <span class="hljs-attr">"type"</span>: <span class="hljs-string">"ApiConnection"</span>,
                <span class="hljs-attr">"inputs"</span>: {
                  <span class="hljs-attr">"host"</span>: {
                    <span class="hljs-attr">"connection"</span>: {
                      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"@parameters('$connections')['azureloganalyticsdatacollector-1']['connectionId']"</span>
                    }
                  },
                  <span class="hljs-attr">"method"</span>: <span class="hljs-string">"post"</span>,
                  <span class="hljs-attr">"path"</span>: <span class="hljs-string">"/api/logs"</span>,
                  <span class="hljs-attr">"headers"</span>: {
                    <span class="hljs-attr">"Log-Type"</span>: <span class="hljs-string">"dnsmonitor"</span>
                  },
                  <span class="hljs-attr">"body"</span>: {
                    <span class="hljs-attr">"QueriedDomain"</span>: <span class="hljs-string">"domain.co.uk"</span>,
                    <span class="hljs-attr">"RecordName"</span>: <span class="hljs-string">"@{item()?['name']}"</span>,
                    <span class="hljs-attr">"RecordType"</span>: <span class="hljs-string">"@{items('For_each_recordType')}"</span>,
                    <span class="hljs-attr">"TTL"</span>: <span class="hljs-string">"@{item()?['TTL']}"</span>,
                    <span class="hljs-attr">"RecordData"</span>: <span class="hljs-string">"@{item()?['data']}"</span>
                  }
                }
              }
            },
            <span class="hljs-attr">"runAfter"</span>: {
              <span class="hljs-attr">"Decode_Response"</span>: [
                <span class="hljs-string">"Succeeded"</span>
              ]
            }
          }
        },
        <span class="hljs-attr">"runAfter"</span>: {
          <span class="hljs-attr">"Set_recordtypes_variable"</span>: [
            <span class="hljs-string">"Succeeded"</span>
          ]
        }
      }
    },
    <span class="hljs-attr">"outputs"</span>: {},
    <span class="hljs-attr">"parameters"</span>: {
      <span class="hljs-attr">"$connections"</span>: {
        <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Object"</span>,
        <span class="hljs-attr">"defaultValue"</span>: {}
      }
    }
  },
  <span class="hljs-attr">"parameters"</span>: {
    <span class="hljs-attr">"$connections"</span>: {
      <span class="hljs-attr">"type"</span>: <span class="hljs-string">"Object"</span>,
      <span class="hljs-attr">"value"</span>: {
        <span class="hljs-attr">"azureloganalyticsdatacollector-1"</span>: {
          <span class="hljs-attr">"id"</span>: <span class="hljs-string">"/subscriptions/fb47e429-e603-41ff-86a1-36b85336ac4a/providers/Microsoft.Web/locations/uksouth/managedApis/azureloganalyticsdatacollector"</span>,
          <span class="hljs-attr">"connectionId"</span>: <span class="hljs-string">"/subscriptions/fb47e429-e603-41ff-86a1-36b85336ac4a/resourceGroups/yourresourcegroupnamehere/providers/Microsoft.Web/connections/azureloganalyticsdatacollector-1"</span>,
          <span class="hljs-attr">"connectionName"</span>: <span class="hljs-string">"azureloganalyticsdatacollector-1"</span>
        }
      }
    }
  }
}
</code></pre>
<p><mark>Remember to change your domain name in the </mark> <code>uri</code> <mark>input string on line 39, and your resource group name on 110.</mark></p>
<hr />
<h1 id="heading-microsoft-sentinel-siem-alerting">Microsoft Sentinel (SIEM) Alerting.</h1>
<p>Once the information is ingested within Microsoft Sentinel, a straightforward analytics rule can monitor for additions and deletions. This is configured below as a NRT (near real time) rule. If you run this less frequently, you may need to modify the time thresholds.</p>
<pre><code class="lang-powershell">let records=dnsmonitor_CL
    | extend
        domain=tostring(QueriedDomain_s),
        answer=tostring(RecordData_s)
    | summarize firstSeen=min(TimeGenerated), lastSeen=max(TimeGenerated) by domain, answer;
let newRecords=records
    | where firstSeen &gt; ago(8m)
    | extend action="New";
let deletedRecords=records
    | where lastSeen &lt; ago(8m)
    | extend action="Removed";
union newRecords, deletedRecords
</code></pre>
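<p>The rule's logic is easier to verify with a small offline simulation. This Python sketch (using hypothetical snapshot data) reproduces the same summarise-then-compare approach: each domain/answer pair is reduced to first-seen and last-seen timestamps, anything first seen inside the lookback window is flagged "New", and anything not seen recently is flagged "Removed".</p>

```python
from datetime import datetime, timedelta

LOOKBACK = timedelta(minutes=8)
now = datetime(2025, 7, 11, 12, 0)

# Hypothetical snapshots: (timestamp, domain, record data).
snapshots = [
    (now - timedelta(minutes=30), "domain.co.uk", "10 mail.domain.co.uk."),
    (now - timedelta(minutes=5),  "domain.co.uk", "10 mail.domain.co.uk."),
    (now - timedelta(minutes=5),  "domain.co.uk", "10 mail.attacker.example."),  # newly added
    (now - timedelta(minutes=30), "domain.co.uk", "192.0.2.10"),                 # stopped appearing
]

# summarize firstSeen=min(TimeGenerated), lastSeen=max(TimeGenerated) by domain, answer
seen = {}
for ts, domain, answer in snapshots:
    first, last = seen.get((domain, answer), (ts, ts))
    seen[(domain, answer)] = (min(first, ts), max(last, ts))

alerts = []
for (domain, answer), (first, last) in seen.items():
    if first > now - LOOKBACK:
        alerts.append(("New", domain, answer))
    elif last < now - LOOKBACK:
        alerts.append(("Removed", domain, answer))

print(alerts)
```

<p>With a 5-minute collection interval, an 8-minute lookback guarantees each run overlaps the previous snapshot; if you collect less frequently, widen the window accordingly.</p>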
<hr />
]]></content:encoded></item><item><title><![CDATA[Cisco Umbrella: Entra ID IdP Federation Setup & Azure Sentinel SIEM Integration]]></title><description><![CDATA[Section 1: Cisco Umbrella Identity Federation Configuration.
Introduction.
Cisco Umbrella is a cloud-delivered security platform that provides DNS-layer protection, secure web gateway (SWG), cloud-delivered firewall, and cloud access security broker ...]]></description><link>https://blog.cdoherty.co.uk/cisco-umbrella-entra-id-idp-federation-setup-and-azure-sentinel-siem-integration</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/cisco-umbrella-entra-id-idp-federation-setup-and-azure-sentinel-siem-integration</guid><category><![CDATA[Secure Internet Gateway]]></category><category><![CDATA[Cisco Umbrella]]></category><category><![CDATA[Cisco]]></category><category><![CDATA[Umbrella]]></category><category><![CDATA[SIG]]></category><category><![CDATA[dns]]></category><category><![CDATA[logging]]></category><category><![CDATA[Microsoft Sentinel]]></category><category><![CDATA[SecOps]]></category><category><![CDATA[Azure Sentinel]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[SIEM]]></category><category><![CDATA[idp]]></category><category><![CDATA[Entra ID]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Thu, 10 Jul 2025 20:36:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752097143597/7e078469-6d12-4c70-8e66-cd5b147d8feb.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-section-1-cisco-umbrella-identity-federation-configuration">Section 1: Cisco Umbrella Identity Federation Configuration.</h1>
<h1 id="heading-introduction">Introduction.</h1>
<p><strong>Cisco Umbrella</strong> is a cloud-delivered security platform that provides DNS-layer protection, secure web gateway (SWG), cloud-delivered firewall, and cloud access security broker (CASB) capabilities. It enforces security policies and blocks threats at the DNS and IP layers before connections are established, making it a powerful first line of defence in a layered security model.</p>
<p>Cisco Umbrella logs can include DNS requests, proxy traffic, IP-layer events, and firewall decisions depending on your licensing and configuration. These logs offer valuable insight into user activity, threat prevention actions, and internet-bound traffic patterns.</p>
<p>It delivers DNS protection across endpoints, ensuring that users and devices are secure no matter the environment, which makes protecting users who work from home a breeze.</p>
<p>Integrating Cisco Umbrella with Microsoft Entra ID allows for centralised identity management, secure single sign-on (SSO), and improved visibility across your organisation's secure web gateway traffic. By federating identity, user authentication and access control policies can be managed consistently, aligning with enterprise identity governance standards.</p>
<p>This section outlines the process of configuring identity federation between Cisco Umbrella and Microsoft Entra ID, enabling user-based policy enforcement and reporting.</p>
<hr />
<h1 id="heading-step-1-creating-the-cisco-umbrella-enterprise-application-in-entra-id">Step 1: Creating The Cisco Umbrella Enterprise Application in Entra ID.</h1>
<ol>
<li><p>Navigate to the Microsoft Entra ID portal:</p>
<ul>
<li><a target="_blank" href="https://entra.microsoft.com/#view/Microsoft_AAD_IAM/AppGalleryBladeV2"><em>https://entra.microsoft.com/#view/Microsoft_AAD_IAM/AppGalleryBladeV2</em></a></li>
</ul>
</li>
<li><p>Select the <code>Cisco User Management for Secure Access</code> application and create it.</p>
<ol>
<li><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752176116646/30ea079c-4bab-4165-b286-d2fb772afcd1.png" alt class="image--center mx-auto" /></li>
</ol>
</li>
</ol>
<hr />
<h1 id="heading-step-2-configuring-the-entra-id-application-using-a-cisco-umbrella-static-api">Step 2: Configuring the Entra ID Application Using a Cisco Umbrella Static API Key.</h1>
<p>Once added, proceed with configuring SSO and SAML-based sign-on. Cisco provides the necessary metadata XML or input fields:</p>
<ol>
<li><p>Sign in to the <a target="_blank" href="https://dashboard.umbrella.com/">Cisco Umbrella dashboard</a>.</p>
</li>
<li><p>Navigate to <strong>Admin &gt; API Keys &gt; Static Keys</strong>.</p>
</li>
<li><p>Select <code>Azure Active Directory Provisioning</code> and create an API token.</p>
<ul>
<li>Copy the URL and token and store them securely. The URL should look something like: <a target="_blank" href="https://api.umbrella.com/identity/v2/scim"><code>https://api.umbrella.com/identity/v2/scim</code></a>.</li>
</ul>
</li>
</ol>
<hr />
<h1 id="heading-step-3-provisioning-identities-from-microsoft-entra-id">Step 3: Provisioning Identities from Microsoft Entra ID.</h1>
<p>Set up the app in the Microsoft Entra ID portal with your Cisco Umbrella token and Azure Active Directory Provisioning URL (collected from the previous step):</p>
<ol>
<li><p>Navigate to the enterprise application (created in Step 1).</p>
</li>
<li><p>Add your token to the <strong>Secret Token</strong> field.</p>
</li>
<li><p>Add the Azure Active Directory Provisioning URL to the <strong>Tenant URL</strong> field.</p>
</li>
<li><p>Click <strong>Test Connection</strong> to confirm that you can use your Umbrella SCIM token to connect the Umbrella API with Microsoft Entra ID.</p>
</li>
<li><p>Complete the steps to provision users from Microsoft Entra ID to Umbrella. In <strong>Attribute Mappings</strong>, review the user attributes that are synchronised from Microsoft Entra ID to the Cisco User Management Connector app. The attributes selected as <strong>Matching</strong> properties are used to match user accounts in the connector app during update operations. If you change the matching target attribute, ensure that the connector app supports filtering users on that attribute.</p>
</li>
<li><p>Click <strong>Save</strong>.</p>
</li>
</ol>
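<p>If Microsoft Entra ID provisioning logs are exported to a Log Analytics workspace via diagnostic settings, you can sanity-check recent provisioning activity with a quick KQL query (the <code>AADProvisioningLogs</code> table only exists when that export is enabled):</p>
<pre><code class="lang-sql">// Sample of recent Entra ID provisioning events (requires provisioning
// logs to be routed to this workspace via diagnostic settings).
AADProvisioningLogs
| where TimeGenerated &gt; ago(1d)
| take 10
</code></pre>
<p>Alternatively, the <strong>Provisioning logs</strong> blade in the Entra portal shows the same events without any workspace configuration.</p>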
<hr />
<h1 id="heading-section-2-microsoft-azure-sentinel-siem-integration-for-log-analysis">Section 2: Microsoft Azure Sentinel (SIEM) Integration For Log Analysis.</h1>
<hr />
<h1 id="heading-introduction-1">Introduction.</h1>
<p>Cisco Umbrella offers valuable security telemetry including DNS-layer threat detection, proxy activity, and cloud access visibility. Integrating these logs into Microsoft Sentinel enables enriched correlation, threat hunting, and incident response across your broader security ecosystem.</p>
<p>This section covers the process of integrating Cisco Umbrella logs into Sentinel using Log Analytics workspace connectors and parsing them for effective SIEM use.</p>
<blockquote>
<p><mark>Use this Microsoft article for reference for ingesting DNS logs into Microsoft Azure Sentinel:</mark> <a target="_blank" href="https://learn.microsoft.com/en-us/azure/sentinel/data-connectors-reference#cisco-umbrella-preview"><mark>https://learn.microsoft.com/en-us/azure/sentinel/data-connectors-reference#cisco-umbrella-preview</mark></a></p>
</blockquote>
<hr />
<h1 id="heading-overview">Overview.</h1>
<p>Cisco Umbrella supports log export to AWS S3. Microsoft Sentinel ingests these logs using a pre-built Azure Function template found in the Content Hub. This allows continuous ingestion and analysis of DNS and security events in Sentinel for threat detection, investigation, and response.</p>
<hr />
<h1 id="heading-step-1-integrating-cisco-umbrella-logs-into-microsoft-sentinel-using-the-azure-template-method">Step 1: Integrating Cisco Umbrella Logs into Microsoft Sentinel Using the Azure Template Method.</h1>
<p>To begin ingesting logs from Cisco Umbrella into Microsoft Sentinel, the first step is to enable Umbrella's log export functionality. Cisco Umbrella supports exporting logs to an Amazon S3 bucket. When you enable this feature from the Umbrella dashboard, a new managed S3 bucket will be provisioned on your behalf by Cisco.</p>
<p>During this process, you will be provided with three important values: the S3 bucket path (or "data path"), an AWS access key, and a corresponding secret key. These credentials are <strong>only visible once</strong> during the setup, so it is essential to copy and store them securely at the time.</p>
<p>These details can be found within the Cisco Umbrella dashboard:</p>
<ul>
<li>Navigate to <strong>Admin &gt; Log Management &gt; Amazon S3</strong>.</li>
</ul>
<hr />
<h1 id="heading-step-2-installing-the-microsoft-sentinel-content-solution">Step 2: Installing The Microsoft Sentinel Content Solution.</h1>
<p>Once Umbrella log export has been configured and your keys are stored, the next stage takes place in Microsoft Sentinel.</p>
<ol>
<li><p>Navigate to Microsoft Azure Sentinel &gt; Content Management &gt; <strong>Content Hub</strong>.</p>
</li>
<li><p>Search for <strong>Cisco Umbrella</strong> and select the featured solution from the list.</p>
</li>
<li><p>Select <strong>Install</strong> to begin deploying the solution.</p>
</li>
</ol>
<hr />
<h1 id="heading-step-3-deploying-the-azure-arm-template">Step 3: Deploying the Azure ARM Template.</h1>
<p>Once installed:</p>
<ol>
<li><p>Navigate to <strong>Configuration &gt; Data connectors &gt; Cisco Umbrella (Using Azure Functions)</strong> and open the connector page.</p>
</li>
<li><p>Choose the <strong>Azure Resource Manager (ARM) Template</strong> deployment method. This is the recommended approach for most environments and requires no manual code configuration.</p>
<ul>
<li>The ARM template method will launch a deployment blade in Azure with a form that needs to be completed using the information from Umbrella and your Sentinel workspace.</li>
</ul>
</li>
</ol>
<p>The steps for deploying the template are as follows:</p>
<ol>
<li><p>Provide a <strong>Function App Name</strong>. This must be globally unique, as it forms part of the app's default <code>azurewebsites.net</code> hostname (e.g., <code>sentinel-umbrella-fnapp</code>).</p>
</li>
<li><p>Select a <strong>Region</strong> that matches your Sentinel Log Analytics workspace.</p>
</li>
<li><p>Enter your <strong>Workspace Name</strong> and the <strong>full Workspace Resource ID</strong>, which can be copied from the Log Analytics workspace's <strong>Properties</strong> blade in Azure or retrieved from your password manager.</p>
</li>
<li><p>In the AWS section of the form, input the <strong>S3 bucket path</strong> (as given in the Cisco Umbrella console), specify the <strong>region</strong> (e.g. <code>eu-west-2</code>), and paste in the <strong>AWS access key ID</strong> and <strong>secret key</strong> that were provided by Umbrella during activation.</p>
</li>
<li><p>Click <strong>Review + Create</strong>, then <strong>Create</strong> to deploy the function app.</p>
</li>
<li><p>Verify that the connector status shows <strong>Connected</strong>.</p>
</li>
</ol>
<p><mark>If you encounter any issues when deploying the template, verify that you have sufficient permissions within the resource group.</mark></p>
<hr />
<h1 id="heading-step-4-verify-log-ingestion">Step 4: Verify Log Ingestion.</h1>
<p>Logs are uploaded in ten-minute intervals from the Umbrella log queue to the S3 bucket. Within the first two hours after a completed configuration, you should receive your first log upload to your S3 bucket.</p>
<p>To confirm that everything is working, check that the <strong>Last Sync</strong> time in the Umbrella dashboard updates and that logs begin to appear in your S3 bucket.</p>
<h2 id="heading-example">Example:</h2>
<pre><code class="lang-plaintext">"2024-09-11 18:46:00","Active Directory User (adusername@example.net)","Active Directory User (adusername@example.net),WIN11-SNG01-Example","10.10.1.100","24.123.132.133","Allowed","1 (A)","NOERROR","domain-visited.com.","Software/Technology,Business Services,Allow List,Infrastructure and Content Delivery Networks,SaaS and B2B,Application","AD Users","AD Users,Anyconnect Roaming Client","","506165","","8234970"
</code></pre>
<p>The example entry above is <code>480</code> bytes. To estimate the size of your S3 Logs, see <a target="_blank" href="https://docs.umbrella.com/deployment-umbrella/docs/log-formats-and-versioning#size">Estimate the Size of Your Logs</a>.</p>
<h2 id="heading-order-of-fields-in-the-dns-log">Order of Fields in the DNS log:</h2>
<pre><code class="lang-plaintext">&lt;timestamp&gt;&lt;most granular identity&gt;&lt;identities&gt;&lt;internal ip&gt;&lt;external ip&gt;&lt;action&gt;&lt;query type&gt;&lt;response code&gt;&lt;domain&gt;&lt;categories&gt;&lt;most granular identity type&gt;&lt;identity types&gt;&lt;blocked categories&gt;&lt;rule id&gt;&lt;destination countries&gt;&lt;organization id&gt;
</code></pre>
<ul>
<li><p><strong>Timestamp</strong>—When this request was made, in UTC. This differs from the Umbrella dashboard, which converts the time to your specified time zone.</p>
</li>
<li><p><strong>Most Granular Identity</strong>—The first identity matched with this request in order of granularity.</p>
</li>
<li><p><strong>Identities</strong>—All identities associated with this request.</p>
</li>
<li><p><strong>Internal IP</strong>—The internal IP address that made the request.</p>
</li>
<li><p><strong>External IP</strong>—The external IP address that made the request.</p>
</li>
<li><p><strong>Action</strong>—Whether the request was allowed or blocked.</p>
</li>
<li><p><strong>Query Type</strong>—The type of DNS request that was made. For more information, see Common DNS Request Types.</p>
</li>
<li><p><strong>Response Code</strong>—The DNS return code for this request. For more information, see Common DNS return codes for any DNS service (and Umbrella).</p>
</li>
<li><p><strong>Domain</strong>—The domain that was requested.</p>
</li>
<li><p><strong>Categories</strong>—The security or content categories that the destination matches. For category definitions, see Understanding Security Categories and Understanding Content Categories.</p>
</li>
<li><p><strong>Most Granular Identity Type</strong>—The first identity type matched with this request in order of granularity. Available in version 3 and above.</p>
</li>
<li><p><strong>Identity Types</strong>—The type of identity that made the request. For example, Roaming Computer, Network, and so on. Available in version 3 and above.</p>
</li>
<li><p><strong>Blocked Categories</strong>—The categories that resulted in the destination being blocked. Available in version 4 and above.</p>
</li>
<li><p><strong>Rule ID</strong>—The ID of the access rule when the DNS request is matched by a policy.</p>
</li>
<li><p><strong>Destination Countries</strong>—The two-character country identifier of the domain that was requested.</p>
</li>
<li><p><strong>Organization ID</strong>—The Umbrella organization ID. For more information, see <a target="_blank" href="https://docs.umbrella.com/deployment-umbrella/docs/find-your-organization-id">Find Your Organization ID</a>.</p>
</li>
</ul>
<p><mark>For more information, read this article: </mark> <a target="_blank" href="https://docs.umbrella.com/deployment-umbrella/docs/dns-log-formats"><mark>https://docs.umbrella.com/deployment-umbrella/docs/dns-log-formats</mark></a></p>
<hr />
<h1 id="heading-step-5-verify-log-ingestion-kql">Step 5: Verify Log Ingestion - KQL.</h1>
<p>Once log ingestion has been established, the Cisco Umbrella connector will populate multiple custom log tables within your Log Analytics workspace. These typically include:</p>
<ul>
<li><p><code>Cisco_Umbrella_dns_CL</code></p>
</li>
<li><p><code>Cisco_Umbrella_proxy_CL</code></p>
</li>
<li><p><code>Cisco_Umbrella_ip_CL</code></p>
</li>
<li><p><code>Cisco_Umbrella_cloudfirewall_CL</code></p>
</li>
</ul>
<p>To confirm that logs are being ingested and structured as expected, open the <strong>Logs</strong> blade within your Sentinel workspace and run the following basic queries:</p>
<h3 id="heading-dns-log-verification">DNS Log Verification</h3>
<pre><code class="lang-sql">Cisco_Umbrella_dns_CL
| take 10
</code></pre>
<p>This query returns 10 sample DNS records ingested from Umbrella (note that <code>take</code> returns an arbitrary sample, not necessarily the latest; add <code>| sort by timestamp_t desc</code> first if recency matters). If the connector is functioning correctly, you should see structured records including fields such as <code>timestamp_t</code>, <code>InternalIP_s</code>, <code>Domain_s</code>, <code>Action_s</code>, and <code>Categories_s</code>.</p>
<p>To filter DNS logs by action (e.g. blocked):</p>
<pre><code class="lang-sql">Cisco_Umbrella_dns_CL
| where Action_s == "Blocked"
| project timestamp_t, Domain_s, InternalIP_s, Identities_s, Categories_s
| sort by timestamp_t desc
</code></pre>
<p>This allows you to identify recently blocked DNS queries, which can be used for detection rules or investigation.</p>
<h3 id="heading-search-for-dns-queries-to-a-specific-domain">Search for DNS Queries to a Specific Domain</h3>
<pre><code class="lang-sql">Cisco_Umbrella_dns_CL
| where Domain_s endswith "example.com"
| project timestamp_t, Domain_s, InternalIP_s, Action_s, Identities_s
</code></pre>
<p>This returns requests to <a target="_blank" href="http://example.com"><code>example.com</code></a> and its subdomains. Note that <code>endswith "example.com"</code> would also match unrelated domains such as <code>notexample.com</code>; for stricter filtering, combine <code>Domain_s == "example.com."</code> with <code>endswith ".example.com."</code>.</p>
<h3 id="heading-filter-by-identity-type-eg-roaming-client">Filter by Identity Type (e.g. Roaming Client)</h3>
<pre><code class="lang-sql">Cisco_Umbrella_dns_CL
| where IdentityTypes_s has "Roaming"
| summarize Count = count() by bin(timestamp_t, 1h), IdentityTypes_s
| sort by timestamp_t desc
</code></pre>
<p>This provides a high-level view of DNS activity coming specifically from roaming clients over time.</p>
<h3 id="heading-proxy-log-verification">Proxy Log Verification.</h3>
<p>If Umbrella Secure Web Gateway (SWG) is in use and proxy logs are being exported, confirm ingestion with:</p>
<pre><code class="lang-sql">Cisco_Umbrella_proxy_CL
| take 10
</code></pre>
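<p>Because the proxy table's column names can differ between connector versions, it is worth confirming the exact schema before building detections on it:</p>
<pre><code class="lang-sql">// List every column name and type in the proxy table.
Cisco_Umbrella_proxy_CL
| getschema
</code></pre>
<p>The same check works for the DNS, IP, and cloud firewall tables.</p>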
<hr />
<h1 id="heading-best-practices-for-queries">Best Practices for Queries.</h1>
<ul>
<li><p>Always use time filters (e.g. <code>where timestamp_t &gt; ago(1d)</code>) to limit the result set and optimise performance.</p>
</li>
<li><p>Combine fields such as <code>InternalIP_s</code>, <code>Domain_s</code>, and <code>Categories_s</code> to build actionable detections.</p>
</li>
<li><p>Leverage <strong>Kusto functions</strong> to alias and reuse logic consistently in scheduled rules and hunting queries.</p>
</li>
</ul>
<p>For optimal performance, consider creating <strong>custom functions</strong> or scheduled <strong>analytics rules</strong> that alert on high-risk DNS activity, such as queries to known malicious domains or sudden spikes in traffic to newly registered domains.</p>
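<p>As an illustration of that approach, the sketch below aggregates blocked DNS activity per internal host over the last day. Saved as a workspace function (for example <code>UmbrellaBlockedByHost</code>, a hypothetical name) it can be reused in scheduled analytics rules; the threshold is an example and should be tuned for your environment:</p>
<pre><code class="lang-sql">// Hosts generating the most blocked DNS requests in the last 24 hours.
Cisco_Umbrella_dns_CL
| where timestamp_t &gt; ago(1d)
| where Action_s == "Blocked"
| summarize BlockedCount = count(), Domains = make_set(Domain_s, 20) by InternalIP_s
| where BlockedCount &gt; 50   // example threshold; tune for your environment
| sort by BlockedCount desc
</code></pre>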
<hr />
<h1 id="heading-next-steps">→ Next Steps:</h1>
<p>Once ingestion and validation are complete, you can begin to build custom detection rules, dashboards, and investigations using the data. You may also wish to enable built-in analytics rules, or create custom ones, for SOC incident creation and alerting.</p>
<p>Sentinel's native integration with Microsoft Defender XDR also enables enriched correlation between Umbrella telemetry and endpoint, identity, and email signals.</p>
<p><em>For further schema details and field references, consult the official Cisco documentation.</em></p>
]]></content:encoded></item><item><title><![CDATA[Understanding MTTA and MTTR in Cyber Security Operations.]]></title><description><![CDATA[Introduction.
In a world where organisations rely on complex digital systems, the ability to respond quickly to incidents can be the difference between a minor disruption and a major breach. Two essential metrics that help measure and improve respons...]]></description><link>https://blog.cdoherty.co.uk/understanding-mtta-and-mttr-in-cyber-security-operations</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/understanding-mtta-and-mttr-in-cyber-security-operations</guid><category><![CDATA[MMTA]]></category><category><![CDATA[MTTR]]></category><category><![CDATA[SecOps]]></category><category><![CDATA[cyber security]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Mon, 23 Jun 2025 09:44:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750671565691/49421dc5-c424-4f18-a917-d4b7cd7bfa52.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction.</h1>
<p>In a world where organisations rely on complex digital systems, the ability to respond quickly to incidents can be the difference between a minor disruption and a major breach. Two essential metrics that help measure and improve response efficiency are <strong>MTTA</strong> (Mean Time to Acknowledge) and <strong>MTTR</strong> (Mean Time to Respond/Resolve).</p>
<p>Although frequently referenced, these terms are often misunderstood or used interchangeably. This article will explain what they mean, why they matter, and how they’re used in real-world IT and security operations.</p>
<hr />
<h1 id="heading-what-is-mtta">What Is MTTA?</h1>
<p><strong>Mean Time to Acknowledge (MTTA)</strong> refers to the average time it takes from when an alert or incident is first raised to when a human acknowledges it. Acknowledgement typically means someone (usually an analyst, engineer, or responder) has seen the alert and formally begun the investigation process.</p>
<blockquote>
<p>For example, if an intrusion detection system flags suspicious behaviour at 14:00, and a SOC analyst opens the alert at 14:03, the time to acknowledge or MTTA is 3 minutes.</p>
</blockquote>
<h3 id="heading-why-mtta-matters">Why MTTA Matters:</h3>
<ul>
<li><p>A low MTTA means alerts are being seen quickly, enabling faster triage.</p>
</li>
<li><p>A high MTTA could indicate alert fatigue, understaffed teams, or ineffective alerting (e.g., too many false positives).</p>
</li>
<li><p>In Cyber Security, every minute counts. A slow acknowledgement can provide attackers more time to escalate privileges or exfiltrate data.</p>
</li>
</ul>
<h2 id="heading-what-is-mttr">What Is MTTR?</h2>
<p><strong>Mean Time to Respond</strong> (or <strong>Resolve</strong>) is the average time from incident detection to full remediation or closure. The metric varies depending on how "response" is defined: neutralising a threat, restoring a system, or fully patching a vulnerability.</p>
<blockquote>
<p>For example, if a ransomware infection is detected at 09:00 and fully contained and cleaned up by 11:30, the time to resolve, or MTTR, is 2.5 hours.</p>
</blockquote>
<h3 id="heading-why-mttr-matters">Why MTTR Matters:</h3>
<ul>
<li><p>MTTR reflects operational readiness and efficiency in incident response.</p>
</li>
<li><p>A lower MTTR often correlates with mature playbooks, skilled analysts, and effective automation.</p>
</li>
<li><p>In regulated industries, MTTR can impact compliance, SLAs, and reporting obligations.</p>
</li>
</ul>
<h2 id="heading-mtta-vs-mttr-key-differences">MTTA vs. MTTR: Key Differences:</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Metric</td><td>Description</td><td>Phase</td><td>Who It Involves</td></tr>
</thead>
<tbody>
<tr>
<td>MTTA</td><td>Time from alert to acknowledgement</td><td>Detection</td><td>Analyst or first responder(s)</td></tr>
<tr>
<td>MTTR</td><td>Time from alert to resolution</td><td>Remediation</td><td>Response team, engineers, management</td></tr>
</tbody>
</table>
</div><p>They serve different purposes: MTTA measures alert visibility and team responsiveness, while MTTR measures overall remediation efficiency.</p>
<hr />
<h1 id="heading-how-to-improve-mtta-and-mttr">How to Improve MTTA and MTTR.</h1>
<p><strong>1. Automate low-risk triage.</strong></p>
<ul>
<li>Use SOAR platforms to auto-close benign alerts or escalate high-severity ones with rich context.</li>
</ul>
<p><strong>2. Tune your detection rules.</strong></p>
<ul>
<li>Eliminate noisy or low-value alerts that delay acknowledgement of serious incidents.</li>
</ul>
<p><strong>3. Improve incident documentation.</strong></p>
<ul>
<li>Clear runbooks and standard operating procedures (SOPs) reduce decision-making time.</li>
</ul>
<p><strong>4. Train staff effectively.</strong></p>
<ul>
<li>Regular tabletop exercises and red-teaming help analysts build muscle memory.</li>
</ul>
<p><strong>5. Implement alert routing.</strong></p>
<ul>
<li>Ensure alerts are directed to the right team or specialist, reducing hand-off delays.</li>
</ul>
<hr />
<h1 id="heading-real-world-example-sentinel-amp-defender-xdr">Real-World Example: Sentinel &amp; Defender XDR.</h1>
<p>In Microsoft Sentinel integrated with Microsoft Defender XDR, MTTA could be the time between a Sentinel Analytics Rule triggering and a SOC analyst opening the incident.</p>
<p>MTTR, on the other hand, could span the entire response lifecycle, from investigation and containment in Defender to remediation actions taken via Intune or third-party tooling.</p>
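<p>In that setup, both metrics can be approximated from Sentinel's <code>SecurityIncident</code> table with KQL. The sketch below treats the first modification of an incident as acknowledgement, which is an approximation rather than a formal definition of MTTA:</p>
<pre><code class="lang-sql">// Approximate MTTA and MTTR from closed Sentinel incidents (last 30 days).
SecurityIncident
| where TimeGenerated &gt; ago(30d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber   // latest record per incident
| where Status == "Closed"
| extend TimeToAcknowledge = FirstModifiedTime - CreatedTime,
         TimeToResolve     = ClosedTime - CreatedTime
| summarize MTTA = avg(TimeToAcknowledge), MTTR = avg(TimeToResolve)
</code></pre>
<p>Plotting these averages per week in a workbook makes trends in team responsiveness easy to track over time.</p>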
<hr />
<h1 id="heading-final-thoughts">Final Thoughts.</h1>
<p>Measuring MTTA and MTTR isn’t just about metrics. While they’re often seen as indicators of SOC or IT performance, they’re more usefully treated as guiding tools for identifying bottlenecks and opportunities for improvement.</p>
<p>With rising attack sophistication and shrinking response windows, reducing both MTTA and MTTR should be a continuous priority for any modern security team.</p>
]]></content:encoded></item><item><title><![CDATA[Privileged Identity Management: Securing Just-In-Time (JIT) Access For Privileged Roles.]]></title><description><![CDATA[This article is coming soon.]]></description><link>https://blog.cdoherty.co.uk/privileged-identity-management-securing-just-in-time-jit-access-for-privileged-roles</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/privileged-identity-management-securing-just-in-time-jit-access-for-privileged-roles</guid><category><![CDATA[privileged-identity-management]]></category><category><![CDATA[azure-pim]]></category><category><![CDATA[Privileged access management]]></category><category><![CDATA[pam]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Thu, 19 Jun 2025 11:31:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750332606852/ac984533-33ef-46aa-b36f-850619e5e9a8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This article is coming soon.</p>
]]></content:encoded></item><item><title><![CDATA[Become a Microsoft Ninja: All the Free Microsoft Security Training in One Place.]]></title><description><![CDATA[Introduction.
Whether you're defending the cloud, responding to incidents in a SOC, or tightening access controls as an identity engineer, staying ahead of the curve means continuously sharpening your skills.
Microsoft’s official Ninja Training Serie...]]></description><link>https://blog.cdoherty.co.uk/become-a-microsoft-ninja-all-the-free-microsoft-security-training-in-one-place</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/become-a-microsoft-ninja-all-the-free-microsoft-security-training-in-one-place</guid><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Tue, 17 Jun 2025 19:10:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750185023317/9321fc5b-a201-43ef-9bc9-2a55db238ece.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction.</h1>
<p>Whether you're defending the cloud, responding to incidents in a SOC, or tightening access controls as an identity engineer, staying ahead of the curve means continuously sharpening your skills.</p>
<p>Microsoft’s official <em>Ninja Training Series</em> is one of the most underrated resources in the security community — offering deep, scenario-based learning for free. While many practitioners know about the well-circulated Sentinel or Defender paths, several advanced and recently released Ninja trainings remain largely undiscovered.</p>
<p>This article curates <strong>all known Microsoft Ninja Training programmes</strong> — including <strong>those often missed</strong>, <strong>newly updated</strong>, or <strong>hidden in Tech Community threads</strong> — so you can level up across the entire Microsoft security stack.</p>
<p>Let’s turn you into a true Microsoft Ninja.</p>
<hr />
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Microsoft Ninja Training Course Name</strong></td><td><strong>URL:</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Microsoft Sentinel</td><td><a target="_blank" href="https://aka.ms/SentinelNinjaTraining">https://aka.ms/SentinelNinjaTraining</a></td></tr>
<tr>
<td>Microsoft Sentinel Notebooks (Jupyter Notebooks for Hunting)</td><td><a target="_blank" href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/becoming-a-microsoft-sentinel-notebooks-ninja---the-series/2693491">https://techcommunity.microsoft.com/blog/microsoftsentinelblog/becoming-a-microsoft-sentinel-notebooks-ninja---the-series/2693491</a></td></tr>
<tr>
<td>Unified SOC Platform (Virtual Ninja Show: Sentinel + Defender)</td><td><a target="_blank" href="https://techcommunity.microsoft.com/blog/microsoftsentinelblog/unified-operation-platform-features-released-at-public-preview/4014565">https://techcommunity.microsoft.com/blog/microsoftsentinelblog/unified-operation-platform-features-released-at-public-preview/4014565</a></td></tr>
<tr>
<td>Microsoft Defender XDR</td><td><a target="_blank" href="https://aka.ms/DefenderXDRNinja">https://aka.ms/DefenderXDRNinja</a></td></tr>
<tr>
<td>Microsoft Security Copilot</td><td><a target="_blank" href="http://aka.ms/SecurityCopilotNinja">http://aka.ms/SecurityCopilotNinja</a></td></tr>
<tr>
<td>Microsoft Defender for Cloud</td><td><a target="_blank" href="https://aka.ms/MDCNinja">https://aka.ms/MDCNinja</a></td></tr>
<tr>
<td>Microsoft Purview Compliance</td><td><a target="_blank" href="https://aka.ms/MIPNinja">https://aka.ms/MIPNinja</a>, <a target="_blank" href="https://techcommunity.microsoft.com/blog/microsoft-security-blog/become-a-microsoft-purview-ediscovery-ninja/2793108">https://techcommunity.microsoft.com/blog/microsoft-security-blog/become-a-microsoft-purview-ediscovery-ninja/2793108</a></td></tr>
<tr>
<td>Microsoft Defender for Identity</td><td><a target="_blank" href="https://aka.ms/MDINinja">https://aka.ms/MDINinja</a></td></tr>
<tr>
<td>Microsoft Defender for Cloud Apps</td><td><a target="_blank" href="https://aka.ms/MDCANinjaTraining">https://aka.ms/MDCANinjaTraining</a></td></tr>
<tr>
<td>Microsoft Defender for External Attack Surface Management (EASM)</td><td><a target="_blank" href="https://techcommunity.microsoft.com/blog/defenderexternalattacksurfacemgmtblog/become-a-microsoft-defender-external-attack-surface-management-ninja-level-400-t/3743985/replies/3958720">https://techcommunity.microsoft.com/blog/defenderexternalattacksurfacemgmtblog/become-a-microsoft-defender-external-attack-surface-management-ninja-level-400-t/3743985/replies/3958720</a></td></tr>
<tr>
<td>Microsoft Defender Threat Intelligence</td><td><a target="_blank" href="https://aka.ms/BecomeAnMDTINinja">https://aka.ms/BecomeAnMDTINinja</a></td></tr>
<tr>
<td>Azure Network Security</td><td><a target="_blank" href="https://techcommunity.microsoft.com/blog/azurenetworksecurityblog/azure-network-security-ninja-training/2356101">https://techcommunity.microsoft.com/blog/azurenetworksecurityblog/azure-network-security-ninja-training/2356101</a></td></tr>
<tr>
<td>Microsoft Defender for IoT (OT &amp; ICS Security)</td><td><a target="_blank" href="https://techcommunity.microsoft.com/blog/microsoftdefenderiotblog/microsoft-defender-for-iot-ninja-training/2428899">https://techcommunity.microsoft.com/blog/microsoftdefenderiotblog/microsoft-defender-for-iot-ninja-training/2428899</a></td></tr>
</tbody>
</table>
</div><hr />
<h1 id="heading-microsoft-security-academy">Microsoft Security Academy.</h1>
<p>The Microsoft Security Academy is <strong>a learning platform designed to equip individuals with the knowledge and skills necessary for cybersecurity roles</strong>. It offers a wide range of resources and training modules focused on Microsoft's security solutions, including Microsoft Sentinel, Microsoft Defender, Microsoft Entra, and Microsoft Purview. These resources are created and delivered by experts within Microsoft's security-aligned teams.</p>
<blockquote>
<p><a target="_blank" href="https://microsoft.github.io/PartnerResources/skilling/microsoft-security-academy">https://microsoft.github.io/PartnerResources/skilling/microsoft-security-academy</a></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Microsoft Azure Lighthouse: Sentinel Across Multi-Tenant Environments.]]></title><description><![CDATA[Introduction.
Security teams within Managed Security Service Providers (MSSPs) or multi-brand organisations often require visibility into several isolated Microsoft Sentinel instances. Without centralisation, analysts must switch between portals or a...]]></description><link>https://blog.cdoherty.co.uk/microsoft-azure-lighthouse-deploying-microsoft-sentinel-across-multi-tenant-environments</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/microsoft-azure-lighthouse-deploying-microsoft-sentinel-across-multi-tenant-environments</guid><category><![CDATA[azure lighthouse]]></category><category><![CDATA[Azure]]></category><category><![CDATA[#microsoft-azure]]></category><category><![CDATA[sentinel]]></category><category><![CDATA[Microsoft Sentinel]]></category><category><![CDATA[SIEM]]></category><category><![CDATA[rbac]]></category><category><![CDATA[azure rbac]]></category><category><![CDATA[Security]]></category><category><![CDATA[Security Operations Center ]]></category><category><![CDATA[SecOps]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Mon, 16 Jun 2025 11:00:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760830703818/ff96aa30-8df1-419c-8518-e048c7ed98bc.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction.</h1>
<p>Security teams within Managed Security Service Providers (MSSPs) or multi-brand organisations often require visibility into several isolated Microsoft Sentinel instances. Without centralisation, analysts must switch between portals or accounts—inefficient, error-prone, and lacking holistic visibility.</p>
<p>Azure Lighthouse solves this by delegating access to customer tenants using Azure Resource Manager (ARM), allowing a single security operations team to manage incidents, hunting, workbooks, and automation across all tenants.</p>
<hr />
<h1 id="heading-the-need-for-multi-tenant-management">The Need for Multi-Tenant Management.</h1>
<p>Security operations teams often face challenges when managing multiple isolated Sentinel deployments. Without a centralised approach, analysts must log into separate portals or switch accounts, reducing efficiency and increasing the risk of misconfigurations or oversight.</p>
<p>Azure Lighthouse addresses this by enabling secure, delegated access to customer tenants. With proper configuration, it allows central SOC teams to investigate incidents, run queries, and manage Sentinel workspaces without direct tenant access or identity switching.</p>
<hr />
<h1 id="heading-key-benefits-of-azure-lighthouse-integration">Key Benefits of Azure Lighthouse Integration.</h1>
<p>Using Azure Lighthouse with Microsoft Sentinel provides several operational advantages. It allows for centralised security monitoring, role-based access control using Azure RBAC, streamlined analyst workflows, and consistent deployment of processes and automation. Analysts can interact with delegated Sentinel environments as if they were part of their own tenant, reducing friction and increasing responsiveness to threats.</p>
<hr />
<h1 id="heading-use-case-multi-tenant-soc-operations">Use Case: Multi-Tenant SOC Operations.</h1>
<p>With Lighthouse in place, a central SOC team can monitor multiple environments simultaneously. Analysts can perform triage on incidents, investigate threats using Kusto Query Language (KQL), and trigger Logic App playbooks to respond to alerts.</p>
<p>Although each Sentinel workspace operates independently in terms of data and analytics rules, centralised access simplifies day-to-day security operations and governance.</p>
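<p>As an illustrative sketch (the workspace names below are placeholders, and delegated access must already be in place), an analyst can query several delegated Sentinel workspaces in a single statement using the <code>workspace()</code> function:</p>
<pre><code class="lang-sql">// Hedged sketch: "Customer-A-Workspace" and "Customer-B-Workspace" are placeholder workspace names.
union
    workspace("Customer-A-Workspace").SecurityIncident,
    workspace("Customer-B-Workspace").SecurityIncident
| where TimeGenerated &gt;= ago(24h)
| where Status != "Closed"
| summarize OpenIncidents = count() by TenantId, Severity
| sort by OpenIncidents desc
</code></pre>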
<hr />
<h1 id="heading-limitations-and-considerations">Limitations and Considerations.</h1>
<p>There are some limitations to be aware of. Data ingestion and retention remain within the customer tenant and are billed accordingly. Analytics rules and hunting queries must be deployed individually per tenant; there is no global rule propagation across workspaces.</p>
<p>Workbooks, playbooks, and custom connectors are also isolated and must be manually deployed or automated via DevOps processes. Additionally, role scoping should always adhere to the principle of least privilege.</p>
<hr />
<h1 id="heading-conclusion">Conclusion.</h1>
<p>Deploying Microsoft Sentinel in multi-tenant environments can introduce significant operational complexity if not centralised properly. Azure Lighthouse provides a scalable, secure, and compliant method for centralising access to Microsoft Sentinel workspaces across tenant boundaries.</p>
<p>For MSSPs and enterprise security teams, this integration enables streamlined investigations, consistent automation, and enhanced visibility without compromising on control or security.</p>
<hr />
<p>If you require a downloadable version of the ARM templates or diagrams to support this article, please get in touch or refer to Microsoft’s <a target="_blank" href="https://learn.microsoft.com/en-us/azure/lighthouse/overview">Azure Lighthouse documentation</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Microsoft Sentinel Rule Not Saving? - Here's Why (ETag Errors).]]></title><description><![CDATA[Introduction.
While making a small update to an analytics rule in Microsoft Sentinel, I hit an unexpected roadblock. I tried saving the rule, and Sentinel threw this error:

Failed to save analytics rule '[REDACTED]'. Conflict: Newer instance of rule...]]></description><link>https://blog.cdoherty.co.uk/microsoft-sentinel-rule-not-saving-etag-error-heres-why</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/microsoft-sentinel-rule-not-saving-etag-error-heres-why</guid><category><![CDATA[Microsoft Sentinel]]></category><category><![CDATA[analytics]]></category><category><![CDATA[KQL]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Fri, 13 Jun 2025 19:53:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749844802482/16a057f1-75ba-4064-9da9-c79a0cdeb265.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction.</h1>
<p>While making a small update to an analytics rule in Microsoft Sentinel, I hit an unexpected roadblock. I tried saving the rule, and Sentinel threw this error:</p>
<blockquote>
<p><strong>Failed to save analytics rule '[REDACTED]'. Conflict: Newer instance of rule '[REDACTED]' exists for workspace '[REDACTED]' (ETag does not match). Data was not saved.</strong></p>
</blockquote>
<p>This happens when the rule has been modified elsewhere (e.g., by another user), and Sentinel uses an <strong>ETag</strong> to detect those changes. It's designed to stop you from accidentally overwriting a newer version, but in this case, it just blocked me from saving a minor update.</p>
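<p>To identify who or what last modified the rule, the Azure Activity log can help. The sketch below is hedged: the operation-name filter is an assumption and may need adjusting to match your <code>AzureActivity</code> schema.</p>
<pre><code class="lang-sql">// Hedged sketch: the OperationNameValue filter is an assumption; adjust to your schema.
AzureActivity
| where TimeGenerated &gt;= ago(7d)
| where OperationNameValue has "MICROSOFT.SECURITYINSIGHTS/ALERTRULES"
| project TimeGenerated, Caller, OperationNameValue, ActivityStatusValue, _ResourceId
| sort by TimeGenerated desc
</code></pre>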
<hr />
<h1 id="heading-what-i-did-instead">What I Did Instead.</h1>
<p>Rather than digging into ARM templates or scripting around it, I took a simple approach:</p>
<ol>
<li><p><strong>Duplicated the rule manually.</strong></p>
</li>
<li><p><strong>Adjusted the rule name slightly (just capitalised the words).</strong></p>
</li>
<li><p><strong>Applied my change.</strong></p>
</li>
<li><p><strong>Disabled the original rule.</strong></p>
</li>
<li><p><strong>Tested the new one.</strong></p>
</li>
<li><p><strong>Deleted the old rule once everything checked out.</strong></p>
</li>
</ol>
<hr />
<h1 id="heading-quick-clean-done">Quick, Clean, Done.</h1>
<p>This is probably not the “correct” or recommended solution. But for a low-risk edit in a controlled environment, this workaround saved me a lot of time and avoided unnecessary overhead.</p>
<p>If Sentinel ever throws an ETag conflict your way, and the change is minor, starting fresh might be faster than fighting it.</p>
]]></content:encoded></item><item><title><![CDATA[Kusto Query Language (KQL) Queries For SOC Investigation.]]></title><description><![CDATA[Introduction.
In modern Security Operations Centres (SOCs), the ability to rapidly query large volumes of telemetry data is critical to effective incident response and threat hunting. Microsoft Sentinel, underpinned by Azure Log Analytics, leverages ...]]></description><link>https://blog.cdoherty.co.uk/top-kusto-query-language-kql-queries-for-soc-investigation</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/top-kusto-query-language-kql-queries-for-soc-investigation</guid><category><![CDATA[KQL]]></category><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Thu, 12 Jun 2025 13:50:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760829554055/74bcd1dc-9c91-4b31-9921-65bb9aeb9c6a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction.</h1>
<p>In modern Security Operations Centres (SOCs), the ability to rapidly query large volumes of telemetry data is critical to effective incident response and threat hunting. Microsoft Sentinel, underpinned by Azure Log Analytics, leverages the <strong>Kusto Query Language (KQL)</strong> — a powerful, intuitive syntax built for scalability and speed. This article outlines a selection of high-value KQL queries that every SOC analyst should be familiar with to detect suspicious behaviour, correlate incidents, and reduce mean time to resolution (MTTR).</p>
<hr />
<h1 id="heading-1-sign-in-activity">1: Sign-In Activity.</h1>
<h2 id="heading-11-identify-recent-sign-ins-for-a-specific-user-with-asn-ip-and-device-details-over-a-10-day-period">1.1: Identify recent sign-ins for a specific user with ASN, IP, and device details over a 10-day period.</h2>
<pre><code class="lang-sql">let targetUser = "FirstName.LastName";
let timeWindow = 10d;
SigninLogs
| where TimeGenerated &gt;= ago(timeWindow)
| where tolower(UserPrincipalName) has tolower(targetUser)
| extend Device = DeviceDetail, Location = LocationDetails, CAPolicies = parse_json(ConditionalAccessPolicies)
| mv-expand CAPolicy = CAPolicies
| extend
    CA_DisplayName = tostring(CAPolicy.displayName),
    CA_Result = tostring(CAPolicy.result),
    CA_GrantControls = iif(array_length(CAPolicy.enforcedGrantControls) &gt; 0, strcat_array(CAPolicy.enforcedGrantControls, ", "), "")
| summarize
    CA_Policy_Summary = make_set(strcat(CA_DisplayName, iif(CA_GrantControls != "", strcat(" (", CA_GrantControls, ")"), ""), " → ", CA_Result))
    by
    TimeGenerated,
    ResultType,
    IPAddress,
    ASN = tostring(AutonomousSystemNumber),
    DeviceID = tostring(Device.deviceId),
    DeviceName = tostring(Device.displayName),
    OS = tostring(Device.operatingSystem),
    Browser = tostring(Device.browser),
    TrustType = tostring(Device.trustType),
    App = AppDisplayName,
    ClientApp = ClientAppUsed,
    ConditionalAccess = tostring(ConditionalAccessStatus),
    MFA = tostring(MfaDetail),
    City = tostring(Location.city),
    State = tostring(Location.state),
    Country = tostring(Location.countryOrRegion),
    Latitude = tostring(Location.geoCoordinates.latitude),
    Longitude = tostring(Location.geoCoordinates.longitude),
    UserAgent
| project
    SigninTime = TimeGenerated,
    ResultType,
    IPAddress,
    ASN,
    DeviceID,
    DeviceName,
    OS,
    Browser,
    TrustType,
    App,
    ClientApp,
    ConditionalAccess,
    CA_Policy_Summary,
    MFA,
    City,
    State,
    Country,
    Latitude,
    Longitude,
    UserAgent
| sort by SigninTime desc
// NOTE:
// This query returns all interactive sign-ins for the specified user over the past 10 days.
// It includes IP address, ASN, device details, trust type, application used, Conditional Access results, and MFA status.
// Applied Conditional Access policies are listed with any enforced grant controls.
// This is useful for reviewing user sign-in behaviour and evaluating CA policy effectiveness.
// Limitations:
// It only includes interactive sign-ins. Device trust type or location may be incomplete for personal devices.
// Post-sign-in activity is not shown — use OfficeActivity or CloudAppEvents for that.
</code></pre>
<h2 id="heading-12-summarise-user-behaviour-and-conditional-access-ca-trends-over-a-30-day-period">1.2: Summarise user behaviour and Conditional Access (CA) trends over a 30-day period:</h2>
<pre><code class="lang-sql">let BaseData = 
SigninLogs
| where TimeGenerated &gt;= ago(30d)
| where UserDisplayName contains "FirstName LastName"
| extend
    LocationDetails = tostring(Location),
    App = tostring(AppDisplayName),
    Resource = tostring(ResourceDisplayName),
    ClientApp = tostring(ClientAppUsed),
    AuthRequirement = tostring(AuthenticationRequirement),
    AuthRequirementPolicies = tostring(AuthenticationRequirementPolicies),
    CAStatus = tostring(ConditionalAccessStatus),
    CAPoliciesRaw = tostring(ConditionalAccessPolicies),
    MFA = tostring(MfaDetail),
    DeviceID = tostring(DeviceDetail.deviceId),
    DeviceName = tostring(DeviceDetail.displayName),
    OperatingSystem = tostring(DeviceDetail.operatingSystem),
    Browser = tostring(DeviceDetail.browser),
    IsCompliant = tostring(DeviceDetail.isCompliant),
    IsManaged = tostring(DeviceDetail.isManaged),
    IsAzureADJoined = tostring(DeviceDetail.trustType =~ "Azure AD joined"),
    IsHybridAzureADJoined = tostring(DeviceDetail.trustType =~ "Hybrid Azure AD joined"),
    UserAgent = tostring(UserAgent),
    ASN = tostring(AutonomousSystemNumber);
let CA_Policy_Summary = 
BaseData
| where isnotempty(CAPoliciesRaw)
| extend CAPolicyObjects = parse_json(CAPoliciesRaw)
| mv-apply policy = CAPolicyObjects on (
    where policy.result in~ ("success", "failure", "reportOnlyNotApplied")
    | extend CAPolicySummary = strcat(
        tostring(policy.displayName), " → ", tostring(policy.result),
        iif(array_length(policy.enforcedGrantControls) &gt; 0,
            strcat(" (", strcat_array(policy.enforcedGrantControls, ", "), ")"),
            "")
    )
)
| summarize SampleCAPolicies = make_set(CAPolicySummary) by IPAddress, ASN, UserPrincipalName;
BaseData
| summarize
    SuccessfulLogins = countif(ResultType == 0),
    FailedLogins = countif(ResultType != 0),
    CA_Applied_Success = countif(CAStatus == "success"),
    CA_Applied_Failure = countif(CAStatus == "failure"),
    CA_NotApplied = countif(CAStatus == "notApplied"),
    FirstSeen = min(TimeGenerated),
    LastSeen = max(TimeGenerated),
    SampleDevice = any(DeviceName),
    SampleOS = any(OperatingSystem),
    SampleBrowser = any(Browser),
    SampleADJoin = any(IsAzureADJoined),
    SampleHybridJoin = any(IsHybridAzureADJoined),
    SampleCompliant = any(IsCompliant),
    SampleManaged = any(IsManaged),
    SampleApp = any(App),
    SampleResource = any(Resource),
    SampleClientApp = any(ClientApp),
    SampleAuthRequirement = any(AuthRequirement),
    SampleAuthPolicies = any(AuthRequirementPolicies),
    SampleCAStatus = any(CAStatus),
    SampleMFA = any(MFA),
    SampleUserAgent = any(UserAgent),
    Location = any(LocationDetails),
    DeviceDetailObject = any(DeviceDetail)
  by IPAddress, ASN, UserPrincipalName
| join kind=leftouter CA_Policy_Summary on IPAddress, ASN, UserPrincipalName
| extend SampleCAPolicies = strcat_array(SampleCAPolicies, " | ")
| sort by LastSeen desc
| project
    IPAddress,
    ASN,
    UserPrincipalName,
    SuccessfulLogins,
    FailedLogins,
    CA_Applied_Success,
    CA_Applied_Failure,
    CA_NotApplied,
    FirstSeen,
    LastSeen,
    SampleDevice,
    SampleOS,
    SampleBrowser,
    SampleADJoin,
    SampleHybridJoin,
    SampleCompliant,
    SampleManaged,
    SampleApp,
    SampleResource,
    SampleClientApp,
    SampleAuthRequirement,
    SampleAuthPolicies,
    SampleCAStatus,
    SampleCAPolicies,
    SampleMFA,
    SampleUserAgent,
    Location,
    DeviceDetailObject
// NOTE: This query summarises all interactive sign-ins, even when no Conditional Access policies were evaluated.
// It separately extracts CA policy details where present and joins them back in, preserving full coverage of all sign-in IPs.
// Ideal for user behavioural summaries, not intended for incident forensics or conditional access policy effectiveness analysis without validation.
</code></pre>
<h2 id="heading-13-summarise-non-interactive-aadnoninteractiveusersigninlogs-sign-ins-for-a-specific-user-with-asn-ip-device-details-and-conditional-access-ca-over-a-30-day-period">1.3: Summarise non-interactive (<code>AADNonInteractiveUserSignInLogs</code>) sign-ins for a specific user with ASN, IP, device details and Conditional Access (CA) over a 30-day period:</h2>
<pre><code class="lang-sql">AADNonInteractiveUserSignInLogs
| where TimeGenerated &gt;= ago(30d)
| where UserPrincipalName contains "FirstName.LastName"
| extend DeviceDetailParsed = parse_json(DeviceDetail)
| extend
    DeviceID = tostring(DeviceDetailParsed["deviceId"]),
    DeviceName = tostring(DeviceDetailParsed["displayName"]),
    OperatingSystem = tostring(DeviceDetailParsed["operatingSystem"]),
    Browser = tostring(DeviceDetailParsed["browser"]),
    TrustType = tostring(DeviceDetailParsed["trustType"]),
    IsCompliant = tostring(DeviceDetailParsed["isCompliant"]),
    IsManaged = tostring(DeviceDetailParsed["isManaged"]),
    App = tostring(AppDisplayName),
    Resource = tostring(ResourceDisplayName),
    ClientApp = tostring(ClientAppUsed),
    AuthRequirement = tostring(AuthenticationRequirement),
    AuthRequirementPolicies = tostring(AuthenticationRequirementPolicies),
    CAStatus = tostring(ConditionalAccessStatus),
    CAPolicies = tostring(ConditionalAccessPolicies),
    MFA = tostring(MfaDetail),
    UserAgent = tostring(UserAgent),
    ASN = tostring(AutonomousSystemNumber)
| summarize
    SuccessfulLogins = countif(ResultType == 0),
    FailedLogins = countif(ResultType != 0),
    CA_Applied_Success = countif(CAStatus == "success"),
    CA_Applied_Failure = countif(CAStatus == "failure"),
    CA_NotApplied = countif(CAStatus == "notApplied"),
    FirstSeen = min(TimeGenerated),
    LastSeen = max(TimeGenerated),
    SampleDevice = any(DeviceName),
    SampleOS = any(OperatingSystem),
    SampleBrowser = any(Browser),
    SampleTrust = any(TrustType),
    SampleCompliant = any(IsCompliant),
    SampleManaged = any(IsManaged),
    SampleApp = any(App),
    SampleResource = any(Resource),
    SampleClientApp = any(ClientApp),
    SampleAuthRequirement = any(AuthRequirement),
    SampleAuthPolicies = any(AuthRequirementPolicies),
    SampleCAStatus = any(CAStatus),
    SampleCAPolicies = any(CAPolicies),
    SampleMFA = any(MFA),
    SampleUserAgent = any(UserAgent),
    DeviceDetailObject = any(DeviceDetailParsed),
    ASN = any(ASN)
  by IPAddress, UserPrincipalName
| sort by LastSeen desc
| project
    IPAddress,
    ASN,
    UserPrincipalName,
    SuccessfulLogins,
    FailedLogins,
    CA_Applied_Success,
    CA_Applied_Failure,
    CA_NotApplied,
    FirstSeen,
    LastSeen,
    SampleDevice,
    SampleOS,
    SampleBrowser,
    SampleTrust,
    SampleCompliant,
    SampleManaged,
    SampleApp,
    SampleResource,
    SampleClientApp,
    SampleAuthRequirement,
    SampleAuthPolicies,
    SampleCAStatus,
    SampleCAPolicies,
    SampleMFA,
    SampleUserAgent,
    DeviceDetailObject
// NOTE: This query summarises non-interactive sign-in activity (such as token refreshes and service-to-service authentications) per user and IP over the past 30 days.
// It is intended for visibility and correlation purposes only. Do not use this query to audit interactive user behaviour or sign-in intent.
// Always validate with interactive sign-in logs (SigninLogs), raw activity logs, or session records where authentication context is critical.
</code></pre>
<hr />
<h1 id="heading-2-microsoft-sharepoint-onedrive-amp-cloud-activity">2: Microsoft SharePoint, OneDrive, &amp; Cloud Activity.</h1>
<h2 id="heading-21-summarise-deleted-sharepointonedrive-file-activity-by-file-type">2.1: Summarise deleted SharePoint/OneDrive file activity by file type:</h2>
<pre><code class="lang-sql">let FileActivity = 
    OfficeActivity
    | where UserId contains "FirstName.LastName"
    | where Operation has_any("FileVersionsAllDeleted", "FileDeleted", "FileRecycled", "FolderRecycled")
    | extend FileType = extract(@"\.(\w+)$", 1, SourceFileName), SiteUrl = Site_Url, UserId_ = UserId;
union
(
    FileActivity
    | summarize Count = count() 
    | extend FileType = "Total", FileNames = "", SiteUrl = "", SourceRelativeUrl = "", Operation = "", RecordType = "", OfficeWorkload = "", EventSource = "", ItemType = "", SensitivityLabelId = "", UserId_ = "", ClientIP = "", UserAgent = "", IsManagedDevice = "", Type = ""
),
(
    FileActivity
    | summarize
        Count = count(),
        FileNamesList = make_list(SourceFileName, 100000),
        SiteUrlList = make_list(SiteUrl, 100000),
        SourceRelativeUrlList = make_list(SourceRelativeUrl, 100000),
        OperationList = make_list(Operation, 100000),
        RecordTypeList = make_list(RecordType, 100000),
        OfficeWorkloadList = make_list(OfficeWorkload, 100000),
        EventSourceList = make_list(EventSource, 100000),
        ItemTypeList = make_list(ItemType, 100000),
        SensitivityLabelIdList = make_list(SensitivityLabelId, 100000),
        UserId_List = make_list(UserId_, 100000),
        ClientIPList = make_list(ClientIP, 100000),
        UserAgentList = make_list(UserAgent, 100000),
        IsManagedDeviceList = make_list(IsManagedDevice, 100000),
        TypeList = make_list(Type, 100000)
        by FileType
    | extend
        FileNames = strcat_array(FileNamesList, "\n"),
        SiteUrl = strcat_array(SiteUrlList, "\n"),
        SourceRelativeUrl = strcat_array(SourceRelativeUrlList, "\n"),
        Operation = strcat_array(OperationList, "\n"),
        RecordType = strcat_array(RecordTypeList, "\n"),
        OfficeWorkload = strcat_array(OfficeWorkloadList, "\n"),
        EventSource = strcat_array(EventSourceList, "\n"),
        ItemType = strcat_array(ItemTypeList, "\n"),
        SensitivityLabelId = strcat_array(SensitivityLabelIdList, "\n"),
        UserId_ = strcat_array(UserId_List, "\n"),
        ClientIP = strcat_array(ClientIPList, "\n"),
        UserAgent = strcat_array(UserAgentList, "\n"),
        IsManagedDevice = strcat_array(IsManagedDeviceList, "\n"),
        Type = strcat_array(TypeList, "\n")
    | project FileType, Count, FileNames, SiteUrl, SourceRelativeUrl, Operation, RecordType, OfficeWorkload, EventSource, ItemType, SensitivityLabelId, UserId_, ClientIP, UserAgent, IsManagedDevice, Type
)
| extend SortOrder = iif(FileType == "Total", 1000000, Count)
| order by SortOrder desc
| project-away SortOrder
| project Count, FileType, FileNames, SiteUrl, SourceRelativeUrl, Operation, RecordType, OfficeWorkload, EventSource, ItemType, SensitivityLabelId, UserId_, ClientIP, UserAgent, IsManagedDevice, Type
</code></pre>
<h2 id="heading-22-summarise-downloaded-sharepointonedrive-file-activity-by-file-type">2.2: Summarise downloaded SharePoint/OneDrive file activity by file type:</h2>
<pre><code class="lang-sql">let FileActivity = 
    OfficeActivity
    | where UserId contains "FirstName.LastName"
    | where Operation has_any("FileDownloaded")
    | extend FileType = extract(@"\.(\w+)$", 1, SourceFileName), SiteUrl = Site_Url, UserId_ = UserId;
union
(
    FileActivity
    | summarize Count = count() 
    | extend FileType = "Total", FileNames = "", SiteUrl = "", SourceRelativeUrl = "", Operation = "", RecordType = "", OfficeWorkload = "", EventSource = "", ItemType = "", SensitivityLabelId = "", UserId_ = "", ClientIP = "", UserAgent = "", IsManagedDevice = "", Type = ""
),
(
    FileActivity
    | summarize
        Count = count(),
        FileNamesList = make_list(SourceFileName, 100000),
        SiteUrlList = make_list(SiteUrl, 100000),
        SourceRelativeUrlList = make_list(SourceRelativeUrl, 100000),
        OperationList = make_list(Operation, 100000),
        RecordTypeList = make_list(RecordType, 100000),
        OfficeWorkloadList = make_list(OfficeWorkload, 100000),
        EventSourceList = make_list(EventSource, 100000),
        ItemTypeList = make_list(ItemType, 100000),
        SensitivityLabelIdList = make_list(SensitivityLabelId, 100000),
        UserId_List = make_list(UserId_, 100000),
        ClientIPList = make_list(ClientIP, 100000),
        UserAgentList = make_list(UserAgent, 100000),
        IsManagedDeviceList = make_list(IsManagedDevice, 100000),
        TypeList = make_list(Type, 100000)
        by FileType
    | extend
        FileNames = strcat_array(FileNamesList, "\n"),
        SiteUrl = strcat_array(SiteUrlList, "\n"),
        SourceRelativeUrl = strcat_array(SourceRelativeUrlList, "\n"),
        Operation = strcat_array(OperationList, "\n"),
        RecordType = strcat_array(RecordTypeList, "\n"),
        OfficeWorkload = strcat_array(OfficeWorkloadList, "\n"),
        EventSource = strcat_array(EventSourceList, "\n"),
        ItemType = strcat_array(ItemTypeList, "\n"),
        SensitivityLabelId = strcat_array(SensitivityLabelIdList, "\n"),
        UserId_ = strcat_array(UserId_List, "\n"),
        ClientIP = strcat_array(ClientIPList, "\n"),
        UserAgent = strcat_array(UserAgentList, "\n"),
        IsManagedDevice = strcat_array(IsManagedDeviceList, "\n"),
        Type = strcat_array(TypeList, "\n")
    | project FileType, Count, FileNames, SiteUrl, SourceRelativeUrl, Operation, RecordType, OfficeWorkload, EventSource, ItemType, SensitivityLabelId, UserId_, ClientIP, UserAgent, IsManagedDevice, Type
)
| extend SortOrder = iif(FileType == "Total", 1000000, Count)
| order by SortOrder desc
| project-away SortOrder
| project Count, FileType, FileNames, SiteUrl, SourceRelativeUrl, Operation, RecordType, OfficeWorkload, EventSource, ItemType, SensitivityLabelId, UserId_, ClientIP, UserAgent, IsManagedDevice, Type
</code></pre>
<hr />
<h1 id="heading-3-email-events">3: Email Events.</h1>
<h2 id="heading-31-summarise-email-activity-by-sender-domain-including-urls-attachments-recipients-and-post-delivery-actions">3.1: Summarise email activity by sender domain (including URLs, attachments, recipients, and post-delivery actions).</h2>
<pre><code class="lang-sql">let targetDomain = "domain.com";
let FilteredEmails = EmailEvents
| where SenderFromDomain == targetDomain;
let AttachmentInfo = EmailAttachmentInfo
| project NetworkMessageId, FileName, SHA256
| summarize AttachmentBlock = strcat_array(make_list(strcat("- ", FileName, " (SHA256: ", SHA256, ")")), "\n") by NetworkMessageId;
let UrlInfo = EmailUrlInfo
| project NetworkMessageId, Url
| extend SanitisedUrl = replace_string(replace_regex(Url, "\\.(com|net|org|co\\.uk|io|gov|uk|biz|edu|info|me|to|ai|careers)", "[.\\1]"), "://", "[://]");
let ClickInfo = UrlClickEvents
| project NetworkMessageId, Url
| summarize ClickedUrls = make_set(Url) by NetworkMessageId;
let UrlWithClicks = UrlInfo
| join kind=leftouter (ClickInfo) on NetworkMessageId
| extend ClickStatus = iff(set_has_element(ClickedUrls, Url), "Yes", "No")
| summarize UrlList = make_list(strcat(SanitisedUrl, "|", ClickStatus)) by NetworkMessageId;
let UrlBlockFormatted = UrlWithClicks
| mv-apply UrlList on (
    serialize
    | extend UrlIndex = row_number()
    | extend URLInfo = split(UrlList, "|")
    | extend UrlLine = strcat("URL ", tostring(UrlIndex), ": ", URLInfo[0], " | Clicked: ", URLInfo[1])
)
| summarize UrlCount = count(), UrlBlock = strcat_array(make_list(UrlLine), "\n") by NetworkMessageId;
let RecipientsByEmail = FilteredEmails
| project NetworkMessageId, RecipientEmailAddress
| summarize Recipients = make_list(RecipientEmailAddress) by NetworkMessageId
| extend RecipientCount = array_length(Recipients)
| extend RecipientBlock = iff(RecipientCount == 0, "Recipients: None", strcat("Recipients (", tostring(RecipientCount), "):\n", strcat_array(Recipients, "\n")));
let PostDeliveryActions = FilteredEmails
| project NetworkMessageId
| join kind=leftouter (
    EmailPostDeliveryEvents
    | project NetworkMessageId, Action, ActionTrigger, ActionType, ThreatTypes
) on NetworkMessageId
| summarize PostDeliveryAction = strcat_array(make_list(strcat("Action: ", Action, " | Trigger: ", ActionTrigger, " | Type: ", ActionType, " | Threats: ", tostring(ThreatTypes))), "\n") by NetworkMessageId
| extend PostDeliverySummary = iff(isempty(PostDeliveryAction), "🚨 No post-delivery actions found 🚨", strcat("🚨 Post-delivery action(s) 🚨\n", PostDeliveryAction));
FilteredEmails
| project Subject, NetworkMessageId, InternetMessageId, Timestamp, SenderFromAddress, DeliveryAction, EmailLanguage, RecipientEmailAddress
| join kind=leftouter (AttachmentInfo) on NetworkMessageId
| join kind=leftouter (UrlBlockFormatted) on NetworkMessageId
| join kind=leftouter (RecipientsByEmail) on NetworkMessageId
| join kind=leftouter (PostDeliveryActions) on NetworkMessageId
| extend AttachmentText = iff(isnull(AttachmentBlock), "Attachments: None", strcat("Attachments:\n", AttachmentBlock))
| extend UrlText = iff(isnull(UrlBlock), "URLs: None", strcat("URLs (", tostring(UrlCount), "):\n", UrlBlock))
| extend RecipientText = iff(isnull(RecipientBlock), "Recipients: None", RecipientBlock)
| extend PostDeliveryText = iff(isnull(PostDeliverySummary), "🚨 No post-delivery actions found 🚨", PostDeliverySummary)
| extend Template = "==== EMAIL ====\n{postdelivery}\nTime: {time}\nSender: {sender}\nDelivery: {delivery}\nLanguage: {lang}\n{recipients}\n{urls}\n{attachments}\nInternet Msg ID: {imid}\nNetwork Msg ID: {nmid}"
| extend Step1 = replace_string(Template, "{time}", format_datetime(Timestamp, "yyyy-MM-dd HH:mm:ss"))
| extend Step2 = replace_string(Step1, "{sender}", SenderFromAddress)
| extend Step3 = replace_string(Step2, "{delivery}", DeliveryAction)
| extend Step4 = replace_string(Step3, "{lang}", EmailLanguage)
| extend Step5 = replace_string(Step4, "{recipients}", RecipientText)
| extend Step6 = replace_string(Step5, "{urls}", UrlText)
| extend Step7 = replace_string(Step6, "{attachments}", AttachmentText)
| extend Step8 = replace_string(Step7, "{postdelivery}", PostDeliveryText)
| extend Step9 = replace_string(Step8, "{imid}", tostring(InternetMessageId))
| extend EmailDetail = replace_string(Step9, "{nmid}", tostring(NetworkMessageId))
| summarize Emails = make_list(EmailDetail) by Subject;
</code></pre>
<hr />
<h1 id="heading-4-network-traffic-logs">4: Network Traffic Logs.</h1>
<h2 id="heading-41-detect-suspicious-or-anomalous-activity-in-iis-logs">4.1: Detect suspicious or anomalous activity in IIS logs.</h2>
<pre><code class="lang-sql">// Broad sweep: match the IP across all columns of the IIS logs.
W3CIISLog
| where * == "1.1.1.1"

// Further detail: hourly attempt counts from the same source IP, rendered as a time chart.
W3CIISLog
| where sIP == "1.1.1.1"
| summarize attempts = count() by bin(TimeGenerated, 1h)
| render timechart
</code></pre>
<hr />
<h1 id="heading-5-security-events-amp-ad">5: Security Events &amp; AD.</h1>
<h2 id="heading-51-retrieve-details-of-users-added-to-security-groups">5.1: Retrieve details of user(s) added to security group(s).</h2>
<pre><code class="lang-sql">SecurityEvent
| where EventID == 4728
| project-rename AD_Group_Name = TargetAccount
| project TimeGenerated, Account, AccountType, AD_Group_Name, Channel, Task, EventSourceName, EventID, Activity, MemberName, MemberSid, SubjectAccount, SubjectDomainName, SubjectUserName, SubjectUserSid, TargetUserName
</code></pre>
<h2 id="heading-52-identify-recent-service-installations-over-a-10-day-period">5.2: Identify recent service installations over a 10-day period.</h2>
<pre><code class="lang-sql">SecurityEvent
| where EventID == 4697
| where TimeGenerated &gt; ago(10d)
| project TimeGenerated, Computer, Account, ServiceName, ServiceFileName, ServiceType, ServiceStartType, ServiceAccount, SubjectUserName, SubjectDomainName
| sort by TimeGenerated desc
</code></pre>
<h2 id="heading-53-investigate-ntdsdit-access">5.3: Investigate NTDS.dit access.</h2>
<pre><code class="lang-sql">DeviceFileEvents
| where FileName =~ "ntds.dit"
| where InitiatingProcessFileName !in~ ("ntdsutil.exe", "esentutl.exe", "mimikatz.exe", "powershell.exe", "cmd.exe")
| summarize count() by InitiatingProcessFileName, InitiatingProcessCommandLine
// Identifies processes creating/accessing ntds.dit and excludes known credential-dumping tools.
// Useful for confirming that a benign process (e.g. dockerd.exe) is responsible.
// Does not show timeline or file path details; pair with a path-based query.
</code></pre>
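<p>The path-based query referenced in the comments above could look like the following sketch, assuming the standard <code>DeviceFileEvents</code> schema:</p>
<pre><code class="lang-sql">DeviceFileEvents
| where FileName =~ "ntds.dit"
| project Timestamp, DeviceName, FolderPath, ActionType, InitiatingProcessFileName, InitiatingProcessCommandLine
| sort by Timestamp desc
// Adds the timeline and full file path omitted by the summarised query above.
</code></pre>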
]]></content:encoded></item><item><title><![CDATA[Interactive Microsoft Sentinel Incident Notifications in Teams via an Azure Logic App Using Adaptive Cards.]]></title><description><![CDATA[Introduction.
Many Security Operation Centres (SOCs) rely on rapid, structured, and context-rich alerting mechanisms.
This blog outlines how to build an interactive, dynamic workflow to send Microsoft Sentinel incidents to Microsoft Teams using an Az...]]></description><link>https://blog.cdoherty.co.uk/interactive-microsoft-sentinel-incident-notifications-in-teams-via-an-azure-logic-app-using-adaptive-cards</link><guid isPermaLink="true">https://blog.cdoherty.co.uk/interactive-microsoft-sentinel-incident-notifications-in-teams-via-an-azure-logic-app-using-adaptive-cards</guid><dc:creator><![CDATA[Ciaran Doherty, AfCIIS, MBCS]]></dc:creator><pubDate>Thu, 12 Jun 2025 12:16:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760829740930/072f69a1-de7f-495c-830d-42dd45b8a5ee.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction.</h1>
<p>Many Security Operation Centres (SOCs) rely on rapid, structured, and context-rich alerting mechanisms.</p>
<p>This blog outlines how to build an interactive, dynamic workflow to send Microsoft Sentinel incidents to Microsoft Teams using an Azure Logic App and Adaptive Cards.</p>
<p>This solution enables SOC teams to:</p>
<ul>
<li><p>Receive incident alerts in real time within Microsoft Teams.</p>
</li>
<li><p>View key incident metadata in a clean UI.</p>
</li>
<li><p>Take action such as changing severity or closing incidents.</p>
</li>
<li><p>Operate entirely within Microsoft Teams, improving speed and reducing context-switching.</p>
</li>
</ul>
<hr />
<h1 id="heading-prerequisites">Prerequisites:</h1>
<p>To build this integration, you’ll need the following:</p>
<ul>
<li><p>A Microsoft Sentinel instance (active and connected to a Log Analytics workspace).</p>
</li>
<li><p>Azure Logic App (Standard or Consumption).</p>
</li>
<li><p>Microsoft Teams with an appropriate channel.</p>
</li>
<li><p>Adaptive Card Designer (<a target="_blank" href="https://adaptivecards.io/">adaptivecards.io</a>)</p>
</li>
<li><p>Contributor permissions on the relevant resource group.</p>
</li>
</ul>
<hr />
<h1 id="heading-example">Example.</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750332821006/110f5673-086f-4b17-91d9-92a4c35cc3b2.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-step-1-designing-the-adaptive-card">Step 1: Designing the Adaptive Card.</h1>
<p>The Adaptive Card is the UI element that appears in Teams. Begin by building your Adaptive Card using the web-based Designer tool. Set the host app to "Microsoft Teams - Dark" to preview how it will render in Teams.</p>
<h3 id="heading-card-content">Card content:</h3>
<ol>
<li><p>Header Text: Large title (e.g. “New Microsoft Sentinel Incident Created!”)</p>
</li>
<li><p>Microsoft Sentinel Logo: Insert into a left-hand column using image URL<br /> <a target="_blank" href="https://connectoricons-prod.azureedge.net/releases/v1.0.1391/1.0.1391.2130/azuresentinel/icon.png"><code>https://connectoricons-prod.azureedge.net/releases/v1.0.1391/1.0.1391.2130/azuresentinel/icon.png</code></a></p>
</li>
<li><p>Clickable Incident Link: Markdown <code>[Click here to view the Incident](incidentURL)</code></p>
</li>
<li><p>FactSet Section: Display the following dynamically retrieved incident properties:</p>
<ol>
<li><p>Incident Title</p>
</li>
<li><p>Incident ID</p>
</li>
<li><p>Creation Time (UTC)</p>
</li>
<li><p>Severity (with conditional colours)</p>
</li>
<li><p>Alert Providers (joined with <code>;</code> )</p>
</li>
<li><p>MITRE ATT&amp;CK Tactics (joined with <code>;</code> )</p>
</li>
<li><p>Description</p>
</li>
</ol>
</li>
</ol>
<h3 id="heading-action-section">Action section:</h3>
<p>Beneath the incident details:</p>
<ol>
<li><p>Dropdown: Close Incident<br /> ID: <code>incidentStatus</code><br /> Choices:</p>
<p> Close incident - False Positive → <code>FalsePositive – IncorrectAlertLogic</code></p>
<ul>
<li><p>Close incident - True Positive → <code>TruePositive – SuspiciousActivity</code></p>
</li>
<li><p>Close incident - Benign Positive → <code>BenignPositive – SuspiciousButExpected</code></p>
</li>
<li><p>Don’t close the incident → <code>no</code> (default)</p>
</li>
</ul>
</li>
<li><p>Dropdown: Change Severity<br /> ID: <code>incidentSeverity</code></p>
<p> Choices:</p>
<ul>
<li><p>High</p>
</li>
<li><p>Medium</p>
</li>
<li><p>Low</p>
</li>
<li><p>Informational</p>
</li>
<li><p>Don’t change → <code>same</code> (default)</p>
</li>
</ul>
</li>
<li><p>Submit Button<br /> Title: “Submit response!”</p>
</li>
</ol>
<p><mark>Copy the card JSON from the “Card Payload Editor” once built.</mark></p>
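<p>As a reference point, a minimal skeleton of the card payload might look like the following; the <code>@{...}</code> placeholders are stand-ins for the dynamic content wired in during Step 3, and the properties shown are an illustrative sketch rather than the full card:</p>
<pre><code class="lang-json">{
  "type": "AdaptiveCard",
  "version": "1.4",
  "body": [
    { "type": "TextBlock", "size": "Large", "weight": "Bolder",
      "text": "New Microsoft Sentinel Incident Created!" },
    { "type": "FactSet", "facts": [
      { "title": "Incident Title:", "value": "@{incidentTitle}" },
      { "title": "Severity:", "value": "@{incidentSeverity}" }
    ]},
    { "type": "Input.ChoiceSet", "id": "incidentStatus", "value": "no",
      "choices": [
        { "title": "Close incident - True Positive", "value": "TruePositive – SuspiciousActivity" },
        { "title": "Don’t close the incident", "value": "no" }
      ]}
  ],
  "actions": [
    { "type": "Action.Submit", "title": "Submit response!" }
  ]
}
</code></pre>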
<hr />
<h1 id="heading-step-2-create-the-azure-logic-app">Step 2: Create the Azure Logic App.</h1>
<ol>
<li><p>Navigate to <strong>Microsoft Sentinel &gt; Automation &gt; Create &gt; Playbook with incident trigger</strong>.</p>
</li>
<li><p>Configure:</p>
<ul>
<li><p>Name: <code>Send-Teams-Adaptive-Card-on-incident-creation</code></p>
</li>
<li><p>Resource Group: <code>Your Sentinel resource group</code></p>
</li>
<li><p>Identity: <code>Use a Managed Identity</code></p>
</li>
</ul>
</li>
<li><p>Proceed to the Logic App Designer.</p>
</li>
</ol>
<hr />
<h1 id="heading-step-3-define-logic-app-actions">Step 3: Define Logic App Actions.</h1>
<h3 id="heading-31-compose-the-adaptive-card">3.1: Compose the Adaptive Card:</h3>
<ol>
<li><p>Add a <code>Compose</code> action.</p>
</li>
<li><p>Paste your full Adaptive Card JSON (designed earlier).</p>
</li>
<li><p>Replace static placeholders with <strong>dynamic content</strong> from the Sentinel trigger.</p>
</li>
</ol>
<p><strong>Examples:</strong></p>
<pre><code class="lang-plaintext">"value": "@{triggerBody()?['object']?['properties']?['title']}"
</code></pre>
<p>For arrays like Alert Providers:</p>
<pre><code class="lang-plaintext">"value": "@{join(triggerBody()?['object']?['properties']?['additionalData']?['alertProductNames'], '; ')}"
</code></pre>
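<p>The conditional severity colours from Step 1 can be driven the same way; a sketch using the Logic Apps <code>if()</code> and <code>equals()</code> expression functions:</p>
<pre><code class="lang-plaintext">"color": "@{if(equals(triggerBody()?['object']?['properties']?['severity'], 'High'), 'attention', if(equals(triggerBody()?['object']?['properties']?['severity'], 'Medium'), 'warning', 'good'))}"
</code></pre>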
<h3 id="heading-32-post-to-teams">3.2: Post to Teams:</h3>
<ul>
<li><p>Add an action: <strong>Microsoft Teams &gt; Post adaptive card and wait for a response</strong>.</p>
</li>
<li><p>Configure:</p>
<ul>
<li><p><strong>Post as</strong>: <code>Flow bot</code></p>
</li>
<li><p><strong>Post in</strong>: <code>Teams channel</code></p>
</li>
<li><p><strong>Message</strong>: <code>Outputs</code> <em>from Compose</em></p>
</li>
<li><p><strong>Update message</strong>: <code>"Thanks for your response!"</code></p>
</li>
</ul>
</li>
</ul>
<hr />
<h1 id="heading-step-4-process-the-response">Step 4: Process the Response.</h1>
<h3 id="heading-41-update-severity">4.1: Update Severity:</h3>
<ul>
<li><p>Add <strong>Condition</strong>: if <code>incidentSeverity</code> is not equal to <code>same</code></p>
</li>
<li><p>Under <strong>True</strong>:</p>
<ul>
<li><p>Add <strong>Update Incident</strong> (Sentinel)</p>
</li>
<li><p>Pass new severity from the card response using:</p>
<pre><code class="lang-plaintext">  body('Post_Adaptive_Card_and_wait_for_a_response')?['data']?['incidentSeverity']
</code></pre>
</li>
</ul>
</li>
</ul>
<h3 id="heading-42-close-the-incident">4.2: Close the Incident:</h3>
<ul>
<li><p>Add another <strong>Condition</strong>: if <code>incidentStatus</code> is not equal to <code>no</code></p>
</li>
<li><p>Under <strong>True</strong>:</p>
<ul>
<li><p>Add <strong>Update Incident</strong> with status <code>Closed</code></p>
</li>
<li><p>Add classification reason based on card input</p>
</li>
</ul>
</li>
</ul>
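<p>If you kept the combined “Classification – Reason” choice values from Step 1, one way to separate them for the Update Incident action is with <code>split()</code>; this sketch assumes the separator is the literal <code> – </code> used in the card choices:</p>
<pre><code class="lang-plaintext">Classification: @{first(split(body('Post_Adaptive_Card_and_wait_for_a_response')?['data']?['incidentStatus'], ' – '))}
Reason:         @{last(split(body('Post_Adaptive_Card_and_wait_for_a_response')?['data']?['incidentStatus'], ' – '))}
</code></pre>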
<hr />
<h1 id="heading-enhancements-and-best-practices">Enhancements and Best Practices.</h1>
<ul>
<li><p>Use <code>attention</code>, <code>warning</code>, and <code>good</code> colours to visually reflect incident severity.</p>
</li>
<li><p>Embed Sentinel and company logos in the card for professional branding.</p>
</li>
<li><p>Add follow-up cards or messages to Teams confirming actions taken.</p>
</li>
<li><p>Consider including user feedback or quick triage options (e.g. "Is this activity expected?").</p>
</li>
</ul>
<hr />
<h1 id="heading-conclusion">Conclusion.</h1>
<p>This integration brings security events directly into the operational workflow, reducing response time and enabling immediate action within Microsoft Teams. By automating incident delivery and embedding response options into the conversation flow, this method enhances visibility and efficiency in SOC environments.</p>
<hr />
<h1 id="heading-next-steps">→ Next Steps:</h1>
<ul>
<li><p>Add enrichment (e.g. GeoIP, Identity Risk Levels).</p>
</li>
<li><p>Extend with logic for multi-stage approvals.</p>
</li>
<li><p>Publish your Adaptive Card JSON to internal GitHub for standardisation.</p>
</li>
<li><p>Use role-based access to scope which alerts trigger playbooks.</p>
</li>
</ul>
<p>Let incidents come to the team — not the other way around.</p>
]]></content:encoded></item></channel></rss>