Last night I spent three hours convinced my BIND split horizon DNS was misconfigured. Internal resolution was completely broken - every query was returning external IPs instead of internal ones. Spoiler alert: my DNS server was fine. My UniFi gateway was helpfully intercepting all DNS traffic to “protect” me.
What Is DNS?
Before diving into the debugging, a quick primer: DNS (Domain Name System) is the internet’s phone book. When you type homelab.example.com into your browser, DNS translates that human-readable name into an IP address like 192.168.2.50 that computers use to communicate. Your computer sends a DNS query to a DNS server (also called a nameserver or resolver), which responds with the IP address.
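You can watch that translation happen yourself with dig. A minimal sketch, assuming dig is installed (usually in the dnsutils or bind-utils package); the output comment shows what the answer looks like on my network:
# Ask your default resolver to turn a name into an IP
dig homelab.example.com +short
# 192.168.2.50   <- whatever address the name maps to for you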
The Setup
Split horizon DNS (or “DNS views” in BIND terminology) lets you return different answers based on who’s asking. It’s like having an unlisted phone number that only your friends know, while strangers get your office number:
- Internal view (192.168.2.2): Returns RFC1918 private addresses (like 192.168.x.x) for internal hosts
- External view (public IP): Returns public IPs for the same hostnames
This is useful because you want internal clients to connect to services via fast local IPs, while external clients connect via your public IP through the firewall.
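To make that concrete, here’s roughly what the two views look like from each side. A sketch using the addresses from this post (ns1.example.com stands in for the nameserver’s public name); the expected answers are shown as comments and will differ on your network:
# From inside the LAN, asking the internal view directly
dig @192.168.2.2 homelab.example.com +short
# 192.168.2.50    <- private address, direct LAN path

# From somewhere on the internet, asking the same nameserver by its public name
dig @ns1.example.com homelab.example.com +short
# 203.0.113.42    <- public address, reached through the firewall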
My resolver configuration had two nameservers:
- 192.168.1.1 - UniFi Dream Machine (gateway/router)
- 192.168.2.2 - My BIND9 server with split horizon views
The internal view was only configured on 192.168.2.2. This configuration worked fine… until it didn’t.
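For reference, on a typical Linux client that pair of nameservers would look something like this. A sketch only, assuming a plain /etc/resolv.conf not managed by systemd-resolved or DHCP; the search domain is illustrative:
# /etc/resolv.conf (illustrative)
nameserver 192.168.1.1   # UniFi Dream Machine
nameserver 192.168.2.2   # BIND9 with split horizon views
search example.com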
The Problem
Suddenly, all DNS queries from my laptop were returning external IPs, even for internal services. Queries that should have returned 192.168.2.50 were returning my public IP instead.
# This should return an internal IP
dig @192.168.2.2 homelab.example.com +short
203.0.113.42 # Wrong! Should be 192.168.2.50
My first thought: the ACL for the internal view must be broken.
Diagnosis Step 1: Check the View ACL
BIND logs showed my queries were hitting the server, but which view?
# Query with extended output
dig @192.168.2.2 homelab.example.com
; <<>> DiG 9.18.24 <<>> @192.168.2.2 homelab.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12345
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; ANSWER SECTION:
homelab.example.com. 3600 IN A 203.0.113.42
The aa flag (authoritative answer) meant BIND was responding authoritatively. But which view? Time to check serials.
Diagnosis Step 2: Serial Numbers and TTLs
What are serial numbers and TTLs? Every DNS zone has an SOA (Start of Authority) record that contains metadata about the zone. The serial number is like a version number - it increments every time you update the zone, allowing secondary servers to know when to pull updates. The TTL (Time To Live) tells caching servers how long to remember an answer before asking again. Think of TTL as an expiration date on cached data - a TTL of 3600 means “this answer is good for 1 hour.”
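You can see TTLs in action by asking a caching resolver the same question twice and comparing the first column of the answer. A sketch, assuming the UniFi gateway at 192.168.1.1 caches answers and using a placeholder external name; the exact numbers will vary:
# First query: the resolver fetches the record and caches it
dig @192.168.1.1 +noall +answer www.example.org
# www.example.org.  3600  IN  A  ...

# Ask again a minute later: same answer served from cache, TTL counting down
dig @192.168.1.1 +noall +answer www.example.org
# www.example.org.  3540  IN  A  ...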
In this case, serial numbers became a forensic tool. I had deliberately set different serial numbers for my internal and external views, so I could tell which zone file was answering queries:
# Internal zone (should be returned to 192.168.x.x clients)
$ORIGIN example.com.
@ IN SOA ns1.example.com. admin.example.com. (
2025093001 ; Serial - internal
3600 ; Refresh
1800 ; Retry
604800 ; Expire
86400 ) ; Minimum TTL
homelab IN A 192.168.2.50
# External zone (public internet)
$ORIGIN example.com.
@ IN SOA ns1.example.com. admin.example.com. (
2025093002 ; Serial - external
3600
1800
604800
86400 )
homelab IN A 203.0.113.42
Check which zone is being served:
# Query the SOA record to see the serial
dig @192.168.2.2 example.com SOA +short
ns1.example.com. admin.example.com. 2025093002 3600 1800 604800 86400
That’s the external serial (2025093002). My client at 192.168.2.15 was somehow matching the external view’s ACL instead of the internal one.
# My BIND configuration
acl "internal-networks" {
    192.168.1.0/24;
    192.168.2.0/24;
    localhost;
};

view "internal" {
    match-clients { internal-networks; };
    recursion yes;

    zone "example.com" {
        type master;
        file "/etc/bind/zones/internal/db.example.com";
    };
};

view "external" {
    match-clients { any; };
    recursion no;

    zone "example.com" {
        type master;
        file "/etc/bind/zones/external/db.example.com";
    };
};
The ACL looked correct. My laptop’s IP (192.168.2.15) should absolutely match 192.168.2.0/24. What was going on?
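A sanity check worth running at this point is to confirm the server is actually parsing the config you think it is. A sketch; the file paths match the config above but may differ on your distro:
# Syntax-check the config and print it exactly as BIND parses it
sudo named-checkconf -p /etc/bind/named.conf

# Confirm each zone file loads cleanly and note which serial it reports
sudo named-checkzone example.com /etc/bind/zones/internal/db.example.com
sudo named-checkzone example.com /etc/bind/zones/external/db.example.com

# Reload and make sure the running server picked up the files
sudo rndc reload && sudo rndc status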
Diagnosis Step 3: Where Is This Query Really Coming From?
The serial number told me which view was responding (external), but not why. BIND chooses views based on the source IP address of the query. My laptop’s IP (192.168.2.15) should have matched the internal network ACL (192.168.2.0/24). Time to see what BIND actually sees:
# Enable query logging
sudo rndc querylog on
# Watch the logs
sudo tail -f /var/log/named/queries.log
The logs showed:
30-Sep-2025 23:42:15.123 queries: info: client @0x7f8b4c001020 203.0.113.1#54823 (homelab.example.com): query: homelab.example.com IN A +E(0)K (192.168.2.2)
Wait. The source IP is 203.0.113.1 - that’s my ISP’s DNS server! That’s not my laptop’s IP at all. BIND is seeing queries from my ISP’s resolver, not from my internal network.
This explained why it was hitting the external view. But why was my query going through my ISP?
Diagnosis Step 4: Following the Packet Trail
What is tcpdump? tcpdump is a packet capture tool that lets you see the actual network traffic flowing through your network interfaces. It’s like wiretapping your own network - you can see source IPs, destination IPs, ports, and even the contents of packets. While DNS tools like dig show you the application layer (what the DNS protocol says), tcpdump shows you the network layer (what’s actually on the wire).
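As a side note, for longer debugging sessions it can help to capture to a file and dig through it afterwards. A sketch; the output path is arbitrary:
# Capture DNS traffic to a file for later analysis
sudo tcpdump -i any -n -w /tmp/dns-debug.pcap port 53

# Read it back later (or open the .pcap in Wireshark)
tcpdump -n -r /tmp/dns-debug.pcap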
Even though I was explicitly querying @192.168.2.2, something was intercepting. Time for tcpdump:
# On my laptop - watch outgoing DNS queries
sudo tcpdump -i any -n port 53
# Then query again
dig @192.168.2.2 homelab.example.com +short
Output:
23:45:10.123456 IP 192.168.2.15.54321 > 192.168.2.2.53: 12345+ A? homelab.example.com. (37)
23:45:10.125678 IP 192.168.2.2.53 > 192.168.2.15.54321: 12345 1/0/0 A 203.0.113.42 (53)
That looked normal - my laptop sent to .2.2, got a response. But let’s see what the DNS server saw:
# On the DNS server (192.168.2.2)
sudo tcpdump -i any -n port 53 and host 192.168.2.15
Run the query again from my laptop. On the DNS server:
23:47:22.234567 IP 203.0.113.1.41234 > 192.168.2.2.53: 54321+ A? homelab.example.com. (37)
23:47:22.234789 IP 192.168.2.2.53 > 203.0.113.1.41234: 54321 1/0/0 A 203.0.113.42 (53)
There it is. The DNS server never saw a packet from 192.168.2.15. It saw a query from 203.0.113.1 (my ISP’s DNS).
The Culprit: UniFi Ad Blocking
My UniFi Dream Machine was configured with ad blocking enabled. This feature works by intercepting all DNS traffic on port 53, regardless of destination, filtering it, and forwarding it to upstream resolvers.
The packet flow was:
1. Laptop sends DNS query to 192.168.2.2:53
2. UniFi gateway intercepts (NAT redirect on port 53)
3. UniFi forwards to ISP DNS (203.0.113.1)
4. ISP DNS queries my public IP, which the firewall forwards to 192.168.2.2:53 (the external view)
5. BIND sees source IP as 203.0.113.1, matches external view
6. Response goes back through the chain
Even though I specified @192.168.2.2 in dig, the gateway intercepted it. Classic transparent proxy behavior.
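Under the hood this kind of interception is usually just a NAT rule on the gateway. I don’t have UniFi’s actual rules in front of me, but conceptually it looks something like this (an illustrative iptables sketch, not the real UDM configuration; br0 as the LAN bridge is an assumption):
# Illustrative only: redirect all outbound DNS, whatever its destination,
# to the resolver the gateway has chosen
iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 -j DNAT --to-destination 203.0.113.1:53
iptables -t nat -A PREROUTING -i br0 -p tcp --dport 53 -j DNAT --to-destination 203.0.113.1:53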
The tcpdump Smoking Gun
To really confirm, watch on the gateway itself:
# SSH to UniFi Dream Machine
ssh admin@192.168.1.1
# Watch NAT redirects
tcpdump -i any -n port 53 and host 192.168.2.15
# Output shows:
# IN: 192.168.2.15:54321 -> 192.168.2.2:53
# OUT: 192.168.1.1:41234 -> 203.0.113.1:53 # <-- Intercepted and redirected!
The gateway was rewriting the destination to the ISP DNS before forwarding.
Why dig Was Inconclusive
dig is an application-level DNS debugging tool - it sends DNS queries and shows you DNS responses. But it operates at layer 7 of the network stack. It doesn’t see what happens at layers 3 and 4 (IP routing and NAT manipulation) between your computer and the DNS server.
Here’s the sneaky part: from my laptop’s perspective, dig @192.168.2.2 showed:
;; Query time: 45 msec
;; SERVER: 192.168.2.2#53(192.168.2.2)
;; WHEN: Tue Sep 30 23:50:12 PDT 2025
It looked like the query went directly to 192.168.2.2 because the gateway was transparent. The response appeared to come from the IP I queried. Without packet capture, it was impossible to see the interception.
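One client-side trick that can hint at interception without a packet capture: query an address that definitely isn’t running DNS and see if you still get an answer. A sketch; 192.0.2.1 is TEST-NET documentation space, so nothing should respond from it:
# No real DNS server lives at 192.0.2.1
dig @192.0.2.1 homelab.example.com +short +time=2 +tries=1

# Expected: "connection timed out; no servers could be reached"
# Getting an answer back means something in the path is rewriting your DNS packets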
The Fix
Option 1: Disable UniFi DNS filtering (Settings → Security → Advanced → uncheck “Enable Ad Blocking”)
Option 2: Use a non-standard port for your internal DNS (not practical for most devices)
Option 3: Static routes/firewall rules to exempt your DNS server’s IP from interception:
# In UniFi, create firewall rule:
# LAN IN rule: Accept traffic from internal networks to 192.168.2.2:53
# Place before ad-blocking rules
Option 4: Make the gateway forward to your internal DNS:
- Set UniFi’s DNS server to 192.168.2.2 only
- Remove 192.168.1.1 from client configurations
- This way the gateway’s interception forwards to the right place
I went with option 4. Now all DNS goes through UniFi’s ad blocking → my BIND server, and the source IP is consistently 192.168.1.1, which I added to the internal ACL.
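A quick way to confirm the fix took is to reuse the serial trick from earlier. A sketch; the comments show the answers you’d expect once the internal view is responding again:
# Should now return the internal serial
dig @192.168.2.2 example.com SOA +short
# ns1.example.com. admin.example.com. 2025093001 3600 1800 604800 86400

# And the internal address for the host
dig homelab.example.com +short
# 192.168.2.50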
Lessons Learned
- dig lies when transparent proxies are involved - The SERVER: line shows where you think you queried, not where the packet actually went. Application-layer tools can’t see network-layer manipulation.
- Serial numbers are your friend - Different serials per view helped identify which zone file was answering. This simple versioning trick turned SOA records into a forensic breadcrumb trail.
- tcpdump is the only truth - Packet captures on both client and server revealed the interception. When application-layer tools are misleading, drop down to the network layer.
- Gateways with “features” are sneaky - Modern routers do all sorts of helpful packet rewriting (NAT, transparent proxies, DNS hijacking for ad blocking). What your client sends isn’t always what your server receives.
- Always check the source IP in logs - BIND’s query logs showed the real source immediately. Server-side logs revealed what dig couldn’t see from the client side.
- TTLs matter for troubleshooting - If you’re debugging DNS and getting inconsistent results, remember that caching exists. Responses might be cached with long TTLs (3600 seconds = 1 hour). Either wait, flush local caches (sudo systemd-resolve --flush-caches on Linux, sudo dscacheutil -flushcache on macOS), or query the nameserver directly with dig, which bypasses your system’s stub resolver cache.
If your split horizon DNS seems broken and dig shows the wrong view, check for:
- Transparent DNS proxies on your gateway/router
- Ad blocking features that intercept port 53
- VPN configurations that route DNS differently
- Docker/container networks with their own DNS magic
- Cached responses with stale TTLs
And when in doubt, tcpdump everything. Packet captures don’t lie. Use tcpdump -i any -n port 53 on both client and server to see the full path of your DNS queries.