So one fun project I’ve been working on at work is building the integration to make managing our internal (server-facing) and external (customer-facing) DNS a simpler process. This meant tying together a few different things: BIND, Amazon Route53, DHCP, and various tools to glue them all together.
So my system consists of basically four elements: Route53, for serving internet-facing DNS requests; internal DNS servers that get zone data from Route53; route53d servers for pushing updates to Route53 via its API; and Route53’s web UI.
I use Route53 as the single source of truth. Updates go there via the API or the web UI; then the data is pulled out and published to my internal DNS servers.
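For a sense of what an API update looks like, here is a sketch of the change-batch payload Route53 accepts for adding a record. The record name, address, and TTL are made up for illustration, and the exact document version in the namespace URL may differ from what your tooling uses:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ChangeResourceRecordSetsRequest xmlns="https://route53.amazonaws.com/doc/2012-02-29/">
  <ChangeBatch>
    <Changes>
      <Change>
        <Action>CREATE</Action>
        <ResourceRecordSet>
          <Name>host.example.com.</Name>
          <Type>A</Type>
          <TTL>300</TTL>
          <ResourceRecords>
            <ResourceRecord><Value>10.0.0.5</Value></ResourceRecord>
          </ResourceRecords>
        </ResourceRecordSet>
      </Change>
    </Changes>
  </ChangeBatch>
</ChangeResourceRecordSetsRequest>
```

A tool like dnscurl.py handles signing this request to the hosted zone endpoint, so you never hand-roll the authentication yourself.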
Due to the stateless nature of these servers, the system is very scalable out of the box. To handle more load, I just spin up more instances of the internal DNS servers; with a load balancer in front of them, I could serve an essentially arbitrary number of machines.
A couple of cool programs made this project possible: dnscurl.py, route53d, and route53tobind. These tools made it easy (as in, just writing integration and deployment code) to tie all these resources together.
I also wanted the whole system to be hands-off: I never wanted to log into the boxes once they’re in production, or indeed, even when deploying them. So I developed deployment scripts and kickstart files that specify the whole structure of the internal DNS servers and the route53d servers. Basically, I just kickstart a machine, and it’s done. The kickstart file’s post-install section pulls over a script and runs it; that script configures everything and sets the services to start on boot. This could (and should) be done with Chef or some other configuration management suite.
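As a rough sketch of what that post-install hook might look like (the deploy host, script name, and paths here are all placeholders, not my actual setup):

```shell
# kickstart %post sketch -- runs inside the freshly installed system
%post --log=/root/ks-post.log
# pull over the configuration script from the deploy host and run it
curl -o /root/bootstrap-dns.sh http://deploy.example.internal/bootstrap-dns.sh
chmod +x /root/bootstrap-dns.sh
/root/bootstrap-dns.sh
# make sure BIND comes up on boot
chkconfig named on
%end
```

The bootstrap script itself writes out named.conf, pulls the initial zone data, and starts the service, so the box is serving DNS the first time it boots.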
I periodically poll Route53 for new zone data, and when I get it, I push it to the internal servers and call “rndc reload” to incorporate the updated zones into the running server.
And presto, consistent internal and external DNS with scalability and availability.