DevOps: Managing internal and external DNS with Amazon Route53

So one fun project I’ve been working on at work is building the integration that makes managing our internal (server-facing) and external (customer-facing) DNS a simpler process.  This has meant integrating a few different things — BIND, Amazon Route 53, DHCP — and taking tools from various places and tying them together.

So my system consists of basically four elements: Route 53, for serving internet-facing DNS requests; internal DNS servers that get their zone data from Route 53; route53d servers for pushing updates to Route 53 via its API; and Route 53’s web UI.

I use Route 53 as the single source of truth.  Updates go there via the API or the web UI; the data is then pulled out and published to my internal DNS servers.
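As a sketch of what an API update looks like, here’s a hypothetical shell snippet that builds a Route 53 ChangeResourceRecordSets request body and POSTs it with dnscurl.py (AWS’s request-signing wrapper around curl).  The zone ID, key name, record name, and address are all placeholders, not our actual setup:

```shell
# change_batch NAME ADDR: emit the ChangeResourceRecordSets XML body
# that creates one A record (300s TTL) in a Route 53 hosted zone.
change_batch() {
    name="$1"; addr="$2"
    cat <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<ChangeResourceRecordSetsRequest xmlns="https://route53.amazonaws.com/doc/2010-10-01/">
  <ChangeBatch>
    <Changes>
      <Change>
        <Action>CREATE</Action>
        <ResourceRecordSet>
          <Name>$name</Name>
          <Type>A</Type>
          <TTL>300</TTL>
          <ResourceRecords>
            <ResourceRecord><Value>$addr</Value></ResourceRecord>
          </ResourceRecords>
        </ResourceRecordSet>
      </Change>
    </Changes>
  </ChangeBatch>
</ChangeResourceRecordSetsRequest>
EOF
}

# Then POST it (not run here; needs AWS credentials and a real zone ID):
# change_batch www.example.com. 203.0.113.10 > /tmp/change.xml
# dnscurl.py --keyname my-aws-key -- -X POST \
#   https://route53.amazonaws.com/2010-10-01/hostedzone/Z123EXAMPLE/rrset \
#   --data @/tmp/change.xml
```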

Because these servers are effectively stateless, this system is very scalable out of the box.  To handle more load, I just spin up more instances of the internal DNS servers.  With a load balancer in front of them, I could serve an essentially arbitrary number of machines.

A couple of cool programs made this project possible: dnscurl.py, route53d, and route53tobind.  These tools made it easy (as in, just writing integration code and deployment code) to tie all these resources together.

One other goal I had in this process was making the whole system hands-off.  I never want to have to log into the boxes once they’re in production, or indeed, even when deploying them.  So I developed deployment scripts and kickstart files that specify the whole structure of the internal-facing DNS servers and the route53d servers.  Basically, I just kickstart a box, and it’s done.  This works by having the kickstart file’s postinstall section pull over a script and run it; that script configures everything and sets the services to start on boot.  This could (and should) be done with Chef or some other configuration management suite.
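The postinstall hook can be as small as fetching and running a bootstrap script.  Here’s a minimal sketch of that kickstart fragment — the deployment URL, script name, and service are placeholders, not my actual files:

```
%post
# Pull the node's bootstrap script from the deployment host (placeholder URL)
wget -q -O /root/bootstrap.sh http://deploy.example.com/dns-bootstrap.sh
sh /root/bootstrap.sh        # installs and configures BIND or route53d
chkconfig named on           # make sure the service starts on boot
%end
```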

I periodically poll Route 53 for new zone data and, when I get it, push it to the internal servers, then call “rndc reload” to incorporate the updated zones into the running server.
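The poll-and-reload step can be sketched as a small cron script: pull a fresh copy of the zone, and only install it and reload BIND when the data actually changed.  The paths, zone name, and route53tobind invocation below are assumptions, not my exact setup:

```shell
#!/bin/sh
# Reload command is a variable so the logic can be exercised without a
# running named; in production it's just "rndc reload".
RELOAD="${RELOAD:-rndc reload}"

# sync_zone NEWFILE CURFILE ZONE
# Install the freshly pulled zone file and reload BIND, but only when
# the data changed; otherwise discard the fresh copy and do nothing.
sync_zone() {
    new="$1"; cur="$2"; zone="$3"
    if cmp -s "$new" "$cur"; then
        rm -f "$new"        # nothing changed since the last poll
        return 1
    fi
    mv "$new" "$cur"
    $RELOAD "$zone"         # e.g. rndc reload example.com
}

# From cron, something like (route53tobind arguments are assumed):
# route53tobind Z123EXAMPLE > /tmp/example.com.new
# sync_zone /tmp/example.com.new /var/named/example.com.zone example.com
```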

And presto, consistent internal and external DNS with scalability and availability.

Tools used:

- dnscurl.py
- route53d
- route53tobind


3 thoughts on “DevOps: Managing internal and external DNS with Amazon Route53”

  1. Are you storing any custom-TLD stuff (e.g. .localdomain) in Route 53? If so, how’s it working for you? I realize that each Route 53 zone gets its own nameservers, but still–someone could query them and get your internal data.

    Also, how have you dealt with the problem of delegating subdomains? Since each zone gets its own set of nameservers (that you can’t predict), there’s no way to automate subdomain creation in a single operation. Granted, you can add the subdomain, query Route 53 for its nameservers, then auto-add those records to the parent zone, but it’s still a bit of a hack.

    1. Good questions.

      No, we weren’t storing any .localdomain type stuff in there — we were using it strictly as a centralized place to manage the internal and external view of a zone. We weren’t worried about exposure on internal names because
      1) You can’t do AXFRs on that and
      2) you’d have to guess the names and
      3) even if you guessed, you’d get an RFC 1918 address back, which wouldn’t be very useful

      The risks were considered but the flexibility was deemed more important than what we considered a small risk.

      We didn’t do any subdomain stuff (well, we did, but we just entered it into the domain, i.e., www.stage.example.com)

      Thanks for your question.
