We build sites for humans, but we need robots to facilitate the experience. We need them to serve our sites efficiently, index them properly, and render them correctly. Thinking of robots as central to application architecture, at least at first, comes with many advantages. First and foremost is that the very purpose of search engine robots is to reward relevant content and good user experience, and to penalize bad content and bad UX. We should therefore not be surprised if the best way to provide an excellent experience to our users starts with pleasing robots.
In this article I outline what it might look like to embrace robot-first development, walking through an iterative build pipeline using the example of a blog. Specific implementations are left as an exercise to the reader; for reference, during development I personally used VS Code, Nginx, EC2, Hugo, and Apostrophe.
I don’t want to leave the impression that gorgeous graphic designs and compelling content are not important. They are of utmost importance. But they will fail in their goal if your users cannot find or properly load your site, or if there are weird glitches and they leave in frustration. Sites that load fast are better than slow, secure better than insecure, and parsable better than opaque. This is true of both humans and robots. But robots are much better at quantifying the experience. This is why robot-first development prescribes consulting them first.
An iterative approach
Let’s suppose we’re building a blog. Our final product will look beautiful and have engaging content, of course. But our approach will start with making robots happy, and as the build advances, we check back with robots at every step.
And what do robots want? Luckily, we know:
- Qualys SSL Labs tells us robots want our HTTP traffic to be secure. Turns out they hate it when you leave yourself vulnerable to hackers (some of whom are other robots).
- Probely Header Checker tells us the same thing.
- Structured Data Testing Tool tells us robots are not particularly good at parsing meaning from human text, without help. If you do them this courtesy, they will respond by prioritizing your content and surfacing your search results in very attractive ways.
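As a sketch of what that courtesy might look like for a blog post, here is schema.org `BlogPosting` markup embedded as JSON-LD (the headline, date, and author name are placeholders, not values from this site):

```html
<!-- Hypothetical example: tells robots exactly what this page is -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Robot-first Development",
  "datePublished": "2020-01-01",
  "author": { "@type": "Person", "name": "Jane Doe" }
}
</script>
```

A structured data testing tool will parse this block and report exactly which properties it understood, which is the fastest feedback loop you will get from a robot.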
Suppose our site contains an About Me page, a bunch of posts, and some featured projects so that we can show off our software. In terms of semantic elements, it could look like this:
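One reasonable reading of that structure, using semantic elements in place of anonymous `div`s (the `id` values are placeholders):

```html
<body>
  <header>
    <nav><!-- site navigation --></nav>
  </header>
  <main>
    <section id="about"><!-- About Me --></section>
    <section id="posts">
      <article><!-- one post per article element --></article>
    </section>
    <section id="projects"><!-- featured projects --></section>
  </main>
  <footer><!-- contact, copyright --></footer>
</body>
```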
This is pure structure. Every page should load fast and contain schema-valid rich content. Linter errors should be addressed. The robots will tell you if you have malformed headers or improper HTML tags. Fix them.
Your app is getting more complex and fleshed out. You’ll probably need to loosen up your CSP headers to whitelist certain origins. For example, you may want fonts from
https://fonts.google.com/* and js from
https://cdnjs.com/*. The key is that the process is considered and controlled. Disallow is the default. You know exactly how you’re expanding the attack surface to provide necessary functionality.
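In Nginx (the server I happened to use), a default-deny policy that whitelists only the two origins named above might look something like this sketch; the exact directive values will depend on what your pages actually load:

```nginx
# Default-deny CSP: everything falls back to 'self' unless explicitly allowed
add_header Content-Security-Policy "default-src 'self'; font-src 'self' https://fonts.google.com; script-src 'self' https://cdnjs.com" always;
```

When a resource is blocked, the browser console tells you which directive rejected it, so each loosening of the policy is a deliberate, logged decision rather than an accident.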
It’s starting to look great, but according to the robots your A+ has degraded to something lower. Ah well, such is life. Robot-first development is not Robot-last development. You have a very sturdy platform now upon which to build an excellent user experience. Time to shift your obsession away from that A+ and towards your original audience: the lowly, smelly, beautiful human being.
That’s it in a nutshell.
Tenets of Robot-first Development
- Robots help humans find your site. Help them help you.
- Robots want your site to be accessible and performant. These are good goals for humans and robots alike. They should be embraced.
- Markup should be semantic and terse. We want robots to efficiently glean meaning from our content.
- We want to be honest information brokers. We want to tell robots the same story we tell humans.