Making Website Delivery as Consistent as Fast Food
Containers Have to be Consistent in Order to Provide the Best Service
Most computers we buy, both for home and in offices, aren't actually fantastic at any one particular task. They're a sort of 'Swiss Army knife' for streaming Netflix, word processing, playing music, presenting, and watching endless videos of cats on YouTube. Not brilliant at one thing, but passable at most of them.
The problem is that 'passable' doesn't quite cut it. Like 'just adequate', 'could do better' and 'C-', it really means a tool that's barely up to the task. As a result, these multi-tool computers wear out or break down when asked to do an important job to a high standard.
source: McDonald’s® Canada Image
Lots and Lots of Computers
This is why, when computers are needed to do a task reliably and well, the solution is to use lots of different ones, each dedicated to the single task it was set up for: one as a web server, one as a file server, one as a database, and so on.
In the not too distant past, this is why computer rooms had rows of different machines on tables, all with their separate keyboards and monitors, to do different jobs. Gradually, like music formats, everything got condensed. The different computers became mounted in racks, with just one monitor and keyboard to control them all and a special switch (known as a KVM switch) to change between them.
The trouble was that whenever a new task was needed, so was another computer. Keeping track of and maintaining all these different machines of different ages and capabilities was a bit of a nightmare.
This is why IT engineers like me used to walk around in trousers with worn-out knees and bad backs — from diving around under tables and fiddling in the depths of server rack cabinets.
Virtualization Finds a Way
Thankfully, someone who had probably run out of money from buying new trousers came up with the idea of getting one large computer to pretend it was lots of these smaller computers. This was the start of virtualization.
Virtualization essentially means that, rather than each smaller computer being a piece of physical hardware, each one is a piece of software, a 'virtual machine', running on the large machine and pretending to be a separate unit.
These can still each be designed and used for a single task, but it means that, if anything needs to be changed or worked on, all that's needed is to tell the large computer to pretend to be different hardware. Giving each one different characteristics, such as more memory, can be done with a few keystrokes.
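As an illustration, on a Linux host running the common KVM/libvirt virtualization stack, resizing a virtual machine's memory really is a few-keystrokes affair. The VM name `webserver` is invented for this sketch:

```shell
# Check the VM's current memory allocation (libvirt's virsh tool)
virsh dominfo webserver

# Raise the configured memory ceiling (takes effect on next restart)...
virsh setmaxmem webserver 4096M --config

# ...then give the running VM more memory, up to that ceiling
virsh setmem webserver 4096M --live
```

Compare that with the old way: powering down a physical box, opening the case, and seating new RAM sticks by hand.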
A Container? Like Tupperware?
The thing is that these smaller computers usually only need to do very simple tasks. So, virtualized systems have also evolved over the years, and now we have a concept called 'containers'.
Because it proved hard work for these large computers to pretend to be lots of complete, separate machines, the current incarnation has them mimic only the attributes the smaller systems actually need in order to work.
This is why these smaller systems are known as containers: they're too simplified to justifiably be called computers anymore. All they really consist of are self-contained packages of code and configuration files that can be used to run the independent parts of a website or an application.
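To make that concrete: in Docker, one of the most widely used container systems, such a package is described in a handful of lines. This is a minimal sketch of an image for a small Node.js web service; the file names and the `server.js` entry point are illustrative, not taken from any real project:

```dockerfile
# Start from a minimal, single-purpose base image
FROM node:20-alpine
WORKDIR /app

# The application code and its configuration travel inside the package
COPY package.json server.js ./
RUN npm install --omit=dev

# One container, one job: run the web service
CMD ["node", "server.js"]
```

Everything the service needs rides along in the image, which is precisely what makes a container self-contained.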
Blocking Out the Problems
The benefit of a containerized website is that you can deploy a container with its associated code, using a system that lets you monitor and modify it more easily.
Using this system, once a container is built and deployed, it can appear in the CMS interface. There, it's possible to review changes, validate them and ultimately make them live. Because the container is isolated, making those modifications, even if you're not a developer, won't damage your live website.
Another benefit of using containerized websites is that, because each container is only doing a single task at a time, they are very specialized and, in computing terms, quite dumb. This means that they tend to be much more reliable than the Swiss Army knife computers we watch YouTube on.
With that said, nothing is infallible. Containers have occasional problems, just like anything else. Thankfully, the old first rule of IT, 'turn it off and on again', is easy to apply to them: tell the big computer to create a replacement container and then switch the malfunctioning one off.
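With a command-line tool such as Docker's, that 'replace rather than repair' routine amounts to a couple of commands. The container names (`web-1`, `web-2`) and image name are invented for this sketch:

```shell
# Spin up a fresh replacement container from the same image
docker run -d --name web-2 mysite/web:latest

# Once the replacement is up, switch the malfunctioning one off and discard it
docker stop web-1
docker rm web-1
```

No screwdrivers, no crawling under desks: the broken 'machine' is simply thrown away and a new one conjured up in seconds.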
That's not to say that such a model doesn't present its own challenges. The ease with which new containers are created, and the myriad of different tasks they can perform, mean that identifying the correct one when it does have an issue is that much more difficult.
In fact, it may be that you are operating thousands and thousands of these containers, which really is a challenge. If we were still doing this with separate computers or laptops stacked on top of each other, they'd be as high as the London Eye.
Big Macs: the IT Nerd's Favorite
As an end-to-end service provider, one of our challenges is to monitor each container's activity and to make sure everything is consistent.
Think of how consistent McDonald's core products are around the world. A Big Mac is the same whether you buy it in Texas or Tokyo — even down to the top half that's slipped off, probably.
This is what we're now working towards with Blocks and containerized software. The infrastructure we've built, and are constantly refining, to monitor what each container is doing is what helps us get there. In other words, containers have to be as consistent as Big Macs in order to give the best service.
These challenges aren't small by any stretch of the imagination. However, that's what our job is — taking care of the containers and their inherent complexities while the organization using them gets on with the exciting stuff.
Anyone fancy a Big Mac?
source: McDonald’s® Canada Image