I’m definitely excited to see Docker present at the upcoming Tech Field Day 12. I had meant to sit in on their exclusive vExpert demos at VMworld 2016, but unfortunately I had a scheduling conflict and missed it (boo). I was bummed, as I don’t know a ton about Docker but would really like to learn more.
For background, Docker Inc. began life as dotCloud, a PaaS company founded in 2010. After open-sourcing its internal container tooling as Docker in 2013, the company renamed itself Docker Inc. and later sold off the dotCloud platform. As it stands, Docker Inc. has raised $180 million in funding across 6 rounds, and it has helped change the landscape of traditional application deployment.
I first became aware of Docker while listening to Coder Radio a few years ago. I can’t recall which episode first mentioned the use of containers, but I know that they interviewed a couple of the dotCloud folks way back in episode 66. I believe this was even before the first DockerCon. At the time I thought the idea of containers was cool, and that it offered a new way of looking at application deployment. Given the success of containers, I would say others shared that sentiment.
At least part of the reason Docker has become such a big deal is the ‘cattle vs. pets’ mentality it helped popularize in datacenters. With Docker you can spin up new instances of an image very quickly, so wiping containers out and rebuilding them is a fast, cheap operation. Now imagine you need to scale some web servers (each running in a Docker container). Say you normally run 20 of them, but you need to ramp up to 100 to absorb a sudden traffic spike. This can easily be scripted and carried out in a very short amount of time (depending on the infrastructure, minutes or less). When the spike passes, you just blow away the extra containers and drop back to your 20 servers.
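The scale-up step can be sketched as a simple loop. This is just an illustration, not anyone’s production tooling: `mysite/web` is a hypothetical image name, and the commands are echoed as a dry run rather than sent to the Docker daemon (drop the `echo` to run them for real).

```shell
#!/bin/sh
# Sketch: ramp a fleet of identical web-server containers up to a target
# count during a traffic spike. Dry run only -- commands are printed,
# not executed, and "mysite/web" is a made-up image name.
DESIRED=100                      # target container count during the spike
i=1
while [ "$i" -le "$DESIRED" ]; do
    # each container gets a unique name and host port
    echo "docker run -d --name web-$i -p $((8000 + i)):80 mysite/web"
    i=$((i + 1))
done
# When the spike passes, the extras are cattle, not pets: remove them
# and drop back to the baseline fleet.
echo "docker rm -f \$(docker ps -q --filter name=web-)"
```

In practice you would more likely hand this job to an orchestrator (Swarm, Kubernetes, and friends exist for exactly this), but the point stands: containers make the instances themselves disposable.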
So, how do you go about getting the images used to build out these containers? Wouldn’t it be handy if there were some sort of central ‘hub’ for all of them? Well, guess what? There is – Docker Hub. What you will find there is a slew of images for pretty much any environment you can think of. Each repository includes some nice details, such as what the image is and does, how to use it, and the command to pull it down. It reminds me a lot of Chocolatey, for those of you who are familiar with it.
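That pull command is really all there is to it. A minimal sketch, using the official nginx image purely as an example (again echoed as a dry run; drop the `echo` to actually run it):

```shell
#!/bin/sh
# Grab an image from Docker Hub and start a container from it.
# nginx is just an example; the same pattern works for any Hub repository.
IMAGE="nginx:latest"
echo "docker pull $IMAGE"                 # download the image from Docker Hub
echo "docker run -d -p 8080:80 $IMAGE"    # run it, mapping host port 8080 to 80
```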
At DockerCon 2016, a new public beta service was announced: Docker Store. What’s the difference between Docker Hub and Docker Store? Well, from what I can tell, Hub is essentially a list of repositories containing images built by contributors. Docker Store, on the other hand, is more of a marketplace where you might find an app packaged by the software author. This can also include paid containers.
So, what can we expect to see from Docker Inc.? Who knows … There is definitely a lot of potential here, given that there is official support for Docker containers in Windows Server 2016 and VMware has vSphere Integrated Containers. Heck, even storage vendors like Tintri are supporting containers now. With support coming from all corners of the modern datacenter, it is apparent that containers are a) growing in importance and use, and b) unlikely to just disappear.
You can join in on the fun by watching Docker present at Tech Field Day 12 at 16:30 on Wednesday, November 16th.
Disclaimer: I was invited to participate in Tech Field Day as a delegate. All of my expenses, including food, transportation, and lodging, are being covered by Gestalt IT. I did not receive any compensation to write this post, nor was I asked to write it. Anything written above is of my own accord.