Cloud Administration Aside from the base hardware on which they run, cloud infrastructures comprise hundreds or thousands of virtual resources, such as compute and storage nodes. Administration of virtual infrastructures, as compared with hardware, is much more of a software-oriented activity
Cloud Administration Increasingly, software has to be designed and developed with the target deployment environment closely in mind. The term “DevOps” has come to mean the close cooperation of developers and operations engineers in deploying and running multi-component application stacks over complex virtualised environments
Interoperability A typical service comprises many layers of interoperating components. Each component has some arbitrary version and configuration
Interoperability Each component has some arbitrary number of dependencies on other components and on operating system services and libraries. A large number of permutations is easily possible with just a small number of components
Interoperability The N x M problem: N components (static website, web frontend, background workers, user DB, analytics DB, queue) crossed with M environments (dev VM, QA server, single prod server, online cluster, public cloud, dev laptop, customer servers), where every pairing is a question mark that must be made to work
An Analogy Before 1960, most cargo was shipped break bulk. Shippers and carriers alike needed to worry about bad interactions between different types of cargo (e.g. if a shipment of anvils fell on a sack of bananas). Similarly, transitions between different modes of transport were painful.
An Analogy The answer was found in the form of a standard shipping container. The container can be sealed and need not be re-opened until it reaches its final destination
Docker Docker is a shipping container for code. Enables any application and its dependencies to be packaged up as a lightweight, portable, self-sufficient container. Containers have standard operations, thus enabling more automation.
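Docker For instance, the same few CLI verbs handle any container regardless of its contents, just as crane operations handle any sealed shipping container. A minimal sketch of these standard operations (image and container names are illustrative):

    # Fetch a packaged application image from a registry:
    docker pull nginx
    # Start a container from that image, in the background:
    docker run -d --name web nginx
    # List running containers:
    docker ps
    # Stop the container; the image itself is untouched:
    docker stop web

Because these operations are uniform, tooling can automate them without knowing anything about the application inside.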
Virtual Machine Vs. Container [Diagram: VMs with per-VM guest OSes versus containers sharing the Host OS via the Docker Server]
Virtual Machine Vs. Container A container comprises an application and its dependencies. Containers isolate processes, which run in userspace on the host's operating system. Traditional hardware virtualization (e.g. VMware, KVM, Xen, EC2) aims to create an entire virtual machine.
Virtual Machine Vs. Container Each virtualized application contains not only the application (which may be only tens of MB) and the binaries and libraries needed to run it, but also an entire guest operating system (which may measure in the tens of GB).
Virtual Machine Vs. Container Since all of the containers share the same operating system (and, where appropriate, binaries and libraries), they are significantly smaller than VMs, making it possible to run hundreds of containers on a single physical host
Virtual Machine Vs. Container Since they utilize the host operating system, restarting a container does not mean restarting or rebooting the operating system. Thus, containers are much more portable and much more efficient for many use cases.
Virtual Machine Vs. Container With a traditional VM, each application, each copy of an application, and each slight modification of an application requires creating an entirely new VM. A new application on a host need only have the application and its binaries/libraries. There is no need for a new guest operating system.
Virtual Machine Vs. Container If you want to run several copies of the same application on a host, you do not even need to copy the shared binaries. Finally, if you modify the application, you need only copy the differences.
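Virtual Machine Vs. Container This "copy only the differences" behaviour comes from image layering: images built from the same base share that base's layers on disk. A minimal sketch (image names illustrative):

    # Two app images built on one shared base.
    # app-a/Dockerfile and app-b/Dockerfile both begin: FROM ubuntu:14.04
    docker build -t app-a ./app-a
    docker build -t app-b ./app-b   # reuses the ubuntu:14.04 layers already on disk
    # Show the stacked layers and their individual sizes:
    docker history app-a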
Docker: Introduction Docker is an open source, community-backed project, originally developed by dotCloud. The project aims to create an application encapsulation environment, using Linux containers for isolation
Docker: Introduction Docker addresses one of the biggest challenges in cloud service operations management: application deployment and component interoperability in complex, multi-node environments
Docker: Introduction Docker is an open platform for developers and sys admins to build, ship, and run distributed applications. As a result, IT/IS can ship faster and run the same app, unchanged, on laptops, datacentres, VMs, and on any cloud
Docker: Introduction Enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. It consists of: Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub (or a private Registry), a cloud service for sharing applications and automating workflows
Docker: Introduction Containers are designed to run on virtually any Linux server. The same container that a developer builds and tests on a laptop will run at scale, in production, on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above.
Docker: Introduction Developers can build their application once, and then know that it can run consistently anywhere. Operators can configure their servers once, and then know that they can run any application.
Docker: Basic Workflow [Diagram: the build, ship, run workflow]
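Docker: Basic Workflow A minimal sketch of that workflow from the CLI (the myaccount/myapp image name is illustrative):

    # Build an image from the Dockerfile in the current directory:
    docker build -t myaccount/myapp .
    # Ship it to a registry (Docker Hub by default):
    docker push myaccount/myapp
    # On any other Docker host, pull and run the identical image:
    docker pull myaccount/myapp
    docker run -d myaccount/myapp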
Making Copies [Image-only slide]
Testing and Qualification Before a service update can be rolled to production, it must be tested and qualified. Ideally, to make testing easier and to have any kind of confidence in quality, the change between an update and a previously working configuration should be small – the operations objective. This need invariably conflicts with the need to patch and advance the underlying software – the developer's objective
Update Example Suppose that a web service is running a particular version of PostgreSQL (a relational database) with a particular version of Memcached (a memory cache). Each service is known to work with a particular OS version, say Ubuntu 14.04. Now suppose a critical fix is required for PostgreSQL, but that update will only work with a later version of Ubuntu
Update Example The problem is that the server OS would need to be updated, but that update is not qualified to work with the current version of Memcached. A newer version of Memcached would be needed too – this is known as a dependency cascade
Update Example Suppose, to make matters worse, the new version of Memcached would not work with the middleware version it services. Now we could have further upstream dependency cascades. What do we do?
Update Example Solution #1: Split the Memcached and PostgreSQL services onto different VMs. This would work, but it further complicates the deployment mesh and forces a single-role-per-VM policy, reducing flexibility. Solution #2: Containerise the PostgreSQL and Memcached services inside Docker
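Update Example A minimal sketch of Solution #2 (version tags and the password are illustrative): each service carries its own userland, so PostgreSQL can take the fix on a newer base while Memcached keeps its qualified stack, and the dependency cascade stops at the container boundary.

    # PostgreSQL with the critical fix, built on a newer base:
    docker run -d --name db -e POSTGRES_PASSWORD=secret postgres:9.6
    # Memcached unchanged, still on its old, qualified stack:
    docker run -d --name cache memcached:1.4
    # The host OS can now be updated (or not) independently of both.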
Docker: How it Works Starting with a base image, say some version of Ubuntu, the developer can build a bespoke container for hosting a single component, say PostgreSQL. The built image encapsulates the OS version, updates, libraries and application components
Docker: How it Works Docker can animate the image into a running container. The container transparently runs a fully isolated OS instance within the host machine. Resources outside the container are not visible to anything inside
Docker: How it Works Docker images are built on a union filesystem (AUFS). Updates to the filesystem by in-container applications are visible as discrete changes, which can be preserved (forming named image snapshots) or thrown away
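Docker: How it Works A minimal sketch of preserving or discarding such changes (names illustrative):

    # Start a container and install something inside it:
    docker run -it --name work ubuntu:14.04 bash
    #   (inside) apt-get update && apt-get install -y postgresql
    # From the host, list the discrete filesystem changes:
    docker diff work
    # Preserve them as a named image snapshot...
    docker commit work myaccount/pg-base
    # ...or throw them away:
    docker rm work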
Docker: How it Works Containers have their own virtual networking layer, with automatically assigned IP addresses which are visible in the host system. Docker supports TCP/IP port mapping and forwarding from the host
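Docker: How it Works For example (a sketch, image name illustrative), host port 8080 can be mapped to a web server listening on port 80 inside a container:

    # Map host port 8080 to container port 80:
    docker run -d -p 8080:80 --name web nginx
    # Show the container's automatically assigned IP address:
    docker inspect -f '{{ .NetworkSettings.IPAddress }}' web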
Docker: How it Works The philosophy for building a Docker component mesh is to have the highest degree of componentisation possible:
Docker: How it Works For example, a LAMP stack would comprise: MySQL container(s); a linked MySQL data container; Apache (or Nginx) and PHP container(s); Memcached container(s). Each one can run on top of a different Linux OS version and be updated independently
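Docker: How it Works A minimal sketch of that stack using the early Docker data-container and (now legacy) linking pattern the slide describes (names, tags and the password are illustrative):

    # Data-only container holding the MySQL data volume:
    docker create -v /var/lib/mysql --name mysql-data mysql:5.7
    # MySQL container using that volume:
    docker run -d --volumes-from mysql-data -e MYSQL_ROOT_PASSWORD=secret --name mysql mysql:5.7
    # Memcached container:
    docker run -d --name cache memcached
    # Apache+PHP container, linked to the database and cache containers:
    docker run -d -p 80:80 --link mysql:db --link cache:cache php:7.2-apache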
Docker: How it Works Docker supports automated, script-driven image generation. A Dockerfile specifies the following: the base Linux image; individual software installation commands; TCP/IP ports to expose; default command(s) to execute when the container is run. https://docs.docker.com/engine/reference/builder/
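Docker: How it Works A minimal example Dockerfile covering those four elements (the choice of Memcached is illustrative):

    # Base Linux image
    FROM ubuntu:14.04
    # Individual software installation commands
    RUN apt-get update && apt-get install -y memcached
    # TCP/IP port to expose
    EXPOSE 11211
    # Default command to execute when the container is run
    CMD ["memcached", "-u", "daemon"]

Running docker build -t my-memcached . against this file produces a runnable image (tag illustrative).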
Docker: How it Works Consult the latest docs at the official site to see what is supported for your target platform https://docs.docker.com/install/ Linux, macOS and Windows are supported (Mac and Windows use a virtualised Linux to make this work)
Lightweight [Image-only slide]
The Good, the Bad, and the Interesting of Docker
THE GOOD
Docker: The Benefits Make the entire lifecycle more efficient, consistent, and repeatable; increase the quality of code produced by developers; eliminate inconsistencies between development, test, production, and customer environments
Docker: The Benefits Support segregation of duties; significantly improve the speed and reliability of continuous deployment and continuous integration systems; and, because the containers are so lightweight, address significant performance, cost, deployment, and portability issues normally associated with VMs
Docker: The Benefits Because any given container only runs some subset of a service’s total number of components, the surface area for an attack on a service is highly fragmented
Docker: The Benefits Further, only components which need to be exposed on a network should be network-facing, allowing everything else to be isolated on internally secured networks. For example, a reverse HTTP proxy container can sit in front of the middleware and database containers, which remain hidden
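Docker: The Benefits A minimal sketch of that topology using a user-defined network (names illustrative; myaccount/middleware stands in for whatever middleware image is in use): only the proxy publishes a port on the host, so the middleware and database are reachable solely from the internal network.

    # Internal network, not reachable from outside the host:
    docker network create backend
    # Database and middleware attach only to the internal network:
    docker run -d --network backend --name db -e POSTGRES_PASSWORD=secret postgres:9.6
    docker run -d --network backend --name app myaccount/middleware
    # Only the reverse HTTP proxy publishes a host port:
    docker run -d --network backend -p 443:443 --name proxy nginx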
THE BAD
Docker: The Weaknesses It does not provide cross-platform compatibility: an application designed to run in a Docker container on Windows cannot run in a Docker container on Linux. Docker is only suitable when the development OS and the testing OS are the same
Docker: The Weaknesses It is not good for applications that require a rich GUI. It is difficult to manage a large number of containers. It does not provide any solution for data backup and recovery
THE INTERESTING
Docker: Interesting It Allows You to Run Your Own Malware Analysis Engine Sandboxing and isolation are central to today's malware analysis mechanisms; to this end, Docker can be a lightweight alternative to complete virtualization. The REMnux project provides Docker images for malware analysis, giving information security professionals an easy way to set up tools and environments for malware detonation.
Docker: Interesting It Allows You to Containerize Your Skype Sessions Wish you could Skype Grandma in complete isolation? By running your Skype sessions inside a Docker container, you can do just that.
Docker: Interesting It Allows You to Manage Your Raspberry Pi Cluster With Docker Swarm Using Docker Machine, you can install Swarm on Raspberry Pi devices to set up a Raspberry Pi Swarm cluster.