
Section 16 - Differences between LXC and Docker

I'm trying to improve my understanding of the differences between LXC containers and Docker containers. Although not essential, it would help me to better understand the use case of each.

LXC

The first flavor of containers was the OS container. This type of container runs an image of an operating system, with the ability to run an init process and spawn multiple applications.
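
A rough sketch of what that looks like in practice, with placeholder names (the container name, distribution and release below are only examples):

# Create an OS container from a downloaded root filesystem
sudo lxc-create -n c1 -t download -- -d ubuntu -r focal -a amd64

# Start it: the container's own init (e.g. systemd) becomes PID 1 inside the container
sudo lxc-start -n c1

# Attach and list processes: the init process plus the services it has spawned
sudo lxc-attach -n c1 -- ps -ef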

Docker

  • A container run-time manager
  • Can operate with type 1 (kernel - Linux) and type 2 (OS - Docker4Mac) hypervisors
  • Containers are applications

I'm looking for clarification on all of it, but particularly on the last point. I can understand that the OS vs application phrasing is meant to identify the scope of each container's use case, but Docker containers (AFAIK) are still operating system images, albeit as bare-bones as possible. They still need to init processes and spawn applications; one possible way of doing this is via the entrypoint acting as the container's init.
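
To make that last point concrete, here is a minimal sketch (the image tag and app.sh are placeholders): whatever ENTRYPOINT names becomes PID 1 of the container, with no separate init in between.

# Dockerfile (sketch; alpine tag and app.sh are placeholders)
FROM alpine:3.18
COPY app.sh /app.sh
ENTRYPOINT ["/bin/sh", "/app.sh"]

# Build and run; the entrypoint process is PID 1 inside the container
docker build -t myapp .
docker run --rm myapp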

This makes me think that the difference between them has more to do with methodology and utilisation than with the technical construct. It's common practice in the use of Docker to isolate unique functionality in individual containers and to connect containers when that functionality is required, whereas LXC bundles all of these functionalities into a single image. Is that a reasonable interpretation? Are the differences simply optimisations for these different use cases?
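
As a rough illustration of that "one function per container, connected when needed" pattern (the container and network names here are made up):

# One container per function, joined over a user-defined network
docker network create appnet
docker run -d --name db  --network appnet redis
docker run -d --name web --network appnet -p 8080:80 nginx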

Anything to help me understand the differences between the two would be appreciated.

Glossary

Hypervisor: a process that runs a VM
Hypervisor, type 1: external to the host OS kernel; the container interfaces directly with host hardware
Hypervisor, type 2: internal to the host OS kernel; an additional program of the OS that interfaces with hardware, e.g. KVM

Comments

  • GRO 108
    edited September 2020

    Reading into the differences further I've come to the following conclusion.

    Both run by integrating with a hypervisor for system resources, which allocates them direct hardware access. In the case of Docker, the bundled binaries and libraries are a subset of the container's requirements, because the containers themselves use the host OS for basic, universally required binaries. If this is correct, how are those dependencies managed?

    Docker containers are smaller components of functionality and are managed by the Docker engine to use the OS resources of the host. LXCs, on the other hand, carry a larger, fuller-featured OS - something like CentOS.
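
    One rough way to see that difference on disk (assuming an LXC container created under the default path; the name c1 is only an example) is to compare sizes:

    # Size of a minimal Docker image
    docker image ls alpine

    # Size of an LXC container's root filesystem under the default path
    sudo du -sh /var/lib/lxc/c1/rootfs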

    The next thing I'd like to understand is what are some of those core bins/libs that Docker containers omit and defer to the host OS for?
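
    One way to poke at this (a sketch using stock images, assuming ldd is present in the ubuntu image) is to look at what an image actually ships; the libraries reported come from the image's own filesystem, not the host:

    # Which shared libraries a binary inside the image links against
    docker run --rm ubuntu:22.04 ldd /bin/ls

    # What a minimal image ships in /lib
    docker run --rm alpine:3.18 ls /lib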

    What I'm trying to nail down is what, if any, start-up or system processes a Docker container can omit. Going through the boot and startup sequences it's not immediately obvious to me what could be omitted. I've presented my understanding of the components that handle the processes in the tables below in the hope someone can tell me if it's correct.

    This is, however, a pretty high-level run-through, so there are probably a lot of processes that occur when going into the detail. It's those processes that I'm hoping someone can provide an example of.

    Docker (Linux)

    process                                   handled by (hypervisor / kernel / container)
    BIOS                                      Host - type 1 hypervisor
    bootloader                                Host - type 1 hypervisor
    kernel init                               Host - OS kernel
    system startup scripts (i.e. systemd)     container

    LXC

    process                                   handled by (hypervisor / kernel / container)
    BIOS                                      Host - type 2 hypervisor
    bootloader                                Host - type 2 hypervisor
    kernel init                               container
    system startup scripts (i.e. systemd)     container
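
    A quick way to compare the two tables in practice (the LXC container name is just an example) is to look at PID 1 in each case:

    # Docker: the command you run is PID 1 - no kernel init, no systemd startup scripts
    docker run --rm alpine ps

    # LXC: the container's own init (e.g. systemd) is PID 1, with its services underneath it
    sudo lxc-attach -n c1 -- ps -ef
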
  • I think my understanding was way off in that last comment.

    My understanding of this continues to evolve with regard to the process that a container uses to boot. In my last comment I listed the boot process in an attempt to follow what happens when a container is booted.

    But do containers actually boot? I was wondering this after considering the speed benefits of containers, mainly the speed at which they can be scaled (i.e. from 0 to 1).

    Docker containers utilise the underlying kernel of their host (the host the Docker daemon is running on) as a launchpad.

    This answers my query regarding how a container is much faster. Docker containers have no init process because the Docker host kernel has already reached this state, so a container is technically already booted when it is started. VMs on the other hand need to go through the whole boot process because they're running their own kernel.

    Is it then true to say a container accesses the kernel for system calls and the kernel (which provides the hypervisor) manages hardware resources for the container?
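
    One small check that seems to support this: a Docker container reports the host's kernel release, because it is making system calls against that same shared kernel rather than booting one of its own.

    # On the host
    uname -r

    # Inside a container - the same kernel release, because the kernel is shared
    docker run --rm alpine uname -r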

  • Hi @GRO 108 ,

    It's a long question; I'll try to answer from my perspective:

    Are the differences simply optimisation for these different use-cases?

    It's said the differences are in the usage; if you plan to keep the underlying services, it would be better to use LXC containers. On the other hand, if you just want to package some code and then discard the container, Docker will be the better fit (it's well prepared to build packages, put them into a volume on the host and then be gone with the "--rm" parameter).
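
    For example, a throwaway build container along those lines might look like this (the image and build tool are only an example; the output lands in the bind-mounted host directory and the container is removed afterwards):

    # Build inside a disposable container; the resulting binary stays in the current host directory
    docker run --rm -v "$PWD":/src -w /src golang:1.21 go build -o app .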

    But do containers actually boot? I was wondering this after considering the speed benefits of containers, mainly the speed at which they can be scaled (i.e. from 0 to 1).

    I didn't find anything in the official Docker documentation about this, but I found the following that can be of help:

    https://padgeblog.com/2014/12/02/containers-dont-really-boot/

    ==> “It is perhaps more precise to say that a linux environment is started rather than booted with containers.”

    I hope this helps.

    Many regards,
    Luis.

  • GRO 108
    edited September 2020

    Thanks for the response @luisviveropena

    I'm looking forward to being able to get into some practical exercises that would give me an understanding of the differences.

    I've been struggling with being able to clearly articulate the role that the kernel and the hypervisor play in container virtualisation.

    With a bit more reading I've been able to define the hypervisor more clearly as a virtual machine manager - an interface for the guest containers to access the hardware.

    Hypervisors can be part of the kernel of the host system (type 1*) or external to the kernel, running as a separate program on top of the OS (type 2).

    So in a 'regular' system the kernel is the interface to the lower level resources. For virtualisation this role is performed by a hypervisor.

    This quote from "LFS201 - Virtualization Overview - KVM and Linux" helped a bit:

    When running a virtual machine, KVM engages in a co-processing relationship with the Linux kernel. In this format, KVM runs the virtual machine monitor within one or more of the CPUs, using VMX or SVM instructions. At the same time, the Linux kernel is executing on the other CPUs.

    *At the BIOS level, virtualisation needs to be enabled for type 1 hypervisors to function. These hypervisors have dedicated resources, partitioned off from the host kernel. They take instructions (would you call these IPCs?) and translate them into lower-level code based on the type of virtualisation that the hardware provides.
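
    Some common checks for that (a sketch; the exact module name depends on the CPU, kvm_intel vs kvm_amd):

    # CPU flags: vmx (Intel VT-x) or svm (AMD-V) must be present, and enabled in the BIOS/UEFI
    grep -E -c 'vmx|svm' /proc/cpuinfo

    # KVM modules loaded into the host kernel, and the device that userspace VMMs talk to
    lsmod | grep kvm
    ls -l /dev/kvm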

    @luisviveropena I thought this issue I've reported might interest you in the context of the boot process of containers.

    The error "lxc_container: lxc_start.c: main: 290 Executing '/sbin/init' with no configuration file may crash the host" indicates to me that the container is possibly using the host's /sbin/init but setting up a chroot?
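
    One way to check that suspicion (assuming the container was created under the default /var/lib/lxc path and is named container1, as in the command below) is to confirm the container has its own config and its own /sbin/init inside its rootfs:

    # The container's config and root filesystem under the default LXC path (names are assumptions)
    sudo ls -l /var/lib/lxc/container1/config
    sudo ls -l /var/lib/lxc/container1/rootfs/sbin/init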

  • lee42x

    Please run the lxc commands as root or with sudo.

    sudo lxc-start container1

    Lee

  • Hi @GRO 108 ,

    Are you in the Cloud Engineer Bootcamp? If so, you will find more information about this in LFS253 Containers Fundamentals.

    Many regards,
    Luis.
