
Bill Wear
on 6 December 2021


We are happy to announce that MAAS 3.1 has been released. Bare metal provisioning just got even easier! MAAS 3.1 brings some of the most frequently-requested features into the product. A lot of this is serendipity — or maybe you could say that it’s about like minds tracking the same problem. Either way, we’re doing our best to provide features that match our users’ needs, as soon as we possibly can.

In any case, the details of these features are a bit too much for one blog post, so we'll be taking a detailed look at one feature a week over the next seven or eight weeks (not counting the Christmas break). In this introductory post, we'll briefly run through the requested features and link you to more information in the product documentation.

Ability to enlist deployed machines

Users can enlist deployed machines, a top feature poll request

MAAS is a great bare metal provisioning system, but what happens when you’ve already provisioned a server, and it’s already running a workload? When adding a machine, MAAS network boots the machine into an ephemeral environment to collect hardware information. This doesn’t work for machines that are already running a workload:

  1. You might not be able to disrupt the workload in order to network boot it.
  2. The machine would be marked as Ready, not Deployed, which is incorrect.

With MAAS 3.1, you can specify that a machine is already deployed. It won't be commissioned, but it will be marked as "Deployed". To gather hardware information, a non-invasive script is provided that runs a subset of the commissioning scripts and sends the data back to MAAS.
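
As a rough sketch of the CLI side (assuming $PROFILE is a logged-in MAAS CLI profile; the hostname, MAC address and power type below are placeholders), registering an already-running machine looks something like this:

# Register an existing, running server as Deployed instead of commissioning it
maas $PROFILE machines create deployed=true \
    hostname=existing-server \
    architecture=amd64 \
    mac_addresses=00:16:3e:aa:bb:cc \
    power_type=manual

The hardware details can then be filled in by running the non-invasive script mentioned above on the machine itself.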

Read more.

Machine cloning via the UI

Extends machine cloning to the UI, a step toward profile templates

Let's face it: bare-metal provisioning can be really boring at times, especially if you're using the same storage configuration for machine after machine. MAAS 3.1 provides the ability to quickly clone or copy configuration from one machine to one or more other machines, via the MAAS UI. This provides convenient access to an existing API feature.

Creating a machine profile is a repetitive task. We have observed that many users create multiple machines with the same configuration, in batches. Some users create a machine profile template and loop through it via the API, while others write a script that drives the CLI. Until now, however, there was no easy way to clone configurations in the UI.

The MAAS API has had cloning functionality for some time, but it was never exposed in the UI, so many users may not know it exists. Although the current cloning API doesn't solve every machine profile templating problem, it is a good place for us to start moving in the direction of machine templates.
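
To illustrate what the underlying API call looks like (a hedged sketch; the system IDs below are placeholders, and $PROFILE is a logged-in CLI profile), copying interface and storage configuration from one machine to others can be done like this:

# Clone interface and storage configuration from a source machine
# to one or more destination machines (system IDs are placeholders)
maas $PROFILE machines clone source=abc123 destinations=def456 destinations=ghi789 interfaces=true storage=true

The new UI workflow wraps this same operation, so the copy can be done with a few clicks instead.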

Read more.

Static Ubuntu image upload / reuse

Users can upload, deploy and reuse a bootable Ubuntu image

MAAS already supports deploying custom OS images. Canonical provides both lp:maas-image-builder and gh:canonical/packer-maas to support creating custom images. With 3.1, these custom images can include static Ubuntu images, created with whatever tool you choose, and then easily deployed with MAAS.
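
As a hedged example (the resource name and file name below are placeholders, and the exact filetype depends on how the image was built), a static Ubuntu image can be uploaded from the CLI as a custom boot resource:

# Upload a locally built static Ubuntu image as a custom boot resource
maas $PROFILE boot-resources create name='custom/focal-static' architecture='amd64/generic' filetype='tgz' content@=./focal-static.tar.gz

Once uploaded, the image appears alongside the other images and can be selected at deployment time.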

Canonical continues to recommend, though, that you customise Ubuntu using cloud-init user_data or Curtin preseed data whenever possible.

Read more.

Improved image sync performance

After the region downloads images, racks sync them more quickly

Downloading and syncing images is a known source of delay in MAAS. While images aren't small, and do take some time to download, we decided to speed up the process as much as possible.

After the region has downloaded new images, the rack controllers are now much quicker at syncing them. Nothing is required of users to benefit from this improvement, other than upgrading to 3.1.

LXD auth UX improvements

Easier MAAS to LXD certificate management

Not all server provisioning is about bare metal. Many of our users rely on LXD virtual machines to get things done. But VM provisioning with authentication — especially certificates — can be, well, rather convoluted.

MAAS 3.1 provides a smoother experience when connecting an existing LXD server to MAAS, guiding the user through manual steps and providing increased connection security with use of certificates.

Currently, each MAAS region/rack controller has its own certificate. To add an LXD VM host to MAAS, the user needs to either add the certificate of every controller that can reach the LXD server to the trust list in LXD, or use the trust_password (in which case the controller talking to LXD automatically adds its certificate to the trust list).

Neither option provides a great user experience: the former process is cumbersome, and the latter is not recommended for production use for security reasons. To improve this, MAAS 3.1 manages per-LXD keys and certificates, and provides a way for users to retrieve the certificate content in order to authorise MAAS in LXD.
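
Sketching the flow (the certificate file name, address, project and VM host name below are placeholders): the MAAS-provided certificate is added to LXD's trust store, and the LXD server is then registered as a VM host:

# On the LXD server: trust the certificate that MAAS presents
lxc config trust add maas.crt

# In MAAS: register the LXD server as a VM host
maas $PROFILE vm-hosts create type=lxd power_address=https://lxd.example.com:8443 project=maas-vms name=my-lxd-host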

Read more.

Support for LXD clusters

MAAS 3.1 allows you to use LXD clusters with MAAS KVMs

LXD clusters, within the context of MAAS, are a way of viewing and managing existing VM host clusters and composing VMs within them. MAAS will not create a new cluster, but it will discover an existing one when you provide the details for adding a single clustered host.

MAAS assumes you have already configured a cluster within LXD. You then need to configure that cluster with a single trust mechanism that MAAS will use to communicate with it. Adding an LXD cluster is similar to adding a single LXD host: you provide authentication the same way you would for a single host within the cluster, and then select a project.

The only difference is that the name you provide is used for the cluster rather than for the individual host. MAAS will then connect to the provided host, discover the other hosts within the cluster, and rename the initially defined host to the cluster member name configured in LXD.
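
Since the commands are the same as for a single host, a rough sketch (the address and names below are placeholders) is simply:

# Point MAAS at any one member of the existing LXD cluster; MAAS discovers
# the remaining members and uses the given name for the cluster
maas $PROFILE vm-hosts create type=lxd power_address=https://cluster-member-1.example.com:8443 project=maas-vms name=my-lxd-cluster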

Read more.

Installing MAAS 3.1

Via Debian packages:
MAAS 3.1 can be installed by adding the `3.1` PPA:

sudo add-apt-repository ppa:maas/3.1
sudo apt update

You can then either install MAAS 3.1 fresh (recommended) with:

sudo apt-get -y install maas

Or, if you prefer to upgrade, you can do so with:

sudo apt upgrade maas

At this point, you may proceed with a normal installation.

Via snaps:
MAAS 3.1 can be installed fresh (recommended) with:

sudo snap install --channel=3.1/stable maas
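
If MAAS is already installed from the snap, upgrading should simply be a matter of switching channels, for example:

sudo snap refresh maas --channel=3.1/stable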

At this point, you may proceed with a normal installation.

Further reading

Canonical has released an extensive whitepaper on bare-metal Kubernetes, going into depth on many of the different aspects involved and heavily featuring the usage of MAAS.
