MAAS 3.3 is supported
These are the release notes for MAAS 3.3.
We are happy to announce that MAAS 3.3.4 has been released. This is a maintenance release, with no new features, providing the following bug fixes:
We are happy to announce that MAAS 3.3.3 has been released. This is a maintenance release, with no new features, providing the following bug fixes:
We are happy to announce that MAAS 3.3.2 has been released with the following bug fixes:
We are happy to announce that MAAS 3.3.1 has been released with the following bug fixes:
We are happy to announce that MAAS 3.3 has been released, with one additional bug fix. MAAS 3.3 is a concerted effort to improve MAAS on multiple fronts, including a large number of bug fixes.
Cumulative summary of MAAS 3.3 features
New features created for MAAS 3.3 include:
Improved machine list filtering: MAAS 3.3 enhances the presentation and filtering of the machine list, with a shorter wait to start filtering and a wider range of filter choices.
Integration of Vault for credential storage: MAAS 3.3 allows you to use HashiCorp Vault to protect your secrets, if you wish.
Improved capabilities include the following:
Native support for 22.04 LTS and core22: We've removed the requirement to use snaps on 22.04 (Jammy Jellyfish); you can now install MAAS 3.3 on 22.04 using packages.
UI performance improvements for large machine counts: We've improved the performance of the UI machine list for large (>10000 machines) MAAS instances. The machine list now goes live just a few seconds after the first visible page loads, with the rest of the list loading in the background.
Enhanced MIB support for Windows OS images: The procedure for creating custom Windows OS images has been thoroughly updated and verified.
Greatly expanded documentation sections include:
MAAS configuration settings reference: There is now one reference page that addresses all MAAS settings in one place. Other references throughout the document are preserved for now.
Improved MAAS event documentation: MAAS event documentation has been expanded to include much better explanations of MAAS events, including many examples.
Improved MAAS audit event documentation: MAAS audit event documentation has been greatly expanded to include much better explanations of MAAS audit events, including many examples and use cases.
Several forward-looking improvements are included as well:
Reliability improvements for simultaneous machine deployments
The first phase of NVIDIA DPU support
Shifting the MAAS API documentation toward OpenAPI standards
These will be documented later in blog posts.
This release also includes well over one hundred bug fixes. Read on to catch up with what we've done so far this cycle.
MAAS will run on just about any modern hardware configuration, even a development laptop. If you're not sure whether your target server will handle MAAS, you can always double-check.
NOTE that PostgreSQL 12 is deprecated with the release of MAAS 3.3, in favour of PostgreSQL 14. Support for PostgreSQL 12 will be discontinued in MAAS 3.4. Also note, though, that PostgreSQL 14 does not run on Focal 20.04 LTS.
How to do a fresh snap install of MAAS 3.3
To install MAAS 3.3 from a snap, simply enter the following:
$ sudo snap install --channel=3.3 maas
After entering your password, the snap will download and install from the 3.3 channel.
How to upgrade from an earlier snap version to MAAS 3.3
If, instead of a fresh install, you want to upgrade from an earlier snap version to the 3.3 snap, and you are using a region+rack configuration, use this command:
$ sudo snap refresh --channel=3.3 maas
After entering your password, the snap will refresh from the 3.3 channel. You will not need to re-initialise MAAS.
If you are using a multi-node maas deployment with separate regions and racks, you should first run the upgrade command above for rack nodes, then for region nodes.
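The rack-first ordering above can be scripted for larger deployments. A minimal sketch, assuming hypothetical hostnames `rack1`, `rack2`, and `region1` that you can reach over SSH with sudo rights:

```
# Refresh every rack controller first...
for host in rack1 rack2; do
    ssh "$host" sudo snap refresh --channel=3.3 maas
done

# ...then refresh the region controller(s).
ssh region1 sudo snap refresh --channel=3.3 maas
```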
How to initialise MAAS 3.3 snap for a test or POC environment
You can initialise MAAS as a compact version for testing. To achieve this, we provide a separate snap, called `maas-test-db`, which contains a PostgreSQL database for use in testing and evaluating MAAS. The following instructions will help you take advantage of this test configuration.
Once MAAS is installed, you can use the `--help` flag with `maas init` to get relevant instructions:
$ sudo maas init --help
usage: maas init [-h] {region+rack,region,rack} . . .
Initialise MAAS in the specified run mode.
optional arguments:
-h, --help show this help message and exit
run modes:
{region+rack,region,rack}
region+rack Both region and rack controllers
region Region controller only
rack Rack controller only
When installing region or rack+region modes, MAAS needs a
PostgreSQL database to connect to.
If you want to set up PostgreSQL for a non-production deployment on
this machine, and configure it for use with MAAS, you can install
the maas-test-db snap before running 'maas init':
sudo snap install maas-test-db
sudo maas init region+rack --database-uri maas-test-db:///
We'll quickly walk through these instructions to confirm your understanding. First, install the `maas-test-db` snap:
sudo snap install maas-test-db
Note that this step installs a running PostgreSQL instance and a MAAS-ready database. When it's done, you can double-check with the built-in PostgreSQL shell:
$ sudo maas-test-db.psql
psql (12.4)
Type "help" for help.
postgres=# \l
This will produce a list of databases, one of which will be `maasdb`, owned by the user `maas`. Note that this database is still empty, because MAAS is not yet initialised and hence not yet using it. Next, run the `maas init` command:
sudo maas init region+rack --database-uri maas-test-db:///
After running for a moment, the command will prompt you for a MAAS URL; typically, you can use the default:
MAAS URL [default=http://10.45.222.159:5240/MAAS]:
When you've entered a suitable URL, or accepted the default, the following prompt will appear:
MAAS has been set up.
If you want to configure external authentication or use
MAAS with Canonical RBAC, please run
sudo maas configauth
To create admins when not using external authentication, run
sudo maas createadmin
Let's assume you just want a local testing user named `admin`:
$ sudo maas createadmin
Username: admin
Password: ******
Again: ******
Email: admin@example.com
Import SSH keys [] (lp:user-id or gh:user-id): gh:yourusername
At this point, MAAS is basically set up and running. You can confirm this with `sudo maas status`. If you need an API key, you can obtain it with `sudo maas apikey --username yourusername`. Now you will be able to test and evaluate MAAS by going to the URL you entered or accepted above and entering your `admin` username and password.
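In practice, the two confirmation commands mentioned above look like this (using the `admin` account created earlier):

```
sudo maas status                   # lists the running MAAS services
sudo maas apikey --username admin  # prints the API key for 'admin'
```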
Initialise MAAS for a production configuration
To install MAAS in a production configuration, you need to set up PostgreSQL, as described below.
Setting up PostgreSQL from scratch
To set up PostgreSQL, even if it's running on a different machine, you can use the following procedure:
You will need to install PostgreSQL on the machine where you want to keep the database. This can be the same machine as the MAAS region/rack controllers or a totally separate machine. If PostgreSQL (version 14) is already running on your target machine, you can skip this step. To install PostgreSQL, run these commands:
sudo apt update
sudo apt install -y postgresql
You want to make sure you have a suitable PostgreSQL user, which can be accomplished with the following command, where `$MAAS_DBUSER` is your desired database username and `$MAAS_DBPASS` is the intended password for that username. Note that if you're executing this step in a LXD container (as root, which is the default), you may get a minor error, but the operation will still complete correctly.
sudo -u postgres psql -c "CREATE USER \"$MAAS_DBUSER\" WITH ENCRYPTED PASSWORD '$MAAS_DBPASS'"
Create the MAAS database with the following command, where `$MAAS_DBNAME` is your desired name for the MAAS database (typically `maas`). Again, if you're executing this step in a LXD container as root, you can ignore the minor error that results.
sudo -u postgres createdb -O "$MAAS_DBUSER" "$MAAS_DBNAME"
Edit `/etc/postgresql/14/main/pg_hba.conf` and add a line for the newly created database, replacing the variables with actual names. You can limit access to a specific network by using a different CIDR than `0/0`.
host $MAAS_DBNAME $MAAS_DBUSER 0/0 md5
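For example, to restrict database access to a single subnet, with hypothetical values `maasdb` and `maas` standing in for the database name and user, the line would read:

```
host maasdb maas 10.0.0.0/24 md5
```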
You can then initialise MAAS via the following command:
sudo maas init region+rack --database-uri "postgres://$MAAS_DBUSER:$MAAS_DBPASS@$HOSTNAME/$MAAS_DBNAME"
Don't worry; if you leave out any of the database parameters, you'll be prompted for those details.
How to do a fresh install of MAAS 3.3 from packages
MAAS 3.3 from packages runs on 22.04 LTS only. The recommended way to set up an initial MAAS environment is to put everything on one machine:
sudo apt-add-repository ppa:maas/3.3
sudo apt update
sudo apt-get -y install maas
Executing this command leads you to a list of dependent packages to be installed, and a summary prompt that lets you choose whether to continue with the install. Choosing "Y" proceeds with a standard `apt` package install.
For a more distributed environment, you can place the region controller on one machine:
sudo apt install maas-region-controller
and the rack controller on another:
sudo apt install maas-rack-controller
sudo maas-rack register
These two steps will lead you through two similar `apt` install sequences.
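When registering the rack controller, you will be prompted for the region's URL and the shared secret; these can also be passed on the command line. A sketch, assuming a hypothetical region address (the secret is typically found in `/var/lib/maas/secret` on the region controller):

```
# Hypothetical region address; substitute your own
sudo maas-rack register --url http://10.0.0.2:5240/MAAS --secret <shared-secret>
```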
How to upgrade from 3.2 or lower to MAAS 3.3
If you are running MAAS 3.2 or lower, you can upgrade directly to MAAS 3.3. You must first make sure that the target system is running Ubuntu 22.04 LTS by executing the following command:
lsb_release -a
The response should look something like this:
Distributor ID: Ubuntu
Description: Ubuntu xx.yy
Release: xx.yy
Codename: $RELEASE_NAME
The “xx.yy” required for MAAS 3.3 is “22.04”, code-named “jammy”.
If you are currently running Ubuntu focal 20.04 LTS, you can upgrade to jammy 22.04 LTS with the following procedure:
Upgrade the release:
sudo do-release-upgrade --allow-third-party
Accept the defaults for any questions asked by the upgrade script.
Reboot the machine when requested.
Check whether the upgrade was successful:
lsb_release -a
A successful upgrade should respond with output similar to the following:
Distributor ID: Ubuntu
Description: Ubuntu 22.04(.nn) LTS
Release: 22.04
Codename: jammy
If you’re upgrading from MAAS version 2.8 or lower to version 3.3: While the following procedures should work, note that they are untested. Use at your own risk. Start by making a verifiable backup; see step 1, below.
Back up your MAAS server completely; the tools and media are left entirely to your discretion. Just be sure that you can definitely restore your previous configuration, should this procedure fail to work correctly.
Add the MAAS 3.3 PPA to your repository list with the following command, ignoring any apparent error messages:
sudo apt-add-repository ppa:maas/3.3
Run the release upgrade like this, answering any questions with the given default values:
sudo do-release-upgrade --allow-third-party
Check whether your upgrade has been successful by entering:
lsb_release -a
If the upgrade was successful, this command should yield output similar to the following:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04(.nn) LTS
Release: 22.04
Codename: jammy
Check your running MAAS install (by looking at the information on the bottom of the machine list) to make sure you’re running the 3.3 release.
If this didn’t work, you will need to restore from the backup you made in step 1, and consider obtaining separate hardware to install MAAS 3.3.
Improved machine list filtering
MAAS 3.3 dramatically reduces the latency associated with refreshing large machine lists.
You can filter machines mere seconds after one page loads.
How list filtering is improved
NOTE that this feature is still in development, so some of the feature-set described in this section may not be fully operational yet. As always, we reserve the right to change this feature-set until the final release of MAAS 3.3. These release notes will be updated as the feature develops.
MAAS 3.3 enhances the way you can filter the machine list, in two ways:
You may begin filtering within a very short time after the first page of the machine list loads, even if you have more than 10,000 machines in the list.
You have a wider range of filter choices, as described in the table below.
Note that with this version of MAAS, matching machine counts have been removed from the filter list for better performance.
More filter parameters have been added
The following table describes the expanded filter set for the MAAS machine list:
See How to search MAAS for more details on how to use these parameters.
Parameter with example | Shows nodes... | Dyn | Grp | Man |
---|---|---|---|---|
arch:(=architecture) | with "architecture" | Grp | ||
arch:(!=architecture) | NOT with "architecture" | Dyn | ||
zone:(=zone-name) | in "zone-name" | Dyn | Grp | |
zone:(!=zone-name) | NOT in "zone-name" | Dyn | ||
pool:(=resource-pool) | in "resource-pool" | Dyn | Grp | |
pool:(!=resource-pool) | NOT in "resource-pool" | Dyn | ||
pod:(=pod-name) | with "pod-name" | Dyn | Grp | |
pod:(!=pod-name) | NOT with "pod-name" | Dyn | ||
pod_type:(=pod-type) | with power type "pod-type" | Dyn | Grp | Man |
pod_type:(!=pod-type) | NOT with power type "pod-type" | Dyn | Man | |
domain:(=domain-name) | with "domain-name" | Dyn | Grp | Man |
domain:(!=domain-name) | NOT with "domain-name" | Dyn | Man | |
status:(=op-status) | having "op-status" | Grp | ||
status:(!=op-status) | NOT having "op-status" | Dyn | ||
owner:(=user) | owned by "user" | Dyn | Grp | |
owner:(!=user) | NOT owned by "user" | Dyn | ||
power_state:(=power-state) | having "power-state" | Grp | Man | |
power_state:(!=power-state) | NOT having "power-state" | Dyn | Man | |
tags:(=tag-name) | with tag "tag-name" | Dyn | ||
tags:(!=tag-name) | NOT with tag "tag-name" | Dyn | ||
fabrics:(=fabric-name) | in "fabric-name" | Dyn | ||
fabrics:(!=fabric-name) | NOT in "fabric-name" | Dyn | ||
fabric_classes:(=fabric-class) | in "fabric-class" | Dyn | Man | |
fabric_classes:(!=fabric-class) | NOT in "fabric-class" | Dyn | Man | |
fabric_name:(=fabric-name) | in "boot-interface-fabric" | Dyn | Man | |
fabric_name:(!=fabric-name) | NOT in "boot-interface-fabric" | Dyn | Man | |
subnets:(=subnet-name) | attached to "subnet-name" | Dyn | ||
subnets:(!=subnet-name) | NOT attached to "subnet-name" | Dyn | ||
link_speed:(link-speed) | having "link-speed" | Dyn | Man | |
link_speed:(!link-speed) | NOT having "link-speed" | Dyn | Man | |
vlans:(=vlan-name) | attached to "vlan-name" | Dyn | ||
vlans:(!=vlan-name) | NOT attached to "vlan-name" | Dyn | ||
storage:(storage-MB) | having "storage-MB" | Dyn | Man | |
total_storage:(total-stg-MB) | having "total-stg-MB" | Dyn | Man | |
total_storage:(!total-stg-MB) | NOT having "total-stg-MB" | Dyn | Man | |
cpu_count:(cpu-count) | having "cpu-count" | Dyn | Man | |
cpu_count:(!cpu-count) | NOT having "cpu-count" | Dyn | Man | |
mem:(ram-in-MB) | having "ram-in-MB" | Dyn | Man | |
mem:(!ram-in-MB) | NOT having "ram-in-MB" | Dyn | Man | |
mac_address:(=MAC) | having MAC address "MAC" | Dyn | Man | |
mac_address:(!=MAC) | NOT having MAC address "MAC" | Dyn | Man | |
agent_name:(=agent-name) | Include nodes with agent-name | Dyn | Man | |
agent_name:(!=agent-name) | Exclude nodes with agent-name | Dyn | Man | |
cpu_speed:(cpu-speed-GHz) | having "cpu-speed-GHz" | Dyn | Man | |
cpu_speed:(!cpu-speed-GHz) | NOT having "cpu-speed-GHz" | Dyn | Man | |
osystem:(=os-name) | The OS of the desired node | Dyn | Man | |
osystem:(!=os-name) | OS to ignore | Dyn | Man | |
distro_series:(=distro-name) | Include nodes using distro | Dyn | Man | |
distro_series:(!=distro-name) | Exclude nodes using distro | Dyn | Man | |
ip_addresses:(=ip-address) | Node's IP address | Dyn | Man | |
ip_addresses:(!=ip-address) | IP address to ignore | Dyn | Man | |
spaces:(=space-name) | in "space-name" | Dyn | ||
spaces:(!=space-name) | NOT in "space-name" | Dyn | ||
workloads:(=annotation-text) | with workload annotation "annotation-text" | Dyn | ||
workloads:(!=annotation-text) | NOT with workload annotation "annotation-text" | Dyn | ||
physical_disk_count:(disk-count) | having "disk-count" physical disks | Dyn | Man | |
physical_disk_count:(!disk-count) | NOT having "disk-count" physical disks | Dyn | Man | |
pxe_mac:(=PXE-MAC) | with boot interface MAC "PXE-MAC" | Dyn | Man | |
pxe_mac:(!=PXE-MAC) | NOT with boot interface MAC "PXE-MAC" | Dyn | Man | |
fqdn:(=fqdn-value) | with FQDN "fqdn-value" | Dyn | Man | |
fqdn:(!=fqdn-value) | NOT with FQDN "fqdn-value" | Dyn | Man | |
simple_status:(=status-val) | Include nodes with simple-status | Dyn | Man | |
simple_status:(!=status-val) | Exclude nodes with simple-status | Dyn | Man | |
devices:(=) | Devices | Dyn | Man | |
interfaces:(=) | Interfaces | Dyn | Man | |
parent:(=) | Parent node | Dyn | Grp | Man |
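As a worked example of the syntax in the table, several parameters can be combined in one search expression. A hypothetical filter that keeps deployed machines outside zone `zone1` that carry the tag `gpu` might look like:

```
status:(=deployed) zone:(!=zone1) tags:(=gpu)
```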
Native support for 22.04 LTS and core22
MAAS can now be installed from a PPA, directly on Ubuntu 22.04, without the need to use snaps.
MAAS packages now run on Ubuntu 22.04, aka Jammy Jellyfish.
Notes on 22.04 LTS MAAS packages
Many MAAS users want to install MAAS on a 22.04 LTS system via deb packages, and to upgrade machines currently running MAAS on Ubuntu 20.04 LTS to 22.04 LTS. With the advent of MAAS 3.3, we have created an appropriate PPA with all required dependencies. This PPA can be directly installed on Ubuntu 22.04, Jammy Jellyfish, with no requirement to use snaps.
Note that the upgrade procedure will require a release upgrade from previous Ubuntu versions to Ubuntu 22.04. Also note that, with this version of MAAS, PostgreSQL 12 is deprecated and should be upgraded to PostgreSQL 14. The installation guide provides the necessary details.
UI performance improvements for large machine counts
We wanted to improve the performance of the machine list page for large (>10000 machines) MAAS instances, and allow users to search and filter machines as quickly as possible.
We're working on making large machine lists load in the background.
In MAAS 3.2 and earlier, machine search and filter require that all machines be fetched by the UI client before the list becomes usable. For smaller MAAS instances this may not be an issue, but for instances with 1000 machines or more it can make the user wait an unacceptably long time before they can search and filter. With the release of MAAS 3.3, when a MAAS UI user wants to find a particular machine, they do not have to wait for all their machine data to load before they can start searching. The user can start searching for machines within a short time after the visible page of the machine list has fully loaded on the UI screen. See Improved machine list filtering, in these release notes, for details on the enhanced filtering capabilities included in this work.
Enhanced MIB support for Windows OS images
The procedure for creating custom Windows OS images has been thoroughly updated and verified.
MAAS custom Windows images now support most releases and options.
What has been added to Windows custom images
Specifically, MIB (MAAS Image Builder) now supports a much wider range of Windows images. Previously, only the 2012 and 2016 Windows versions were supported with MIB. Now the list is much longer, bringing deployable Windows versions up to date with the current Windows releases.
There are also special instructions for using both UEFI and BIOS bootloaders, as well as instructions for using LXD containers with custom-built Windows images.
Finally, MIB has been extended to accept a much wider range of options for Windows builds, including several new Windows-specific platform options.
This update should make it much simpler to use custom-built Windows images with MAAS.
Shifting the MAAS API documentation to OpenAPI standards
MAAS API users want to experience the MAAS API in a more standard way, along the lines of the OpenAPI definition. MAAS 3.3 begins this process by providing most of the MAAS API functionality in a discoverable form. You should now be able to easily retrieve human-readable service documentation and API definitions using standard methods. Consult the API documentation for details.
MAAS configuration settings reference
MAAS 3.3 documentation consolidates configuration settings in one article, in addition to their other mentions throughout the documentation set.
"Settings" now has its own page, and some new options.
MAAS configuration settings are scattered in various (generally relevant) places throughout the documentation, but there has never been one reference page that addresses all settings in one place. MAAS 3.3 remedies this by adding the Configuration settings reference.
A minor new feature added with MAAS 3.3 is MAAS site identity, which enables some new configuration parameters:
MAAS name: The “MAAS name” is a text box that sets the text which appears at the bottom of every MAAS screen, in front of the version descriptor.
MAAS name emoji: You may also paste a suitable emoji in front of the MAAS name to help identify it.
MAAS theme main colour: You may also help identify your MAAS instance by changing the colour of the top bar; several colour choices are available.
These enhancements were made available to assist users who have more than one instance (e.g., production and staging), and have issues with operations accidentally making changes to the wrong instance.
Improved MAAS event documentation
MAAS event documentation has been expanded to include much better explanations of MAAS events, including many examples.
We've finally documented MAAS events, making them easier to decode.
Events are state changes that happen to MAAS elements, caused by MAAS itself, an external agent, or a user. Understanding events is an essential debugging skill. But events appear in three different places in MAAS, each presentation providing slightly different information. These screens are usually dense and hard to search.
In this major documentation update, we've standardised on the MAAS CLI events query command as the best way to review, filter, and summarise events. We've summarised the six main event types:
INFO: the default, used if no level= is specified; shows INFO and ERROR events. A typical INFO event is “Ready”, indicating that a machine has reached the “Ready” state.
CRITICAL: critical MAAS failures; shows only CRITICAL events. These events usually represent severe error conditions that should be immediately remedied.
ERROR: MAAS errors; shows only ERROR events. Typical ERROR events include such things as power on/off failures, commissioning timeouts, and image import failures.
WARNING: failures which may or may not affect MAAS performance; shows WARNING and ERROR events. A typical warning event, for example, might include the inability to find and boot a machine.
DEBUG: information which would help debug MAAS behaviour; shows DEBUG and INFO events. Typical DEBUG events involve routine image import activities, for example.
AUDIT: information which helps determine settings and user actions in MAAS; shows only AUDIT events. They are covered in more detail elsewhere.
In addition, the new document explains how these event types tend to overlap when queried. We've also provided detailed instructions on how to use the most common filters:
hostname: Only events relating to the node with the matching hostname will be returned. This can be specified multiple times to get events relating to more than one node.
mac_address: Only nodes with matching MAC addresses will be returned. Note that MAC address is not part of the standard output, so you’d need to look it up elsewhere.
id: Only nodes with matching system IDs will be returned. This corresponds to the node parameter in the JSON listing, not the id parameter there, which is a serial event number.
zone: Only nodes in the zone will be returned. Note that zones are not part of the standard output, so you’d need to look these up elsewhere.
level: The event level to capture. You can choose from AUDIT, CRITICAL, DEBUG, ERROR, INFO, or WARNING. The default is INFO.
limit: Number of events to return. The default is 100; the maximum in one command is 1000.
before: Defines an event id to start returning older events. This is the “id” part of the JSON, not the system ID or “node”. Note that before and after cannot be used together, as the results are unpredictable.
after: Defines an event id to start returning newer events. This is the “id” part of the JSON, not the system ID or “node”. Note that before and after cannot be used together, as the results are unpredictable.
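These filters can be combined in a single query. For instance, a sketch that pulls the last 50 ERROR events for a machine named `fun-zebra` (the example machine used below):

```
maas $PROFILE events query level=ERROR hostname=fun-zebra limit=50
```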
Since the MAAS CLI returns JSON -- which is hard for humans to parse -- we've included some exemplary `jq` predicates of the form:
maas $PROFILE events query limit=20 \
| jq -r '(["USERNAME","NODE","HOSTNAME","LEVEL","DATE","TYPE","EVENT"] |
(., map(length*"-"))),
(.events[] | [.username,.node,.hostname,.level,.created,.type,.description])
| @tsv' | column -t -s$'\t'
And finally, we provided some detailed usage examples. For instance, we walked a MAAS machine called `fun-zebra` through the following states:
We used this example command:
maas $PROFILE events query level=INFO hostname=fun-zebra limit=1000 \
| jq -r '(["USERNAME","NODE","HOSTNAME","LEVEL","DATE","TYPE","EVENT"] |
(., map(length*"-"))),
(.events[] | [.username,.node,.hostname,.level,.created,.type,.description])
| @tsv' | column -t -s$'\t'
This gave us a reasonably thorough report of what happened to the machine:
USERNAME NODE HOSTNAME LEVEL DATE TYPE EVENT
-------- ---- -------- ----- ---- ---- -----
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:29:53 Exited rescue mode
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:29:52 Powering off
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:28:58 Rescue mode
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:27:18 Loading ephemeral
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:26:40 Performing PXE boot
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:26:23 Power cycling
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:26:23 Entering rescue mode
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:26:14 Powering off
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:26:14 Aborted testing
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:24:08 Performing PXE boot
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:23:51 Powering on
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:23:51 Testing
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:23:38 Released
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:23:37 Powering off
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:23:37 Releasing
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:22:41 Deployed
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:21:49 Rebooting
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:18:42 Configuring OS
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:17:42 Installing OS
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:17:30 Configuring storage
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:15:31 Loading ephemeral
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:14:48 Performing PXE boot
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:14:31 Powering on
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 21:14:27 Deploying
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 20:04:17 Ready
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 20:04:07 Running test smartctl-validate on sda
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 20:01:27 Gathering information
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 20:01:10 Loading ephemeral
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 20:00:35 Performing PXE boot
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 20:00:16 Powering on
unknown bk7mg8 fun-zebra INFO Thu, 29 Sep. 2022 20:00:16 Commissioning
Additional examples and techniques are provided as part of this new documentation.
Improved MAAS audit event documentation
MAAS audit event documentation has been greatly expanded to include much better explanations of MAAS audit events, including detailed examples of how to reconstruct machine life-cycles in the updated version of "How to work with audit event logs".
We've finally offered details about how you should audit MAAS.
Understanding how audit events explain MAAS internal operations
There's probably no limit to what you can figure out if you use audit events properly. The problems are: (1) a lot goes on in MAAS, and (2) you need more than just the explicit audit events to get a clear picture of what's happening. We've tried to address this by taking a deeper look at the auditing process (not just the events).
As you may know, an audit event is just a MAAS event tagged with `AUDIT`. It generally captures changes to the MAAS configuration and machine states. These events provide valuable oversight of user actions and automated updates -- and their effects -- especially when multiple users are interacting with multiple machines.
Audit events are examined using the MAAS CLI with the `level=AUDIT` parameter set:
$ maas $PROFILE events query level=AUDIT
You'll probably get better results by appending a `jq` filter to prettify the output:
$ maas $PROFILE events query level=AUDIT after=0 limit=20 \
| jq -r '(["USERNAME","HOSTNAME","DATE","EVENT"] |
(., map(length*"-"))),
(.events[] | [.username,.hostname,.created,.description])
| @tsv' | column -t -s$'\t'
By itself, such a command might produce output similar to this:
USERNAME HOSTNAME DATE EVENT
-------- -------- ---- -----
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 pci device 2 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 pci device 1 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 pci device 1 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 pci device 1 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 pci device 1 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 pci device 1 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 pci device 1 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 pci device 1 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 pci device 1 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 pci device 0 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 block device sda was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 interface enp5s0 was updated on node 8wmfx3
unknown valued-moth Thu, 21 Apr. 2022 19:45:14 0 bytes of memory was removed on node 8wmfx3
admin valued-moth Thu, 21 Apr. 2022 19:36:48 Started deploying 'valued-moth'.
admin valued-moth Thu, 21 Apr. 2022 19:36:21 Acquired 'valued-moth'.
admin unknown Thu, 21 Apr. 2022 19:21:46 Updated configuration setting 'completed_intro' to 'True'.
admin unknown Thu, 21 Apr. 2022 19:20:49 Updated configuration setting 'upstream_dns' to '8.8.8.8'.
admin unknown Thu, 21 Apr. 2022 19:20:49 Updated configuration setting 'maas_name' to 'neuromancer'.
admin unknown Thu, 21 Apr. 2022 19:20:47 Updated configuration setting 'http_proxy' to ''.
admin unknown Thu, 21 Apr. 2022 19:20:24 Logged in admin.
You can, of course, use the various event filters with `level=AUDIT` to further restrict your output.
Later on in the documentation, we walk through a sample of audit events and demonstrate how to interpret and use them. This includes detailed examples of various audit event queries, walking through real-world examples to answer questions like:
Who deployed `comic-muskox`?
What happened to `sweet-urchin`?
Why is `fleet-calf` in rescue mode?
Where did these changes come from in `setup.sh`?
What caused `ruling-bobcat` to be marked as broken?
Who's responsible for the DHCP snippet called `foo`?
As part of the updates to our "How to work with audit event logs", we've tried to offer you some finesse in reconstructing machine life-cycles. We've shown how to combine various levels of MAAS event queries with standard command line utilities to produce clear audit trails such as this one:
418606 ERROR Marking node broken Wed, 17 Nov. 2021 00:02:52 A Physical Interface requires a MAC address.
418607 DEBUG Node changed status Wed, 17 Nov. 2021 00:02:52 From 'New' to 'Broken'
418608 DEBUG Marking node fixed Wed, 17 Nov. 2021 00:04:24
418609 DEBUG Node changed status Wed, 17 Nov. 2021 00:04:24 From 'Broken' to 'Ready'
418613 DEBUG User acquiring node Wed, 17 Nov. 2021 00:04:51 (admin)
418614 DEBUG Node changed status Wed, 17 Nov. 2021 00:04:51 From 'Ready' to 'Allocated' (to admin)
418615 DEBUG User starting deployment Wed, 17 Nov. 2021 00:04:51 (admin)
418616 DEBUG Node changed status Wed, 17 Nov. 2021 00:04:51 From 'Allocated' to 'Deploying'
418617 INFO Deploying Wed, 17 Nov. 2021 00:04:51
418618 AUDIT Node Wed, 17 Nov. 2021 00:04:51 Started deploying 'ruling-bobcat'.
418619 INFO Powering on Wed, 17 Nov. 2021 00:04:55
418625 ERROR Marking node failed Wed, 17 Nov. 2021 00:05:32 Power on for the node failed: Failed talking to node's BMC: Failed to power pbpncx. BMC never transitioned from off to on.
418626 DEBUG Node changed status Wed, 17 Nov. 2021 00:05:32 From 'Deploying' to 'Failed deployment'
418627 ERROR Failed to power on node Wed, 17 Nov. 2021 00:05:32 Power on for the node failed: Failed talking to node's BMC: Failed to power pbpncx. BMC never transitioned from off to on.
In this case, we managed to recognise, rather quickly, that no physical interface had been defined for `ruling-bobcat`; hence deployment failed, because MAAS couldn't communicate with the node's BMC. There are many other issues you can recognise with careful use of MAAS events to audit machine behaviours. We welcome your feedback on this new documentation endeavour.
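The audit-trail technique described above can be sketched with standard command-line utilities. A hedged example, assuming the hypothetical machine and the JSON fields described earlier in these notes; it pulls DEBUG-and-above events for one machine and keeps only the status changes, ordered by event id:

```
maas $PROFILE events query level=DEBUG hostname=ruling-bobcat limit=1000 \
| jq -r '.events[] | [.id,.level,.type,.created,.description] | @tsv' \
| grep 'changed status' | sort -n
```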
The following sections enumerate the bugs we've fixed in MAAS 3.3.
So far in MAAS 3.3, we've fixed well over 100 bugs:
More bug-fixes are planned for later 3.3 releases.
Release notes for other MAAS versions
Here are release notes for other relatively recent MAAS versions: