NOTE: THIS IS A WORK-IN-PROGRESS DOCUMENT AND MAY CONTAIN OUTDATED INFORMATION!

What is a toolchain (cross, native)?

The primary purpose of a toolchain is to compile and link source code into executables and shared libraries. It can be a native or a cross toolchain. Native means your host and target are the same. Cross means they’re different.

To build a toolchain, traditionally the steps are:
* Download sources
* Build dependencies
* Build binutils
* Install kernel header files
* Build 1st stage GCC - a minimal GCC C compiler suitable for building the C library
* Build C library
* Build 2nd stage GCC - a full GCC

Using ABE, the steps now become:

  • ./configure
  • ./abe.sh --target <triplet> --build all

If you don't specify --target, a native build is done instead of a cross build.

What is ABE? Why does it exist?

ABE stands for Advanced Build Environment. It is the official Linaro Toolchain Build Framework. It exists because Linaro needed a better tool for automating the build and test process for the toolchain.

Historically, every Linaro engineer (or anyone who wanted to build a toolchain, for that matter) had their own scripts that built toolchains in various ways. Among the issues with this approach are:

* unsupported hacks required
* unsupported patches required
* incompatible with git
* not fully integrated with the Linaro build and test processes

These issues introduce discrepancies between what we use and what we distribute to member companies. In order to have some uniformity and be sure what we release is what we tested, i.e. built the same way, ABE was developed.

ABE Basic Concepts

* Rather than Makefiles, it uses Bourne shell scripts for more functionality and maintainability. Both Android and Chromium use this technique.
* Uses shell scripts as config files (*.conf) that set global variables or define custom functions.
* Each config file lists its major build time dependencies and configure options. The default config files are copied into the build tree, where they can be edited, overriding the default options. This makes it easier to integrate manual building and testing with automation in a way that's consistent.
* Replaces crosstool-ng with similar functionality, but better integrated into Linaro build and test processes.
* Should build all toolchains with no custom patches.

ABE Methodology

ABE is a toolchain build utility. It produces:
* gcc-linaro-*.tar.xz – the compiler and tools
* runtime-linaro-*.tar.xz – runtime libraries needed on the target
* sysroot-linaro-*.tar.xz – sysroot (a set of libraries and headers to develop against)

What can ABE do

ABE is a feature-rich, centralized build platform. It isn't just used for x86->arm (cross) and arm->arm (native) builds; it can be used for any target or host.

Among the common build options are:

* native toolchain: x86, x86_64, arm, aarch64
* cross toolchain: Likely candidates: x86*->arm, x86*->aarch64
* Canadian-cross toolchain: x86*->arm or x86*->aarch64, but runs on Win32

Simply by supplying the right options to a single command line script, users can:

* specify native or cross compilation
* specify which compiler and binutils to use for compiling programs
* specify custom configure options or use default set
* use binary snapshots for builds
* bootstrap development environment (when not using binary snapshots, i.e. pre-installed sysroot)
* run 3rd party benchmarks
* supply data for graphing performance and regressions
* build with stock Ubuntu toolchains
* build with trunk/master
* build all or part of the toolchain components
* specify download sources (Linaro, FSF, etc.) and versions
* automate configure and make commands
* automatically tag releases (e.g. src tarball ver X built with ABE ver Y)

Getting ABE

ABE is available from the Linaro GIT server:

https://git.linaro.org/toolchain/abe.git

Clone from the URL above and check out the stable branch. E.g.:
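
git clone https://git.linaro.org/toolchain/abe.git
cd abe
git checkout stable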

Building Toolchains With ABE

While it is possible to run ABE from its source tree, this isn't recommended. The best practice is to create a new project or build directory separate from the source directory, and configure ABE in that directory. That makes it easy to change branches, or even delete subdirectories. There are defaults for all paths; by default, everything is created under the directory configure is run in.

To get started, create the build directory and then change to it. Any name for the directory will work.

mkdir _build; cd _build

Over the course of using ABE, multiple toolchains will obviously be built. A separate build directory is REQUIRED for each of these toolchains to ensure a clean build and use of the proper dependency chain.

Configuring ABE

From the build directory:

../abe/configure

Dependencies

During the configure step, ABE will let you know if any dependencies are missing. Typically, they are:

bison automake autoconf libtool libncurses-dev gawk gcc-multilib g++-multilib zlib1g-dev
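
On a Debian-based system, these can typically all be installed in one step, e.g.:

sudo apt-get install bison automake autoconf libtool libncurses-dev gawk \
  gcc-multilib g++-multilib zlib1g-dev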

Repeat the configure step until there are no longer warnings for missing dependencies.

Most of the dependencies are actually for the toolchain; ABE just makes sure they're installed. The only non-default dependency for ABE itself is git-new-workdir.

Note that ‘sudo apt-get install git-new-workdir’ does NOT work. To install git-new-workdir, copy the script from your git installation’s contrib/workdir/ to one of your $PATH directories and chmod it to be executable.
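
For example, on a typical Debian/Ubuntu installation (the source path may differ on your system):

sudo cp /usr/share/doc/git/contrib/workdir/git-new-workdir /usr/local/bin/
sudo chmod +x /usr/local/bin/git-new-workdir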

Building the Toolchain with ABE

From the build directory:

../abe/abe.sh --target <triplet> --build all

If you don't specify --target, a native build is done instead of a cross build.

Common Examples

Cross Builds

  • Build a Linux cross toolchain:
    • abe.sh --target arm-linux-gnueabihf --build all

  • Build a Linux cross toolchain with eglibc as the C library:
    • abe.sh --target arm-linux-gnueabihf --set libc=eglibc --build all

  • Build a bare metal toolchain:
    • abe.sh --target aarch64-elf --build all

Canadian Cross Builds

  • Build a Windows cross toolchain on a Linux system:
    • abe.sh --target arm-linux-gnueabihf --build all
    • abe.sh --host i686-w64-mingw32 --target arm-linux-gnueabihf --build all

For Canadian Cross builds, first make sure you have the appropriate Debian/Ubuntu mingw32 and mingw-w64 packages installed if building for Windows. After that, build the cross compiler and add the built bin directory to your PATH before finally doing the Canadian Cross build. The final build is used to build host executables only. The cross compiler is required to compile the libraries (sysroot, libgcc, libstdc++, etc.), because the host executables can't be executed on the build system.
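
As a sketch, the full sequence might look like this (the x86_64-linux-gnu host triplet in the destdir path is an assumption; check the actual directory name in your build tree):

# 1. Build the Linux-hosted cross compiler first.
abe.sh --target arm-linux-gnueabihf --build all
# 2. Put its bin directory on PATH (host triplet is an assumption).
export PATH="$PWD/builds/destdir/x86_64-linux-gnu/bin:$PATH"
# 3. Do the Canadian Cross build for the Windows host.
abe.sh --host i686-w64-mingw32 --target arm-linux-gnueabihf --build all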

ABE Directory Structure

Running configure in a clean project directory will define the following default assumptions about the build environment:

* local snapshots - This is where all the downloaded sources get stored. The default is the current directory under snapshots.
* local build - This is where all the executables get built. The default is to have builds go in a top level directory which is the full hostname of the build machine. Under this a directory is created for the target architecture.

If configure is executed without any parameters, the defaults are used. It is also possible to change those values at configure time. For example:

../abe/configure \
  --with-local-snapshots=/path/to/snapshots \
  --with-local-builds=/path/to/destdir

This is useful if, say, you want to have two sessions of ABE running and do not want to download all the packages again. All you have to do is configure the second ABE with --with-local-snapshots=<absolute path to the first ABE's local snapshots directory>. Note that the paths for these options must be absolute!
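
For example, a second build directory that shares the first one's snapshots might be set up like this (the paths are hypothetical):

mkdir _build2; cd _build2
../abe/configure --with-local-snapshots=/home/user/work/_build/snapshots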

You can execute ./configure --help to get the full list of configure time parameters.

The configure process produces a host.conf file, with the default settings. This file is read by ABE at runtime, so it's possible to change the values and rerun abe.sh to use the new values. Each toolchain component also has a config file. The default version is copied at build time to the build tree. It is also possible to edit this file, usually called something like gcc.conf, to change the toolchain component specific values used when configuring the component.

You can see what values were used to configure each component by looking in the top of the config.log file, which is produced at configure time. For example:

head _build/builds/${hostname}/${target}/${component}/config.log

User Config File (~/.aberc)

It's also possible for each user to have their own config file for default runtime options, which are ignored at configure time. This file is located in the user's home directory and is called .aberc. Typical settings are: your email address (used for ChangeLog entries), your launchpad ID, and your subversion ID for upstream commits. These are primarily used when building releases. One often-used setting is how many make jobs to run; in the example below, it's set for a quad-core machine with 8 threads. Any variable can be overridden by this file, so be careful. A list of useful variables is in lib/globals.sh.

# Used for ChangeLog entries
fullname="John Doe"
email=john.doe@linaro.org

# Used to checkout sources
launchpad_id=johndoe-foobar

# Optional flags always passed to make; in this example, spawn 8 jobs
make_flags="-j 8"

Default Behaviors

There are several behaviors that may not be obvious, so they're documented here. One relates to GCC builds. When built natively, GCC only builds itself once and is fully functional. When cross compiling, this works differently. A fully functional GCC can't be built without a sysroot, so a minimal compiler, called stage1, is built first and used to compile the C library. Once the C library is installed, stage2 of GCC can be built. GCC stage2 is a fully functional compiler.

When building an entire toolchain using the 'all' option to --build, all components are built in the correct order, including both GCC stages. It is possible to specify building only GCC; in this case the stage1 configure flags are used if the C library isn't installed, and the stage2 flags are used if it is. A message is displayed to document the automatic configure decision.
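
For example, assuming the C library has already been built and installed in this tree, a GCC-only build might look like this (passing a component name to --build is inferred from the 'all' usage above):

abe.sh --target arm-linux-gnueabihf --build gcc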

Specifying Sources

Sources are either snapshots (tarballs) or repositories (git). The locations of the sources for all the packages can be found in config/sources.conf. You can also find and edit the location, along with other options, for each individual package in config/<package>.conf.

It's possible to specify a full tarball name, minus the compression suffix, which will be found dynamically. If the word 'tar' appears in the supplied name, it's only looked for in the remote directory. If the name is an http or git URL, the sources are instead checked out from the remote repository with the appropriate source code control system.

Specifying Component Versions

By default ABE uses the latest versions of all toolchain components as specified by the appropriate config/sources.conf and config/<component>.conf file. It's also possible to specify different versions on the command line. For example, you might want to do a toolchain build using a binutils snapshot you are working on. To do that, you'd do this:

abe.sh --target arm-linux-gnueabihf binutils=binutils-linaro-2.25.0-2015.01-2.tar.xz --build all

By default, this would go out and wget http://148.251.136.42/snapshots-ref/binutils-linaro-2.25.0-2015.01-2.tar.xz. This only works with source tarballs. The syntax for a git revision is different, e.g.:

abe.sh --target arm-linux-gnueabihf binutils=binutils-gdb.git@abcdef --build all

For a URL:

abe.sh --target arm-linux-gnueabihf binutils=https://<url>/binutils*.{git|tar.xz} --build all

This would build and install that version of binutils, and use it for the rest of the build. Specifying a version works Makefile-style, i.e. name=value. Versions for several components can be specified at the same time.
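
For example, combining the forms shown above, several components might hypothetically be pinned in a single invocation (the gcc tarball name is a made-up placeholder):

abe.sh --target arm-linux-gnueabihf \
  binutils=binutils-gdb.git@abcdef \
  gcc=gcc-linaro-x.y.z.tar.xz \
  --build all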

Specifying Local or Remote Tarballs or Repositories

Remote tarballs are downloaded by default from http://148.251.136.42/snapshots-ref/. You can override download locations of tarballs or repositories by specifying "<component>=https://<url>/<component>.{git|tar.xz}" on the command line. E.g.

gcc=https://git.foo.com/gcc.git, or
gcc=http://bar.com/gcc.tar.xz


For local tarballs, the build process is as follows:

1. copy the tar file and corresponding .asc file (if available) into the snapshots directory

2. if no .asc file was available, then create one using: md5sum .../snapshots/foo.tar.xz > foo.tar.xz.asc

3. build using "--disable update". If "--disable update" is not used, the build will fail because ABE will try to check for a newer copy of the file on the webserver, where the file does not exist.

Please NOTE that even with "--disable update", ABE will still fail if there is no network connection, as it always validates the git repo URLs.
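
Putting these steps together, a hypothetical session might look like this (the component name 'foo', the tarball name, and the paths are placeholders):

cd /path/to/_build/snapshots
cp ~/foo.tar.xz .
md5sum foo.tar.xz > foo.tar.xz.asc   # only if no .asc file was shipped
cd /path/to/_build
abe.sh --target arm-linux-gnueabihf foo=foo.tar.xz --disable update --build all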

Building Linaro Releases from Manifest

Support for manifests is only available for releases 2016.05 and newer!

abe.sh --manifest /path/to/manifest.txt --build all --tarbin

--tarbin is required if the manifest was created by a build with --tarbin, which is the case for our release builds.

The manifest encodes the target and the configure flag options, so you need the correct manifest for the target you want to build. You can download manifests from http://releases.linaro.org/components/toolchain/binaries/.

If using the win32 manifest to build a Canadian cross, from the same build dir, do:

  1. sudo apt-get install mingw32 mingw32-binutils mingw32-runtime #or install from https://sourceforge.net/projects/mingw

  2. abe.sh --manifest /path/to/linux/manifest.txt --build all
  3. abe.sh --manifest /path/to/win32/manifest.txt --host i686-w64-mingw32 --build all --tarbin

Building Your Own Releases

Releases can be built using the following options:

--release <release_date_or_name> --tarball/--tarbin/--tarsrc

To package a release, add '--tarball' to the abe.sh command line, which makes source and binary tarballs. These have a long name based on the branch and revision, so to specify the release name use '--release <release_date_or_name>'. For example:

abe.sh --target arm-linux-gnueabihf --release 2017.01 --tarball --build all

Testing and Verifying A Toolchain Built With ABE

Native and Remote Testing

To verify the built toolchain, use the --check option.

abe.sh --check --build all

This will run make check on the packages. For cross builds this will run the tests on native hardware.

For setting up remote testing, DejaGnu can be used.

Getting/Downloading Component Source Code

abe.sh --manifest /path/to/manifest --checkout all

Full Options List

Run ‘abe.sh --help’ for a detailed description of all options.

Manual Hacking

ABE is oriented towards development, so it's possible to mix automated builds with manual hacking. ABE creates a 'builds' directory under the directory configure was run in. Under this 'builds' directory is a top-level directory named after the value returned by the hostname command. Under that, another directory is created using ${target}. Under this, the build directories for each project are created. ABE does not build in a project's source tree; it uses a separate build directory to keep things clean.

For configure changes, change to the build directory _build/builds/$host/$target, then change into the project directory. You can extract the existing configure parameters by running head config.log and cutting and pasting them into the terminal. You can edit the configure command line at this point to change anything you need. Make sure the cross compiler is in your path; it can be found in the build tree as well, installed under the prefix builds/destdir/$host/bin. Once configured, you can run make manually to build the project.
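
As a sketch (the project directory name and the host triplet under destdir are illustrative; use the actual names in your tree):

cd _build/builds/$(hostname)/arm-linux-gnueabihf/gcc-stage2   # hypothetical project directory
head config.log                     # copy the configure invocation shown at the top
export PATH="$PWD/../../../destdir/x86_64-linux-gnu/bin:$PATH"
# ...rerun the (edited) configure command, then:
make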

To avoid having to download the _build/snapshots directory again, which can take a long time, you can keep multiple _build/builds_xxx directories (e.g. _build/builds_gcc5 and _build/builds_gcc6) and rename the one you want to work on back to _build/builds before you start building. Just handle the frequent renaming of the builds and builds_xxx directories carefully, and don't confuse yourself.
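
For example, switching from the GCC 5 tree to the GCC 6 tree in the example above:

cd _build
mv builds builds_gcc5      # park the tree you were working on
mv builds_gcc6 builds      # activate the other one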

Another way to change the configuration of a project, but still use ABE to build the rest of the toolchain, is to edit the runtime config file for the project. When the abe.sh script is run, it copies the config file for that project from the config directory to the build directory. This is sourced last, so any changes to this file will be used by ABE when configuring. This file is located in the top level of the build tree for the project; look for the *.conf file.

For source file changes, you can change to the directory the snapshots are stored in. Each project's source is extracted into this directory. As long as the source tarball hasn't changed, it isn't extracted over an existing source tree. Any changes you make to those sources won't be overwritten.

Jenkins Integration

ABE has been integrated into the Jenkins automated build system. This is enabled through the jenkins.sh script, which is executed by Jenkins to do the actual build. You can access the main page for ABE at https://ci.linaro.org/job/tcwg-make-release/. Build logs can be accessed through the build on the Jenkins web page.

FAQ

  • Is it possible to install the toolchain at a desired location?
    We suggest creating a tarball release (with --tarbin) and then extracting it at the required path.
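
    For example, combining this with the release example above (the install path is arbitrary):

    abe.sh --target arm-linux-gnueabihf --release 2017.01 --tarbin --build all
    mkdir -p /opt/toolchains
    tar xJf gcc-linaro-*.tar.xz -C /opt/toolchains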
