NOTE: THIS PAGE CONTAINS OUTDATED INFORMATION! USE AT YOUR OWN RISK!

Cbuildv2 is a Bourne shell rewrite of the existing cbuild system used by Linaro. While oriented towards the Linaro way of doing things, Cbuildv2 should be usable by others with nothing more than reconfiguration.

Getting Cbuildv2

Cbuildv2 is available from the Linaro Git server:

git clone git://git.linaro.org/toolchain/cbuild2

Please note that Cbuildv2 is still a work in progress at this stage.

Configuring Cbuildv2

While it is possible to run Cbuildv2 from its source tree, this isn't recommended. The best practice is to create a new build directory and configure Cbuildv2 in that directory. That makes it easy to change branches, or even delete subdirectories. All paths have defaults, which place them under the directory configure is run in.

To get started, create the build directory and then change to it. Any name for the directory will work.

$ mkdir _build; cd _build

$ ../cbuild2/configure

There are several directories that Cbuildv2 needs:

* local snapshots - This is where all the downloaded sources are stored. The default is a snapshots subdirectory of the current directory.

* local build - This is where all the executables get built. The default is a top-level directory named after the full hostname of the build machine. Under this, a directory is created for each target architecture (see the sketch after this list).

* remote snapshots - This is where remote tarballs are stored. This currently defaults to cbuild.validation.linaro.org, which is accessible via HTTP to the general public.
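
Putting those together, a populated build directory ends up looking roughly like this (the component directories shown are illustrative; exact names depend on what you build):

_build/
|-- snapshots/                      # local snapshots: downloaded source tarballs
`-- <hostname>/                     # local builds, named after the build machine
    `-- arm-none-linux-gnueabihf/   # one subdirectory per target architecture
        |-- binutils-<version>/     # per-component build directories
        `-- gcc-<version>/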

If configure is executed without any parameters, the defaults are used. It is also possible to change those values at configure time. For example:

$CBUILD_PATH/configure --with-local-snapshots=$CBUILD_PATH/snapshots --with-local-builds=$CBUILD_PATH/destdir --with-remote-snapshots=cbuild@toolchain64.lab:/home/cbuild/var/snapshots/

This changes the three primary paths, including changing the remote host to use rsync or ssh to download tarballs instead of HTTP. You can execute ./configure --help to get the full list of configure-time parameters.

The configure process produces a host.conf file with the default settings. This file is read by Cbuildv2 at runtime, so it's possible to change the values and rerun cbuild2.sh to use the new ones. Each toolchain component also has a config file; the default version is copied to the build tree at build time. It is also possible to edit this file, usually called something like gcc.conf, to change the component-specific values used when configuring that component.
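
As a purely hypothetical sketch (the variable names below are assumptions extrapolated from the configure options above, not verified contents of a real host.conf), the file might contain entries along these lines:

# Hypothetical host.conf entries; check your generated file for the real names.
local_snapshots=/home/user/_build/snapshots
local_builds=/home/user/_build/builds
remote_snapshots=cbuild.validation.linaro.org/snapshots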

You can see what values were used to configure each component by looking in the top of the config.log file, which is produced at configure time. For example:

head ${hostname}/${target}/${component}/config.log

User Config File

It's also possible for each user to have their own config file for default options. This is located in the user's home directory and is called .cbuildrc. Typical settings are your email address (used for ChangeLog entries), your Launchpad ID, and your Subversion ID for upstream commits. These are primarily used when building releases. One often-used setting is how many make jobs to run; the example below is for a quad-core machine with 8 threads. Any variable can be overridden by this file, so be careful. A list of useful variables is in lib/globals.sh.

# Used for ChangeLog entries

fullname="John Doe"

email=john.doe@linaro.org

# Used to checkout sources

launchpad_id=johndoe-foobar

svn_id=jdoe

# Optional flags always passed to make; in this example, spawn 8 jobs

make_flags="-j 8"

Default Behaviors

There are several behaviors that may not be obvious, so they're documented here. One relates to GCC builds. When built natively, GCC only builds itself once and is fully functional. Cross compiling works differently: a fully functional GCC can't be built without a sysroot, so a minimal compiler, called stage1, is built first. This is then used to compile the C library. Once the C library is installed, stage2 of GCC can be built.

When building an entire toolchain using the 'all' option to --build, all components are built in the correct order, including both GCC stages. It is also possible to build only GCC; in this case the stage1 configure flags are used if the C library isn't installed, and the stage2 flags are used if it is. A message is displayed to document the automatic configure decision.
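
For example, assuming GCC can be named directly with --build (using the alias form described in the next section), a rebuild of just GCC picks the stage flags automatically:

cbuild2.sh --target arm-none-linux-gnueabihf --build gcc-4.8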

Specifying The Source

It's possible to specify a full tarball name, minus the compression suffix, which will be found dynamically. If the word 'tar' appears in the supplied name, it is only looked for in the remote directory. If the name is a URL for bzr, svn, http, or git, the sources are instead checked out from the remote repository with the appropriate source code control system.

It's also possible to specify an 'alias', e.g. 'gcc-4.8'. In this case, the remote snapshot directory is checked first. If the name is not unique but matches are found, an error is generated. If the name isn't found at all, then a URL for the source repository is extracted from the sources.conf file, and the code is checked out.
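
The format of sources.conf isn't shown here, but conceptually it maps a name to a repository URL; a hypothetical entry might look like:

gcc.git         git://git.linaro.org/toolchain/gcc.git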

There are a few ways of listing the files to see what is available. There are two primary remote directories where downloadable files are stored: 'snapshots' and 'infrastructure'. Infrastructure is usually only installed once per host, and contains the other packages needed to build GCC, like gmp. Snapshots is the primary location of all source tarballs. To list all the available snapshots, you can do this:

cbuild2.sh --list snapshots
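
The infrastructure directory can presumably be listed the same way (this exact invocation is an assumption extrapolated from the snapshots example above):

cbuild2.sh --list infrastructure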

To build an entire cross toolchain, the simplest way is to let Cbuildv2 control all the steps, although it is also possible to do each step separately. To build a full cross toolchain, do this:

cbuild2.sh --target arm-none-linux-gnueabihf --build all

If you don't specify --target, a native build is done instead of a cross build.

You can also run Cbuildv2 with the --interactive option, which will display the subset of packages that matches the supplied string. To build a specific component, use the --build option; the --target option is also used for cross builds. For example:

cbuild2.sh --target arm-none-linux-gnueabihf --build gcc-linaro-4.8.2013.07-1

This would fetch the source tarball for this release, first build anything it needs, binutils for example, and then build these sources. You can also specify a URL to a source repository instead. For example:

cbuild2.sh --target arm-none-linux-gnueabihf --build git://git.linaro.org/toolchain/eglibc.git

Specifying Component Versions

By default Cbuildv2 uses the latest versions of all toolchain components as specified by the appropriate config*.conf file. It's also possible to specify different versions on the command line. For example, you might want to do a toolchain build using a binutils snapshot you are working on. To do that, you'd do this:

cbuild2.sh --target arm-none-linux-gnueabihf binutils=2.24~20130602+git7afcd65 --build all

This would build and install that version of binutils, and use it for the rest of the build. Specifying a version works Makefile style: name=value. Versions for several components can be specified at the same time.
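
For example, pinning both binutils and GCC in a single run (version strings are reused from the examples above; the exact value format for gcc is assumed to mirror the binutils example):

cbuild2.sh --target arm-none-linux-gnueabihf binutils=2.24~20130602+git7afcd65 gcc=gcc-linaro-4.8.2013.07-1 --build all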

Command Line Options

--target

Config triplet for the target machine; defaults to a native build.

--build

Component to build, usually 'all'.

--infrastructure

Build only the infrastructure libraries: gmp, mpc, mpfr.

--dryrun

Display all configuration and build options, but don't do anything.
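
For example, combining the options above to preview a full cross build without executing it:

cbuild2.sh --target arm-none-linux-gnueabihf --build all --dryrun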

Manual Hacking

Cbuildv2 is oriented towards development, so it's possible to mix the automated builds with manual hacking. Cbuildv2 creates a build directory under the directory configure was run in. The top-level directory is the value returned by the hostname command. Under this directory another is created using ${target}, and under that all of the build directories for each project are created. Cbuildv2 does not build in the source tree of a project; it uses a separate build directory to keep things clean.

For configure changes, change to the build directory builds/$host/$target, then change into the project directory. You can extract the existing configure parameters by doing a head config.log, then cutting and pasting them into the terminal, editing the configure command line to change anything you need. Make sure the cross compiler is in your path; it can be found in the build tree as well, installed under the prefix builds/destdir/$host/bin. Once configured, you can run make manually to build the project.
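
A sketch of such a session, with illustrative paths and project name (assumes $host and $target are set to your build host and target triplet):

# Illustrative manual reconfigure; adjust paths to your own layout.
cd builds/$host/$target/gcc-linaro-4.8.2013.07-1
# The first lines of config.log record the original configure invocation:
head config.log
# Put the cross compiler built earlier at the front of PATH:
PATH=/path/to/builds/destdir/$host/bin:$PATH; export PATH
# Paste the configure invocation from config.log, edited as needed, then:
make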

Another way to change the configuration of a project, while still using Cbuildv2 to build the rest of the toolchain, is to edit the runtime config file for the project. When the cbuild2.sh script is run, it copies the config file for that project from the config directory to the build directory. This is sourced last, so any changes to this file will be used by Cbuildv2 when configuring. The file is located in the top level of the project's build tree; look for the *.conf file.

For source file changes, you can change to the directory the snapshots are stored in. Each project's source is extracted into this directory. As long as the source tarball hasn't changed, it isn't extracted over an existing source tree. Any changes you make to those sources won't be overwritten. For git/svn/bzr source trees, update is run after the initial checkout, so any changes are preserved.

Jenkins Integration

Cbuildv2 has been integrated into the Jenkins automated build system. This is enabled through the jenkins.sh script, which is executed by Jenkins to do the actual build. You can access the main page for Cbuildv2 at Jenkins. You'll need to be logged in with your Linaro credentials (the job is restricted to the Linaro TCWG group). From there you can click on the Build Now link to start a build. Builds are then started for all supported configurations, both cross and native. Test results are optional and need to be enabled on the Build Now page; on that page you can also select the target platform (by default all are built). Build logs can be accessed through the build on the Jenkins web page.

Notes on Bourne Shell Scripting

These scripts use a few techniques in many places that relate to shell functions. One is heavy use of Bourne shell functions to reduce duplication and make the code better organized. Bourne shell supports two types of return values. One is the string echo'd by the function, which becomes its return value and is used whenever the called function returns data; it is captured with command substitution, like this: values="`call_function $1`". The other type of return value is a single integer. Much like system calls, these scripts all return 0 for success and 1 for errors, which enables the calling function to trap errors and handle them in a clean fashion.
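
A minimal sketch of the pattern (the function name, message, and ${local_snapshots} variable are hypothetical, not taken from the Cbuildv2 sources):

# Hypothetical example of the two return types described above.
find_snapshot()
{
    if test x"$1" = x; then
        echo "ERROR: no component specified!" 1>&2
        return 1                # integer status: failure
    fi
    # The echo'd string is the data return value:
    echo "${local_snapshots}/$1"
    return 0                    # integer status: success
}

snapshot="`find_snapshot gcc-linaro-4.8.2013.07-1`"
if test $? -gt 0; then
    echo "find_snapshot failed"
fi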

A few good habits are worth mentioning. Always enclose a command substitution in double quotes: if the returned string contains spaces, this preserves the data; otherwise it gets mangled by word splitting.

Another good habit is to always prepend a character when doing string comparisons. If one of the two strings is undefined, test is left with a missing operand and the script will abort, so always writing "test x${foo} = xbar" prevents that.
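
Both habits in one short sketch (the variable names here are illustrative):

# Quote command substitutions so embedded whitespace survives intact:
files="`ls ${local_snapshots}`"
# Prepend a character so an unset variable can't break the comparison:
if test x"${release}" = x; then
    echo "release is not set"
fi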


CategoryDevOps
