This section explains how to extend Boost.Build to accommodate your local requirements—primarily to add support for non-standard tools you have. Before we start, be sure you have read and understood the concept of metatargets, described in the section called “Concepts”, which is critical to understanding the remaining material.
The current version of Boost.Build has three levels of targets, listed below.
Metatarget. Object that is created from declarations in Jamfiles. May be called with a set of properties to produce concrete targets.
Concrete target. Object that corresponds to a file or an action.
Jam target. Low-level concrete target that is specific to the Boost.Jam build engine. Essentially a string—most often the name of a file.
In most cases, you will only have to deal with concrete targets and the process that creates concrete targets from metatargets. Extending the metatarget level is rarely required. Jam targets are typically only used inside command line patterns.
Warning: All of the Boost.Jam target-related builtin functions, like DEPENDS or ALWAYS, operate on jam targets. Applying them to metatargets or concrete targets has no effect.
A metatarget is an object that records information specified in a Jamfile, such as the metatarget kind, name, sources and properties, and can be called with specific properties to generate concrete targets. At the code level it is represented by an instance of a class derived from abstract-target.
The generate method takes the build properties (as an instance of the property-set class) and returns a list containing:
As the front element, the usage requirements from this invocation (an instance of property-set).
As subsequent elements, the created concrete targets (instances of the virtual-target class).
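For example, code that calls generate can split the returned list as follows (a minimal sketch; t and ps are assumed to already hold an abstract-target instance and a property-set):
# Minimal sketch: 't' is a metatarget, 'ps' is a property-set (both assumed).
local result = [ $(t).generate $(ps) ] ;
local usage-requirements = $(result[1]) ;  # front element: usage requirements
local virtual-targets = $(result[2-]) ;    # remaining elements: concrete targets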
It is possible to look up a metatarget by target-id using the targets.resolve-reference function, and the targets.generate-from-reference function can both look up and generate a metatarget.
The abstract-target class has three immediate derived classes:
project-target corresponds to a project and is not intended for further subclassing. The generate method of this class builds all targets in the project that are not marked as explicit.
main-target corresponds to a target in a project and contains one or more target alternatives. This class also should not be subclassed. The generate method of this class selects an alternative to build, and calls the generate method of that alternative.
basic-target corresponds to a specific target alternative. This is a base class, with a number of derived classes. The generate method processes the target requirements and requested build properties to determine the final properties for the target, builds all sources, and finally calls the abstract construct method with the list of source virtual targets and the final properties.
The instances of the project-target and main-target classes are created implicitly—when loading a new Jamfile, or when a new target alternative with an as-yet unknown name is created. The instances of classes derived from basic-target are typically created when a Jamfile calls a metatarget rule, such as exe.
It is permissible to create a custom class derived from basic-target and a new metatarget rule that creates instances of that class. However, in the majority of cases a specific subclass of basic-target—typed-target—is used. That class is associated with a type and relays to generators to construct concrete targets of that type. This process will be explained below.
When a new type is declared, a new metatarget rule is automatically defined. That rule creates a new instance of typed-target, associated with that type.
Concrete targets are represented by instances of classes derived from virtual-target. The most commonly used subclass is file-target. A file target is associated with an action that creates it—an instance of the action class. The action, in turn, holds a list of source targets. It also holds the property-set instance with the build properties that should be used for the action.
Here's an example of creating a target from another target, source:
local a = [ new action $(source) : common.copy : $(property-set) ] ;
local t = [ new file-target $(name) : CPP : $(project) : $(a) ] ;
The first line creates an instance of the action class. The first parameter is the list of sources. The second parameter is the name of a jam-level action. The third parameter is the property-set applying to this action. The second line creates a target. We specify a name, a type and a project. We also pass the action object created earlier. If the action creates several targets, we can repeat the second line several times.
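For example, if the same action also produced a header, a second file-target could be created from it (a hypothetical extension of the snippet above):
# Hypothetical: the same action also produces a header file.
local h = [ new file-target $(name) : H : $(project) : $(a) ] ;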
In some cases, code that creates concrete targets may be invoked more than once with the same properties. Returning different instances of file-target that correspond to the same file will clearly result in problems. Therefore, whenever returning targets you should pass them through the virtual-target.register function, which will replace targets with previously created identical ones as necessary.
Here are a couple of examples:
return [ virtual-target.register $(t) ] ;
return [ sequence.transform virtual-target.register : $(targets) ] ;
In theory, every kind of metatarget in Boost.Build (like exe, lib or obj) could be implemented by writing a new metatarget class that, independently of the other code, figures out what files to produce and what commands to use. However, that would be rather inflexible. For example, adding support for a new compiler would require editing several metatargets.
In practice, most files have specific types, and most tools consume and produce files of specific types. To take advantage of this fact, Boost.Build defines the concepts of target type and generator, and has a special metatarget class typed-target. A target type is merely an identifier. It is associated with a set of file extensions that correspond to that type. A generator is an abstraction of a tool. It advertises the types it produces and, if called with a set of input targets, tries to construct output targets of the advertised types. Finally, typed-target is associated with a specific target type, and relays to the generator (or generators) for that type.
A generator is an instance of a class derived from generator
.
The generator
class itself is suitable for common cases.
You can define derived classes for custom scenarios.
Say you're writing an application that generates C++ code. If you ever did this, you know that it's not nice. Embedding large portions of C++ code in string literals is very awkward. A much better solution is to keep those portions in separate files and convert them into properly quoted string literals as part of the build.
It's quite easy to achieve. You write special verbatim files that are
just C++, except that the very first line of the file contains the name of a
variable that should be generated. A simple tool is created that takes a
verbatim file and creates a cpp file with a single char*
variable
whose name is taken from the first line of the verbatim file and whose value
is the file's properly quoted content.
Let's see what Boost.Build can do.
First off, Boost.Build has no idea about "verbatim files". So, you must register a new target type. The following code does it:
import type ; type.register VERBATIM : verbatim ;
The first parameter to type.register
gives
the name of the declared type. By convention, it's uppercase. The second
parameter is the suffix for files of this type. So, if Boost.Build sees
code.verbatim
in a list of sources, it knows that it's of
type VERBATIM
.
Next, you tell Boost.Build that the verbatim files can be
transformed into C++ files in one build step. A
generator is a template for a build step that
transforms targets of one type (or set of types) into another. Our
generator will be called verbatim.inline-file
; it
transforms VERBATIM
files into CPP
files:
import generators ; generators.register-standard verbatim.inline-file : VERBATIM : CPP ;
Lastly, you have to inform Boost.Build about the shell
commands used to make that transformation. That's done with an
actions
declaration.
actions inline-file { "./inline-file.py" $(<) $(>) }
Now, we're ready to tie it all together. Put all the code above in file
verbatim.jam
, add import verbatim ;
to
Jamroot.jam
, and it's possible to write the following
in your Jamfile:
exe codegen : codegen.cpp class_template.verbatim usage.verbatim ;
The listed verbatim files will be automatically converted into C++ source files, compiled and then linked to the codegen executable.
In subsequent sections, we will extend this example, and review all the
mechanisms in detail. The complete code is available in the
example/customization
directory.
The first thing we did in the introduction was to declare a new target type:
import type ; type.register VERBATIM : verbatim ;
The type is the most important property of a target. Boost.Build can automatically generate necessary build actions only because you specify the desired type (using the different main target rules), and because Boost.Build can guess the type of sources from their extensions.
The first two parameters of the type.register rule are the name of the new type and the list of extensions associated with it. A file with an extension from the list will have the given target type. In the case where a target of the declared type is generated from other sources, the first specified extension will be used.
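For example, here is a hypothetical type registered with two extensions: files with either extension are treated as that type, while generated targets of the type get the first extension.
# Hypothetical type with two extensions; generated targets use ".vtm".
type.register VERBATIM2 : vtm vtx ;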
Sometimes you want to change the suffix used for generated targets depending on build properties, such as toolset. For example, some compilers use the extension elf for executable files. You can use the type.set-generated-target-suffix rule:
type.set-generated-target-suffix EXE : <toolset>elf : elf ;
A new target type can be inherited from an existing one.
type.register PLUGIN : : SHARED_LIB ;
The above code defines a new type derived from SHARED_LIB. Initially, the new type inherits all the properties of the base type, in particular generators and suffix. Typically, you'll change the new type in some way. For example, using type.set-generated-target-suffix you can set the suffix for the new type. Or you can write a special generator for the new type. For example, it can generate additional metainformation for the plugin.
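For instance, a hypothetical tweak giving PLUGIN targets their own suffix could look like this:
# Hypothetical: generated PLUGIN targets get the ".plugin" suffix
# instead of the one inherited from SHARED_LIB.
type.set-generated-target-suffix PLUGIN : : plugin ;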
Either way, the PLUGIN type can be used wherever SHARED_LIB can. For example, you can directly link plugins to an application.
A type can be defined as "main", in which case Boost.Build will automatically declare a main target rule for building targets of that type. More details can be found later.
Sometimes, a file can refer to other files via some include system. To make Boost.Build track dependencies between included files, you need to provide a scanner. The primary limitation is that only one scanner can be assigned to a target type.
First, we need to declare a new class for the scanner:
class verbatim-scanner : common-scanner
{
    rule pattern ( )
    {
        return "//###include[ ]*\"([^\"]*)\"" ;
    }
}
All the complex logic is in the common-scanner
class, and you only need to override the method that returns
the regular expression to be used for scanning. The
parentheses in the regular expression indicate which part
of the string is the name of the included file. Only the
first parenthesized group in the regular expression will be
recognized; if you can't express everything you want that
way, you can return multiple regular expressions, each of
which contains a parenthesized group to be matched.
After that, we need to register our scanner class:
scanner.register verbatim-scanner : include ;
The value of the second parameter, in this case
include
, specifies the properties that contain the list
of paths that should be searched for the included files.
Finally, we assign the new scanner to the VERBATIM
target type:
type.set-scanner VERBATIM : verbatim-scanner ;
That's enough for scanning include dependencies.
This section will describe how Boost.Build can be extended to support new tools.
For each additional tool, a Boost.Build object called a generator must be created. That object has specific types of targets that it accepts and produces. Using that information, Boost.Build is able to automatically invoke the generator. For example, if you declare a generator that takes a target of the type D and produces a target of the type OBJ, placing a file with the extension .d in a list of sources will cause Boost.Build to invoke your generator, and then to link the resulting object file into an application. (Of course, this requires that you specify that the .d extension corresponds to the D type.)
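Putting this together, support for such a D compiler could be sketched as follows (the dmd module name and the command line are assumptions for illustration):
# Hypothetical dmd.jam module: teach Boost.Build about D sources.
import type ;
type.register D : d ;

import generators ;
generators.register-standard dmd.compile : D : OBJ ;

actions compile
{
    # assumed compiler invocation; adjust for the real tool
    dmd -c -of$(<) $(>)
}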
Each generator should be an instance of a class derived from the
generator
class. In the simplest case, you don't need to
create a derived class, but simply create an instance of the
generator
class. Let's review the example we've seen in the
introduction.
import generators ;
generators.register-standard verbatim.inline-file : VERBATIM : CPP ;

actions inline-file
{
    "./inline-file.py" $(<) $(>)
}
We declare a standard generator, specifying its id, the source type and the target type. When invoked, the generator will create a target of type CPP with a source target of type VERBATIM as the only source. But what command will be used to actually generate the file? In bjam, actions are specified using named "actions" blocks, and the name of the action block should be specified when creating targets. By convention, generators use an action block with the same name as their own id. So, in the above example, the "inline-file" action block will be used to convert the source into the target.
There are two primary kinds of generators: standard and composing,
which are registered with the
generators.register-standard
and the
generators.register-composing
rules, respectively. For
example:
generators.register-standard verbatim.inline-file : VERBATIM : CPP ;
generators.register-composing mex.mex : CPP LIB : MEX ;
The first (standard) generator takes a single source of type VERBATIM and produces a result. The second (composing) generator takes any number of sources, which can have either the CPP or the LIB type. Composing generators are typically used for generating top-level target types. For example, the first generator invoked when building an exe target is a composing generator corresponding to the proper linker.
You should also know about two specific functions for registering generators: generators.register-c-compiler and generators.register-linker. The first sets up header dependency scanning for C files, and the second handles various complexities like searched libraries. For that reason, you should always use those functions when adding support for compilers and linkers.
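For instance, the gcc toolset registers its generators roughly like this (a simplified sketch; the real tools/gcc.jam passes more source types and conditions):
# Simplified sketch, loosely based on tools/gcc.jam; the extra
# <toolset>gcc condition restricts the generators to that toolset.
generators.register-c-compiler gcc.compile.c++ : CPP : OBJ : <toolset>gcc ;
generators.register-linker gcc.link : LIB OBJ : EXE : <toolset>gcc ;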
(Need a note about UNIX)
The standard generators allow you to specify source and target types, an action, and a set of flags. If you need anything more complex, you need to create a new generator class with your own logic. Then, you have to create an instance of that class and register it. Here's an example of how you can create your own generator class:
class custom-generator : generator
{
    rule __init__ ( * : * )
    {
        generator.__init__ $(1) : $(2) : $(3) : $(4) : $(5) : $(6) : $(7) : $(8) : $(9) ;
    }
}

generators.register
    [ new custom-generator verbatim.inline-file : VERBATIM : CPP ] ;
This generator will work exactly like the
verbatim.inline-file
generator we've defined above, but
it's possible to customize the behaviour by overriding methods of the
generator
class.
There are two methods of interest. The run
method is
responsible for the overall process - it takes a number of source targets,
converts them to the right types, and creates the result. The
generated-targets
method is called when all sources are
converted to the right types to actually create the result.
The generated-targets method can be overridden when you want to add additional properties to the generated targets or use additional sources. For a real-life example, suppose you have a program analysis tool that should be given the name of an executable and the list of all sources. Naturally, you don't want to list all source files manually. Here's how the generated-targets method can find the list of sources automatically:
class itrace-generator : generator
{
    ....
    rule generated-targets ( sources + : property-set : project name ? )
    {
        local leaves ;
        local temp = [ virtual-target.traverse $(sources[1]) : : include-sources ] ;
        for local t in $(temp)
        {
            if ! [ $(t).action ]
            {
                leaves += $(t) ;
            }
        }
        return [ generator.generated-targets $(sources) $(leaves)
            : $(property-set) : $(project) $(name) ] ;
    }
}
generators.register [ new itrace-generator nm.itrace : EXE : ITRACE ] ;
The generated-targets
method will be called with a single
source target of type EXE
. The call to
virtual-target.traverse
will return all targets the
executable depends on, and we further find files that are not
produced from anything.
The found targets are added to the sources.
The run method can be overridden to completely customize the way the generator works. In particular, the conversion of sources to the desired types can be completely customized. Here's another real example. Tests for the Boost.Python library usually consist of two parts: a Python program and a C++ file. The C++ file is compiled into a Python extension that is loaded by the Python program. But in the likely case that both files have the same name, the created Python extension must be renamed. Otherwise, the Python program will import itself, not the extension. Here's how it can be done:
rule run ( project name ? : property-set : sources * )
{
    local python ;
    for local s in $(sources)
    {
        if [ $(s).type ] = PY
        {
            python = $(s) ;
        }
    }

    local libs ;
    for local s in $(sources)
    {
        if [ type.is-derived [ $(s).type ] LIB ]
        {
            libs += $(s) ;
        }
    }

    local new-sources ;
    for local s in $(sources)
    {
        if [ type.is-derived [ $(s).type ] CPP ]
        {
            local name = [ $(s).name ] ;    # get the target's basename
            if $(name) = [ $(python).name ]
            {
                name = $(name)_ext ;        # rename the target
            }
            new-sources += [ generators.construct $(project) $(name)
                : PYTHON_EXTENSION : $(property-set) : $(s) $(libs) ] ;
        }
    }

    result = [ construct-result $(python) $(new-sources)
        : $(project) $(name) : $(property-set) ] ;
}
First, we separate all sources into Python files, libraries and C++ sources. For each C++ source we create a separate Python extension by calling generators.construct and passing the C++ source and the libraries. At this point, we also change the extension's name, if necessary.
Often, we need to control the options passed to the invoked tools. This is done with features. Consider an example:
# Declare a new free feature
import feature : feature ;
feature verbatim-options : : free ;

# Cause the value of the 'verbatim-options' feature to be
# available as 'OPTIONS' variable inside verbatim.inline-file
import toolset : flags ;
flags verbatim.inline-file OPTIONS <verbatim-options> ;

# Use the "OPTIONS" variable
actions inline-file
{
    "./inline-file.py" $(OPTIONS) $(<) $(>)
}
We first define a new feature. Then, the flags invocation says that whenever the verbatim.inline-file action is run, the value of the verbatim-options feature will be added to the OPTIONS variable, and can be used inside the action body. You'd need to consult the online help (--help) to find all the features of the toolset.flags rule.
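The toolset.flags rule also accepts a condition, so a value can be added only for particular property combinations. A hypothetical example:
# Hypothetical: pass an extra option only in debug builds.
flags verbatim.inline-file OPTIONS <variant>debug : --debug-markers ;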
Although you can define any set of features and interpret their values in any way, Boost.Build suggests the following coding standard for designing features.
Most features should have a fixed set of values that is portable (tool neutral) across the class of tools they are designed to work with. The user does not have to adjust the values for an exact tool. For example, <optimization>speed has the same meaning for all C++ compilers and the user does not have to worry about the exact options passed to the compiler's command line.
Besides such portable features there are special 'raw' features that allow the user to pass any value to the command line parameters for a particular tool, if so desired. For example, the <cxxflags> feature allows you to pass any command line options to a C++ compiler. The <include> feature allows you to pass any string preceded by -I and the interpretation is tool-specific. (See the section called “Can I get capture external program output using a Boost.Jam variable?” for an example of very smart usage of that feature). Of course one should always strive to use portable features, but these are still provided as a backdoor just to make sure Boost.Build does not take away any control from the user.
Using portable features is a good idea because:
When a portable feature is given a fixed set of values, you can build your project with two different settings of the feature and Boost.Build will automatically use two different directories for generated files. Boost.Build does not try to separate targets built with different raw options.
Unlike with “raw” features, you don't need to use specific command-line flags in your Jamfile, and it will be more likely to work with other tools.
Adding a feature requires three steps (a sketch putting them together is shown after the list):
Declaring a feature. For that, the "feature.feature" rule is used. You have to decide on the set of feature attributes:
if you want a feature value set for one target to automatically propagate to its dependent targets then make it “propagated”.
if a feature does not have a fixed list of
values, it must be “free.” For example, the include
feature is a free feature.
if a feature is used to refer to a path relative to the Jamfile, it must be a “path” feature. Such features will also get their values automatically converted to Boost.Build's internal path representation. For example, include is a path feature.
if a feature is used to refer to some target, it must be a “dependency” feature.
Representing the feature value in a
target-specific variable. Build actions are command
templates modified by Boost.Jam variable expansions. The
toolset.flags
rule sets a target-specific
variable to the value of a feature.
Using the variable. The variable set in step 2 can be used in a build action to form command parameters or files.
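Here is a minimal sketch putting the three steps together for a made-up tool; the mytool module, the feature name and the command line are all assumptions:
# 1. Declare the feature (free: no fixed set of values).
import feature : feature ;
feature my-tool-options : : free ;

# 2. Make the feature's value available as the OPTIONS variable
#    whenever the (hypothetical) mytool.process action runs.
import toolset : flags ;
flags mytool.process OPTIONS <my-tool-options> ;

# 3. Use the variable in the build action.
actions process
{
    mytool $(OPTIONS) -o $(<) $(>)
}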
Here's another example. Let's see how we can make a feature that refers to a target. For example, when linking dynamic libraries on Windows, one sometimes needs to specify a "DEF file", telling what functions should be exported. It would be nice to use this file like this:
lib a : a.cpp : <def-file>a.def ;
Actually, this feature is already supported, but anyway...
Since the feature refers to a target, it must be "dependency".
feature def-file : : free dependency ;
One of the toolsets that cares about DEF files is msvc. The following line should be added to it.
flags msvc.link DEF_FILE <def-file> ;
Since the DEF_FILE variable is not used by the msvc.link action, we need to modify it to be:
actions link bind DEF_FILE { $(.LD) .... /DEF:$(DEF_FILE) .... }
Note the bind DEF_FILE
part. It tells
bjam to translate the internal target name in
DEF_FILE
to a corresponding filename in
the link
action. Without it the expansion of
$(DEF_FILE)
would be a strange symbol that is
not likely to make sense for the linker.
We are almost done, but we should stop for a small workaround. Add the following code to msvc.jam:
rule link
{
    DEPENDS $(<) : [ on $(<) return $(DEF_FILE) ] ;
}
This is needed to work around a bug in bjam, which hopefully will be fixed one day.
Sometimes you want to create a shortcut for some set of
features. For example, release
is a value of
<variant>
and is a shortcut for a set of features.
It is possible to define your own build variants. For example:
variant crazy : <optimization>speed <inlining>off <debug-symbols>on <profiling>on ;
will define a new variant with the specified set of properties. You can also extend an existing variant:
variant super_release : release : <define>USE_ASM ;
In this case, super_release
will expand to all properties
specified by release
, and the additional one you've specified.
You are not restricted to using the variant feature only. Here's an example that defines a brand new feature:
feature parallelism : mpi fake none : composite link-incompatible ;
feature.compose <parallelism>mpi : <library>/mpi//mpi/<parallelism>none ;
feature.compose <parallelism>fake : <library>/mpi//fake/<parallelism>none ;
This will allow you to specify the value of feature
parallelism
, which will expand to link to the necessary
library.
A main target rule (e.g. “exe” or “lib”) creates a top-level target. It's quite likely that you'll want to declare your own, and there are two ways to do that.
The first way applies when your target rule should just produce a target of a specific type. In that case, a rule is already defined for you! When you define a new type, Boost.Build automatically defines a corresponding rule. The name of the rule is obtained from the name of the type, by downcasing all letters and replacing underscores with dashes.
For example, if you create a module
obfuscate.jam
containing:
import type ;
type.register OBFUSCATED_CPP : ocpp ;

import generators ;
generators.register-standard obfuscate.file : CPP : OBFUSCATED_CPP ;
and import that module, you'll be able to use the rule "obfuscated-cpp" in Jamfiles, which will convert source to the OBFUSCATED_CPP type.
The second way is to write a wrapper rule that calls any of the existing rules. For example, suppose you have only one library per directory and want all cpp files in the directory to be compiled into that library. You can achieve this effect using:
lib codegen : [ glob *.cpp ] ;
If you want to make it even simpler, you could add the following
definition to the Jamroot.jam
file:
rule glib ( name : extra-sources * : requirements * )
{
    lib $(name) : [ glob *.cpp ] $(extra-sources) : $(requirements) ;
}
allowing you to reduce the Jamfile to just
glib codegen ;
Note that because you can associate a custom generator with a target type, the logic of building can be rather complicated. For example, the boostbook module declares a target type BOOSTBOOK_MAIN and a custom generator for that type. You can use that as an example if your main target rule is non-trivial.
If your extensions will be used only on one project, they can be placed in
a separate .jam
file and imported by your
Jamroot.jam
. If the extensions will be used on many
projects, users will thank you for a finishing touch.
The using
rule provides a standard mechanism
for loading and configuring extensions. To make it work, your module
should provide an init
rule. The rule will be called
with the same parameters that were passed to the
using
rule. The set of allowed parameters is
determined by you. For example, you can allow the user to specify
paths, tool versions, and other options.
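For example, an init rule for the verbatim module from earlier sections could look roughly like this (the parameter handling and the default command are assumptions):
# Hypothetical init rule for verbatim.jam: optionally accepts the command
# used to run the inline-file tool and guards against re-initialization.
rule init ( command * )
{
    if ! $(.initialized)
    {
        .initialized = true ;
        command ?= "./inline-file.py" ;
        .command = $(command) ;
    }
    else
    {
        ECHO "warning: verbatim module is already initialized" ;
    }
}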
Here are some guidelines that help to make Boost.Build more consistent:
The init
rule should never fail. Even if
the user provided an incorrect path, you should emit a warning and go
on. Configuration may be shared between different machines, and
wrong values on one machine can be OK on another.
Prefer specifying the command to be executed to specifying the tool's installation path. First of all, this gives more control: it's possible to specify /usr/bin/g++-snapshot or time g++ as the command. Second, while some tools have a logical "installation root", it's better if the user doesn't have to remember whether a specific tool requires a full command or a path.
Check for multiple initialization. A user can try to initialize the module several times. You need to check for this and decide what to do. Typically, unless you support several versions of a tool, duplicate initialization is a user error. If the tool's version can be specified during initialization, make sure the version is either always specified, or never specified (in which case the tool is initialized only once). For example, if you allow:
using yfc ;
using yfc : 3.3 ;
using yfc : 3.4 ;
Then it's not clear if the first initialization corresponds to version 3.3 of the tool, version 3.4 of the tool, or some other version. This can lead to building twice with the same version.
If possible, the init rule should be callable with no parameters. In that case, it should try to autodetect all the necessary information, for example, by looking for a tool in PATH or in common installation locations. Often this is possible and allows the user to simply write:
using yfc ;
Consider using facilities in the
tools/common
module. You can take a look at how
tools/gcc.jam
uses that module in the init
rule.