Free Electron
For a quick colorama install:
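    # assuming pip is available for your Python 3 interpreter
    python3 -m pip install colorama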
Free Electron is publicly available on GitLab.
If you are using a username and password and don't want to enter them every time, git can store them for you.
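One common approach (a sketch; git offers several credential helpers) is to enable the store helper, which saves the credentials in plain text under your home directory:

    git config --global credential.helper store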
Currently, to get the media for some unit tests, you still have to use Mercurial.
The media repository is currently only needed for a subset of the unit tests. If this media is absent, those particular tests won't be run.
If you are using a username and password and don't want to enter them every time, you can add a section to the .hgrc file in your home directory, filled in with your own credentials. It is generally good practice to adjust this file so that it is not readable by anyone but yourself.
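A minimal sketch of such a section, using a hypothetical repository URL and the made-up auth group name "femedia" (substitute your own values):

    [auth]
    femedia.prefix = https://your.media.server/hg/media
    femedia.username = yourname
    femedia.password = yourpassword

On Linux or macOS, chmod 600 ~/.hgrc will make the file readable only by you.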
The base repo contains a copy of the Free Electron source code, including a large set of extension modules.
The media repo contains a copy of the Free Electron test files, like models and scenes. Currently, these media files are only needed to run the unit tests.
Free Electron is built with pure Python and currently requires a minimum version of 3.8.
The common build command build.py may be run in some environments without explicitly passing it as an argument to the python3 command, since the script is runnable on its own.
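In other words, depending on your platform and the script's execute permissions, either of these forms may work:

    python3 build.py
    ./build.py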
Additional technologies should be installed before compiling. You can choose whichever ones you want; the Free Electron build should ignore modules for technologies you don't have installed. A full list of installation notes can be found on a separate page.
The default.env and local.env files are used to customize your build, mostly describing code options and where to find external libraries. The local.env file is intended for local overrides to values in the default.env file, which is shared with other developers on the repo.
First, if you have a fresh pull of the repo, you will probably have no local.env file. If this file is not found during the build, it will automatically be copied from the distribution file local.env.dist in the repository. The reason for this is that you don't generally want to copy your customized versions back to the repository. If you want to edit these files before your first build, you can copy them yourself, as shown below. Otherwise, just jump ahead to the next section.
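For example, a manual copy from the base of the repo might look like:

    cp local.env.dist local.env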
Next, edit the local.env file in the base directory. This describes what you might want built and what external libraries you want to use. You probably don't have to 'cd' for every step; that's just a reminder in case you wandered off.
Instead of installing every prerequisite library by hand, you can install and update many common packages all at once.
If you just want to use some prebuilt binary packages, skip ahead to the next section.
To build packages yourself, first install the freely available vcpkg system following the instructions at https://github.com/microsoft/vcpkg. It might look something like this:
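    # following the upstream vcpkg instructions
    git clone https://github.com/microsoft/vcpkg.git
    cd vcpkg
    ./bootstrap-vcpkg.sh      # Linux/macOS
    bootstrap-vcpkg.bat       # Windows, from a command prompt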
Within vcpkg, you can install whatever packages you like, but you can also use a wrapper script to install the prescribed set.
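For the manual route, individual packages are installed with the usual vcpkg command, for example:

    ./vcpkg install zlib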
To post your build for everyone to use, you will need SSH credentials for your server. The identity of that server can be set in your local.env file. Then you can post your build with the wrapper script.
This will use the vcpkg export function to assemble a collection of packages and then copy the resulting zip file to the server.
The time stamp of the "current" version is dictated by the FE_VCPKG_DOWNLOAD_STAMP variable in the default.env file.
If you just want to use the prebuilt packages, simply call a wrapper script to get the current posted version from your server.
Note that the freeelectron.org server listed in default.env is not currently serving vcpkg images. You will need to edit your local.env to set FE_VCPKG_DOWNLOAD_URL to your own server and update FE_VCPKG_DOWNLOAD_STAMP to pick a particular version.
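As a sketch, assuming the env files take simple NAME = value assignments (the shipped local.env.dist shows the exact syntax) and using placeholder values:

    FE_VCPKG_DOWNLOAD_URL = "https://your.server/path/to/vcpkg"
    FE_VCPKG_DOWNLOAD_STAMP = "20240101"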
If you have a brand new Windows install without substantial browser usage, you may get an error such as "SSL: CERTIFICATE_VERIFY_FAILED". This is a reasonable possibility if you are doing a build in a container or runner that resets itself. This can be resolved by manually installing a root certificate.
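One possible way to do that on Windows, assuming you have already obtained the missing root CA certificate as a .cer file (the file name here is hypothetical), is to add it to the system certificate store from an elevated prompt:

    certutil -addstore -f ROOT ca_root.cer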
Once you're happy with your env files and have installed any dependencies you need, just call the build script.
For Windows, you will need to run the script from a "Native Tools Command Prompt", either x86 or x64.
Running build.py without an argument actually specifies a "product" named "default". You can specify any product and thereby build just the subset of modules that produces a particular set of compiled binaries and other related files.
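For instance, with a hypothetical product named "viewer" defined in a product.py file:

    python3 build.py viewer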
Products are defined in files named product.py found at the root of the FE repo as well as at the base of any module set. The Python variable forge.product is a dictionary of dictionaries. The first key is the name of each product.
Each product dictionary has an entry called "FE_EXTENSIONS" which is a colon-delimited list of the extensions that should be built. All entries in a product dictionary are used to set environment variables at the beginning of the build. For example, an entry for "FE_CC" can change the compiler used for the entire build.
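A rough sketch of such an entry (the product name and extension names here are hypothetical placeholders):

    # each key of forge.product is a product name;
    # each value is a dictionary of environment variables to set for the build
    forge.product["myproduct"] = {
        # colon-delimited list of extensions to build
        "FE_EXTENSIONS": "ext_alpha:ext_beta:ext_gamma",
        # optional override of the compiler for the entire build
        "FE_CC": "clang++",
    }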
Just using build.py as above may suffice in many situations, but a developer may wish to build with more control.
No environment variables are required, but CODEGEN can be used to change the build type, usually debug or optimize.
For more control, a full list of environment variables can be found on a separate page.
Under Windows, you will need to first open a "Native Tools Command Prompt", either x86 or x64, depending on which kind of code you want to generate.
Again, you may not actually need to specify "python3" on the command line.
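For example, to request a debug build (a sketch; on Windows, set the variable first inside the Native Tools Command Prompt):

    # Linux/macOS
    CODEGEN=debug python3 build.py

    # Windows (Native Tools Command Prompt)
    set CODEGEN=debug
    python3 build.py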
The tolerant flag will try to build what remains if some modules are filtered as unbuildable. To make all builds tolerant, set FE_REQUIRE_ALL_MODULES to 0 in your local.env file.
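As a sketch, assuming 'tolerant' is passed like other build.py keywords and using the same env-file form as above:

    # one-off tolerant build
    python3 build.py tolerant

    # or, in local.env, to make every build tolerant
    FE_REQUIRE_ALL_MODULES = 0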
As long as you don't set the environment variable PRETTY to 0, you should get neatly formatted, colorful output during code compilation.
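To get plain, uncolored output instead (for example, when capturing a log file):

    PRETTY=0 python3 build.py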
A full build proceeds through several distinct steps. It may be helpful to recognize what to expect during the build.
The first step is to determine the fundamental build environment, such as the compiler and its settings, ccache and distcc availability, and the Python version.
Some modules require external code to be built before they can determine viability in the filtering step that follows. This may involve a different build mechanism, like SCons or CMake.
After general configuration, each of the modules in each module set is configured individually. The first message during this step may involve custom configuration similar to the previous step, such as alternate compilers.
After the configuration messages for each module set, a count of modules for that set is announced and outstanding dependency issues hindering any of those modules may be listed. This is followed by a long single line that lists all the requested modules for that module set, along with some configuration data, such as versions and options. If there were problems, small strings can show up here, like "fail:build", which means that a minimal build-and-run test for that module could not be compiled, perhaps due to missing headers or libraries. If that minimal test compiled, but threw an uncaught exception, you may get "fail:exception". If the minimal test completed its execution, but returned a non-zero exit code, you may get "fail:run".
If a module is otherwise confident that it can be built but lists another module as a prerequisite, and that prerequisite module could not be built, then the first module will show the string "removed" and will not remain on the list to be built. This determination can propagate.
The setups for each module are allowed to insert arbitrary strings, so you may get other short messages.
A module failure during this step is not necessarily a fatal issue. Some failures may be expected. For example, if you do not have Houdini or Maya installed on your machine, but you do have the 'houdini' and 'maya' modules active in a products.py file, then you will always get a build failure for those modules during this step. If tolerant mode is used, those modules will simply be removed from the list of things to be compiled. If tolerant mode is not used, any module failure will prevent any building from occurring.
There are some common non-error messages in the concatenated module list. Aside from version numbers, the string 'dl' indicates that a dynamic library will be built, and 'lib' indicates that a static library will be built. If both are built, they do not necessarily contain the same content. The string 'auto' indicates that the module participates in the "autoload" mechanism, which can determine the source of requested components at run time and automatically load the required dynamic libraries just as they are needed. Some strings start with "-" to indicate that some feature of that module has been deactivated, often giving the name of another module.
The dependency scan may only print two lines, one to start and a summary after it is done. However, if there are issues, like an include loop, warnings will follow.
Once the build plan has been determined, the actual compilation commences. If all goes perfectly, each object, library, or executable file that is generated will show up in the output as one short line.
If there are any warnings or errors, they will be sent to the terminal once generation of the relevant file has completed or been terminated. The warning level is set to be quite verbose, so many of the third-party headers can produce unpleasant streams of warning messages.
After all the files have been built (or have failed to build), a list is presented of how many seconds each module took to build.
After this, a total elapsed time should be the last message of the build.
Many modules contain unit tests. These unit tests are not built by default. To build the unit tests for all the active modules (selected with FE_EXT or products.py), just add 'tests' to the build command.
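For example, appended to the default build command:

    python3 build.py tests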
To run the unit tests for all the active modules (selected with FE_EXT or products.py),