
Build


Build Chromium, V8, Extension . . .

Prerequisite

Updates the package lists for upgrades and available packages, and installs newer versions of the packages currently installed on the system.

apt-get -y update && apt-get -y upgrade

Get depot_tools

Clone the depot_tools repository:

git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git

Add depot_tools to the front of your PATH (you can put this in your ~/.bashrc or ~/.zshrc).

export PATH=/path/to/depot_tools:$PATH
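
To make this persist across shells, you can append it to your shell profile (a minimal example; adjust the path to wherever you cloned depot_tools):

echo 'export PATH=/path/to/depot_tools:$PATH' >> ~/.bashrc
source ~/.bashrc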

Update depot_tools.

gclient

V8

Get V8

Setting up the build

This is not an essential step; you can skip this command. The depot_tools documentation describes exactly what metrics are collected, and the following command stops collecting them. In summary, they collect metrics about how developers use gclient and other tools in depot_tools to better understand the performance and failure modes of the tools, as well as the pain points and workflows of depot_tools users. However, metrics are only collected for Googlers on the corp network; if you can't access it, no metrics are collected.

gclient metrics --opt-out

Pick an empty directory and run the following to get the code:

fetch v8

When the fetch tool completes you should have the following in your working directory:

.gclient: A configuration file for your source checkout

I can't see a build/ directory in the V8 GitHub mirror, but it does exist when I access the Docker container. (Only on Linux, only needed once.) Build dependencies are installed by running:

./build/install-build-deps.sh

Fetch latest source code and checkout requested source revision:

git pull
git checkout $revision_you_want

All build dependencies are fetched by running:

gclient sync # Update third_party repos and run pre-compile hooks

This will pull all dependencies of the target (V8, Chromium, . . .) src checkout. You will need to run this any time you update the main src checkout, including when you switch branches.

There are two workflows for building V8: a convenience workflow using wrapper scripts and a raw workflow using lower-level commands. There is also a convenience all-in-one script that generates build files, triggers the build, and optionally runs the tests (I think this test feature is deprecated).

Before getting started, let's look at the out.gn and out directory names and why many docs use both. out.gn seems to be the older naming convention: due to the conversion from gyp to gn, a separate out.gn folder was used so it would not collide with old gyp folders. If you don't use gyp, or you keep your subfolders separate, you can also just use out.

Build instructions (raw workflow)

First, generate the necessary build files. The following command opens an editor for specifying the gn arguments:

gn args out/release

Or, you can just pass the arguments on the command line:

gn gen out/release --args='is_debug=false target_cpu="x64"'

This will generate build files for compiling V8 in release mode. For an overview of all available gn arguments run:

gn args out/release --list # the <out_dir> argument is still required here

The next part is a collection of args that I frequently use for fuzzing or for debugging specific versions:

is_debug=false
is_component_build=true
target_cpu="x64"
v8_enable_object_print = true

# for fuzzing
is_asan=true 
dcheck_always_on=true 
v8_static_library=true 
v8_enable_slow_dchecks=true 
v8_enable_v8_checks=true 
v8_enable_verify_heap=true 
v8_enable_verify_csa=true
v8_enable_backtrace=true
v8_enable_disassembler=true
v8_fuzzilli=true 
sanitizer_coverage_flags="trace-pc-guard"
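
For reference, a sketch of how a subset of these could be passed in a single gn gen invocation (out/fuzzbuild is just an example directory name; pick whichever of the args above you actually need):

gn gen out/fuzzbuild --args='is_debug=false target_cpu="x64" v8_enable_object_print=true is_asan=true dcheck_always_on=true v8_fuzzilli=true sanitizer_coverage_flags="trace-pc-guard"'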

The option use_goma is deprecated.
A helpful tip: When you enable the v8_enable_object_print option and set is_debug to false, you can build in release mode while still being able to use the %DebugPrint command feature.
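
For example (a minimal sketch; %DebugPrint is a runtime function that needs the --allow-natives-syntax flag, and out/release/d8 is assumed to be your build output):

./out/release/d8 --allow-natives-syntax -e 'const arr = [1.1, 2.2]; %DebugPrint(arr);'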

I don't usually use the next two build workflows, but they are documented in the official docs, so I'm summarizing them here. Additionally, they seem to include some testing-related features, which I plan to explore later. Since test binaries are definitely lighter than building the entire Chromium, they seem well-suited for use with tools like a fuzzer harness.

Build instructions (all-in-one script)

To use the helper script, for instance for the x64.release configuration, run:
tools/dev/gm.py x64.release

Build instructions (convenience workflow)

Use a convenience script to generate your build files, e.g.:
tools/dev/v8gen.py x64.release

Build V8

Before getting into the build: since I'm summarizing alongside the Chromium build docs, you will see both out\Default (Chromium docs) and out/release (V8 docs) as output directories, but otherwise the commands are the same.
Now it's time to build V8. For building all of V8 run:
autoninja -C out\Default or ninja -C out/release
To build specific targets like d8, add them to the command line:
autoninja -C out\Default d8 or ninja -C out/release d8

To be honest, I've never tried using autoninja to build d8. I've only used it when building Chromium, so I'm not sure whether d8 can be built the same way; if not, building with plain ninja should suffice.
autoninja is a wrapper that automatically provides optimal values for the arguments passed to ninja, such as the -j option.
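
For a plain local build (assuming no remote execution is configured), autoninja boils down to roughly the following:

ninja -j "$(nproc)" -C out/release d8 # approximately what autoninja computes for a local build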

We can get a list of all of the other build targets from GN by running gn ls out\Default from the command line. To compile one, pass its GN label (without the leading //) to ninja, as shown below.
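
A quick sketch (mksnapshot is just an example target name; pick any label from the gn ls output):

gn ls out/release # list all available build targets
ninja -C out/release mksnapshot # build just that one target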

Docker

When fuzzing with Fuzzilli, I built the target V8 using Docker, which turned out to be faster and more convenient than I expected, so I decided to share it. I have placed the three files build.sh, V8Build.sh, and Dockerfile below, in order. The Patches/ directory is optional; if you don't have a patch to apply, create an empty Patches/ directory. Next, set the revision you want (the default is the main branch) in build.sh, then simply run build.sh. You can see how these Docker files are used here (bi0sCTF 2024/CVE-2020-6418).
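
A sketch of the expected layout and invocation (the revision hash is only an example taken from the comment in build.sh):

# .
# ├── build.sh
# ├── V8Build.sh
# ├── Dockerfile
# └── Patches/   # may be empty
mkdir -p Patches
chmod +x build.sh V8Build.sh
./build.sh                                            # builds the main branch
./build.sh 4bbbb521f4267d0f8ec6edd07be595eed82dac9c   # or builds a specific revision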

build.sh

#!/bin/bash

# Stop the execution of a script if a command or pipeline has an error
# But it's considered bad practice by some. It's recommended to use: trap 'do_something' ERR
set -e

# $0 - The name of the script.
# $1 - The first argument sent to the script.
# dirname /a/b/c/d.txt evaluates to `/a/b/c`. If you invoked it as ./d.txt, then simply evaluates to `.`.
# Change to the script's directory. The script runs in its own shell process, so the caller's working directory isn't modified.
cd "$(dirname "$0")"

# Setup build context
# REV means Revision. In Git, a revision refers to an expression that can denote a specific Git object. Examples include HEAD~2, master, a commit hash, and HEAD:test.txt.
#REV="main" # or "4bbbb521f4267d0f8ec6edd07be595eed82dac9c"
REV=${1:-main} # if the first arg doesn't exist, use the main branch.

# Fetch the source code, apply patches, and compile the engine
# -t Name and optionally a tag in the name:tag format
docker build --no-cache --build-arg rev=$REV -t v8_builder .

# Copy build products
# The "icudtl.dat" file is an International Components for Unicode data file used by the V8 for Unicode-related operations. The deafult is a file called icudtl.dat side-by-side with the executable. This file is not essential.
mkdir -p out
docker create --name temp_container v8_builder
docker cp temp_container:/home/builder/v8/v8/out/build/d8 out/d8
docker cp temp_container:/home/builder/v8/v8/out/build/snapshot_blob.bin out/snapshot_blob.bin
docker cp temp_container:/home/builder/v8/v8/out/build/icudtl.dat out/icudtl.dat

# Clean up
docker rm temp_container

V8Build.sh

If you build with these args, the build will be a release build, but you can still use features like %DebugPrint.

#!/bin/bash

if [ "$(uname)" == "Linux" ]; then
    # See https://v8.dev/docs/compile-arm64 for instructions on how to build on Arm64
    gn gen out/build --args='is_debug=false v8_enable_object_print=true v8_enable_backtrace=true v8_enable_disassembler=true target_cpu="x64"'
else
    echo "Unsupported operating system"
fi

ninja -C ./out/build d8

Dockerfile

#FROM swift:latest
FROM ubuntu:22.04

ENV DEBIAN_FRONTEND=noninteractive
ENV SHELL=bash

RUN apt-get -y update && apt-get -y upgrade
RUN apt-get install -y python3 git curl

# -m: create the user's home directory
RUN useradd -m builder

# Fetch v8 source code
WORKDIR /home/builder
RUN git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
ENV PATH="${PATH}:/home/builder/depot_tools"
RUN gclient
RUN gclient metrics --opt-out
#RUN mkdir v8 && cd v8 && fetch v8
#WORKDIR /home/builder/v8/v8
#RUN git checkout main

# Docker will attempt to cache the output of every step. That's fine (and useful to speed things up, e.g. by avoiding
# the need to download the entire source repository again every time!). However, whenever the following ARG is changed
# (i.e. we are building a new version of the engine), a cache miss occurs (because the build context changed) and all
# steps from here on are rerun. That, however, means we might be operating on an old checkout of the source code from
# the cache, and so we update it again before checking out the requested revision.
# Or, you can use `docker build --no-cache` in build.sh
ARG rev=main

# Update system packages first
#RUN apt-get -y update && apt-get -y upgrade

# Fetch latest source code and checkout requested source revision
#RUN git pull
#RUN git checkout $rev
#RUN gclient sync

# Due to the rename issue on `gclient sync` (see the Issue section below), all commands were concatenated into a single RUN.
# There are two versions; the fast version needs some essential libraries installed beforehand.
# Fast version: install the essential libraries first.
RUN apt-get install -y xz-utils libglib2.0-dev
RUN mkdir v8 && cd v8 && fetch v8 && cd v8 && git checkout main && git pull && git checkout $rev && gclient sync

# slow version 
#RUN apt-get install -y file lsb-release xz-utils
#RUN mkdir v8 && cd v8 && fetch v8 && cd v8/build && sed -i 's/"sudo",//g' ./install-build-deps.py && ./install-build-deps.sh && git stash && cd ../ && git checkout main && git pull && git checkout $rev && gclient sync
WORKDIR /home/builder/v8/v8

# Upload and apply patches
ADD Patches Patches
RUN for i in `ls Patches`; do patch -p1 < Patches/$i; done

# Start building!
ADD V8Build.sh V8Build.sh
RUN ./V8Build.sh

Issue

Running v8 build inside containers, gclient sync throws on a cipd ensure ... with invalid cross-device link

I solved it by reading the Issue. In short, the error occurred because the RUN fetch v8 and RUN gclient sync steps were operating on different filesystem layers. A simple workaround is to do all of this in a single RUN step, e.g. by moving all checkout logic into a script and running that script as a single RUN (or chaining a series of commands in one RUN using &&).

#18 12.28 ________ running 'cipd ensure -log-level error -root /home/builder/v8 -ensure-file /tmp/tmpb1yoypc5.ensure' in '.'
#18 12.28 [P701 09:27:55.196 client.go:1915 E] [cleanup] Failed to remove infra/build/siso/linux-amd64 in "v8/third_party/siso": removing the deployed package directory: rename /home/builder/v8/.cipd/pkgs/2 /home/builder/v8/.cipd/pkgs/7Q85PsIJP-_z: invalid cross-device link
#18 12.28 Errors:
#18 12.28   failed to remove infra/build/siso/linux-amd64 in "v8/third_party/siso": removing the deployed package directory: rename /home/builder/v8/.cipd/pkgs/2 /home/builder/v8/.cipd/pkgs/7Q85PsIJP-_z: invalid cross-device link

In the Issue, the author says the problem is that CIPD (Chrome Infrastructure Package Deployer) relies on the atomicity of rename(...) in a lot of places and assumes files in the same directory are on the same device. Docker layers apparently violate this assumption. rename is atomic, meaning other processes never observe the file in a partially renamed state, so replacing the atomic rename with cp+rm would likely introduce subtle bugs.

As a result, although I aimed to reduce the time it takes to fetch v8 by using Docker's cache feature, I decided to abandon caching and modify the Dockerfile because of this issue. Additionally, I added the --no-cache option to docker build in build.sh.
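
For reference, the essential change is keeping the whole checkout in one layer, which is what the long single RUN in the Dockerfile above already does (sketch):

# fetch, checkout and sync all happen in one layer, so CIPD's rename() calls
# never cross a Docker layer / filesystem boundary.
RUN mkdir v8 && cd v8 && fetch v8 && cd v8 && git checkout main && git pull && git checkout $rev && gclient sync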

tar (child): xz: Cannot exec: No such file or directory

I got this error when I swapped the Docker base image from FROM swift:latest to FROM ubuntu:22.04.
The fix is to add xz-utils in the Dockerfile: apt-get install -y xz-utils.
During the build, similar errors kept occurring due to missing libraries, so I added ./build/install-build-deps.sh to the Dockerfile.
I had no idea I'd end up struggling for so long because of adding that command. Anyway, I removed all the sudo calls from that file (see the sed in the slow version above).

Reference

depot_tools tutorial