User Guide

Notice

The Total Compute 2022 (TC2) software stack uses bash scripts to build an integrated solution comprising Board Support Package (BSP) and Debian distribution.

Prerequisites

These instructions assume that:
  • Your host PC is running Ubuntu Linux 20.04;

  • You are running the provided scripts in a bash shell environment;

  • You are using version 11.23.28 of the TC2 Fast Model platform (FVP), which this release requires.

To get the latest repo tool from Google, please run the following commands:

mkdir -p ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
export PATH=~/bin:$PATH
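
Optionally, confirm that the repo launcher is now reachable from your PATH (this should print the launcher version):

repo --version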

To avoid errors when cloning and fetching the different TC software components, your system should have a minimum git configuration in place. The following commands exemplify the typical settings required:

git config --global user.name "<user name>"
git config --global user.email "<email>"
git config --global protocol.version 2

To install and allow access to docker, please run the following commands:

sudo apt install docker.io
# ensure docker service is properly started and running
sudo systemctl restart docker

To manage Docker as a non-root user, please run the following commands:

sudo usermod -aG docker $USER
newgrp docker
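
To verify that Docker is usable as the current user, a quick sanity check can be run (this pulls the hello-world image from Docker Hub):

docker run hello-world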

Download the source code and build

The TC2 software stack supports the following distro:
  • Debian (based on Debian 12 Bookworm);

Download the source code

Create a new folder that will be your workspace, which will henceforth be referred to as <TC2_WORKSPACE> in these instructions.

mkdir <TC2_WORKSPACE>
cd <TC2_WORKSPACE>
export TC2_RELEASE=refs/tags/TC2-2024.02.22-LSC

To sync Debian source code, please run the following repo commands:

repo init -u https://gitlab.arm.com/arm-reference-solutions/arm-reference-solutions-manifest \
            -m tc2.xml \
            -b ${TC2_RELEASE} \
            -g bsp
repo sync -j `nproc` --fetch-submodules

Once the previous process finishes, the current <TC2_WORKSPACE> should have the following structure:
  • build-scripts/: the components build scripts;

  • run-scripts/: scripts to run the FVP;

  • src/: each component’s git repository.

Initial Setup

The setup includes two parts:
  1. set up a docker image;

  2. set up the environment to build TC images.

Setting up a docker image involves pulling the prebuilt docker image from a docker registry. If that pull fails, the setup will build a local docker image instead.

To set up a docker image, patch the components, and install the toolchains and build tools, please run the commands listed in the following Build variants configuration section, according to the distro and variant of interest.

The various tools will be installed in the <TC2_WORKSPACE>/tools/ directory.

Build options

Debian OS build variant

Currently, the Debian build does not support software or hardware rendering. Given this limitation, this build variant should only be used for development or validation work that does not involve pixel rendering.

Build variants configuration

This section provides a quick guide on how to build the TC software stack considering the Debian build variant, using the most common options.

Debian build

Currently, the Debian build does not support software or hardware rendering. As such, the TC_GPU variable should be left undefined. The Debian build can still be a valuable resource for other types of development or validation work that do not involve pixel rendering.

Debian build (UEFI boot with ACPI Support)

To build Debian with UEFI-based boot and ACPI support, please run the following commands:

export PLATFORM=tc2
export FILESYSTEM=debian
export TC_TARGET_FLAVOR=fvp
export TC_BL33=uefi
cd build-scripts
./setup.sh

Warning

If building the TC2 software stack for more than one target, please ensure you run a clean build between the different builds to avoid setup/build errors (refer to the next section, More about the build system, for command usage examples).
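
For instance, a minimal clean of the whole stack might look as follows:

./run_docker.sh ./build-all.sh clean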

Warning

If running repo sync again is needed at some point, then the setup.sh script also needs to be run again, as repo sync can discard the patches.
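
For example, after a re-sync (reusing the commands from the previous sections):

# from <TC2_WORKSPACE>
repo sync -j `nproc` --fetch-submodules
cd build-scripts
./setup.sh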

Note

Most builds will run in parallel using all the available cores by default. To change this number, run export PARALLELISM=<number of cores>

Build command

To build the whole TC2 software stack, simply run:

./run_docker.sh ./build-all.sh build

Once the previous process finishes, the previously defined environment variable $FILESYSTEM will be automatically used and the current <TC2_WORKSPACE> should have the following structure:
  • build files are stored in <TC2_WORKSPACE>/output/<$FILESYSTEM>/tmp_build/;

  • final images will be placed in <TC2_WORKSPACE>/output/<$FILESYSTEM>/deploy/.

More about the build system

The build-all.sh script will build all the components, but each component has its own script, allowing it to be built, cleaned and deployed separately. All scripts support the build, clean, deploy and patch commands. build-all.sh also supports all, which performs a clean followed by a rebuild of the whole stack.

For example, to build, deploy, and clean SCP, run:

./run_docker.sh ./build-scp.sh build
./run_docker.sh ./build-scp.sh deploy
./run_docker.sh ./build-scp.sh clean
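
Similarly, to clean and rebuild the whole stack in a single step:

./run_docker.sh ./build-all.sh all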

The platform and filesystem used should be defined as described previously, but they can also be specified explicitly, as in the following example:

./run_docker.sh ./build-all.sh \
            -p $PLATFORM \
            -f $FILESYSTEM \
            -t $TC_TARGET_FLAVOR \
            -g $TC_GPU \
            -b ${TC_BL33} build

Build components and their dependencies

A new dependency for a component can be added in the form $component=$dependency in the dependencies.txt file.
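
For illustration, a new entry could be appended as follows (the component names here are hypothetical, not the actual contents of the file):

# hypothetical entry: scp depends on tfa, so building tfa with
# with_reqs will also rebuild scp
echo "scp=tfa" >> dependencies.txt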

To build a component and rebuild those components that depend on it, run:

./run_docker.sh ./<BUILD-SCRIPT-FILENAME> build with_reqs

Those options work for all the build-*.sh scripts.

Provided components

Firmware Components

Trusted Firmware-A

Based on Trusted Firmware-A

Script

<TC2_WORKSPACE>/build-scripts/build-tfa.sh

Files

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/deploy/tc2/bl1-tc.bin

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/deploy/tc2/fip-tc.bin

System Control Processor (SCP)

Based on SCP Firmware

Script

<TC2_WORKSPACE>/build-scripts/build-scp.sh

Files

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/deploy/tc2/scp_ramfw.bin

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/deploy/tc2/scp_romfw.bin

Hafnium

Based on Hafnium

Script

<TC2_WORKSPACE>/build-scripts/build-hafnium.sh

Files

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/deploy/tc2/hafnium.bin

OP-TEE

Based on OP-TEE

Script

<TC2_WORKSPACE>/build-scripts/build-optee-os.sh

Files

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/tmp_build/tfa_sp/tee-pager_v2.bin

S-EL0 trusted-services

Based on Trusted Services

Script

<TC2_WORKSPACE>/build-scripts/build-trusted-services.sh

Files

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/tmp_build/tfa_sp/crypto-sp.bin

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/tmp_build/tfa_sp/internal-trusted-storage.bin

Linux

The component responsible for building version 6.1 of the mainline Linux kernel.

Script

<TC2_WORKSPACE>/build-scripts/build-linux.sh

Files

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/deploy/tc2/Image

Distributions

Debian Linux distro

Script

<TC2_WORKSPACE>/build-scripts/build-debian.sh

Files

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/deploy/tc2/debian-12-nocloud-arm64-20230612-1409.raw.img

UEFI

Script

<TC2_WORKSPACE>/build-scripts/build-uefi.sh

Files

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/deploy/tc2/uefi.bin

GRUB

Script

<TC2_WORKSPACE>/build-scripts/build-grub.sh

Files

  • <TC2_WORKSPACE>/output/<$FILESYSTEM>/deploy/tc2/grubaa64.efi

Run scripts

Within the <TC2_WORKSPACE>/run-scripts/ there are several convenience functions for testing the software stack. Usage descriptions for the various scripts are provided in the following sections.

Obtaining the TC2 FVP

The TC2 FVP is available to partners to build and run on Linux host environments.

To download the latest publicly available TC2 FVP model, please visit the Arm Ecosystem FVP downloads webpage or contact Arm (support@arm.com).

Running the software on FVP

A Fixed Virtual Platform (FVP) of the TC2 platform must be available to run the included run scripts.

The run-scripts structure is as follows (assuming <TC2_WORKSPACE> location):

run-scripts
|--tc2
   |--run_model.sh
   |-- ...

Ensure that all dependencies are met by running the FVP: ./path/to/FVP_TC2. You should see the FVP launch, presenting a graphical interface showing information about the current state of the FVP.

The run_model.sh script in <TC2_WORKSPACE>/run-scripts/tc2 will launch the FVP, providing the previously built images as arguments. Run the ./run_model.sh script:

./run_model.sh
Incorrect script use, call script as:
<path_to_run_model.sh> [OPTIONS]
OPTIONS:
-m, --model                      path to model
-d, --distro                     distro version, values supported [buildroot, android-fvp, debian]
-b, --bl33                       BL33 software, values supported [uefi, u-boot]
-a, --avb                        [OPTIONAL] avb boot, values supported [true, false], DEFAULT: false
-t, --tap-interface              [OPTIONAL] enable TAP interface
-n, --networking                 [OPTIONAL] networking, values supported [user, tap, none]
                                 DEFAULT: tap if tap interface provided, otherwise user
--                               [OPTIONAL] After -- pass all further options directly to the model

Running Debian (UEFI boot with ACPI support)

The TC2 FVP with Debian (UEFI boot with ACPI support) requires the TAP interface to be enabled, since Debian's systemd services require network access while booting. This can be done using the following command:

# following command does assume that current location is <TC2_WORKSPACE>
./run-scripts/tc2/run_model.sh -m <model binary path> -d debian -b uefi -t tap0

Expected behaviour

When the script is run, four terminal instances will be launched:
  • terminal_uart_ap used by the non-secure world components EDK2, Grub, Linux Kernel and filesystem (Debian);

  • terminal_uart1_ap used by the secure world components TF-A, Hafnium and OP-TEE;

  • terminal_s0 used for the SCP logs;

  • terminal_s1 used for the RSS logs (no output by default).

Once the FVP is running, the hardware Root of Trust will verify the AP and SCP images, initialize various crypto services, and then hand over execution to the SCP. The SCP will bring the AP out of reset. The AP will start booting from its ROM and then proceed to boot Trusted Firmware-A, Hafnium and the Secure Partitions (OP-TEE and Trusted Services). Following this stage, the EDK2 UEFI firmware and the GRUB bootloader will run, and finally the corresponding Linux kernel distro will boot.

When booting Debian, the model will boot the Linux kernel and present a login prompt on the terminal_uart_ap window. Login using the username root (no password is required). You may need to hit Enter for the prompt to appear.

The GUI window Fast Models - Total Compute 2 DP0 is intended to show any rendered pixels, but this feature is not supported for the provided Debian image in the current release.

Running sanity tests

This section provides information on some of the suggested sanity tests that can be executed to exercise and validate the TC Software stack functionality, as well as information regarding the expected behaviour and test results.

Note

The information presented for any of the sanity tests described in this section should NOT be considered as indicative of hardware performance. These tests and the FVP model are only intended to validate the functional flow and behaviour for each of the features.

ACS (UEFI boot with ACPI support)

To run ACS (UEFI boot with ACPI support), please proceed as follows:

  1. build the stack for the UEFI-enabled Debian distro;

  2. download the latest ACS disk image as shown below. This fetches a compressed prebuilt ACS disk image called sr_acs_live_image.img.xz;

    # download sr_acs_live_image.img.xz to the root folder of <TC2_WORKSPACE> using wget util
    cd <TC2_WORKSPACE>
    wget --show-progress -O sr_acs_live_image.img.xz https://github.com/ARM-software/arm-systemready/raw/main/SR/prebuilt_images/v23.09_2.0.0/sr_acs_live_image.img.xz
    
  3. extract the compressed ACS disk image by running the following command:

    xz -d sr_acs_live_image.img.xz
    
  4. set up the stack for running the ACS test suite by running the following commands:

    # following commands do assume that current location is <TC2_WORKSPACE>
    mkdir -p ./output/acs-test-suite/deploy
    ln -sf $(pwd)/output/debian/deploy/* ./output/acs-test-suite/deploy/
    cp sr_acs_live_image.img ./output/acs-test-suite/deploy/tc2/
    
  5. the ACS test suite can then be executed by running the following command:

    # following command does assume that current location is <TC2_WORKSPACE>
    ./run-scripts/tc2/run_model.sh -m <model binary path> -d acs-test-suite -b uefi
    

Note

An example of the expected test result for this sanity test is illustrated in the related Total Compute Platform Expected Test Results document section.

ACPI Test Suite

Verify the ACPI tables in UEFI shell

To verify all the ACPI tables in UEFI shell, please proceed as described below:

  1. start the TC2 FVP model running Debian and pay close attention to the FVP terminal_uart_ap window (as you need to be very quick to succeed on the next step):

    # following command does assume that current location is <TC2_WORKSPACE>
    ./run-scripts/tc2/run_model.sh -m <model binary path> -d debian -b uefi -t tap0
    
  2. once the Press ESCAPE for boot options … message appears, quickly press the ESC key to interrupt the initial boot and launch the boot options menu:

    ../../_images/step1.png
  3. using the navigation keys on your keyboard, select the Boot Manager option as illustrated in the next image and press the ENTER key to select it:

    ../../_images/step2.png
  4. select the UEFI Shell option and press the ENTER key:

    ../../_images/step3.png
  5. allow the platform to boot into the UEFI shell (the ENTER key can be pressed to skip the 5-second wait if desired):

    ../../_images/step4.png
  6. once the UEFI shell prompt appears, dump the ACPI content by running the command acpiview as illustrated in the next image:

    ../../_images/step5.png

It is possible to filter the output to a single ACPI table by specifying the respective table name of interest. This can be achieved by running the command acpiview -s <TABLE-NAME>, where <TABLE-NAME> can be any of the following values: FACP, DSDT, DBG2, GTDT, SPCR, APIC, PPTT or SSDT.
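
For example, to display only the FADT (signature FACP), run the following at the UEFI shell prompt:

acpiview -s FACP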

Note

This test is specific to Debian with UEFI ACPI support only. An example of the expected test result for this test is illustrated in the related Total Compute Platform Expected Test Results document section.

Verify PPTT ACPI table content in Debian shell

The following screenshot exemplifies how to dump the data cache information of the CPU cores while in Debian shell (command can be run on the terminal_uart_ap window):

PPTT ACPI table content with data cache information

The following screenshot exemplifies how to dump the instruction cache information of the CPU cores while in Debian shell (command can be run on the terminal_uart_ap window):

PPTT ACPI table content with instruction cache information

The following screenshot exemplifies how to dump the L2 cache information of the CPU cores while in Debian shell (command can be run on the terminal_uart_ap window):

PPTT ACPI table content with L2 cache information
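
As an alternative to the commands shown in the screenshots above, the cache information exposed through the PPTT can also be inspected via the standard Linux sysfs interface; a minimal sketch (run on the terminal_uart_ap window):

# level/type/size of each cache for CPU core 0
# (typically index0/index1 are the L1 data/instruction caches and index2 is L2)
grep . /sys/devices/system/cpu/cpu0/cache/index*/{level,type,size}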

Note

This test is specific to Debian with UEFI ACPI support only.

Debugging on Arm Development Studio

This section describes the steps to debug the TC software stack using Arm Development Studio.

Attach and Debug

  1. Build the target with debug enabled (the file <TC2_WORKSPACE>/build-scripts/config can be configured to enable debug);

  2. Run the distro as described in the section Running the software on FVP with the extra parameters -- -I to attach to the debugger. The full command should look like the following:

    ./run-scripts/tc2/run_model.sh -m <model binary path> -d debian -b uefi -- -I
    
  3. Select the target Arm FVP -> TC2 -> Bare Metal Debug -> Hayesx4/Hunterx3/HunterELP SMP;

  4. After connection, use the options in the debug control console (highlighted in the diagram below) or the keyboard shortcuts to step, run or halt;

  5. To add debug symbols, right click on the target -> Debug configurations and, under the Files tab, add the path to the ELF files;

  6. Debug options such as break points, variable watch, memory view and so on can be used.

../../_images/Debug_control_console.png

Note

This configuration requires Arm DS version 2023.a or later. The names of the cores shown are based on codenames instead of product names, mapped as follows:

Codename       Product name
Hayes          Cortex-A520
Hunter         Cortex-A720
Hunter ELP     Cortex-X4

Switch between SCP and AP

  1. Right click on target and select Debug Configurations;

  2. Under Connection, select Cortex-M3 for SCP or any of the remaining targets to attach to a specific AP (please refer to the previous note regarding the matching between the used codenames and actual product names);

  3. Press the Debug button to confirm and start your debug session.

../../_images/switch_cores.png

Enable LLVM parser (for Dwarf5 support)

To enable the LLVM parser (with Dwarf5 support), please follow these steps:

  1. Select Window->Preferences->Arm DS->Debugger->Dwarf Parser;

  2. Tick the Use LLVM DWARF parser option;

  3. Click the Apply and Close button.

../../_images/enable_llvm.png

Arm DS version

The previous steps apply to the following Arm DS Platinum version/build:

../../_images/arm_ds_version.png

Note

Arm DS Platinum is only available to licensee partners. Please contact Arm to have access (support@arm.com).

Feature Guide

Set up TAP interface

This section details the steps required to set up the tap interface on the host to enable model networking.

The following method relies on libvirt to handle the network bridge. This provides a safer approach: if a bad configuration is applied, the host's primary network interface should remain operational.

Steps to set up the tap interface

To set up the tap interface, please follow the next steps (unless otherwise mentioned, all commands are intended to be run on the host system):

  1. install libvirt on your development host system:

    sudo apt-get update && sudo apt-get install libvirt-daemon-system libvirt-clients
    

    The host system should now list a new interface with a name similar to virbr0 and an IP address of 192.168.122.1. This can be verified by running the command ifconfig -a (or alternatively ip a s for newer distributions) which will produce an output similar to the following:

    $ ifconfig -a
    virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
    inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
    ether XX:XX:XX:XX:XX:XX  txqueuelen 1000  (Ethernet)
    RX packets 0  bytes 0 (0.0 B)
    RX errors 0  dropped 0  overruns 0  frame 0
    TX packets 0  bytes 0 (0.0 B)
    TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    virbr0-nic: flags=4098<BROADCAST,MULTICAST>  mtu 1500
    ether XX:XX:XX:XX:XX:XX  txqueuelen 1000  (Ethernet)
    RX packets 0  bytes 0 (0.0 B)
    RX errors 0  dropped 0  overruns 0  frame 0
    TX packets 0  bytes 0 (0.0 B)
    TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    $
    
  2. create the tap0 interface:

    sudo ip tuntap add dev tap0 mode tap user $(whoami)
    sudo ifconfig tap0 0.0.0.0 promisc up
    sudo brctl addif virbr0 tap0
    
  3. run the FVP model providing the additional parameter -t "tap0" to enable the tap interface:

    ./run-scripts/tc2/run_model.sh -m <model binary path> -d debian -b uefi -t "tap0"
    

    Before proceeding, please allow the FVP model to fully boot.

  4. once the FVP model boots, the running instance should get an IP address similar to 192.168.122.62;

  5. validate the connection between the host tap0 interface and the FVP model by running the following command on the fvp-model via the terminal_uart_ap window:

    ping 192.168.122.1
    

    Alternatively, it is also possible to validate whether the fvp-model can reach a valid internet gateway by pinging, for instance, the IP address 8.8.8.8 instead.
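
    For example, a quick sketch of both checks from the Debian shell on the terminal_uart_ap window:

    ip a s        # confirm the instance obtained a 192.168.122.x address
    ping 8.8.8.8  # confirm a route to an external gateway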

Steps to gracefully disable and remove the tap interface

To revert the configuration of your host system (removing the tap0 interface), please follow the next steps:

  1. remove the tap0 from the bridge configuration:

    sudo brctl delif virbr0 tap0
    
  2. disable the bridge interface:

    sudo ip link set virbr0 down
    
  3. remove the bridge interface:

    sudo brctl delbr virbr0
    
  4. remove the libvirt package:

    sudo apt-get remove libvirt-daemon-system libvirt-clients
    

Running and Collecting FVP tracing information

This section describes how to run the FVP-model with trace information enabled, for debug and troubleshooting purposes. To illustrate the trace output that can be obtained at different stages, the following command examples use the SMMU-700 block component. However, any of the commands mentioned can easily be extended or adapted to any other component.

Note

This functionality requires executing the FVP-model with the GenericTrace.so or ListTraceSources.so plugin additionally loaded (both plugins are provided as part of your FVP bundle).

Getting the list of trace sources

To get the list of trace sources available on the FVP-model, please run the following command:

<fvp-model binary path>/FVP_TC2 \
        --plugin <fvp-model plugin path/ListTraceSources.so> \
        >& /tmp/trace-sources-fvp-tc2.txt

This will start the model and use the ListTraceSources.so plugin to dump the list of trace sources to a file. Please note that the file can easily grow to tens of megabytes, as the list is quite extensive.
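
Given the size of the dump, filtering the file for the component of interest is usually more practical, for example:

grep 'TC2.css.smmu' /tmp/trace-sources-fvp-tc2.txt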

The following excerpt illustrates the output information related with the example component SMMU-700:

Component (1439) providing trace: TC2.css.smmu (MMU_700, 11.23.28)
=============================================================================
Component is of type "MMU_700"
Version is "11.23.28"
#Sources: 299

Source ArchMsg.Error.error (These messages are about activity occurring on the SMMU that is considered an error.
Messages will only come out here if parameter all_error_messages_through_trace is true.

DISPLAY %{output})
        Field output type:MTI_STRING size:0 max_size:120 (The stream output)

Source ArchMsg.Error.fetch_from_memory_type_not_supporting_httu (A descriptor fetch from an HTTU-enabled translation regime to an unsupported
memory type was made.  Whilst the fetch itself may succeed, if an update to
the descriptor was attempted then it would fail.)

Executing the FVP-model with traces enabled

To execute the FVP-model with trace information enabled, please run the following command:

./run-scripts/tc2/run_model.sh -m <model binary path> -d debian -b uefi \
        -- \
        --plugin <fvp-model plugin path/GenericTrace.so> \
        -C 'TRACE.GenericTrace.trace-sources="TC2.css.smmu.*"' \
        -C TRACE.GenericTrace.flush=true

Multiple trace sources can be requested by separating the trace-sources strings with commas. By default, the trace information is written to the standard output, which, given its verbosity, may not always be ideal. In such situations, it is suggested to redirect and capture the trace information into a file, which can be achieved by running the following command:

./run-scripts/tc2/run_model.sh -m <model binary path> -d debian -b uefi \
        -- \
        --plugin <fvp-model plugin path/GenericTrace.so> \
        -C 'TRACE.GenericTrace.trace-sources="TC2.css.smmu.*"' \
        -C TRACE.GenericTrace.flush=true \
        >& /tmp/trace-fvp-tc2.txt

Copyright (c) 2022-2024, Arm Limited. All rights reserved.