boards/x86/acrn: Rework board documentation
ACRN build and configuration is non-trivially complicated, and so far integration documentation has been mostly missing; users have had to get by copying from existing integration efforts with minor changes, leading to repeated mistakes and persistent confusion. This is an attempt to document the process from first principles, with an eye toward informing integrators (not me!) who might come by later to better automate things. Some of the content is going to look remedial to someone already familiar with e.g. ACRN configuration or EFI boot.

This simply replaces the pre-existing docs, which were for earlier versions of ACRN where Zephyr was launched from the service OS instead of the now-standard pre-launch VM mode.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
parent c6f3887e84
commit 3da652f4cd
.. _acrn:

Building and Running Zephyr with ACRN
#####################################

Zephyr is capable of running as a guest under the x86 ACRN
hypervisor (see https://projectacrn.org/). The process for getting
this to work is somewhat involved, however.

Build your Zephyr App
*********************

First, build the Zephyr application you want to run in ACRN as you
normally would, selecting an appropriate board:

.. code-block:: console

   west build -b acrn_ehl_crb samples/hello_world

Note the kconfig output in ``build/zephyr/.config``; you will need to
reference it to configure ACRN later.

The Zephyr build artifact you will need is ``build/zephyr/zephyr.bin``,
which is a raw memory image. Unlike other x86 targets, you do not
want to use ``zephyr.elf``!
Configure and build ACRN
************************

First you need the source code; clone it from:

.. code-block:: console

   git clone https://github.com/projectacrn/acrn-hypervisor

Like Zephyr, ACRN favors build-time configuration management instead
of runtime probing or control. Unlike Zephyr, ACRN uses single large
configuration files instead of small, easily merged configuration
elements like kconfig defconfig files or devicetree includes. You
have to edit a big XML file to match your Zephyr configuration.
Choose an ACRN host config that matches your hardware ("ehl-crb-b" in
this case), then find the relevant file in
``misc/config_tools/data/<platform>/hybrid.xml``.
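
If you are not sure which platform names are available, you can simply
list the configuration data directory in your ACRN checkout (a quick
sanity check; the set of platforms and exact file layout can differ
between ACRN releases):

.. code-block:: console

   $ ls misc/config_tools/data/
   $ ls misc/config_tools/data/ehl-crb-b/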
First, find the list of ``<vm>`` declarations. Each has an ``id=``
attribute. For testing Zephyr, you will want to make sure that the
Zephyr image is ID zero. This allows you to launch ACRN with just one
VM image and avoids needlessly copying large Linux blobs into the
boot filesystem. Under currently tested configurations, Zephyr will
always have a "vm_type" tag of "SAFETY_VM".
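
As a rough sketch (the exact schema varies between ACRN releases, so
treat the tag set here as illustrative rather than authoritative), the
entry you are looking for has this shape:

.. code-block:: xml

   <vm id="0">
       <vm_type>SAFETY_VM</vm_type>
       <!-- <cpu_affinity> and <os_config>, edited below, live in here -->
   </vm>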
Configure Zephyr Memory Layout
==============================

Next, locate the load address of the Zephyr image and its entry point
address. These have to be configured manually in ACRN. Traditionally
Zephyr distributes itself as an ELF image from which these addresses
can be extracted automatically, but ACRN does not know how to do
that; it only knows how to load a single contiguous region of data
into memory and jump to a specific address.

Find the ``<vm id="0">...<os_config>`` tag, which will look something
like this:

.. code-block:: xml

   <os_config>
       <name>Zephyr</name>
       <kern_type>KERNEL_ZEPHYR</kern_type>
       <kern_mod>Zephyr_RawImage</kern_mod>
       <ramdisk_mod/>
       <bootargs></bootargs>
       <kern_load_addr>0x1000</kern_load_addr>
       <kern_entry_addr>0x1000</kern_entry_addr>
   </os_config>

The ``kern_load_addr`` tag must match the Zephyr LOCORE_BASE symbol
found in ``include/arch/x86/memory.ld``. This is currently 0x1000 and
matches the default ACRN config.
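
You can confirm the value in your Zephyr tree with a quick grep, run
from the Zephyr source directory (this assumes the symbol is still
defined in that file):

.. code-block:: console

   $ grep -n LOCORE_BASE include/arch/x86/memory.ld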
The ``kern_entry_addr`` tag must match the entry point in the built
``zephyr.elf`` file. You can find this with binutils, for example:

.. code-block:: console

   $ objdump -f build/zephyr/zephyr.elf

   build/zephyr/zephyr.elf:     file format elf64-x86-64
   architecture: i386:x86-64, flags 0x00000012:
   EXEC_P, HAS_SYMS
   start address 0x0000000000001000
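
If you prefer readelf, the same value is reported as the ELF header's
entry point (output trimmed to the relevant line):

.. code-block:: console

   $ readelf -h build/zephyr/zephyr.elf | grep Entry
     Entry point address:               0x1000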
By default this entry address is the same, at 0x1000. This has not
always been true of all configurations, however, and will likely
change in the future.

Configure Zephyr CPUs
=====================

Now you need to configure the CPU environment ACRN presents to the
guest. By default Zephyr builds in SMP mode, but ACRN's default
configuration gives it only one CPU. Find the value of
``CONFIG_MP_NUM_CPUS`` in the Zephyr ``.config`` file and give the
guest that many CPUs in the ``<cpu_affinity>`` tag. For example:

.. code-block:: xml

   <cpu_affinity>
       <pcpu_id>0</pcpu_id>
       <pcpu_id>1</pcpu_id>
   </cpu_affinity>
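
To check how many CPUs your build expects, grep the generated
configuration from the earlier build step (the value shown here is
only an example; use whatever your build reports):

.. code-block:: console

   $ grep CONFIG_MP_NUM_CPUS build/zephyr/.config
   CONFIG_MP_NUM_CPUS=2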
Note that these indexes are physical CPUs on the host. When
configuring multiple guests, you probably don't want to overlap these
assignments with other guests. But for testing Zephyr, simply using
CPUs 0 and 1 works fine. (Note that ehl-crb-b has four physical CPUs,
so configuring all of 0-3 will work fine too, but that leaves no
space for other guests to have dedicated CPUs.)

Build ACRN
==========

Once configuration is complete, ACRN builds fairly cleanly:

.. code-block:: console

   $ make -j BOARD=ehl-crb-b SCENARIO=hybrid

The only build artifact you need is the ACRN multiboot image in
``build/hypervisor/acrn.bin``.

Assemble EFI Boot Media
***********************

ACRN will boot on the hardware via the GNU GRUB bootloader, which is
itself launched from the EFI firmware. These need to be configured
correctly.

Locate GRUB
===========

First, you will need a GRUB EFI binary that corresponds to your
hardware. In many cases, a simple upstream build from source or a
copy from a friendly Linux distribution will work. In some cases it
will not, however, and GRUB will need to be specially patched for
specific hardware. Contact your hardware support team (pause for
laughter) for clear instructions on how to build a working GRUB. In
practice you may just need to ask around and copy a binary from the
last test that worked for someone.
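
If you want to try a generic image first, one common approach is to
generate a standalone EFI binary with ``grub-mkimage``, embedding the
partition, filesystem and multiboot modules this setup relies on.
This is only a sketch, under the assumption that an unpatched
upstream GRUB works on your board; the module list may need
adjusting:

.. code-block:: console

   $ grub-mkimage -O x86_64-efi -o bootx64.efi -p /efi/boot \
       part_msdos part_gpt fat multiboot2 normal configfile echo ls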
Create EFI Boot Filesystem
==========================

Now attach your boot media (e.g. a USB stick on /dev/sdb; your
hardware may differ!) to a Linux system and create an EFI boot
partition (type code 0xEF) large enough to store your boot artifacts.
This command feeds the relevant commands to fdisk directly, but you
can type them yourself if you like:

.. code-block:: console

   # for i in n p 1 "" "" t ef w; do echo $i; done | fdisk /dev/sdb
   <lots of fdisk output>

Now create a FAT filesystem in the new partition and mount it:

.. code-block:: console

   # mkfs.vfat -n ACRN_ZEPHYR /dev/sdb1
   # mkdir -p /mnt/acrn
   # mount /dev/sdb1 /mnt/acrn

Copy Images and Configure GRUB
==============================

ACRN does not have access to a runtime filesystem of its own. It
receives its guest VMs (i.e. zephyr.bin) as GRUB "multiboot" modules.
This means that we must rely on GRUB's filesystem driver. The three
files (GRUB, ACRN and Zephyr) all need to be copied into the
``/efi/boot`` directory of the boot media. Note that GRUB must be
named ``bootx64.efi`` for the firmware to recognize it as the
bootloader:

.. code-block:: console

   # mkdir -p /mnt/acrn/efi/boot
   # cp $PATH_TO_GRUB_BINARY /mnt/acrn/efi/boot/bootx64.efi
   # cp $ZEPHYR_BASE/build/zephyr/zephyr.bin /mnt/acrn/efi/boot/
   # cp $PATH_TO_ACRN/build/hypervisor/acrn.bin /mnt/acrn/efi/boot/

At boot, GRUB will load an ``efi/boot/grub.cfg`` file for its runtime
configuration instructions (a feature, ironically, that both ACRN and
Zephyr lack!). This needs to load acrn.bin as the boot target and
pass it the zephyr.bin file as its first module (because Zephyr was
configured as ``<vm id="0">`` above). This minimal configuration will
work fine for all but the weirdest hardware (i.e. "hd0" is virtually
always the boot filesystem from which GRUB loaded); there is no need
to fiddle with GRUB plugins, menus, or timeouts:

.. code-block:: console

   # cat > /mnt/acrn/efi/boot/grub.cfg <<EOF
   set root='hd0,msdos1'
   multiboot2 /efi/boot/acrn.bin
   module2 /efi/boot/zephyr.bin Zephyr_RawImage
   boot
   EOF
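
Before unmounting, it is worth a quick check that all three binaries
and the GRUB configuration ended up under ``efi/boot``:

.. code-block:: console

   # ls /mnt/acrn/efi/boot
   acrn.bin  bootx64.efi  grub.cfg  zephyr.bin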
Now the filesystem should be complete:

.. code-block:: console

   # umount /dev/sdb1
   # sync

Boot ACRN
*********

If all goes well, booting your EFI media on the hardware will result
in a running ACRN, a running Zephyr (because by default Zephyr is
configured as a "prelaunched" VM), and a working ACRN command line on
the console.

You can see the Zephyr (vm 0) console output with the "vm_console"
command:

.. code-block:: console

   ACRN:\>vm_console 0

   ----- Entering VM 0 Shell -----
   *** Booting Zephyr OS build v2.6.0-rc1-324-g1a03783861ad ***
   Hello World! acrn