Written by David Leach, Software Architect at NXP Semiconductors and member of the Zephyr Technical Steering Committee
Last month, the Zephyr Project announced the release of Zephyr RTOS 2.1. A long list of enhancements and bug fixes contained in Zephyr 2.1 can be found in the release notes.
· Normalized APIs across all architectures.
· Expanded support for ARMv6-M architecture.
· Added support for numerous new boards and shields.
· Added numerous new drivers and sensors.
· Added new TCP stack implementation (experimental).
· Added BLE support on Vega platform (experimental).
· Memory size improvements to Bluetooth host stack.
This release is the result of the hard work and skill of more than 350 individuals engaged with the project over the last 3 months, with over 1500 PRs merged and 532 issues closed. We would like to thank all those who engaged with the project, both in front of and behind the scenes, to help improve the Zephyr Project for this release.
Sample boards that now have support
Improvement of the Zephyr Project never stops. Work continues on the new TCP stack implementation, many different activities with Bluetooth, converting GPIO drivers to the new GPIO API, and many other enhancements and bug fixes.
We invite you to try out Zephyr 2.1. You can find our Getting started Guide here. If you are interested in contributing to the Zephyr Project please see our Contributor Guide. Join the conversation or ask questions on our Slack channel or Mailing List.
By Maureen Helm, Chair of the Zephyr Project Technical Steering Committee
The Zephyr community converges every year at the Embedded Linux Conference Europe, and 2019 was no exception. This year we traveled to Lyon, France for an engaging week full of technical talks, spontaneous hallway conversations and hacking sessions, team dinners, and perhaps a nice glass of wine or two. It was a wonderful opportunity to get to know some of our newer members in person and finally put faces to familiar names and voices.
After the main conference, the Zephyr technical steering committee stayed on for two days of face-to-face meetings, including a few dial-ins from those who couldn’t make the trip. Compared to our weekly calls, the longer format F2F meeting allowed us to discuss and debate issues in greater depth, and make decisions about the technical direction of the project.
#1 – Mainline releases: Historically we have aimed for quarterly releases, but will shift to a 4-month cycle in 2020. More details, including dates, are in the Program Management wiki page.
#2 – LTS releases: We clarified that LTS releases will be maintained for two years, and LTS2 will be released approximately two years after LTS1. We did not decide on a cadence beyond LTS2.
#3 – Toolchains: We agreed that multiple members have an interest in supporting commercial toolchains and will kick off a new toolchain-focused working group.
#4 – User Experience: We brainstormed possible solutions to common problems encountered by new developers.
#5 – Roles and Responsibilities: We debated a contributor ladder towards maintainership, how to distribute merge rights, and how to fill the release manager role for future releases. This conversation has continued in subsequent process working group meetings.
This blog originally ran on the Antmicro website. For more Zephyr development tips and articles, please visit their blog.
Solving problems that require real-time calculations and precise control typically calls for using an RTOS. While we have been working with a wide variety of RTOSs for various applications (Contiki-NG for IoT, RTEMS for space applications, eCos for satellite equipment, FreeRTOS in many other fields, etc.), Antmicro’s RTOS of choice these days has been Zephyr, a Linux Foundation-driven, well-structured, vendor-independent and scalable real-time OS. We’ve ported and adapted Zephyr to many platforms, encouraged its use as a standard on RISC-V, and promoted it in less standard contexts like FPGA devices.
So if you have a single device with a real-time requirement, typically it’s not that hard to decide how to approach the problem – just use Zephyr!
The problem starts when there are multiple heterogeneous devices that have to communicate in a standardised and robust way, performing a complex operation involving a network protocol while never leaving the “real-time” world. Scenarios like this are typical in the aerospace, automotive, robotics industries, and increasingly those industries are looking to reuse technologies known from the commercial/consumer market to leverage the massive scale offered by omnipresent, commodity tech.
For your everyday use case, the easiest way to connect multiple devices is of course Ethernet, but plain old Ethernet does not list real-time capabilities in its dictionary – how then can it be used in a real-time use case?
The set of standards that define Time Sensitive Networking (TSN) is the answer to that problem. TSN leverages the physical and logical foundations of Ethernet and extends them to cover real-time use cases by defining different aspects of time-sensitive communication: clock synchronization, traffic shaping, scheduling, fault tolerance, etc.
TSN seems then like a good fit, and sure enough, open source support for TSN is widely available in Linux. In the RTOS world, however, a proper implementation of TSN, readily available and tested on real hardware platforms, did not previously exist. Well, not until Zephyr 1.13!
Initial work: towards a TSN implementation in Zephyr
As a member of the Zephyr Project, Antmicro is always excited to add new functionalities to the OS, especially in fields that open it up for adoption in new use cases. Here, we were happy to work with another Zephyr Project member, Intel, on getting gPTP support added to Zephyr. “gPTP” stands for “generic Precision Time Protocol” and is responsible for clock synchronization. When we joined the effort, it was already in progress, but far from finished. We implemented the missing state machines and fixed various bugs in the existing code.
The initial target was making Zephyr’s clocks synchronize with external Grand Masters.
Our focus was getting it to work on Microchip SAM E70 Xplained. At that time, the platform already had a Zephyr port (including the Ethernet driver), but it lacked drivers for the PTP clock.
After initial support was done and merged, we proceeded with configuring Zephyr nodes as Grand Masters, as well as ensuring operational Zephyr-to-Zephyr clock synchronization.
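For readers who want to experiment: in an application, gPTP support is switched on through Kconfig. A minimal configuration sketch follows; the option names are as we recall them from Zephyr's networking Kconfig, so verify them against the Zephyr version you are using.

```
# prj.conf fragment (sketch; verify option names against your Zephyr tree)
CONFIG_NETWORKING=y
CONFIG_NET_L2_ETHERNET=y
CONFIG_NET_GPTP=y
# Allow this node to take part in Best Master Clock selection
# as a potential Grand Master:
CONFIG_NET_GPTP_GM_CAPABLE=y
```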
Qav: an important part of TSN
PTP is only a part of Time Sensitive Networking. Another important part of TSN is queue management.
The platform of our choice (SAM E70 Xplained) has multiple hardware queues built into its MAC controller, which allowed us to use the same platform to extend Zephyr’s TSN capabilities.
Antmicro implemented support for credit-based shaper algorithms in Zephyr, which are described in the 802.1Qav standard.
The work in that area required us to design an API to manage the Qav-capable Ethernet queues. Through this API/management interface, we made it possible to set and read various parameters, like idle slope, delta bandwidth, traffic class, etc.
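To give a feel for the management interface, setting a queue's idle slope through it looks roughly like the sketch below. This is written from memory of the Zephyr Ethernet management API; the structure, field, and request names are approximate, so check include/net/ethernet_mgmt.h in your Zephyr tree for the exact definitions.

```c
#include <net/net_if.h>
#include <net/ethernet_mgmt.h>

/* Sketch: configure the idle slope of hardware queue 1 on the default
 * network interface (names approximate, see ethernet_mgmt.h). */
void set_idle_slope_example(void)
{
	struct net_if *iface = net_if_get_default();
	struct ethernet_req_params params = { 0 };

	params.qav_param.queue_id = 1;
	params.qav_param.type = ETHERNET_QAV_PARAM_TYPE_IDLE_SLOPE;
	params.qav_param.idle_slope = 10000; /* in bits per second */

	net_mgmt(NET_REQUEST_ETHERNET_SET_QAV_PARAM, iface,
		 &params, sizeof(params));
}
```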
Additionally, some status parameters were implemented. These are now shown in the regular networking shell in Zephyr for the supported network interfaces.
A network stack plays a critical role in an operating system like Zephyr. It is also constantly under very heavy development by various parties. Our work on the TSN/gPTP support was heavily influenced by all the changes in the networking subsystem. As can be expected in large development campaigns, seemingly unrelated changes would break our implementation repeatedly.
The reason for that was the lack of more sophisticated testing of the setup. Sure, there were multiple unit tests which directly tested our stack implementation, but Time Sensitive Networking can be broken by seemingly minor changes in other parts of the networking stack.
Obviously, network protocol testing is difficult. You can either use synthetic tests that easily get outdated and don’t really reflect real life scenarios or you can create complicated physical network setups connected to a CI system – which is costly, difficult to maintain and creates only a single, static configuration.
A much better, more scalable solution is to use simulation. With Antmicro’s Renode open source simulation framework you can create script-defined complex configurations, allowing you to verify virtually every scenario imaginable.
In Renode’s 1.7 release, Antmicro added support for the SAM E70 platform, along with Ethernet with gPTP capabilities. With these new features we were able to create a CI setup testing upstream Zephyr in a virtual environment.
And thanks to an integration with the Robot Framework, it’s very easy to create new test cases in Renode. That’s why, for Zephyr, we decided to create a suite of tests verifying a range of aspects of a single application.
This lines up perfectly with the introduction of Renode Cloud Environment – a new CI system introduced by Antmicro, that you’ll be able to read about more on our blog soon. Here is a sneak peek of the TSN testing setup running in RCE.
First, Renode verifies if the board sends a PTP packet, which means that the PTP stack is started properly. Next, we analyze its reception and the proper reaction from the recipient’s OS. We analyze whether the compile-time configuration of the PTP stack is properly reflected in its runtime, and the highest level properties: whether the correct Grand Master node has been selected and whether the slave nodes are properly synchronizing their clocks.
The whole testing setup can be easily recreated with upstream Renode and Zephyr. For instructions, please refer to the TSN testing tutorial.
Building a TSN system?
If you’d like to use TSN in your system, and feel that an RTOS like Zephyr is a good fit for your needs, be sure to reach out to us at email@example.com – we’d be happy to help you apply TSN on existing and new hardware, and perhaps in simulation, for real-world use cases!
Written by Ivo Clarysse, CTO at Blue Clover Devices
This blog previously ran on the Blue Clover Devices website. You can view the website here.
IOT OS LANDSCAPE
These days there’s no lack of operating systems to choose from for embedded systems; Wikipedia counts about 100 of them. The Eclipse survey still shows Linux leading the pack, with Windows, FreeRTOS and Mbed OS being widely used as well.
For devices that have the necessary resources, full-blown operating systems like Linux (Android) or Windows dominate the field, but for more constrained devices, there’s a wide range of systems being used.
The Eclipse IoT Developer Survey 2019 shows more use of actual operating systems in IoT device firmware, as opposed to bare-metal programming or building on top of a minimal kernel.
Linux is widely used for IoT applications, but requires at least a Cortex-A class processor (or equivalent), and is not the preferred choice for more limited systems, such as Cortex-M based devices.
FreeRTOS is quite popular in the embedded world and has gained even more support since Amazon took over its stewardship in 2017. However, FreeRTOS is a bare operating system: everything else, such as drivers, file systems, crypto modules, network stacks, middleware, and a bootloader, must be added from other sources.
ARM’s Mbed OS has out-of-the-box integration with ARM’s Pelion Device Management going for it, making it a great choice to learn about IoT device provisioning, connection and management through LwM2M. Being an ARM product, it understandably does not support popular non-ARM IoT platforms like ESP32 or RISC-V.
Mynewt has everything you could wish for in an operating system for resource constrained IoT devices, but the BSP support is fairly limited.
Zephyr originated from the Virtuoso DSP operating system which initially got rebranded as “Rocket” kernel, following its acquisition by Wind River Systems, but became Zephyr in 2016 when it became a Linux Foundation hosted Collaborative Project. Major sponsors and contributors of this open source collaborative effort include Intel, Nordic Semiconductor, NXP, SiFive, Synopsys and TI.
Like many other operating systems, Zephyr provides:
Secure bootloader (MCUboot)
File systems (NFFS, LittleFS, NVS)
Middleware (including the MCUmgr OTA mechanism and LwM2M client functionality)
Zephyr is mostly licensed under the Apache 2.0 license, but drivers from Nordic and NXP are licensed under the permissive BSD 3-Clause license, although some of the build tooling is GPLv2.
Amy Occhialino is the Chair of the Zephyr Project Governing Board and Director of Software Engineering at Intel. She has more than 20 years’ experience in technology and has recently shared insight into social change in open source software at several conferences. She recently gave the keynote at The Consortium for Computing Sciences in Colleges and the Society of Women Engineers. In this blog, Amy shares some of that insight from her talks.
You may know Intel only as a hardware company, and in many ways this is true. Intel’s core business is semiconductor design and manufacturing. What may be news to you is that Intel has spent close to two decades working in the open source software community, collaborating on projects that enhance Intel Architecture and advocating for the beauty, elegance, and possibilities that exist within open source software development.
Intel software engineers are key contributors to the Linux kernel, are Linux kernel maintainers, and hold many software standards and leadership positions within open source projects and communities. Open source software development is an enormous commitment and investment at Intel. In fact, we went from sponsoring 12 open source projects to 200 spanning cloud, edge, and device growth over the last decade. The Zephyr Project is a great example of this.
Leading the future will require a wide range of perspectives, backgrounds, and ideas to effectively solve the world’s toughest challenges. We, as a technology community, are bound to fail if we do not demonstrate a commitment and passion to increasing and achieving full representative diversity within the open source software industry.
Our success will be threatened in three ways:
Market Failure: Without diversity-fueled creativity, innovation is stifled. The same perspective and thoughts are generated and reinforced, preventing new solutions from emerging.
Customer Failure: We will lose customers because we don’t listen to them, engage with them, understand them, and learn from them in a full perspective of ways.
Talent Failure: We will lose top talent because individuals with diverse backgrounds feel out of place in our culture and environment.
The Zephyr Project is committed to achieving an inclusive and diverse open source environment. We encourage any and all developers interested in the RTOS landscape to join the conversation and share your interests and talents with the community. Change doesn’t happen all at once, it’s incremental, one person doing one thing every day.
Personally, I increase diversity in my own teams through my hiring practices, I have spent a decade creating support systems through my leadership of women’s groups, and I actively advocate for social change within Intel and my tech community. I am proud that Zephyr is part of this as an open source project with a high degree of diversity.
Open source software gives us the potential conditions, but we must actively engage with it, and monitor it, for it to be what we want it to be.
This blog originally ran on the Antmicro website. For more Zephyr development tips and articles, please visit their blog.
Antmicro’s open source simulation framework, Renode, was built to enable simulating real-life scenarios – which have a tendency to be complex and require hybrid approaches.
That’s why, besides other things, the Renode 1.7.1 release has introduced an integration layer for Verilator, a well known, fast and open source HDL simulator, which lets you use hardware implementations written in Verilog within a Renode simulation.
When you are working on ASIC or FPGA IP written in an HDL, forming a part of a bigger system with unknowns both in the hardware and software, many things can go wrong on multiple levels. That’s why ultimately it’s best to test it within the scope of the full system, with drivers and test software, in a real-world use case. Simulating complete platforms with CPUs and all peripherals using actual HDL simulation, however, can be too slow for effective software development (and sometimes downright impossible, e.g. when access to the entire SoC’s HDL is not available).

Renode models will give you better speed and flexibility to experiment with your architectural choices (as in the security IP development example of our partner Dover Microsystems) than HDL, but there might still be scenarios where you could quickly try to directly use complex peripherals you already have in HDL form before going on to model them in Renode. For these use cases Antmicro has enabled the option of co-simulating HDL in Renode using Verilator. Co-simulating means you’re only ‘verilating’ one part of the system, and you may in turn expect a much faster development experience than when trying to perform an HDL simulation of the whole system.
In the 1.7.1 release of Renode you will find a demo which includes a ‘verilated’ UARTLite model connected to a RISC-V platform via the AXI4-Lite bus running Zephyr.
Integration layer overview
The integration layer was implemented as a plugin for Renode and consists of two parts: C# classes which manage the Verilator simulation process, and an integration library written in C++ that allows you to turn your Verilog hardware models into a Renode ‘verilated’ peripheral.
The ‘verilated’ peripheral is compiled separately and the resulting binary is started by Renode. The interprocess communication is based on sockets.
To make your own ‘verilated’ peripheral, in the main cpp file of your verilated model you need to include C++ headers applicable to the bus you are connecting to and the type of external interfaces you want to integrate with Renode – e.g. UART’s rx/tx signals. These headers can be found in the integration library.
// uart.h and axilite.h can be found in Renode's VerilatorPlugin
// integration library (the include paths may differ in your checkout):
#include "axilite.h"
#include "uart.h"
Next, you will need to define a function that will call your model’s eval function, and provide it as a callback to the integration library struct, along with bus and peripheral signals.
When you load such a platform in Renode and run a sample application, this is the output you’ll see. Keep in mind that the UART window displays data printed by the verilated peripheral.
You can also enable signal trace dumping by setting the VERILATOR_TRACE=1 variable in your shell. The resulting trace is written into a vcd file and can be viewed in e.g. GTKWave viewer.
Renode’s powerful co-simulation capabilities
Whether you are working on a new hardware block or you want to reuse the HDL code you have, Renode’s co-simulation capabilities allow you to test your IP in a broader context than just usual hardware simulation, connecting it to entire RISC-V, ARM or other SoCs even without writing any model.
You can use Renode’s powerful tracing and logging mechanisms to observe your peripheral’s behavior when used by an operating system of your choice, in an environment of your choice – be it a full-blown Linux-capable multi-core system or a small RTOS-ready SoC, or even a mix of those options.
Want to debug your driver via GDB but your target FPGA does not have a debugger connector? Or maybe it is just too small to contain the whole SoC you’d like to run? Perhaps you’d like to run a Python script to create a nice graph on each peripheral access? Renode has got you covered with all these features available out of the box.
If this sounds interesting, you can start using Renode’s co-simulation capabilities today or let us know about your use case directly so that we can potentially help you improve your simulation-driven workflow – all you need to do is get back to us at firstname.lastname@example.org.
Written by Nicolas Pitre, Senior Software Engineer at BayLibre
This blog post originally ran on the BayLibre website last month. For more details about BayLibre, visit https://baylibre.com/.
Conventional wisdom says you should normally apply small microcontrollers to dedicated applications with constrained resources. 8-bit microcontrollers with a few kilobytes of memory are still plentiful today. 32-bit microcontrollers with a couple of dozen kilobytes of memory are also very popular. In the latter case, it is typical to rely on a small RTOS to provide basic software interfaces and services.
The Zephyr Project provides such an RTOS. Many ARM-based microcontrollers are supported, but other architectures including ARC, XTENSA, RISC-V (32-bit) and X86 (32-bit) are also supported.
Yet some people are designing products with computing needs that are simple enough to be fulfilled by a small RTOS like Zephyr, but with memory addressing needs that cannot be described by kilobytes or megabytes, but that actually require gigabytes! So it was quite a surprise when BayLibre was asked to port Zephyr to the 64-bit RISC-V architecture.
Where to start
The 64-bit port required a lot of cleanups. Initially, we were not concerned with the actual RISCV64 support at all. Zephyr supports a virtual “board” configuration, faking basic hardware on one side and interfacing with a POSIX environment on the other, which allows for compiling a Zephyr application into a standard Linux process. This has enormous benefits, such as the ability to use native Linux development tools. For example, it allows you to use gdb to look at core dumps without fiddling with a remote debugging setup or emulators such as QEMU.
Until this point, this “POSIX” architecture only created 32-bit executables. We started by only testing the generic Zephyr code in 64-bit mode. It was only a matter of flipping some compiler arguments to attempt a 64-bit build. But unsurprisingly, it failed.
The 32-bit legacy
Since its inception, the Zephyr RTOS targeted 32-bit architectures. The assumption that everything can be represented by an int32_t variable was everywhere. Code patterns like the following were ubiquitous:
Here the async pointer gets truncated on a 64-bit build. Fortunately, the compiler does flag those occurrences:
In function ‘mbox_async_free’:
warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
Therefore the actual work started with a simple task: converting all u32_t variables and parameters that may carry pointers into uintptr_t. After several days of work, the Hello_world demo application could finally be built successfully. Yay!
But attempting to execute it resulted in a segmentation fault. The investigation phase began.
While the compiler could identify bad u32_t usage when a cast to or from a pointer was involved, some other cases could be found only by manual code inspection. Still, Zephyr is a significant body of code to review and catching all issues, especially the subtle ones, couldn’t happen without some code execution tracing in gdb.
A much more complicated issue involved linked list pointers that ended up referring to non-existent list nodes for no obvious reason, and the bug only occurred after another item was removed from the list. This issue was only noticeable with a subsequent list search that followed the rogue pointer into Lalaland. And it didn’t trigger every time.
The header file for list operations starts with this:
Here we return the next pointer after masking out the bottom 2 flag bits. But 0x3U is interpreted by the compiler as an unsigned int and therefore a 32-bit value, meaning that ~0x3U is equal to 0xFFFFFFFC. Because node->next_and_flags is a u64_t, our (unsigned) 0xFFFFFFFC is promoted to 0x00000000FFFFFFFC, effectively truncating the returned pointer to its 32 bottom bits. So everything worked when the next node in the list was allocated in heap memory, which is typically below the 4GB mark, but not for nodes allocated on the stack, which is typically located towards the top of the address space on Linux.
The fix? Turning 0x3U into 0x3UL. The addition of that single character required many hours of debugging, and this is only one example. Other equally difficult bugs were also found.
The unsuspecting C library
One major change with most 64-bit targets is the width of pointers, but another issue is the change in width of long integer variables. This means that the printf() family of functions have to behave differently when the “l” conversion modifier is provided, as in “%ld”. On a 32-bit only target, all the printf() modifiers can be ignored as they all refer to a 32-bit integer (except for “%lld” but that isn’t supported by Zephyr). For 64-bit, this shortcut can no longer be used.
Alignment considerations are different too. For example, memory allocators must return pointers that are naturally aligned to 64-bit boundaries on 64-bit targets, which has implications for the actual allocator design. The memcpy() implementation can exploit larger memory words to optimize data transfer, but larger alignment is necessary. Structure unions may need adjustments to remain space-efficient in the presence of wider pointers and longs.
Test, test and test
One great thing about Zephyr is its extensive test suite. Once all the above was dealt with, it was time to find out if the test suite was happy. And of course it wasn’t. In fact, the majority of the tests failed. At least the Hello_world demo application worked at that point.
Writing good tests is difficult. The goal is to exercise code paths that ought to work, but it is even better when tests try to simulate normal failure conditions to make sure the core code returns with proper error codes. That often requires some programming trickery (read: type casting) within test code that is less portable than regular application code. This means that many tests had to be fixed to be compatible with a 64-bit build. And when core code bugs only affecting 64-bit builds were found, fixing them typically improved results in large portions of the tests all at once.
OK, but where does RV64 fit in this story?
We wrote the RV64 support at the very end of this project. In fact, it represented less than 10% of the whole development effort. Once Zephyr reached 64-bit maturity, it was quite easy to abstract register save/restore and pointer accesses in the assembly code to support RV64 within the existing RV32 code thanks to RISC-V’s highly symmetric architecture. Testing was also easy with QEMU since it can be instructed to use either an RV32 or an RV64 core with the same machine model.
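As an illustration, that abstraction boils down to a handful of assembler helper macros, roughly like the sketch below. The names are as we recall them from the port (not verbatim Zephyr code); check arch/riscv/ in the Zephyr tree for the real definitions.

```c
/* Sketch of the RV32/RV64 register-width abstraction used in the
 * assembly code (names approximate, not verbatim Zephyr code). */
#ifdef CONFIG_64BIT
#define RV_OP_LOADREG  ld /* 64-bit load  */
#define RV_OP_STOREREG sd /* 64-bit store */
#define RV_REGSIZE     8  /* bytes per register */
#else
#define RV_OP_LOADREG  lw /* 32-bit load  */
#define RV_OP_STOREREG sw /* 32-bit store */
#define RV_REGSIZE     4
#endif

/* Context save/restore then uses the same source for both flavors:
 *   RV_OP_STOREREG ra, (RV_REGSIZE * 0)(sp)
 *   RV_OP_STOREREG t0, (RV_REGSIZE * 1)(sp)
 */
```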
Taking full advantage of 64-bit RISC-V cores on Zephyr may require additional work depending on the actual target where it would be deployed. For example, Zephyr doesn’t support hardware floating point context switching or SMP with either 32-bit or 64-bit RISC-V flavors yet.
But the groundwork is now done and merged into the mainline Zephyr project repository. Our RV64 port makes Zephyr RTOS 2.0.0 a milestone release — it’s the first Zephyr version to support both 32-bit and 64-bit architectures.
Written by Ioannis Glaropoulos, Software System Architect at Nordic Semiconductor and active member of the Zephyr Technical Steering Committee
Last month, the Zephyr Project announced the release of Zephyr RTOS 2.0 and we are excited to share the details with you! Zephyr 2.0 is the first release of Zephyr RTOS after the 1.14 Long-Term Support (LTS) release in April 2019. It is also a huge step up from the 1.14 release, bringing a long list of new features, significant enhancements to existing features, as well as a large number of new HW platforms and development boards.
On the Kernel side, we enhanced the compatibility with 64-bit architectures, and significantly improved the precision of timeouts, by boosting the default tick rate for tickless kernels.
Additionally, we are excited to welcome ARM Cortex-R into the list of architectures supported in Zephyr RTOS.
A major achievement in this release is the stabilization of the Bluetooth Low Energy (BLE) split controller, which is now the default BLE controller in the Zephyr RTOS. The new BLE controller enables support for multi-vendor Bluetooth v5.0 radio hardware with a single controller code-base, thanks to a layered modular architecture, where most of the controller code is hardware agnostic. The new controller also features improved scheduling of continuous scanning and directed advertising, and increased radio time utilization. The latter significantly improves the achievable communication bandwidth – among other use-cases – in BLE Mesh networking.
In the networking area, we introduced support for SOCKS5 proxy, an Internet protocol that exchanges network packets between a client and server through a proxy server. In addition, we added support for 6LoCAN, a 6Lo adaptation layer for Controller Area Networks, and for Point-to-Point Protocol (PPP), which is used to establish a direct connection between two nodes. Finally, we added support for UpdateHub, an end-to-end solution for large scale over-the-air device updates.
A most sincere thank you to the more than 215 developers who contributed to this release. Not only did you add a wealth of new features during the merge window, you also rallied together as a community during the stabilization period across time zones, companies, architectures, and even weekends, to find and fix bugs, to make Zephyr 2.0 yet another great release! This release would not have been possible without your hard work!
The UltraScale+, a high-performance FPGA SoC designed for heterogeneous processing with 4 Cortex-A53 cores and 2 Cortex-R5 cores, is often used in Antmicro’s projects. For certain complex devices, the combined processing capabilities of the US+ FPGA SoC’s heterogeneous cores are ideal – with the R5 cores used for real-time processing, the A53s for running Linux with non-critical software, and the FPGA used for dedicated accelerators for large amounts of data, such as high-resolution video. A good example is our fast 3D vision system, X-MINE, currently being deployed in several valuable mineral mines across Europe.
For such AMP applications however, only FreeRTOS and bare metal are available as options to be run on the R5 core by default. Coming from a software-oriented and standards-driven perspective, Antmicro likes to work with the Linux-Foundation backed, vendor-neutral and scalable Zephyr RTOS, of which we are a member – and so porting Zephyr to the US+ was an obvious choice.
Why is Zephyr a good choice
Designed for all but the most resource-constrained devices, Zephyr can target a variety of real-time use cases on the US+’ Cortex-R cores.
Zephyr allows for easy handling of multiple configuration options, APIs and external components, and is well suited to structured application development. We have worked on many AMP Linux+RTOS applications on various platforms, including ones executed in TEEs, which makes us especially sensitive to mixing programming styles and code architecture, which differ immensely between Zephyr and more traditional RTOSs.
Another benefit of Zephyr is that it targets some very serious protocol and standard implementations, being e.g. the first open source RTOS to introduce TSN support – by way of Antmicro’s contribution. The rising popularity of TSN in automotive and aerospace applications, and just about everywhere else, could be a very important reason to start using Zephyr in your TSN-capable product.
The Zephyr port
Just recently, initial support for Cortex-R was introduced in Zephyr, providing basic context switching and interrupts, as well as adding a testing platform in simulation.
Antmicro’s contribution, released to GitHub today, introduces support for the first real hardware platform. Our choice was the Enclustra Mercury XU1 system-on-module, which is often used by ourselves and our customers, encapsulating the complexity of the UltraScale+ MPSoC in an easily swappable module. Antmicro has a standard devkit based on this SoM which you could use to recreate this demo, but of course it should be possible to run it with minor tweaks on any Zynq UltraScale+ MPSoC device.
It also helps that Antmicro has been in charge of developing the entire OS-level software stack for all of Enclustra’s FPGA SoC modules, based on Buildroot/OpenEmbedded and our deep cross-area HW/SW/FPGA expertise, which helped build a very easy-to-use interface for thousands of customers purchasing SoMs from this vendor. For simplicity, we will leverage this building block, called the Enclustra Build Environment, in this note, although that is of course not a strict necessity.
How to run a demo
Zephyr can be run on Cortex-R5 either from Linux running on Cortex-A53, or using a JTAG adapter. Here we will use Linux to have supervisor control (power on, load firmware, power off) over the remote processor.
Zephyr comes with a number of demos you can run on the Mercury XU1 SoM, and we’ll focus on the philosophers demo here. It implements a solution to the Dining Philosophers problem, which in computer science is considered a classic multi-thread synchronization problem.
Building the Linux environment for Cortex-A53
The setup requires the Enclustra Build Environment (EBE), a tool by Antmicro that enables fast and easy builds of Linux together with the necessary bootloaders and firmware. It provides a simple ncurses-based GUI and a command line interface to fetch and build U-Boot, Linux and a Buildroot-based root file system. At present, EBE supports 10 modules from two SoC families, Zynq-7000 and Zynq UltraScale+, including the Mercury XU1 module that we used. To get started with EBE, refer to its online documentation.
In order to use the Cortex-R5, you have to load the firmware into the processor’s Tightly Coupled Memory. For this, a few remoteproc-related additions to the original devicetree are required. The necessary devicetree parts can be found in this document (pages 15-16).
The application was built with Zephyr version 1.14.99 and Zephyr SDK version 0.10.2. The code is available on GitHub.
The philosophers demo can be built using the following bash commands:
1. Go to the zephyr repository: cd zephyrproject/zephyr
2. Set up your build environment: source zephyr-env.sh
3. Go to the location of the demo and build it:
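The build command itself is not spelled out above; here is a minimal sketch using the CMake/Ninja workflow of the Zephyr 1.14 era. The board name mercury_xu1_r5 is a placeholder; substitute the board target defined by the Cortex-R5 port:

```shell
# Build the philosophers sample out of tree with CMake and Ninja.
# NOTE: "mercury_xu1_r5" is a hypothetical board name - use the
# board target provided by the Cortex-R5 port instead.
cd samples/philosophers
mkdir -p build && cd build
cmake -GNinja -DBOARD=mercury_xu1_r5 ..
ninja
# The resulting firmware is placed in zephyr/zephyr.elf
```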
Starting the Zephyr app from Linux
The Zephyr app should be copied to /lib/firmware in the root filesystem. By default, the driver for the remote processor is compiled as a kernel module, which can be loaded using the following command:
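For instance (the module name zynqmp_r5_remoteproc is an assumption that depends on your kernel configuration; check what your kernel ships with find /lib/modules -name '*remoteproc*'):

```shell
# Copy the Zephyr ELF to the location where remoteproc looks for firmware.
cp zephyr.elf /lib/firmware/
# Load the remote-processor driver; the module name is kernel-dependent.
modprobe zynqmp_r5_remoteproc
```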
Load and start the application for Cortex-R5:
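A sketch using the kernel’s standard remoteproc sysfs interface; the instance name remoteproc0 and the firmware file name are assumptions that depend on your setup:

```shell
# Tell remoteproc which firmware to load, then start the R5 core.
echo zephyr.elf > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
# Zephyr's console output should now appear on the R5's UART.
# To stop the core again:
# echo stop > /sys/class/remoteproc/remoteproc0/state
```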
The image below presents the result of running the application.
In the near future, we plan to enable inter-processor communication between Linux running on the A53 core and Zephyr on the R5 core. Both Zephyr and Linux support OpenAMP (Open Asymmetric Multi Processing), a framework that implements a homogeneous API for asymmetric multiprocessing.
Currently, Zephyr has only one OpenAMP demo, targeting the LPC54114 SoC with its Cortex-M4 core. A demo with Zephyr running on the R5 and Linux on the A53 would be the first of its kind, so it is definitely a worthwhile endeavor.
Benefits of AMP on US+
Asymmetric multiprocessing (AMP) can be really useful for getting the best of both worlds: predictable, real-time responses where they matter, combined with the ease of use and richness of a standard Linux OS. We have built many FPGA- and SoC-based Linux devices that benefited from, e.g., running web-based control servers and GUIs on Cortex-A cores while keeping critical functionality running on another CPU core (be it Cortex-M, A or R) with an RTOS. And in terms of the programming experience, Zephyr is a good match for the Linux you’ll be running on the main application core.
If you want to develop a complex application for Xilinx’s Zynq UltraScale+ MPSoC and could use Antmicro’s HW/SW co-design capabilities, whether that means designing dedicated PCBs, creating well-structured and modern FPGA code, or integrating it all with Linux and/or Zephyr, don’t hesitate to contact us at email@example.com.
This is the 28 August 2019 newsletter tracking the final part of Zephyr v2.0 development, which was merged into the mainline tree on GitHub. The merge window is now closed and the project is working hard on getting the 2.0 release out on August 30.
Until the merge window opens again, only bug fixes, new documentation, and special cases (with TSC approval) will be merged.
This newsletter covers the following inclusive commit range:
85bc0d2f Revert “gen_app_partitions.py: make generated/app_smem_*.ld files deterministic”, merged 27 June 2019
4524035b release: Zephyr 2.0.0-rc1, merged 11 August 2019
Zephyr is now a 64-bit operating system (in the LP64 sense — the existing x86_64 support uses the x32 ABI, which e.g. has 32 bit pointers)
Support for 64-bit RISCV
Support for 64-bit “native POSIX”
Support for Bluetooth version 5.1
The new “split” link layer, which has been in development for over a year, is now the default. Certain low-resource boards, such as those using nRF51 SoCs, still use the legacy link layer.
Network management socket support (AF_NET_MGMT and related socket options)
Userspace support for various routines
Timeout support for connect()
sendmsg() support for both TCP and UDP
Configurable TLS credential loading for LWM2M
LWM2M IPSO objects for buzzers, on/off switches, push buttons, location, accelerometers
LWM2M connection monitoring object support
Configurable MQTT keepalive
Improvements and changes to router handling
PPP (point-to-point protocol) support
6LoCAN support and canbus Ethernet translators
DHCPv6 support for OpenThread
NEW AND SHINY
A shell framework for testing ADCs
A new framework was added for managing AT-based modems; the ublox-sara-r4 driver was converted to it
Support for the littlefs file system
CPU clock frequency can now be obtained from device tree; this affected device tree bindings and drivers widely in the tree
Faster context switch via per-thread page tables on x86
User mode may now induce kernel oops on x86
Initial support for SMP on ARC
Initial support for a trusted execution environment (TEE) on ARC
Unaligned access on ARC
gmtime (and an inverse, timegm_r) support
Support for version 6 of the LittleVGL graphics library
A log “frontend” API, logging/log_frontend.h
An NVS backend for the settings subsystem
i.MXRT usdhc disk access support
Massive include cleanup and re-work: various files in zephyr/include/ were moved. For now, out of tree users can make use of CONFIG_COMPAT_INCLUDES to keep using the old include locations, but this will go away “eventually”.
Various improvements to the built-in printf format string handler
A much faster 10 kHz default “tick” rate is set when the clock driver is tickless, greatly increasing the precision at which future events can be scheduled, among other things.
Much simpler page table generation on x86
The build system now looks for an “app.overlay” device tree overlay file in a Zephyr application’s root directory
The build system toolchain abstractions now cover the exact tools used for objdump, objcopy, and more
Various fixes and enhancements to the sanitycheck script used in CI
New char2hex(), hex2char(), bin2hex(), hex2bin() helpers
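As an aside on the faster default tick rate mentioned above: the rate is governed by a Kconfig option, so an application can override it in its prj.conf. A minimal sketch (the option name is from Zephyr’s Kconfig; the value shown is illustrative):

```
# prj.conf - set the system tick rate to 10 kHz
CONFIG_SYS_CLOCK_TICKS_PER_SEC=10000
```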
Patches by area (1303 patches total):
Continuous Integration: 18
Device Tree: 117
Firmware Update: 3
1ef7a858 arch: arm: allocate a wide stack guard for FP-capable threads
6a9b3f5d arch: arm: allocate a wide priv stack guard for FP-capable threads
360ad9e2 arch: arm: mpu: program a wide MPU stack guard for FP capable threads