Introduction
Async Rust is often praised for letting code run tasks concurrently and efficiently, whether on massive servers or tiny microcontrollers. However, the promise of "zero-cost abstractions" is not entirely fulfilled, especially when every byte of binary size counts. In this article, we'll explore why Async Rust seems stuck in a Minimum Viable Product (MVP) state and how the situation might improve.
The Problem of Binary Bloat
One of the biggest challenges with Async Rust is the binary bloat it causes. For example, a small bar function written as an async fn generates 360 lines of MIR (Mid-level Intermediate Representation), compared to only 23 lines for its non-async equivalent. On embedded systems, where flash and RAM are counted in kilobytes, that kind of expansion quickly becomes a problem.
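To make the comparison concrete, here is a minimal sketch of the kind of pair being measured (the function names and the tiny hand-rolled block_on are illustrative, not the code behind the 360-line figure). The async version is lowered by the compiler into a generated state-machine type implementing Future, and that lowering is where the extra MIR comes from:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The same trivial computation, written sync and async. The async fn is
// compiled into a state-machine type implementing Future.
fn add_one_sync(x: u32) -> u32 {
    x + 1
}

async fn add_one_async(x: u32) -> u32 {
    x + 1
}

// A minimal no-op block_on: just enough machinery to poll a future that
// never actually suspends. Not a real executor.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn noop_raw_waker() -> RawWaker {
        fn noop(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker {
            noop_raw_waker()
        }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: fut is a local we never move after pinning.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    assert_eq!(add_one_sync(41), 42);
    assert_eq!(block_on(add_one_async(41)), 42);
}
```

Both functions compute the same result, yet only the async one drags in a Future implementation, poll states, and (in real programs) an executor — all of which show up in the MIR and, ultimately, in the binary.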
Real-World Examples
Consider an application on a microcontroller that needs to manage multiple sensors and actuators in real time. Using Async Rust could, in theory, simplify the code, but the resulting increase in binary size might exceed the microcontroller's available flash storage.
Potential Optimizations
Work is underway to improve this situation. For instance, an open Pull Request on the Rust repository attempts to reduce the size of generated futures. The idea is to produce a leaner state machine during the MIR-level async lowering, before the code ever reaches LLVM IR.
Alternative Approaches
Another way to work around the issue today is to use third-party libraries that manage the size of futures more carefully. However, this often involves trade-offs in performance or compatibility.
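One common form of that trade-off is type erasure: instead of carrying a distinct, potentially large state-machine type per async fn, you box the future behind a Pin<Box<dyn Future>>, paying a heap allocation and dynamic dispatch for a single pointer-sized handle. A minimal sketch (the read_sensor function is hypothetical):

```rust
use std::future::Future;
use std::pin::Pin;

// A type-erased future: one vtable-dispatched type instead of a distinct
// compiler-generated state machine per async fn.
type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + 'a>>;

async fn read_sensor(id: u32) -> u32 {
    // Stand-in for real async work; a real driver would .await here.
    id * 2
}

fn main() {
    let plain = read_sensor(7); // concrete, compiler-generated state machine
    let boxed: BoxFuture<'static, u32> = Box::pin(read_sensor(7));

    // The concrete future's size grows with the state held across .await
    // points; the boxed handle stays two machine words (a fat pointer).
    println!("plain: {} bytes", std::mem::size_of_val(&plain));
    println!("boxed: {} bytes", std::mem::size_of_val(&boxed));
}
```

Note that this particular escape hatch assumes a heap allocator, which is exactly what many of the microcontrollers discussed above lack — one reason the trade-off doesn't always apply in embedded contexts.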
Conclusion
Although Async Rust remains technically in an MVP state, that doesn't mean it can't be used effectively. With appropriate optimizations and a clear understanding of the current limitations, it is possible to leverage its power while minimizing its drawbacks.