Announcing the heim project
The battery crate (which, apparently, I’ll never stop bragging about) was such a great experience: I learned a lot, got a ton of positive feedback, and even saw it being used in some cool projects already. All of that gave me an idea of what to do next: something bigger for the Rust community.
If you are familiar with the Python ecosystem, you have probably heard about the psutil package: a cross-platform library for retrieving information about system processes and system utilization (CPU, memory, disks, network and so on). It is a very popular and actively used package, and it has analogs in other languages: gopsutil for Golang, oshi for Java, you name it.
Rust, of course, is no exception here: we do have the psutil, sysinfo, sys-info and systemstat crates.
Now, despite the tremendous work that has already been done by the authors of these crates, I’m excited to announce what I’ve been working on for the past three months: the “heim” project, a library for fetching system information.
“heim” translates from Old Norse as “home”.
Get the information about your new home, little program!
What makes “heim” different?
I’ll start with the big and controversial thing, because it is very important.
It is async
Async support is becoming a first-class citizen in Rust with the recent Future trait and the upcoming async_await feature stabilization; yet the ecosystem is small and fragile for now and mostly filled with various HTTP servers such as actix, hyper or tide. Isn’t it about time to expand it?
So, why is heim async? The main reason is that it can help to provide the requested information much faster. Consider this case: you need to fetch the list of open network connections on Linux.
The commonly used approach is to read the /proc/net/* files, a widely known way to communicate with the kernel that has existed for the last few decades and goes through layers of indirection and weird (in my opinion) text formats. But here is the problem: not only does it not expose all the information possible, it also quickly becomes a bottleneck when you have thousands upon thousands of connections, which is a usual case for any server. Fortunately for Linux users, there is the sock_diag interface, which provides extended information via an AF_NETLINK socket and works orders of magnitude faster.
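To make that indirection concrete, here is a minimal, Linux-only sketch of the text-format approach: reading /proc/net/tcp with nothing but the standard library and counting the socket entries. Anything beyond counting (addresses, states) means decoding hex-encoded columns by hand, which is exactly the kind of specially crafted parser mentioned below.
use std::fs;
use std::io::Result;

fn main() -> Result<()> {
    // Each line after the header describes one IPv4 TCP socket:
    // "sl local_address rem_address st ..." in hex-encoded columns.
    let raw = fs::read_to_string("/proc/net/tcp")?;
    let sockets = raw.lines().skip(1).count(); // skip the header row

    // The whole file has to be re-read and re-parsed on every query.
    println!("Open TCP sockets (IPv4): {}", sockets);

    Ok(())
}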
The same goes for the processes list: in order to get all the information about just one running process, you are required to read roughly a couple hundred files; and each file needs at least three syscalls (open, read and close) and a specially crafted parser.
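For instance, fetching just the name and state of the current process already costs one full open/read/close cycle on a single procfs file (again a Linux-only sketch using only the standard library):
use std::fs;
use std::io::Result;

fn main() -> Result<()> {
    // One open/read/close cycle for a single group of attributes;
    // a real process listing repeats this for many files per PID.
    let stat = fs::read_to_string("/proc/self/stat")?;

    // The format is positional: pid (comm) state ppid ...
    // Note that comm may itself contain spaces, which is why
    // real parsers have to be more careful than this one.
    let mut fields = stat.split_whitespace();
    let pid = fields.next().unwrap_or("?");
    let comm = fields.next().unwrap_or("?");
    let state = fields.next().unwrap_or("?");
    println!("pid={} comm={} state={}", pid, comm, state);

    Ok(())
}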
Unfortunately, nothing similar to sock_diag exists for processes (there was a task_diag proposal, but it was not merged into the kernel upstream), so at the very least async code can help us to parallelize the data loading and do something else while waiting for the results.
These are the cases where async support shines: fetching the data in the most efficient and fastest way available.
Of course, there is a caveat: it is not implemented yet :) heim is a very young project, and the routines for network connections and processes are basically missing right now, but it stands on a good foundation and the async-first concept plays an important role here.
Also, in my opinion, it is so much easier to use async code today compared to what it was a year ago. Check this example, which uses the runtime crate:
#![feature(async_await)]

use heim::{cpu, Result};

#[runtime::main]
async fn main() -> Result<()> {
    println!("Logical cores: {:?}", cpu::logical_count().await?);
    println!("Physical cores: {:?}", cpu::physical_count().await?);

    Ok(())
}
I can’t say that the async-first approach will fit everyone, but it has its own benefits, which we can use.
Cross-platform
The first milestone for heim is to fully support the Tier 1 platforms: Linux, macOS and Windows for i686 and x86_64. It is not completed yet, but it already covers a lot; check out the docs or see the details below.
Not everything is available on all OSes, so heim copies the approach used by Rust’s libstd, where fully cross-platform things are available through public methods and platform-specific stuff is moved out into extension traits; take std::fs::Metadata as an example:
use std::fs;
use std::io::Result;
use std::os::linux::fs::MetadataExt;

fn main() -> Result<()> {
    let metadata = fs::metadata("/")?;

    // This line is guaranteed to work on all platforms
    dbg!(metadata.file_type());

    // This line will compile only for Linux targets
    // and will produce a compilation error for other targets
    dbg!(metadata.st_ino());

    Ok(())
}
This way, the API forces developers to be aware of which things can be used on any target and which should be fenced off with conditional compilation attributes:
#![feature(async_await)]

use heim::{cpu, Result};
use heim::cpu::os::linux::CpuTimeExt;

#[runtime::main]
async fn main() -> Result<()> {
    let cumulative_cpu_time = cpu::time().await?;

    // Information about time spent waiting for I/O to complete
    // is available only for Linux targets
    dbg!(cumulative_cpu_time.io_wait());

    Ok(())
}
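If the same code must still compile for other targets, the Linux-only import and call can be fenced with cfg attributes. A minimal sketch of how such fencing usually looks (same heim calls as above, with my own cfg placement):
#![feature(async_await)]

use heim::{cpu, Result};
#[cfg(target_os = "linux")]
use heim::cpu::os::linux::CpuTimeExt;

#[runtime::main]
async fn main() -> Result<()> {
    let cumulative_cpu_time = cpu::time().await?;

    // Compiled in only when targeting Linux;
    // other targets simply skip this statement.
    #[cfg(target_os = "linux")]
    dbg!(cumulative_cpu_time.io_wait());

    Ok(())
}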
Modular and idiomatic API
heim’s API is split into the smallest functions possible, and thanks to futures combinators it’s up to you to choose what exact data you want to fetch and what to do with it later. It tries to be as predictable and flexible as possible and mostly follows the Rust API guidelines.
#![feature(async_await)]

use futures::try_join;
use heim::{host, Result};

#[runtime::main]
async fn main() -> Result<()> {
    let platform_future = host::platform();
    let uptime_future = host::uptime();

    // Will drive both futures concurrently!
    let (platform, uptime) = try_join!(platform_future, uptime_future)?;

    println!("Running on: {:?}", platform);
    println!("System uptime is {:?}", uptime);

    Ok(())
}
What can it do already?
Right now heim can provide the following information:
- Logical and physical CPU core counts, frequencies, statistics and times
- Disk partitions, I/O counters and free/used space for them
- Host system name and version, uptime and logged-in users
- Memory and swap statistics
- Network interfaces and their I/O counter stats
- Running processes’ PIDs (pretty much nothing else at the moment)
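As a quick taste of the list above, here is a sketch that walks the disk partitions. I am assuming the stream-based heim::disk::partitions() entry point and pinning the stream manually; check the documentation for the exact signatures:
#![feature(async_await)]

use futures::{pin_mut, StreamExt};
use heim::{disk, Result};

#[runtime::main]
async fn main() -> Result<()> {
    // Assumed to yield one result per mounted partition.
    let partitions = disk::partitions();
    pin_mut!(partitions);

    while let Some(partition) = partitions.next().await {
        println!("{:?}", partition?);
    }

    Ok(())
}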
As an experiment, there is also detection of the virtualization system we are running in (based on systemd-detect-virt): it can detect if we are inside a Docker container! Some other virtualization systems should be detected too, but for now it is available only for Linux.
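Since the feature is experimental I won’t pin down the exact entry point; as a purely hypothetical sketch, a detection call could look roughly like this (the heim::virt::detect name and its Option return type are my assumptions, not documented API):
#![feature(async_await)]

use heim::Result;

#[runtime::main]
async fn main() -> Result<()> {
    // Hypothetical: assumes `detect()` resolves to an
    // `Option<Virtualization>`, `None` meaning bare metal.
    match heim::virt::detect().await {
        Some(virt) => println!("Virtualization detected: {:?}", virt),
        None => println!("Bare metal, probably"),
    }

    Ok(())
}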
Check out the documentation page; it contains all of the available public API, nicely split by system components.
What’s next?
Since it is a one-man show right now, development is not as fast as it could be, but it goes steadily.
Two big components are not implemented yet: there is no way to fetch the network connections or the running processes. Design decisions need to be made and blocking PRs merged before that work can start.
Another quite important thing is missing: it does not work in virtualized environments. There are various solutions to this problem: psutil suggests setting the global module attribute psutil.PROCFS_PATH, which should point to the container’s procfs mount path, while gopsutil searches for the HOST_PROC and HOST_SYS environment variables with the corresponding paths.
I’m not sure yet how heim should handle this (an ideal option would be automatic detection, if possible), so if you have any ideas, I would love to hear your thoughts!
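For illustration only, here is what the environment-variable route taken by gopsutil could look like when applied to a procfs root lookup. Everything here, the HOST_PROC handling and the procfs_root helper, is hypothetical and not heim API:
use std::env;
use std::path::PathBuf;

// Hypothetical helper: prefer an operator-provided procfs mount
// (as gopsutil does with HOST_PROC) and fall back to /proc.
fn procfs_root() -> PathBuf {
    env::var_os("HOST_PROC")
        .map(PathBuf::from)
        .unwrap_or_else(|| PathBuf::from("/proc"))
}

fn main() {
    let stat = procfs_root().join("stat");
    println!("Would read CPU stats from {}", stat.display());
}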
In addition, the current implementation covers only modern platform versions, so there is a chance that some old Linux (say, with kernel version 2.6 or 3.x) would just return errors; the same applies to macOS and Windows.
There is a lot to do, but I hope that eventually it will be the best crate available for this kind of task.
In conclusion
Despite the rough parts described above, heim can already be used with a little precaution, so if you are interested, give it a try and let me know how it goes.
As an extra, there is a proof-of-concept clone of the Prometheus node_exporter, which exposes heim data as Prometheus metrics; it is a little outdated and broken at the moment because of dependency issues, but it can still serve as an example.
By the way, I’m open to new job opportunities.