The Rust Programming Language


A place for all things related to the Rust programming language—an open-source systems language that emphasizes performance, reliability, and...

26
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/seino_chan on 2025-04-24 03:29:42+00:00.

27
 
 
The original was posted on /r/rust by /u/WeeklyRustUser on 2025-04-23 21:09:02+00:00.


Currently the Write trait uses std::io::Error as its error type. This means that you have to handle errors that simply can't happen (e.g. writing to a Vec should never fail). Is there a reason that there is no associated type Error for Write? I'm imagining something like this.

28
 
 
The original was posted on /r/rust by /u/disserman on 2025-04-23 15:39:45+00:00.


Good day everyone,

Let me present RoboPLC crate version 0.6.

RoboPLC is a framework for real-time application development on Linux, suitable both for industrial automation and robotic firmware. RoboPLC includes tools for thread management, I/O, debugging controls, data flows, computer vision and much more.

The update highlights:

  • New "hmi" module that can automatically start/stop a Wayland compositor or X server and run a GUI program. Optimized to work with our "ehmi" crate for building egui-based human-machine interfaces.
  • New io::keyboard module for handling keyboard events, in particular special keys (such as the SLEEP button) that most GUI frameworks cannot capture.
  • The "robo" CLI can now work both remotely and locally, directly on the target computer/board. We found this pretty useful in the initial development stages.
  • New RoboPLC crates: heartbeat-watchdog for pulse liveness monitoring (both on Linux and bare-metal), and RPDO, an ultra-lightweight transport-agnostic data exchange protocol inspired by Modbus, OPC-UA and TwinCAT/ADS.

A recent success story: with the RoboPLC framework (plus certain STM32 Embassy-powered watchdogs) we have successfully developed a BMS (Battery Management System) that already manages about 1 MWh.

29
 
 
The original was posted on /r/rust by /u/MrJohz on 2025-04-23 20:34:40+00:00.

30
 
 
The original was posted on /r/rust by /u/ksyiros on 2025-04-23 19:52:10+00:00.


We're releasing Burn 0.17.0 today, a massive update that improves the Deep Learning Framework in every aspect! Enhanced hardware support, new acceleration features, faster kernels, and better compilers - all to improve performance and reliability.

Broader Support

Mac users will be happy, as we’ve created a custom Metal compiler for our WGPU backend to leverage tensor core instructions, speeding up matrix multiplication by up to 3x. This builds on our revamped C++ compiler, where we introduced dialects for CUDA, Metal and HIP (ROCm for AMD) and fixed some memory errors that destabilized training and inference. This is all part of our CubeCL backend in Burn, where all kernels are written purely in Rust.

A lot of effort has been put into improving our main compute-bound operations, namely matrix multiplication and convolution. Matrix multiplication has been heavily refactored, with an improved double-buffering algorithm that improves performance across various matrix shapes. We also added support for NVIDIA's Tensor Memory Accelerator (TMA) on their latest GPU lineup, fully integrated into our matrix multiplication system. Since that system is very flexible, it is also used within our convolution implementations, which likewise saw impressive speedups since the last version of Burn.

All of those optimizations are available for all of our backends built on top of CubeCL. Here's a summary of all the platforms and precisions supported:

| Type   | CUDA | ROCm | Metal | Wgpu | Vulkan |
|--------|------|------|-------|------|--------|
| f16    | ✅   | ✅   | ✅    | ❌   | ✅     |
| bf16   | ✅   | ✅   | ❌    | ❌   | ❌     |
| flex32 | ✅   | ✅   | ✅    | ✅   | ✅     |
| tf32   | ✅   | ❌   | ❌    | ❌   | ❌     |
| f32    | ✅   | ✅   | ✅    | ✅   | ✅     |
| f64    | ✅   | ✅   | ✅    | ❌   | ❌     |

Fusion

In addition, we spent a lot of time optimizing our tensor operation fusion compiler in Burn, to fuse memory-bound operations to compute-bound kernels. This release increases the number of fusable memory-bound operations, but more importantly handles mixed vectorization factors, broadcasting, indexing operations and more. Here's a table of all memory-bound operations that can be fused:

| Version      | Tensor Operations |
|--------------|-------------------|
| Since v0.16  | Add, Sub, Mul, Div, Powf, Abs, Exp, Log, Log1p, Cos, Sin, Tanh, Erf, Recip, Assign, Equal, Lower, Greater, LowerEqual, GreaterEqual, ConditionalAssign |
| New in v0.17 | Gather, Select, Reshape, SwapDims |

Right now we have three classes of fusion optimizations:

  • Matrix-multiplication
  • Reduction kernels (Sum, Mean, Prod, Max, Min, ArgMax, ArgMin)
  • No-op, where we can fuse a series of memory-bound operations together not tied to a compute-bound kernel

| Fusion Class          | Fuse-on-read | Fuse-on-write |
|-----------------------|--------------|---------------|
| Matrix Multiplication | ❌           | ✅            |
| Reduction             | ✅           | ✅            |
| No-Op                 | ✅           | ✅            |

We plan to make more compute-bound kernels fusable, including convolutions, and add even more comprehensive broadcasting support, such as fusing a series of broadcasted reductions into a single kernel.

Benchmarks

Benchmarks speak for themselves. Here are benchmark results for standard models using f32 precision with the CUDA backend, measured on an NVIDIA GeForce RTX 3070 Laptop GPU. Those speedups are expected to behave similarly across all of our backends mentioned above.

| Version | Benchmark | Median time | Fusion speedup | Version improvement |
|---------|-----------|-------------|----------------|---------------------|
| 0.17.0 | ResNet-50 inference (fused) | 6.318ms   | 27.37% | 4.43x |
| 0.17.0 | ResNet-50 inference         | 8.047ms   | -      | 3.48x |
| 0.16.1 | ResNet-50 inference (fused) | 27.969ms  | 3.58%  | 1x (baseline) |
| 0.16.1 | ResNet-50 inference         | 28.970ms  | -      | 0.97x |
| 0.17.0 | RoBERTa inference (fused)   | 19.192ms  | 20.28% | 1.26x |
| 0.17.0 | RoBERTa inference           | 23.085ms  | -      | 1.05x |
| 0.16.1 | RoBERTa inference (fused)   | 24.184ms  | 13.10% | 1x (baseline) |
| 0.16.1 | RoBERTa inference           | 27.351ms  | -      | 0.88x |
| 0.17.0 | RoBERTa training (fused)    | 89.280ms  | 27.18% | 4.86x |
| 0.17.0 | RoBERTa training            | 113.545ms | -      | 3.82x |
| 0.16.1 | RoBERTa training (fused)    | 433.695ms | 3.67%  | 1x (baseline) |
| 0.16.1 | RoBERTa training            | 449.594ms | -      | 0.96x |

Another advantage of carrying optimizations across runtimes: our optimized WGPU memory management appears to have a big impact on Metal. For long-running training, our Metal backend executes 4 to 5 times faster than LibTorch. If you're on Apple Silicon, try training a transformer model with LibTorch on the GPU and then with our Metal backend.

Full Release Notes:

31
 
 
The original was posted on /r/rust by /u/hsjajaiakwbeheysghaa on 2025-04-23 18:53:49+00:00.


I've removed my previous post. This one contains a non-paywall link. Apologies for the previous one.

32
 
 
The original was posted on /r/rust by /u/0xApurn on 2025-04-23 18:38:18+00:00.


Systems programming feels like a huge field. I come from a web dev background and don't have a good sense of which systems-programming specializations I'd want to get into. Can you share what you're working on and what excites you most about it?

It doesn't have to be systems programming, though; anything in Rust is awesome. Trying to learn as much as I can from the community!

33
 
 
The original was posted on /r/rust by /u/nvntexe on 2025-04-23 17:13:07+00:00.


With CPUs pushing 128 cores and WebAssembly threads maturing, I’m mapping concurrency patterns:

Actor (Erlang, Akka, Elixir): resilience + hot code swap

CSP (Go, Rust's async mpsc): channel-first thinking

Fork-join / task graph (Cilk, OpenMP): data-parallel crunching

Which scales best and stays most readable on 2025+ machines? Tell war stories, especially debugging stories: deadlocks vs. message storms.

34
 
 
The original was posted on /r/rust by /u/Kobzol on 2025-04-23 13:14:29+00:00.


Wrote down some thoughts about how to interpret and use visibility modifiers in Rust.
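As a quick refresher on the topic (a minimal sketch; the module and function names are made up for illustration):

```rust
mod outer {
    pub mod inner {
        // Visible anywhere inside this crate, but not to downstream crates.
        pub(crate) fn crate_visible() -> u32 { 1 }
        // Visible only in the parent module (`outer`).
        pub(super) fn parent_visible() -> u32 { 2 }
        // Visible wherever the enclosing module path is visible.
        pub fn public() -> u32 { 3 }
    }

    pub fn call() -> u32 {
        // `pub(super)` items are reachable here, in inner's parent.
        inner::parent_visible() + inner::public()
    }
}

fn main() {
    // The crate root can see pub(crate) items anywhere in the crate.
    assert_eq!(outer::inner::crate_visible(), 1);
    assert_eq!(outer::call(), 5);
    assert_eq!(outer::inner::public(), 3);
}
```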

35
 
 
The original was posted on /r/rust by /u/dpytaylo on 2025-04-23 12:49:37+00:00.


Hello! I’ve had this idea stuck in my head that I can't shake off. Can Rust eventually stop supporting older editions?

For example, starting with the 2030 edition and the corresponding rustc version, rustc could drop support for the 2015 edition. This would allow us to clean up old code paths and improve the maintainability of the compiler, which gets more complex over time. It could also open the door to removing deprecated items from the standard library - especially if the editions where they were used are no longer supported. We could even introduce a forbid lint on the deprecated items to ease the transition.

This approach aligns well with Rust’s “Stability Without Stagnation” philosophy and could improve the developer experience both for core contributors and end users.

Of course, I understand the importance of giving deprecated items enough time (4 editions or more) before removing them, to avoid a painful transition like Python 2 to Python 3.

The main downside that I found is related to security: if a vulnerability is found in code using an unsupported edition, the only option would be to upgrade to a supported one (e.g., from 2015 to 2018 in the earlier example).

Other downsides concern edition interoperability: code stuck on an unsupported edition could not use crates written in the newest editions, and the newest editions could not depend on unsupported ones at all. An unsupported edition would only interoperate with newer editions up to the most recent rustc version that still supports it.

P.S. For things like std::i32::MAX, the rules could be relaxed, since there are already direct, fully equivalent replacements.

EDIT: Also, I feel like I’ve seen somewhere that the std crate might be separated from rustc in the future and could have its own versioning model that allows for breaking changes. So maybe deprecating things via edition boundaries wouldn’t make as much sense.

36
 
 
The original was posted on /r/rust by /u/slint-ui on 2025-04-23 11:24:28+00:00.

37
 
 
The original was posted on /r/rust by /u/Internal-Site-2247 on 2025-04-23 11:22:03+00:00.


I worked in C/C++ for many years, but for the past few months I've been focused on Rust, especially on writing Windows kernel drivers in Rust, since I spent years at an endpoint security company.

I'm now preparing to use Rust for more work.

A few days ago I pushed two open-source repos to GitHub: one is about detecting and intercepting malicious thread creation both in user land and on the kernel side, and the other is a generic wrapper for synchronization primitives in kernel mode:

[1]

[2]

I'd greatly appreciate any reviews & comments.

38
 
 
The original was posted on /r/rust by /u/Shnatsel on 2025-04-23 11:20:00+00:00.

39
 
 
The original was posted on /r/rust by /u/EtherealPlatitude on 2025-04-23 05:57:51+00:00.


Using egui, my app on Linux always launches with around 200MB of RAM usage, and if I wait a while (5 to 8 hours) it drops to 45MB. I don't do anything allocation-wise in those hours, and from that point onward it stays around 45 to 60MB. Why does the first launch allocate so much when it's not needed? I'm using tikv-jemallocator.

[target.'cfg(not(target_os = "windows"))'.dependencies]
tikv-jemallocator = { version = "0.6.0", features = [
    "unprefixed_malloc_on_supported_platforms",
    "background_threads",
] }

And if I remove it and use the system's default allocator, it's even worse: 200 to 400MB.

For reference, this does not happen on Windows at all.

I use btop to check the memory usage, but profilers show the same thing. This is exclusive to Linux. Is the kernel overallocating when there is free memory, to use it as cache? That's one potential explanation.

linuxatemyram

40
 
 
The original was posted on /r/rust by /u/ChiliPepperHott on 2025-04-22 20:35:21+00:00.

41
 
 
The original was posted on /r/rust by /u/AdmiralQuokka on 2025-04-22 21:05:47+00:00.


todo!() is often used to mark an unfinished function. It's convenient, because it silences the compiler about mismatched return types. However, that doesn't work if the return type is an "impl Trait". Why not, though? There wouldn't be any harm in pretending the never type implements all traits, right? You can't call non-existent methods on values that will never exist, right?

Is there a fundamental reason why this cannot be or is it just a current compiler limitation?

Example:

```
) -> impl Iterator {
└─ `!` is not an iterator: the trait `std::iter::Iterator` is not implemented for `!`
```

42
 
 
The original was posted on /r/rust by /u/Skardyyy on 2025-04-22 17:52:27+00:00.


Hey, I just built a tool called mcat — kind of like cat, but for everything.

It:

  • Converts files like PDFs, DOCX, CSVs, ZIPs, even folders into Markdown or HTML

  • Renders Markdown/HTML as images (with auto-styling + theming)

  • Displays images/videos inline in your terminal (Kitty, iTerm2, Sixel)

  • Can even fetch from URLs and show them directly

Example stuff:

```sh
mcat resume.pdf                    # turn a PDF into Markdown
mcat notes.md -i                   # render Markdown to an image
mcat pic.png -i                    # show image inline
mcat file.docx -o image > img.png  # save doc as an image
```

It uses Chromium and FFmpeg internally (auto-installs if needed), and it's built in Rust.

Install with cargo install mcat or check out the repo:

👉

Let me know what you think or break it for me 🙂

43
 
 
The original was posted on /r/rust by /u/-Y0- on 2025-04-22 16:52:22+00:00.

44
 
 
The original was posted on /r/rust by /u/karanonweb on 2025-04-22 06:53:39+00:00.

45
 
 
The original was posted on /r/rust by /u/seanmonstar on 2025-04-22 14:14:35+00:00.


hyper is an HTTP library for Rust. This is a proposal to solve an issue when trying to forward cancellation of a body once backpressure has been applied. Feedback welcome, preferably on the linked PR!

46
 
 
The original was posted on /r/rust by /u/kibwen on 2025-04-22 13:07:33+00:00.

47
 
 
The original was posted on /r/rust by /u/DeepShift_ on 2025-04-22 12:11:38+00:00.

48
 
 
The original was posted on /r/rust by /u/greyblake on 2025-04-22 07:24:51+00:00.

49
 
 
The original was posted on /r/rust by /u/ByronBates on 2025-04-22 07:00:24+00:00.

50
 
 
The original was posted on /r/rust by /u/thisdavej on 2025-04-22 04:04:49+00:00.

