The Rust Programming Language


A place for all things related to the Rust programming language—an open-source systems language that emphasizes performance, reliability, and...

1
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/oconnor663 on 2025-04-25 21:54:53+00:00.

2
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/venturepulse on 2025-04-25 21:32:41+00:00.


While working on my web research, I ended up writing a small function to make newline characters consistent: either Unix (\n) or DOS (\r\n) style.

I noticed existing crates like newline-converter don't use SIMD. Mine does, through memchr, so I figured I'd publish it as its own crate: newline_normalizer.
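For anyone curious, the core idea is roughly this (a simplified sketch of the memchr-based scan, not the exact code in the crate):

    use memchr::memchr;

    /// Convert DOS ("\r\n") and stray "\r" line endings to Unix ("\n").
    /// memchr does the SIMD-accelerated scanning for '\r' bytes.
    fn to_unix(input: &str) -> String {
        let bytes = input.as_bytes();
        let mut out = Vec::with_capacity(bytes.len());
        let mut start = 0;
        while let Some(pos) = memchr(b'\r', &bytes[start..]) {
            let pos = start + pos;
            out.extend_from_slice(&bytes[start..pos]); // copy the '\r'-free chunk
            out.push(b'\n');
            // Skip "\r\n" as a pair, a lone "\r" as a single byte.
            start = if bytes.get(pos + 1) == Some(&b'\n') { pos + 2 } else { pos + 1 };
        }
        out.extend_from_slice(&bytes[start..]);
        String::from_utf8(out).expect("only ASCII line endings were touched")
    }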

Rust has been super helpful for me thanks to the amazing community and tools out there, and I thought it was time to start giving back a bit.

This crate is just a small piece, but it will eventually fit into a bigger text normalization toolbox I'm putting together, aimed primarily at data scientists working in natural language processing and web text research.

3
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/KlausWalz on 2025-04-25 11:21:33+00:00.


Hello! Rust is the first language I've used to work on a high-performance back-end application. We are currently hitting a stack overflow on a remote machine, and one idea I had was to inspect the stack during integration test execution to figure out which struct is "too big" (we have no recursion and no infinite loops, since the program has never failed anywhere other than that specific Red Hat machine).

However, I have never been successful at debugging my program, and I'm close to giving up on debuggers for good. I tried LLDB with RustRover, with VS Code, and in the terminal; nothing works, the breakpoints always get skipped. Almost every tutorial on this topic debugs a very simple hello-world app (which I can debug too!) but never a huge monorepo of 15 nested projects like mine.

Currently I am working with VS Code + LLDB, and the problem is that wherever I set my breakpoints, the program never stops; the test executes as if I did nothing. Can you please help me, or at least point me to a guide on how to correctly set up a debugger for a huge project? For info, this is the task in tasks.json that I use to run my test:


    {
        "type": "lldb",
        "request": "launch",
        "name": "Debug test_integration",
        "cargo": {
            "args": [
                "test",
                "--no-run",
                "--lib",
                "--package=my_client"
            ],
            "filter": {
                "name": "my_client",
                "kind": "lib"
            }
        },
        "args": [
            "memory_adt::tests::my_test",
            "--exact",
            "--nocapture",
            "--test-threads=1"
        ],
        "env": {
            "RUST_BACKTRACE": "1"
        },
        "cwd": "${workspaceFolder}"
    },       
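For the "which struct is too big" angle, the only debugger-free check I can think of is asserting struct sizes with std::mem::size_of in a test, roughly like this (the type name is just a placeholder for whatever you suspect):

    #[cfg(test)]
    mod size_checks {
        // `BigConfig` is a placeholder; substitute the structs you suspect.
        use crate::BigConfig;

        #[test]
        fn struct_sizes_are_reasonable() {
            // size_of reports the stack footprint of the value itself;
            // heap data behind Box/Vec/String is not counted.
            println!("BigConfig: {} bytes", std::mem::size_of::<BigConfig>());
            assert!(std::mem::size_of::<BigConfig>() < 64 * 1024);
        }
    }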

4
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/iamkeyur on 2025-04-25 15:14:55+00:00.

5
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/frstyyy on 2025-04-25 09:48:02+00:00.


So, I had an extra tablet lying around; it's not really performant enough to do much, so I wanted to use it as a media visualizer/controller for my PC.

I looked for apps or anything that would let me do what I wanted and didn't find any (okay, I didn't really research extensively, and I thought it would be a cool project idea, sorry for the clickbait ig), so I built a server in Rust that broadcasts the current media details on my PC over the local network using Socket.IO, and exposed a client web app on my local network as well. I made it a CLI tool so users can bring their own frontend if they want.

Currently, it only works on Windows, btw. Rust newbie here, so I'm open to suggestions.

Repo: Media Controller

6
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/rikonaka on 2025-04-25 03:40:53+00:00.


First of all, I would like to thank the developers of libpnet. Without your efforts, these two tools would not exist.

Secondly, I implemented the pcapture library myself in Rust, instead of just wrapping libpcap.

xxpdump repo link. pcapture repo link.

In short, xxpdump solves the following problems.

  • The filter implementation in tcpdump is not very powerful.
  • tcpdump does not support remote backup of traffic.

It is undeniable that libpcap is indeed a very powerful library, but its Rust wrapper, pcap, seems a bit unsatisfactory.

In short, pcapture solves the following problems.

The first problem is that when capturing traffic with pcap, I could not get real data-link-layer data (it returns a fake link-layer header). I tried running the executable as root, but I still got a fake link-layer header (this was actually an important reason for starting this project).

Secondly, the pcap library does not support filters, which is understandable, but it means that to filter packets we have to implement those functions ourselves (which is very unpleasant to use).

The third is that you need to install additional libraries (libpcap & libpcap-dev) to use the pcap crate.

These two tools are the product of my 20% spare time; suggestions are welcome.
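For reference, raw link-layer capture with libpnet looks roughly like this (a minimal sketch, not code from either repo):

    use pnet::datalink::{self, Channel};

    fn main() {
        // Pick the first usable non-loopback interface.
        let interface = datalink::interfaces()
            .into_iter()
            .find(|i| i.is_up() && !i.is_loopback())
            .expect("no usable network interface");

        // Open a raw channel on the data link layer (usually needs root).
        let (_tx, mut rx) = match datalink::channel(&interface, Default::default()) {
            Ok(Channel::Ethernet(tx, rx)) => (tx, rx),
            _ => panic!("unsupported channel type"),
        };

        loop {
            match rx.next() {
                Ok(frame) => println!("captured {} bytes", frame.len()),
                Err(e) => eprintln!("capture error: {e}"),
            }
        }
    }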

7
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/Whole-Assignment6240 on 2025-04-25 06:18:46+00:00.

8
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/LordMoMA007 on 2025-04-25 01:27:40+00:00.


I'm curious: can writing an idiomatic fibonacci_compile_time function in Rust actually be that easy? I don't see how I could write code like that in the foreseeable future. How do you improve your Rust skills as an intermediate Rust dev?

    // Computing at runtime (like most languages would)
    fn fibonacci_runtime(n: u32) -> u64 {
        if n <= 1 {
            return n as u64;
        }

        let mut a = 0;
        let mut b = 1;
        for _ in 2..=n {
            let temp = a + b;
            a = b;
            b = temp;
        }
        b
    }

    // Computing at compile time
    const fn fibonacci_compile_time(n: u32) -> u64 {
        match n {
            0 => 0,
            1 => 1,
            n => {
                let mut a = 0;
                let mut b = 1;
                let mut i = 2;
                while i <= n {
                    let temp = a + b;
                    a = b;
                    b = temp;
                    i += 1;
                }
                b
            }
        }
    }
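For what it's worth, the compile-time evaluation only actually happens when the const fn is used in a const context; a quick sketch:

    // Evaluated by the compiler: the constant and the array length below are
    // both computed from fibonacci_compile_time at compile time.
    const FIB_10: u64 = fibonacci_compile_time(10);
    const LEN: usize = fibonacci_compile_time(10) as usize;

    fn main() {
        let buf = [0u8; LEN]; // array length must be known at compile time
        println!("fib(10) = {FIB_10}, buffer length = {}", buf.len());
    }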
9
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/GeroSchorsch on 2025-04-24 17:33:14+00:00.


I was interested in RISC-V and decided to write this basic emulator to get a better feel for the architecture and to learn something about CPU emulation along the way. It doesn't support any peripherals and just implements the instructions.
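The core of it is the usual fetch/decode/execute loop; decoding an RV32I ADDI, for example, boils down to slicing bit fields like this (a generic sketch following the RISC-V encoding, not the exact code in the repo):

    /// Decode and execute a single RV32I ADDI instruction (opcode 0x13, funct3 0).
    /// `regs` is the integer register file; x0 stays hard-wired to zero.
    fn exec_addi(inst: u32, regs: &mut [u32; 32]) {
        let opcode = inst & 0x7f;
        let rd = ((inst >> 7) & 0x1f) as usize;
        let funct3 = (inst >> 12) & 0x07;
        let rs1 = ((inst >> 15) & 0x1f) as usize;
        // I-type immediate: bits 31:20, sign-extended.
        let imm = (inst as i32) >> 20;

        if opcode == 0x13 && funct3 == 0 && rd != 0 {
            regs[rd] = regs[rs1].wrapping_add(imm as u32);
        }
    }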

I've been writing Rust for a while now and feel like I've plateaued a little, which is why I would appreciate some feedback and new perspectives on how to improve things or how you would write them.

This is the repo: ruscv

10
Bevy 0.16 (bevyengine.org)
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/_cart on 2025-04-24 20:08:51+00:00.

11
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/godzie44 on 2025-04-24 15:51:44+00:00.


BugStalker (BS) is a modern debugger for Linux x86-64, written in Rust for Rust programs.

Ten months after the last major release, I'm excited to announce BugStalker v0.3.0, packed with new features, improvements, and fixes!

Highlights:

  • Async Rust support – debug async code with new commands:
    • async backtrace – inspect async task backtraces
    • async task – view task details
    • async stepover / async stepout – better control over async execution
  • Enhanced variable inspection:
    • argd / vard – print variables and arguments using the Debug trait
  • New call command – execute functions directly in the debugged program
  • trigger command – fine-grained control over breakpoints
  • New project website – better docs and resources

…and much more!

📜 Full Changelog:

📚 Documentation & Demos:

What’s Next?

Plans for future releases include DAP (Debug Adapter Protocol) integration for VSCode and other editors.

💡 Feedback & Contributions Welcome!

If you have ideas, bug reports, or want to contribute, feel free to reach out!

12
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/lets_get_rusty on 2025-04-24 14:02:44+00:00.

13
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/emilern on 2025-04-24 13:57:06+00:00.


Rerun is an easy-to-use database and visualization toolbox for multimodal and temporal data. It's written in Rust, using wgpu and egui. Try it live at .

14
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/rkstgr on 2025-04-24 12:03:34+00:00.

15
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/yu-chen-tw on 2025-04-24 06:41:35+00:00.


The syntax just looks like Rust, keeps same pros to Rust, but simpler.

It’s still in the early stage, inspired by many modern languages including: Rust, Go, Zig, Pony, Gleam, Austral, many more...

A lot of features are either missing or currently being worked on, but the design looks pretty cool and promising so far.

Haven’t tried it yet, just thought it might be interesting to discuss here.

How do you thought about it?

Edit: I'm not the project author/maintainer, just found this nice repo and share with you guys.

16
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/seino_chan on 2025-04-24 03:29:42+00:00.

17
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/WeeklyRustUser on 2025-04-23 21:09:02+00:00.


Currently the Write trait uses std::io::Error as its error type. This means you have to handle errors that simply can't happen (e.g. writing to a Vec should never fail). Is there a reason there is no associated type Error for Write? I'm imagining something like this:
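Roughly the following (just a sketch to illustrate the idea, not std's actual API or a concrete proposal):

    use std::convert::Infallible;

    // A Write trait with an associated error type instead of a fixed std::io::Error.
    trait Write {
        type Error;
        fn write(&mut self, buf: &[u8]) -> Result<usize, Self::Error>;
    }

    // Writing to a Vec<u8> cannot fail, so its error type can be uninhabited.
    impl Write for Vec<u8> {
        type Error = Infallible;
        fn write(&mut self, buf: &[u8]) -> Result<usize, Self::Error> {
            self.extend_from_slice(buf);
            Ok(buf.len())
        }
    }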

18
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/disserman on 2025-04-23 15:39:45+00:00.


Good day everyone,

Let me present RoboPLC crate version 0.6.

RoboPLC is a framework for developing real-time applications on Linux, suitable both for industrial automation and robotic firmware. RoboPLC includes tools for thread management, I/O, debugging controls, data flows, computer vision and much more.

The update highlights:

  • New "hmi" module which can automatically start/stop a Wayland compositor or X server and run a GUI program. It is optimized to work with our "ehmi" crate to create egui-based human-machine interfaces.
  • The io::keyboard module allows handling keyboard events, particularly special keys that most GUI frameworks cannot handle (the SLEEP button and similar).
  • The "robo" CLI can now work both remotely and locally, directly on the target computer/board. We found this pretty useful for the initial development stages.
  • New RoboPLC crates: heartbeat-watchdog for pulse liveness monitoring (both for Linux and bare metal), and RPDO, an ultra-lightweight transport-agnostic data exchange protocol inspired by Modbus, OPC-UA and TwinCAT/ADS.

A recent success story: with the RoboPLC framework (plus certain STM32 Embassy-powered watchdogs) we have successfully developed a BMS (Battery Management System) that already manages about 1 MWh.

19
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/MrJohz on 2025-04-23 20:34:40+00:00.

20
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/ksyiros on 2025-04-23 19:52:10+00:00.


We're releasing Burn 0.17.0 today, a massive update that improves the Deep Learning Framework in every aspect! Enhanced hardware support, new acceleration features, faster kernels, and better compilers - all to improve performance and reliability.

Broader Support

Mac users will be happy, as we've created a custom Metal compiler for our WGPU backend to leverage tensor core instructions, speeding up matrix multiplication by up to 3x. This builds on our revamped cpp compiler, where we introduced dialects for CUDA, Metal and HIP (ROCm for AMD) and fixed some memory errors that destabilized training and inference. This is all part of our CubeCL backend in Burn, where all kernels are written purely in Rust.

A lot of effort has gone into improving our main compute-bound operations, namely matrix multiplication and convolution. Matrix multiplication has been heavily refactored, with an improved double-buffering algorithm that improves performance across various matrix shapes. We also added support for NVIDIA's Tensor Memory Accelerator (TMA) on their latest GPU lineup, all integrated within our matrix multiplication system. Since it is very flexible, it is also used within our convolution implementations, which also saw impressive speedups since the last version of Burn.

All of those optimizations are available for all of our backends built on top of CubeCL. Here's a summary of all the platforms and precisions supported:

| Type   | CUDA | ROCm | Metal | Wgpu | Vulkan |
|--------|------|------|-------|------|--------|
| f16    | ✅   | ✅   | ✅    | ❌   | ✅     |
| bf16   | ✅   | ✅   | ❌    | ❌   | ❌     |
| flex32 | ✅   | ✅   | ✅    | ✅   | ✅     |
| tf32   | ✅   | ❌   | ❌    | ❌   | ❌     |
| f32    | ✅   | ✅   | ✅    | ✅   | ✅     |
| f64    | ✅   | ✅   | ✅    | ❌   | ❌     |

Fusion

In addition, we spent a lot of time optimizing our tensor-operation fusion compiler in Burn, which fuses memory-bound operations into compute-bound kernels. This release increases the number of fusable memory-bound operations and, more importantly, handles mixed vectorization factors, broadcasting, indexing operations and more. Here's a table of all memory-bound operations that can be fused:

| Version | Tensor Operations |
|---------|-------------------|
| Since v0.16 | Add, Sub, Mul, Div, Powf, Abs, Exp, Log, Log1p, Cos, Sin, Tanh, Erf, Recip, Assign, Equal, Lower, Greater, LowerEqual, GreaterEqual, ConditionalAssign |
| New in v0.17 | Gather, Select, Reshape, SwapDims |
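To make "fusing memory-bound operations" concrete outside of Burn's own API, here is what a fused element-wise pass amounts to, written as plain Rust over slices (purely illustrative, not Burn internals):

    // Unfused: each operation walks memory once and allocates an intermediate.
    fn unfused(x: &[f32]) -> Vec<f32> {
        let a: Vec<f32> = x.iter().map(|v| v + 1.0).collect(); // Add
        let b: Vec<f32> = a.iter().map(|v| v * 2.0).collect(); // Mul
        b.iter().map(|v| v.tanh()).collect()                   // Tanh
    }

    // Fused: a single pass with no intermediates - the kind of kernel a fusion
    // compiler generates for a chain of memory-bound operations.
    fn fused(x: &[f32]) -> Vec<f32> {
        x.iter().map(|v| ((v + 1.0) * 2.0).tanh()).collect()
    }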

Right now we have three classes of fusion optimizations:

  • Matrix-multiplication
  • Reduction kernels (Sum, Mean, Prod, Max, Min, ArgMax, ArgMin)
  • No-op, where we fuse a series of memory-bound operations that are not tied to any compute-bound kernel

| Fusion Class | Fuse-on-read | Fuse-on-write |
|--------------|--------------|---------------|
| Matrix Multiplication | ❌ | ✅ |
| Reduction | ✅ | ✅ |
| No-Op | ✅ | ✅ |

We plan to make more compute-bound kernels fusable, including convolutions, and add even more comprehensive broadcasting support, such as fusing a series of broadcasted reductions into a single kernel.

Benchmarks

The benchmarks speak for themselves. Here are results for standard models using f32 precision with the CUDA backend, measured on an NVIDIA GeForce RTX 3070 Laptop GPU. These speedups are expected to carry over similarly to all of the backends mentioned above.

| Version | Benchmark | Median time | Fusion speedup | Version improvement |
|---------|-----------|-------------|----------------|---------------------|
| 0.17.0 | ResNet-50 inference (fused) | 6.318ms | 27.37% | 4.43x |
| 0.17.0 | ResNet-50 inference | 8.047ms | - | 3.48x |
| 0.16.1 | ResNet-50 inference (fused) | 27.969ms | 3.58% | 1x (baseline) |
| 0.16.1 | ResNet-50 inference | 28.970ms | - | 0.97x |
| 0.17.0 | RoBERTa inference (fused) | 19.192ms | 20.28% | 1.26x |
| 0.17.0 | RoBERTa inference | 23.085ms | - | 1.05x |
| 0.16.1 | RoBERTa inference (fused) | 24.184ms | 13.10% | 1x (baseline) |
| 0.16.1 | RoBERTa inference | 27.351ms | - | 0.88x |
| 0.17.0 | RoBERTa training (fused) | 89.280ms | 27.18% | 4.86x |
| 0.17.0 | RoBERTa training | 113.545ms | - | 3.82x |
| 0.16.1 | RoBERTa training (fused) | 433.695ms | 3.67% | 1x (baseline) |
| 0.16.1 | RoBERTa training | 449.594ms | - | 0.96x |

Another advantage of carrying optimizations across runtimes: our optimized WGPU memory management seems to have a big impact on Metal. For long-running training, our Metal backend executes 4 to 5 times faster than LibTorch. If you're on Apple Silicon, try training a transformer model with LibTorch GPU and then with our Metal backend.

Full Release Notes:

21
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/hsjajaiakwbeheysghaa on 2025-04-23 18:53:49+00:00.


I've removed my previous post. This one contains a non-paywall link. Apologies for the previous one.

22
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/0xApurn on 2025-04-23 18:38:18+00:00.


I feel like systems programming is kind of a huge field. I come from a web dev background and don't have many ideas about which systems programming specialization I want to get into. Can you share what you're working on and what excites you most about it?

It doesn't need to be systems programming, though; anything in Rust is awesome. Trying to learn as much as I can from the community!

23
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/nvntexe on 2025-04-23 17:13:07+00:00.


With CPUs pushing 128 cores and WebAssembly threads maturing, I’m mapping concurrency patterns:

  • Actor (Erlang, Akka, Elixir): resilience + hot code swap
  • CSP (Go, Rust's async mpsc): channel-first thinking
  • Fork-join / task graph (Cilk, OpenMP): data-parallel crunching

Which scales best and stays most readable on 2025+ machines? Tell war stories, especially debugging stories: deadlocks vs. message storms.
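For the CSP entry, the channel-first style in Rust looks roughly like this with std::sync::mpsc (a minimal sketch; swap in tokio's mpsc for async code):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel();

        // Producer thread: communicates by sending messages, not by sharing state.
        let producer = thread::spawn(move || {
            for i in 0..5 {
                tx.send(i).expect("receiver dropped");
            }
        });

        // Consumer: owns the receiving end; the loop ends once all senders drop.
        for msg in rx {
            println!("got {msg}");
        }
        producer.join().unwrap();
    }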

24
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/Kobzol on 2025-04-23 13:14:29+00:00.


Wrote down some thoughts about how to interpret and use visibility modifiers in Rust.
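For readers who haven't used them much, the modifiers in question look like this (a quick generic illustration, not an excerpt from the post):

    mod outer {
        pub mod inner {
            pub struct Public;            // visible wherever `inner` is reachable
            pub(crate) struct CrateWide;  // visible anywhere in this crate
            pub(super) struct ParentOnly; // visible only inside `outer`
            struct Private;               // visible only inside `inner`
        }
    }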

25
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/rust by /u/dpytaylo on 2025-04-23 12:49:37+00:00.


Hello! I’ve had this idea stuck in my head that I can't shake off. Can Rust eventually stop supporting older editions?

For example, starting with the 2030 edition and the corresponding rustc version, rustc could drop support for the 2015 edition. This would allow us to clean up old code paths and improve the maintainability of the compiler, which gets more complex over time. It could also open the door to removing deprecated items from the standard library - especially if the editions where they were used are no longer supported. We could even introduce a forbid lint on the deprecated items to ease the transition.

This approach aligns well with Rust’s “Stability Without Stagnation” philosophy and could improve the developer experience both for core contributors and end users.

Of course, I understand the importance of giving deprecated items enough time (4 editions or more) before removing them, to avoid a painful transition like Python 2 to Python 3.

The main downside I found is related to security: if a vulnerability is found in code using an unsupported edition, the only option would be to upgrade to a supported one (e.g., from 2015 to 2018 in the earlier example).

Other downsides involve cross-edition compatibility: code on an unsupported edition could not use crates written in the newest editions, and the newest editions would not interoperate with unsupported ones at all; an unsupported edition would only work with newer editions up to the most recent rustc version that still supports it.

P.S. For things like std::i32::MAX, the rules could be relaxed, since there are already direct, fully equivalent replacements.
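For example:

    fn main() {
        let old = std::i32::MAX; // deprecated module-level constant path
        let new = i32::MAX;      // the direct, fully equivalent replacement
        assert_eq!(old, new);
    }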

EDIT: Also, I feel like I’ve seen somewhere that the std crate might be separated from rustc in the future and could have its own versioning model that allows for breaking changes. So maybe deprecating things via edition boundaries wouldn’t make as much sense.
