this post was submitted on 26 Oct 2023
182 points (97.9% liked)
Physics
While they may be as secure, I would not call that the same level of random. I’ll agree they are equal in almost every use case, but truly random is still “more random” in comparison.
Though I’ll concede that if it can’t be proven to be truly random, it’s not of much use.
How do you measure the amount of "true randomness"? CSPRNGs can use very little entropy to generate large amounts of random data. Mathematically speaking, there isn't any difference between that and what you call "true randomness" - if there were, they wouldn't be CSPRNGs.
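To make the "little entropy in, lots of output" point concrete, here's a rough sketch in Python. SHA-256 in counter mode is standing in for a real CSPRNG construction (something like HMAC-DRBG); the seed value and function name are made up for illustration:

```python
import hashlib

def stretch(seed: bytes, n_bytes: int) -> bytes:
    """Expand a small seed into a long pseudorandom stream
    by hashing (seed || counter) in counter mode."""
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:n_bytes])

# 16 bytes of entropy expand into a megabyte of output that
# looks statistically random, yet is fully determined by the seed.
stream = stretch(b"sixteen_byte_key", 1_000_000)
```

Without the seed, predicting the stream means breaking the hash; with the seed, the whole megabyte is reproducible, which is exactly the property being argued about in this thread.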
Truly random would be something that is impossible to reproduce. While you are correct that we can approximate randomness, the final calculation can always be replicated if the initial inputs are known. Just because something is exceedingly difficult to replicate doesn't mean it is truly random.
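That replay property is easy to demonstrate. Sketch below uses Python's built-in Mersenne Twister for brevity (which is not a CSPRNG), but the same holds for any deterministic generator once its full seed is known:

```python
import random

# Two generators given the same "initial inputs" (the seed)
# replay exactly the same "random" sequence.
a = random.Random(20231026)
b = random.Random(20231026)

seq_a = [a.randint(0, 9) for _ in range(10)]
seq_b = [b.randint(0, 9) for _ in range(10)]
assert seq_a == seq_b  # identical: the process is deterministic
```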
Think of it like cleaning your pool. You have a vacuum, chemicals, the system circulates, maybe a skimmer or a net. You can get the pool to the point that it is acceptable to swim in, but you’re never actually swimming in a clean pool. In a similar manner, current random number generators get you to a point that you are (usually) fine assuming the number is random, but it never really is.
I know what you're trying to get at, but my point is this: Imagine you have two streams of data, one from a CSPRNG, and one from what you call "true randomness". How can you tell which one is which (as long as you're staying under the CSPRNG's limit from your initial entropy)?
If you can't tell me a way, there is no functional difference between these two options. So what advantage would true randomness hold?
I said this in another comment, but while I agree that there is virtually no functional difference, and in the vast majority of cases truly random and functionally random are equivalent, that doesn’t mean that something which is functionally random is truly random.
But it is truly random for all intents and purposes, since the input is truly random. Just because the process contains deterministic steps doesn't mean the input entropy isn't true entropy anymore.
And a pool is clean for all intents and purposes. There is still a distinction though. The fact that it is deterministic inherently makes it less random than true randomness.
The input is not deterministic.
If you take the original values used to determine the final “random number” and run them through the same sequence of calculations, you will always reach the same value.
We rely on the fact that the inputs are so numerous and/or difficult to replicate to deem the final value “random”. But that doesn’t mean that the value cannot be reached by a second party given perfect knowledge of the original state of all inputs.
True randomness, on the other hand, is impossible to calculate even with that perfect knowledge, because we aren’t relying on the state of inputs running through a calculation.
But that's my point: just because you apply deterministic steps to a truly random input doesn't make the output not truly random. You use real entropy as your starting point, which is literally exactly what you call "true randomness". This means the output has the same level of "true randomness" as your "truly random" input, because you mathematically don't lose entropy along the way.
To put it more simply: you're arguing from a philosophical perspective, not a mathematical one.
The input is not truly random though. If it was, we could just use that input, with no other steps, and have a truly random output. You’re confusing an unknown state with randomness.
No, it actually and literally is truly random. You'd need to know everything about the hardware itself and the environment around it in incredible detail (incl. the temperature of every individual small patch of material, air flow and the state of air in and around the case) to reliably predict the initial entropy for a given modern system, since tiny changes in e.g. temperature will completely change the input.
It's only a small bit of entropy, but enough to kick-start the RNG in a way that can reliably create high-quality entropy.
So you’re literally arguing that knowable inputs, however unlikely knowing those inputs might be, run through known deterministic calculations, result in a guaranteed unknowable output?
No, I'm arguing that the inputs aren't knowable to the required degree in the general case, which defines their entropy, and that entropy isn't mathematically lost, it's improved through deterministic calculations.
The same was thought about previous iterations of random number generators. The first I am aware of used an extremely precise timestamp and ran the calculations on that, on the assumption that no one could possibly know the exact timestamp used. That was obviously untrue, which can be verified by the fact that such systems have been broken before.
Just because you can’t conceive of a way to know the values, does not make them unknowable. It just makes it improbable to happen.
And again, I’m not saying the random numbers we can produce now are currently breakable. But that doesn’t mean that a decade from now, or even a century, they will remain unbroken.
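The timestamp weakness described above can be sketched like this. The victim/attacker roles and all the numbers are hypothetical, and the stdlib Mersenne Twister stands in for the old generator; the point is only that a small, guessable seed space plus a deterministic calculation lets an attacker replay every candidate:

```python
import random

def token_from_timestamp(ts: int) -> int:
    # victim: seeds the generator with a "secret" timestamp
    return random.Random(ts).getrandbits(64)

secret_ts = 1698305000            # unknown to the attacker...
token = token_from_timestamp(secret_ts)

# ...but the attacker knows it falls in some window, and the
# calculation is deterministic, so they try every candidate.
recovered = next(
    ts for ts in range(1698300000, 1698310000)
    if token_from_timestamp(ts) == token
)
assert recovered == secret_ts
```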
Say I'm restarting my phone, and it uses details like temperature fluctuations in CPU sensors as entropy. How would you know all the required values? Since I'm holding the phone in my hand, the temperature of my hand (and consequently body temperature) are relevant, not to mention the air around my phone. How would you find those values at the exact time the sensors are read?
You honestly think those values aren’t possible to estimate within a range and then brute force?
That’s like asking “say I hit a button at a very specific time, how would you find that exact time?”
Yes, because 1) you'd need to know them with incredible precision, and 2) you can't brute force, because you only have one chance. Otherwise you can also brute force anything that's "truly random" as you put it.
That's the thing, it's not like that. It's more like "say I hit a button at a very specific time and roll hundreds of dice, how would you find that exact time and all the results of those dice".
Apart from the fact that there are absolutely no “dice rolls” involved. They are known deterministic calculations, because in order to add “dice rolls” you would need randomness. You can’t have a non-deterministic calculation involved, because that isn’t how computers work.
You’re essentially saying “take a knowable input, add true randomness, output true randomness using nothing but knowable inputs!”
And you absolutely can brute force it. Why would you have a single chance? Because of arbitrary rules?
As for true randomness, you’re getting a range of “extreme low to extreme high” which isn’t currently brute forcible.
My dude, the "dice rolls" are all the small pieces of entropy for which you'd have to know the environment to an unknowable degree. They are conceptually absolutely involved, and if you can't understand that, we're done here, because you're handwaving over the difficult parts in a way that doesn't make sense.
Those “dice rolls” are not random though. The problem is you keep talking about these inputs as if they themselves are random. They aren’t. And just because you can’t fathom a way to know their values now, doesn’t mean they are unknowable. I point back to the timestamp issue, where at the time it was considered “enough” but was disproven as such years down the line.
Agreed.