Well, I work as a bartender, and here in Finland it's strictly against the law to serve alcohol to, or even allow onto the premises, a "visibly intoxicated person" (a law which almost every bar breaks at some point, intentionally or not). I think I've witnessed multiple times myself how a customer's level of intoxication reveals itself only after you have served them a drink and they've paid for it. Could it be called Schrödrinker's cat?
Not related to the Schrödinger question, but my advice for solving that problem would be to have some little robots trundling about with boxing gloves on. They can randomly harry each of your walk-ins with a sudden flurry of blows. By seeing how these people handle the unexpected robotic assault, you should be better able to assess their level of inebriation.
Oh yeah, and maybe add some voice output to these automatons, so the machines can call the potential customers gay and insult their finances (the go-to insults in any Finnish bar).
Code is both great and terrible until it compiles.
In programming there is also the Heisenbug: as soon as you try to observe the bug, it disappears or changes its behavior.
I fucking hate Heisenbergs!
Hrm, weird reproducible bug. Ok let's hook up the ol' debugger and.... Where did the bug go? Shiiiiiiit.
It's mostly because many observation processes are invasive and change the nature of the system under test.
My company is basically 30 startups in a trenchcoat. The bulk of my org's application was written 5-10 years ago by like 4 dudes, none of whom work at the company anymore. "Cowboy coding" doesn't come close. We have so much legacy code, and I alternate between "how the fuck does this work" in an impressed way and in a horrified way any time I look at it.
Site reliability engineer here, your application is both alive and dead until the monitoring server pings its health status API.
As a bicyclist, I see that we have Schrödinger's Cyclist: Too poor to be able to afford a car like "normal" people, but also a rich elitist who can afford to commute by bike.
Also, Schrödinger's Bike Lanes: A conspiracy by car-hating politicians to punish drivers, but also an amenity that only rich elitists get in their neighborhoods.
For work I use a database written in COBOL. Reports are simultaneously running and frozen until I either get the report results or sufficient time has passed that I'm certain the system has crashed.
Isn't that the halting problem?
A textbook example, yes. And Today I Learned something!
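For anyone who hasn't seen it, the classic sketch of why the halting problem is undecidable fits in a few lines of Python. Here `halts` is a hypothetical decider that supposedly answers "does this function halt?"; `paradox` builds a program that defeats it:

```python
def paradox(halts):
    """Given a supposed decider halts(f) -> bool, build a program it gets wrong."""
    def trouble():
        if halts(trouble):   # if the decider claims trouble halts...
            while True:      # ...loop forever, contradicting it
                pass
        # otherwise halt immediately, contradicting it again
    return trouble
```

No matter what `halts` answers about `trouble`, it's wrong, so no general `halts` can exist, and you're stuck staring at the frozen COBOL report like the rest of us.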
Employee salaries in HR: they are simultaneously correctly paid (often the employer's perspective), underpaid (often the employee's perspective), and overpaid (the company's and co-workers' perspective). Depending on how, and how often, you open the box, any of these views can be accurate.
As an animator, the client simultaneously knows everything about what makes a good animation, colour theory etc. and is utterly incapable of doing it themselves or providing any specific feedback beyond "I don't like this" or "make it feel more pink but don't actually make it pink."
This state persists until you introduce an invoice for all the extra work it'll take to redo all the stuff they agreed to two weeks ago, and then the waveform collapses and suddenly everything you sent them in the first place is fine.
I tried to get ChatGPT to draw me a “coffee shop that feels pink without actually using the color pink”.
It failed (used the color pink):
Then I made the same request with the color green. It failed again, but I like this “non-green but actually green” coffee shop.
I also like the ridiculous position of those two chairs.
It's so wild that you felt it was appropriate to post ai slop in response to an actual artist venting career issues. Nightmare stuff.
Haha that's like a real "reading a book over someone's shoulder" kind of setup.
In programming, single-threaded programs are pretty predictable (apart from human error). As soon as you have multithreading, that goes out the window. Modern CPUs in most devices you use have what's called a scheduler, which decides when to let different things actually use the CPU so you can do multiple things at once. It's a super important concept for what we want to do with devices, but because of it you have no guarantee about when (or if) other threads of your own code will execute. Apart from truly insane edge cases, single-threaded programs act pretty deterministically; multithreaded ones do not. It's very similar to the "it's alive and dead until you check" idea, because you just don't know. So much so that there are types we use, like Maybe, where the result is either a success or a failure and you write code for both.
Also much like the cat in a box thing, programmers don't really view it as magic, it's just sort of a side effect of the uncertainty.
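The Maybe idea mentioned above can be sketched in Python, using `Optional` in place of Maybe (the `parse_port` helper is just a made-up example):

```python
from typing import Optional

def parse_port(text: str) -> Optional[int]:
    """Return the port number, or None if the input is invalid (the 'Maybe')."""
    try:
        port = int(text)
    except ValueError:
        return None
    return port if 0 < port < 65536 else None

# The caller has to handle both outcomes explicitly.
for raw in ["8080", "not-a-port", "99999"]:
    port = parse_port(raw)
    if port is None:
        print(f"{raw!r}: failure")
    else:
        print(f"{raw!r}: success -> {port}")
```

The point is that "success" and "failure" are both first-class outcomes of the type, so the box stays unopened until the caller checks.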
Is it actually non-deterministic or just too many variables and too much sensitivity to initial conditions influencing the scheduler's decisions for the programmer to reasonably be able to predict?
It is deterministic, it is just determined elsewhere.
If thread 1 is working on a task and needs the output of thread 2, it doesn't know what the output is. If you move the tasks from thread 2 back into thread 1, then you have eliminated the point of multithreading.
Without getting philosophical, I'm going to say human behavior is non-deterministic. Because a human is using a computer you cannot reason about what may be running when. That's why I say it's non-deterministic. You can make an argument that a non real time computer not connected to the Internet could be considered fully deterministic, but it's really just a distraction. That's why I tried to make it clear I wasn't talking about "magical 'truly' random" things.
I'm not trying to get overly technical or philosophical lol. For example, PRNGs are deterministic, but it's sufficiently random that we treat it as random without worrying about whether it's "actually random." (But yes, there can be bugs where they actually are behaving too predictably and they actually aren't random. This is why I'm trying to keep the topic simple without getting lost in the details.)
When you account for not knowing what else is going on in the system, I'd say it's effectively non-deterministic. But not in a magical "truly random" sort of way; just that other things you don't personally control are going on. If this topic interests you, you may want to look into real-time computing, an area where you do have deterministic systems and can more accurately guarantee how long something will take. This is important in dangerous applications. Think nuclear reactors, where a process taking too long might mean not alerting another part of the system that something bad has happened, like the part that tells you something is too hot not responding while you keep adding fuel. Compare this to your phone: if your phone is slow, well, it's just annoying really.
At present there's only one fork on the above process. That didn't seem right to me.
I guess the best one for me may be elite university students are “just smarter” than others until I have to read their term papers.
For some reason it’s always the non-native English speakers who write well.
Just a guess, but I'd think that a smart person who is ESL will read more good books than their native language peers. When you write you imitate the style of the people you've read. The native speakers are reading comic books and the ESLs are reading the classics.
Again, just my humble opinion.
Probably a good take :)
As a set dresser / on-set dresser: any set built before a director sees it / a wide shot films it.
How it generally works is we get a bunch of stuff and... Something. This something can be as exact as a blueprint (techpack) that clearly marks where furniture is supposed to go or as vague as a one sentence long description of what the set is supposed to be. We are usually given a bunch of options for virtually everything that is used. Then we make up the set.
Then the waveform goes nuts. The hierarchy goes Set Decorator, then Production Designer, then Producer. They will randomly visit or call in, sometimes separately, and whatever plans existed immediately cease to matter. The set may completely change a random number of times, back and forth, as anyone above us in the hierarchy demands, unless a change countermands a specific demand made by someone higher up than the demander.
That is, until shoot day. Once the Director has the floor, all of that prep goes immediately out the window, and the director may change whatever they please about the set. While there are usually too many time constraints to change everything, it could mean getting rid of anything. The waveform only collapses into a single reality once the wide shot is in the bag, which means there is now a continuity that must (okay, "must" is a strong word) be obeyed.
Autonomous vehicles are at times both amazingly advanced and bedshittingly idiotic.
I've ridden ~25k miles in them for work, and I trust them more than 95% of the drivers on the road. But I've also experienced them acting in ways that are still quite far from the way humans would.
Not quite Schrödinger's cat, but in programming we have Heisenbugs named after Schrödinger's peer.
It's when you have a bug/crash that is not reproducible when debugging it. Might be that you're reading some memory that you're not supposed to, and the debugger just sets it up differently. Maybe you have a race condition that just never happens with the debugger attached.
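A toy sketch of the race-condition flavor, in Python. The unsynchronized read-modify-write is the "bug"; anything that changes the timing, like an attached debugger, tends to make it vanish (the `sleep` here deliberately widens the race window so the bug shows up reliably):

```python
import threading
import time

counter = 0
lock = threading.Lock()

def unsafe_increment():
    global counter
    local = counter      # read
    time.sleep(0.001)    # widen the race window (a debugger changes this timing)
    counter = local + 1  # write back: other threads' updates may be lost

def safe_increment():
    global counter
    with lock:           # the lock serializes the whole read-modify-write
        counter += 1

def run(worker, n=8):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("unsafe:", run(unsafe_increment))  # often less than 8: updates got lost
print("safe:  ", run(safe_increment))    # always 8
```

Attach a debugger or sprinkle in print statements and the threads stop overlapping in the same way, and the "bug" quietly stops reproducing.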
A person who has a lot of certs or a high title is both extremely smart and extremely unintelligent. You don't know which until you start talking with them about things beyond the surface level.
Until I actually show up to the EVGo charging station, it’s both online and offline. The only way to know for sure whether there’s a working charger is to drive there and plug my car in.
Projects will either be done next month or take at least a year to complete. Also, if you ask my team to calculate how long a project will take, and then ignore the estimate, the project will take infinite time because you are an insufferable moron.
Print jobs are both completed successfully and failed until someone checks the queue.
"The Computer never makes a mistake" is true and also probably responsible for people believing LLM-hallucinations uncritically
LLMs are dangerous and should never be used, but an overwhelming majority use them nonetheless.
The Heisenbug. Once you try to observe this kind of software bug with your technical means, it simply goes away.
I'm not sure I understand the question
If you're looking for a "something is two opposites at once until met" then that's anywhere any unsureness exists. Lesson plans are decent and lacking until taught to students. Visual art is pretty and dismal until witnessed by another beholder. Speeches are rousing and dogshit til spoken at the mic.
If you're looking for "something that's explained in an oversimplified way, after which a lot of people say they get it (and are wrong)," then that's basically a subset of all misconceptions.
- Monads in programming. Lots of people say they "get it" after a simplified explanation, but actually don't (judging by the many blog posts that recite a simplified explanation and still get it wrong).
- Tariffs. Lots of people learn middle school mercantilism (zero sum wealth) then guess that the economy is still import export balance, and that if we make people exporting to us more expensive then we get more of the zero sum pie. (Obviously wrong, and a basic macroeconomic lesson on consumer welfare in a system with a world price is useful)
- A lot of physics terms, tbh. "I get momentum, that's when it's hard to stop when you're fast." Often they mean something closer to inertia. "I get the Heisenberg Uncertainty Principle. It's when seeing something changes it!" That's closer to the observer effect; the uncertainty principle is an intrinsic limit on how precisely paired quantities (like position and momentum) can be pinned down at once, though measuring tiny things does also disturb them (e.g., bouncing a photon off something gives it a kick).
The contrast is either too little or too much and I won't know unless I look at the drawing again the next morning
Software both works perfectly (on the developers machine and most deployed instances) and fails dramatically (on some significant subset of deployed instances).
This makes the software both a success (since it works, and can generate revenue) and a failure (since it is unreliable, and may alienate paying customers).
The drill bit is good, until you get it back to the surface.
Schrodinger's Cat
Schrödinger's fatigue crack. With old enough steel, you don't know if there is a crack propagating until you see it.
You see, as a nuclear physicist...
We know that you're here, in this thread. Don't tell us what you're doing right now or we'll collapse the universe.
...particles get you off?
Our equivalent is nuclear radiation.
The container can both stand directly in front of me, and the system can still claim that it's waiting for loading in Malaysia.