I dunno. I always believe the underdog with a lot to lose.
Because we need a safeguard for people? As a society we should alleviate suffering. We shouldn't surrender to dehumanizing or ignoring them. Although, as your comment suggests, maybe we shouldn't give a shit either way.
I think they fail to see that it doesn't matter. As long as it reduces cost, capitalism doesn't really give a shit what happens to them. In a normal society you'd expect government regulation to step in to ease the transition as these jobs become fully AI-automated, along with some safeguards. But I just don't see that happening in America.
You can, but you're told not to because you have to focus on the music and the sensation of the electric tingles on your tongue. So no, I think it would be hard to read during it. It requires full mindfulness to get the best gain. Additionally, you can't listen to it before bed or in bed, since it might put you to sleep. It's very soothing. But basically, it's an hour of meditation daily.
Currently using it in Washington state. One of the first. Here's the biggest part: you need to spend one hour every day doing this. It's basically meditation, because you can't have anything interrupt you or do something else. You can split it into two thirty-minute sessions, but fuck, as a single father it's been impossible to find the time.
Wait so mars has a smell?
Socially progressive. I think most conservatives want a socially regressive AI.
Probably moral guidelines that are left-leaning. I've found that ChatGPT-4 has very flexible morals, whereas Claude+ does not. And Claude+ seems more likely to be a consumer-facing AI compared to Bing, which hardlines even the smallest nuance. While I disagree with OP, I do think Bing is overly proactive in shutting down conversations and doesn't understand nuance or context.
I... I'm not sure? I feel like it does introduce a bit of bias. The anonymity helps add some blindness when upvoting comments. For example, I doubt a girl with her name intact would post openly about how to go about having an abortion in a red state.
This is a really good point. Has there been any talk about how verification of users might work for when that does happen?
It makes me wonder if we can create AIs that behave close enough to humans by adding some additional neurological baseline noise to the LLM training. Then throw them into simulations to see whether social science experiments hold up. I'd be curious to see how true to life something like that would be as well.
A while ago, some researchers designed a game where ChatGPT was assigned to characters that were told to act and live like humans. It was interesting to watch. https://www.iflscience.com/stanford-scientists-put-chatgpt-into-video-game-characters-and-its-incredible-68434
Um. Being hired again? She can be completely shut out of any earnings she needs to survive, whereas Linus has a fuck-you amount of money. So...