Artificial Intelligence


Reddit's home for Artificial Intelligence (AI).

551
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/aluode on 2024-02-24 04:14:46.

552
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/Civil_Collection7267 on 2024-02-24 07:28:46.

553
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/TheMblabla on 2024-02-23 19:58:09.

554
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/Stupid_hardcorer on 2024-02-23 09:59:22.


Stability AI emphasized several highlights of this version.

The foremost is text rendering capability.

On its official website, Stability AI showed three consecutive images containing text; not only is the text clear, but there are also no spelling errors.

Stability AI's CEO Mostaque also showcased images with text on X (Twitter):

Another highlight is multi-topic generation.

You can paint a picture based on a prompt containing multiple elements.

The third highlight is high image quality.

Also, the texture of generated comics and sketches has improved over previous versions:

Although Stable Diffusion 3.0 was initially showcased as a text-to-image AI generation technology, it will become the foundation for broader applications.

Over the past few months, Stability AI has also been developing 3D image generation and video synthesis capabilities.


555
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/Xtianus21 on 2024-02-23 01:44:25.

556
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/brainhack3r on 2024-02-21 20:38:39.


The Gemini release was really interesting in that they sort of buried the lede by not mentioning the 99% accuracy of the context window.

OpenAI's 128k context window falls down pretty quickly; it's really only 32k-64k if you care about your context actually being used.

Ideally you would just fit all your data into the 10M-token context window, but that's going to run about $5 per request, as I understand it.

That's going to get expensive quickly for a lot of applications.

The question is how long this will be the case. If RAG is only about cost savings, I can see its use starting to fade over the next 1-2 years, with most people just wanting to push everything into the context window.
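
For a rough sanity check on that figure, the arithmetic is just tokens times price per token. A minimal sketch, where the per-token rate is an assumed number for illustration rather than any provider's published price:

```python
# Back-of-the-envelope cost of filling a very large context window.
# The price below is a hypothetical figure, not a quoted rate.
PRICE_PER_MILLION_INPUT_TOKENS = 0.50  # USD, assumed

def prompt_cost(tokens: int) -> float:
    """Cost in USD of sending `tokens` input tokens in one request."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_INPUT_TOKENS

print(f"one 10M-token prompt: ${prompt_cost(10_000_000):.2f}")           # ~$5
print(f"1,000 such requests:  ${1_000 * prompt_cost(10_000_000):,.2f}")  # $5,000
```

At that assumed rate a single call lands near the ~$5 mentioned above, and repeating it across thousands of requests is where the expense compounds.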

557
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/thisisinsider on 2024-02-23 00:38:20.

558
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/Earthboom on 2024-02-22 10:33:33.


Narrow AI needs data to scout for patterns, which it then regurgitates for the next person who comes along and asks it a question.

AI developers have gathered as many free texts as they could, legal or otherwise, and they've also siphoned off tons of data from public forums such as Reddit and Twitter for God knows how long, until those sites started to close the doors.

This formed a good base of historical knowledge up to ~2022, give or take.

Now let's fast forward a bit. Public forums and social networks that house information have closed their doors to avoid losing traffic. Take Reddit as an example: it's a known meme that those seeking tech answers should just append "reddit" to their search, and they'll probably find better and more accurate information than they would elsewhere.

Then you have places like SourceForge or Hack Reactor, and even Twitter. All of these places rely on foot traffic, but a good chunk of that foot traffic comes from Google searches.

Now Bing and Google have AI and users stay on Google or Bing while they ask their question and the AI spits out curated search results.

But what happens when it's time to gather new info from 2023 and beyond? They can't gather texts, because libraries don't want their books just ripped. They can't gather user posts, because users aren't visiting social networks to ask questions anymore, and the APIs of those social networks have been limited and gated on top of that.

AI degrades because it's stuck regurgitating old content over and over, and the accuracy of its answers to questions about newer content goes down, because its sources have shrunk and what's left has been limited to trash.

Users will start going back to the old forums and social networks for answers, AIs will become the enemy of the free internet, and there will be new security built around protecting your data from scrapers.

Or there's a boom of micro AIs from these smaller services. Imagine a shitty reddit AI answering your questions.

Which way do you think it'll go?

Personally, I see this as yet another nail in the coffin of the old internet we grew up on. It's another step in the wrong direction, toward controlled, censored, and curated information.

559
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/jaketocake on 2024-02-22 17:01:00.

560
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/fotogneric on 2024-02-22 16:35:27.

561
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/Civil_Collection7267 on 2024-02-21 15:40:31.

562
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/Zeta-Splash on 2024-02-21 19:56:35.

563
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/iced327 on 2024-02-21 19:23:22.

564
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/jasonjonesresearch on 2024-02-21 15:30:25.


Peer-reviewed, open-access research article:

Abstract: A compact, inexpensive repeated survey on American adults’ attitudes toward Artificial General Intelligence (AGI) revealed a stable ordering but changing magnitudes of agreement toward three statements. Contrasting 2023 to 2021 results, American adults increasingly agreed AGI was possible to build. Respondents agreed more weakly that AGI should be built. Finally, American adults mostly disagree that an AGI should have the same rights as a human being; disagreeing more strongly in 2023 than in 2021.

565
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/KrySoar on 2024-02-21 11:21:58.


Now that we are seeing AI-generated videos, do you think game graphics engines will use AI to fully generate a game's graphics from some sort of prompt? Of course it would need a lot of power and computation, but computers will be far more powerful than today's, and AI generation could be very precise if prompted accordingly or fed with related content.

566
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/punkouter23 on 2024-02-21 01:02:32.


2023 was the year of ChatGPT giving us answers, with us as the middleman who has to translate what it said, go back to our computer, and interface with it to get what we need.

So the next step is for something to skip the human part and interface with the other machine for us, right?

That seems to be the point of the Rabbit, right?

I am mostly interested in the coding side of things, so there the agent would take the answers about how to code, put them directly into the machine, and attempt to test and verify them.
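
As a rough illustration of what such a loop could look like, here is a minimal sketch: the generate_candidate stub stands in for a real model call (it is hypothetical, not any product's API), the candidate is written to a temp directory, and pytest (assumed to be installed) verifies it, retrying with the test output as feedback.

```python
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path

def generate_candidate(task: str, feedback: str) -> str:
    """Hypothetical stand-in for an LLM call.
    A real agent would send `task` plus the previous test `feedback` to a model."""
    return textwrap.dedent("""\
        def add(a, b):
            return a + b
        """)

def run_tests(code: str, tests: str) -> tuple[bool, str]:
    """Write the candidate and its tests to a temp dir and run pytest on them."""
    with tempfile.TemporaryDirectory() as tmp:
        Path(tmp, "candidate.py").write_text(code)
        Path(tmp, "test_candidate.py").write_text(tests)
        proc = subprocess.run(
            [sys.executable, "-m", "pytest", tmp, "-q"],
            capture_output=True, text=True,
        )
        return proc.returncode == 0, proc.stdout + proc.stderr

TASK = "write add(a, b) that returns the sum"
TESTS = "from candidate import add\n\ndef test_add():\n    assert add(2, 3) == 5\n"

feedback = ""
for attempt in range(3):        # bounded retries instead of a human in the loop
    code = generate_candidate(TASK, feedback)
    ok, feedback = run_tests(code, TESTS)
    if ok:
        print(f"tests passed on attempt {attempt + 1}")
        break
else:
    print("gave up after 3 attempts")
```

The point of the sketch is only the shape of the loop: generate, run, verify, feed the failure back, with the human removed from the middle.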

567
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/techie_ray on 2024-02-20 23:02:01.


Sora explained simply with pen and paper in under 5 min (based on my understanding of OpenAI's limited research blog)

568
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/kanugantisuman on 2024-02-20 20:26:25.


We are the creators of Personal AI (our subreddit) - an AI platform designed to boost and improve human cognition. Personal AI was created with two missions:

  1. to build an AI for each individual and augment their biological memory
  2. to change and improve how we humans fundamentally retain, recall, and relive our own memories

What is Personal AI?

One core use of Personal AI is to record a person's memories and make them readily accessible to browse and recall. For example, you can ask for the insightful thoughts from a conversation, the name of the friend's spouse you met the week before, or the Berkeley restaurant recommendation you got last month - pieces of information that evaporated from your memory but could be useful to you at a later time. Essentially, Personal AI creates a digital long-term memory that is structured and lasts virtually forever.

How are memories stored in Personal AI?

To build your intranet of memories, we capture the memories that you say, type, or see, and transform them into Memory Blocks in real time. Your Personal AI's Memory Blocks are stored in a Memory Stack that is private and well secured. Since every human is unique, every human's Memory Stack represents the identity of an individual. We build an AI that is trained entirely on one individual human being's memories and holds their authenticity at its core.
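
The post doesn't describe the internal format, so purely as an illustrative sketch of the idea, a Memory Block could be modeled as a timestamped record and the Memory Stack as the per-person collection it lives in. All names, fields, and the keyword-based recall below are assumptions for illustration, not Personal AI's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryBlock:
    """One captured memory: something the owner said, typed, or saw."""
    text: str
    source: str  # e.g. "voice", "chat", "document"
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class MemoryStack:
    """The private, per-person collection of Memory Blocks."""
    owner: str
    blocks: list[MemoryBlock] = field(default_factory=list)

    def capture(self, text: str, source: str) -> MemoryBlock:
        block = MemoryBlock(text=text, source=source)
        self.blocks.append(block)
        return block

    def recall(self, keyword: str) -> list[MemoryBlock]:
        """Naive keyword recall; a real system would use semantic search."""
        return [b for b in self.blocks if keyword.lower() in b.text.lower()]

# Example: store a restaurant recommendation and find it again later
stack = MemoryStack(owner="you")
stack.capture("Got a recommendation for a Berkeley restaurant near campus",
              source="chat")
print([b.text for b in stack.recall("berkeley")])
```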

Is the information stored in the Memory Blocks safe and protected?

We are absolutely aware of the implications personal AIs of individuals will have on our society, which is why we aligned ourselves with the Institute of Electrical and Electronics Engineers’ (IEEE) standards for human rights. The safety of the customers is our number one priority, and we’re absolutely aware that there are a lot of complex unanswered questions that require more nuanced answers, but unfortunately, we cannot cover all of them in this post. We would, however, gladly clarify any doubts you have in DMs or comments, so please feel free to ask us questions.

At Personal AI, you as the creator own your data, now and forever. This essentially means that if you don’t like what’s in your private memories, you can remove it whenever you want. On the other hand, we will make sure that the data you own is secure. Currently, your data would be secured at rest and in transit in cloud storage, with industry standard encryptions on top of it. To illustrate this, imagine this encryption being a lock that keeps your data safe. And of course, your data is only used to train your AI, and will never be used to train somebody else’s AI.
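
The post doesn't say which encryption scheme is used, so purely to illustrate the "lock" analogy above, here is what generic symmetric encryption at rest looks like with the Python cryptography library; this is an assumed example, not Personal AI's actual implementation:

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# The key is the "lock": only whoever holds it can read the data.
key = Fernet.generate_key()
locker = Fernet(key)

memory_block = b"example memory text captured by the user"
ciphertext = locker.encrypt(memory_block)   # what would sit in cloud storage
plaintext = locker.decrypt(ciphertext)      # only possible with the key

assert plaintext == memory_block
print(ciphertext[:20], b"...")
```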

Please join our subreddit to follow the development of our project and check out our website!

Useful links about our project

TheStreet Article | Product Hunt

Our Founders: Suman Kanuganti | Kristie Kaiser | Sharon Zhang

Pricing Models

For Personal & Professional Use: $400 per year

For Business & Enterprise Use: starts at $10,000 per AI per year

569
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/dragseon on 2024-02-19 21:50:09.

570
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/bobfrutt on 2024-02-19 14:23:28.


I don't know much about the inner workings of AI, but I know that the key components are neural networks, backpropagation, gradient descent, and transformers. Apparently we figured all of that out over the years, and now we're just using it at massive scale thanks to finally having the computing power, with all the GPUs available. So in that sense we know what's going on. But Eliezer talks like these systems are some kind of black box. How should we understand that, exactly?
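
One way to make the contrast concrete: the training procedure itself (gradient descent driven by backpropagated errors) is fully transparent and easy to write down, as in the tiny NumPy sketch below, but the billions of learned numbers a large model ends up with are not individually interpretable, and that gap is roughly what people mean by "black box". This is a generic illustration, not tied to any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + 1 plus a little noise
X = rng.normal(size=100)
y = 3 * X + 1 + 0.1 * rng.normal(size=100)

# We know exactly how training works: gradient descent on mean squared error...
w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    pred = w * X + b
    grad_w = 2 * np.mean((pred - y) * X)   # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)         # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # ~3 and ~1: easy to interpret here
# ...but a transformer trained the same way ends up with billions of such
# numbers, and the procedure never tells you what any one of them "means".
```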

571
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/Armand_Roulinn on 2024-02-19 02:45:14.

572
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/Parisian75009 on 2024-02-17 09:02:30.


In a tribunal, Air Canada claimed it wasn't responsible for what its chatbot said, arguing that the chatbot was a "separate legal entity".

573
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/jaketocake on 2024-02-17 06:04:36.

574
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/AI_Nietzsche on 2024-02-17 16:46:37.


Sure, there's always healthy competition in the AI space, but this feels... different. The way OpenAI countered Gemini with Sora just screams aggression. Makes you wonder if they're pulling out some secret sauce, some super-powered AI system behind the scenes. I have never seen Google getting pounded like that, ever, and we're only in February... God knows what's next.

575
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/artificial by /u/_____awesome on 2024-02-17 16:22:34.


In the Ukraine vs. Russia conflict, there's a debate going on about whether it's a war crime to kill a soldier who tries to surrender to a drone. The question is, does this make all autonomous weapons basically walking (or flying) war crimes since you can't surrender to them? It's a tricky situation because these drones can't recognize a surrender, which seems to go against the rules of war. What do you think?
