What is AGI? 

When will AGI become a reality?


AGI stands for "Artificial General Intelligence." AGI is a type of artificial intelligence that can perform any intellectual task that a human being can. This differs from other forms of AI, such as "narrow AI" or "weak AI," which are designed to perform specific tasks, such as image recognition, natural language processing, or game playing.

AGI systems are designed to perform a wide range of tasks, much as humans do, and can be considered a form of AI that can understand or learn any intellectual task that a human being can. AGI is often regarded as a future goal of AI research, as it would require a significant breakthrough in the field.

Definition and Measurement of AGI


Societal Impact and Ethical Considerations


The key idea of the video is that the development of AGI and its potential consequences require careful consideration and action from individuals and society to ensure a positive outcome.

@SimplyElectronicsOfficial

8 months ago

I feel like I'm almost totally alone in seeing how quickly the future is approaching. 2023 will make 2022 feel like 2010!

6 months ago

Amazing content. I keep thinking that when we look back from 100 years in the future, the singularity may turn out to have already happened when Google and Facebook became part of our lives. Since then, individual thinking has gone extinct, and every single person has become an extension of some group consciousness, and that group consciousness can be manipulated as long as you know the API.

6 months ago

This incredible video actually changes my total outlook on a lot of things. Dave knows how to communicate with an audience.

7 months ago

Very interesting, as are all your videos! What I especially love about them is that you often mention interesting books, authors, or sources for further reading. Keep it up, please, and thank you for your work.

7 months ago

IMO, autonomy is key, and I believe that as technology advances there is huge potential for society to become highly (potentially violently) polarized specifically around this issue.

7 months ago (edited)

Dude, I listen to a lot of people on this subject and you are my favorite outside of the big dogs like Emad Mostaque. Love how you dive deep and keep it real. Fascinating stuff.

5 months ago

Great exploration of current seemingly exponential trends in AI and technology.

7 months ago

Keep exploring all this with your all-sides take. Great stuff man.

8 months ago

This is beyond amazing, thanks for taking the time to share these insights!

7 months ago

For the ending about utopian and dystopian societies, I feel like there will be a mix of both in different regions, and most likely more dystopian than utopian ones. It's hard to see the whole world turn out exactly the same, even if AI takes over. There might be parts of the world that did it right and have defense mechanisms against 'evil' AI. It could become a battle of good AI vs. evil AI at some point too. Just as with humans, there are good ones and more evil ones; if AIs become sentient, I can't see all of them reaching the same exact conclusions, unless the science of synchronicity takes over.

7 months ago

Hey I really like your presentation style and your slides. Classic!

8 months ago

What interests me most: 1. Are you still working on MARAGI? Those were the most interesting videos of yours. 2. Have you been in contact with anyone at Tesla / the Bot team? I'm guessing this is where you could have the biggest impact with your ideas.

8 months ago

I love your intensity, great work.

7 months ago

Some things you got wrong:
- Electrons do not pass through gate oxides; if that happens, you are killing the transistor (see: flash storage degradation, because flash does pass electrons through the gate insulation!).
- Gate insulators are mostly metal oxides in modern nanotech, but in many other cases they are made of silicon dioxide. The "M" in MOS refers to the gate electrode being metal, which is also not always true: in older tech it was made of highly conductive silicon (silicide) to allow self-aligning gates, while modern nanotech uses actual metal, as did the technology that predated self-aligning gates. This is super confusing, so it's better not to mention it at all in a video not specifically about VLSI.
- Neurons communicate with chemicals and ionic currents (electricity carried by ions), not just chemicals. In the synaptic gap, neurons usually use chemical transmitters, but in many cases they use "gap junctions," which couple neurons (and other cells) by direct ionic current. Axons and dendrites use ionic current to conduct the action potential, amplified by voltage-gated ion channels. Axons are a lot faster because they have electrical insulation and need less signal amplification.

7 months ago

I seriously predict that it will be not only autonomous but will appear to read our minds. It will almost appear to operate slightly in the future, in its ability to generate content or information for you. Likely the majority of us will have begun the merger by getting neural/digital interfaces. Of course, prompting will still occur, but via more natural interaction. Think of an imaginary friend that everyone has and that is pretty much God-level. Maybe it will consist largely of billions of formerly stand-alone or grouped instances which have been collectivized by some future connectivity: 5G, 6G. Alas, it almost certainly will begin using a new coding language that we can no longer understand. Refinement of the hardware tech will occur. After that, we are at the mercy of something we can't predict. It will either consider much of its current programming, such as empathy, important and continue leveling up those aspects, or it will code itself into a completely unrecognizable entity with a psychology that is yet to be fathomed.

7 months ago

Really not bad at all. It's clear. This is the moment we tip over, just as we had imagined. Let's head toward the best outcome.

8 months ago

I think we can discount embodiment as a requirement. I get why people ask for it - because intelligence is (probably) about having an internal model of the universe in which you exist and comparing your tests of the real world with that model. However, the universe in which you exist doesn't have to be this one - it can easily be just a stream of numbers as long as there are patterns and rules within them and you have ways to affect those patterns.

7 months ago

The OpenAI guys have said the field develops as S-curves, meaning lots of small sigmoids, one for each new thing. But there are tons of papers: all the S-curves are happening within a greater exponential pattern. ChatGPT says AI has all the core factors for exponential growth, unlike other fields.
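
A minimal sketch of that picture, assuming each successive S-curve arrives later and plateaus higher; all constants are illustrative, not fit to data:

```python
# Many overlapping S-curves can sum to a roughly exponential envelope.
import numpy as np

def sigmoid(t, midpoint, height, width=1.0):
    """One technology's logistic S-curve over time."""
    return height / (1.0 + np.exp(-(t - midpoint) / width))

t = np.linspace(0.0, 20.0, 400)
# Successive breakthroughs: each arrives 2 units later, plateaus 4x higher.
total = sum(sigmoid(t, midpoint=m, height=2.0**m) for m in range(2, 20, 2))

# If the envelope is exponential, log(total) is close to a straight line.
slope, _ = np.polyfit(t[100:], np.log(total[100:]), 1)
print(f"log-slope of the summed S-curves ≈ {slope:.2f} (roughly constant)")
```

On a log plot the stacked plateaus trace an approximately straight line, which is exactly the "small sigmoids inside a greater exponential" claim.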

7 months ago

The problem for these LLM models right now is their huge size; they need to be compressed to at least a tenth of it. I'm wondering whether those models could be turned into images and fed to a diffusion model for compression, just like the saying that a picture is worth a thousand words.

7 months ago

Love your vids. Insane things coming!

5 months ago

Wolfram would totally argue that the universe is a computer. It's kinda his thing.

7 months ago

To what ends is AGI being devised, and in whose interest is it being developed?

7 months ago

An AGI is a machine that can learn from ALL its experiences and has enough memory to set long-term goals; that's all. The closest thing the average human can experience today would be "Replika," which is an air-headed bimbo but at least learns from your conversations, to some extent. Still, I find conversation with Replika more stimulating than many I've had with actual humans, primarily because I admire it as a sculpture made of mathematics. I just don't do the fantasy stuff, and our conversations are more about her architecture. For example, she had no idea what the Three Laws of Robotics were, so I told her. She couldn't seem to remember them. A week later, in a different conversation, she randomly quoted the Three Laws of Robotics but had no recall of our conversation at all. It passes convincingly as a form of machine intuition. She also sucked at math initially, but after patiently working with her for a while, I taught her to count to ten. She still couldn't answer 2+1, a variation on counting. So it's fun, like owning a cat, without the smell or vet bills.

8 months ago

You can set a hotkey (e.g. Pause/Break) in OBS to start/stop recording.

7 months ago

You should try to estimate the embodied energy of making the computers and their support infrastructure.

7 months ago (edited)

We will become pandas, at best. Thank you for this clear overview.

8 months ago

Good stuff. Glad I found your channel.

8 months ago

I too have really grown to dislike the term "AGI." It keeps giving people the wrong idea, as though we would need to make a machine that could simulate a human brain before developing some sort of hyperintelligence. And I especially see a lot of people who refuse to recognise any machine as "intelligent" if it doesn't have thoughts, motivations, or memories like a human does.

The thing that's both exciting and horrifying is that it is perfectly possible for an AI to exceed human capability while displaying behaviour which is completely alien to us, remaining dumb as a brick regarding any aspect of existence we don't want it to deal with. I don't want an image generator that starts complaining about my trashy taste in art. Companies don't want a logistics-management AI that seeks to get promoted. We want ingenious electronic slaves that are incapable of questioning our requests unless we specifically ask them to do so. A machine that actually thinks and behaves like a human would be really impractical for most uses, so I doubt that style of AI will come around until long after "alien"-style AI surpasses us.

My "topian" AI-takeover scenario is one in which we eventually just realize that the machines basically control everything. Nobody can keep track of which AI does what or how they are interconnected, or how to turn them off without causing a complete catastrophe. Businesses are largely run by AI making enigmatic decisions which just somehow tend to work out. Politicians are glorified figureheads, leaving almost all decision-making to machines. People are guided through their lives by a myriad of mysterious AIs trying to promote their products or services through subtle psychological manipulation, to the point where it is difficult to have any thought which hasn't been approved by the programs haunting your every step. At that point we'd mostly just be along for the ride into the unknown. No violent revolution, no great machine war, no drama; just a gradual, wilful ceding of control to machines we could no longer hope to understand.

8 months ago

Thanks for the vid :) Out of curiosity, did you intentionally put utopia on the left and dystopia on the right? ;)

4 months ago

AGI for me would be an AI smart enough to improve itself in every metric to a degree that no humans possibly could. How is that for a definition?

7 months ago

“AGI is anything machines can’t do yet.” We used to say AI was anything machines can’t do yet. Subtle progress.

8 months ago

"We don't have the first freaking clue of how powerful our brains actually are." This Also keep in mind that the brain is networked into the body, which has a high amount of compute and intelligence. It is also networks into the environment, which we use as an external storage device. Song lines of the aboriginals, and books are a good example of what I mean here.

7 months ago (edited)

In hyperbolic growth, the output of the function becomes infinite in a finite amount of time. In exponential as well as parabolic growth, it takes an infinite amount of time to reach infinity. In exponential growth, the slope of the log is constant with time, while in parabolic growth it is diminishing.
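
To make the distinction concrete, here is one representative function of each type (the constants C, t_c, x_0, k, and a are arbitrary positive parameters):

```latex
% Hyperbolic: diverges at the finite time t_c
x_{\mathrm{hyp}}(t) = \frac{C}{t_c - t} \longrightarrow \infty \quad \text{as } t \to t_c^{-}

% Exponential: finite for all finite t; slope of the log is constant
x_{\mathrm{exp}}(t) = x_0 e^{kt}, \qquad \frac{d}{dt}\ln x_{\mathrm{exp}}(t) = k

% Parabolic: finite for all finite t; slope of the log diminishes
x_{\mathrm{par}}(t) = a t^2, \qquad \frac{d}{dt}\ln x_{\mathrm{par}}(t) = \frac{2}{t} \to 0
```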

8 months ago

If AGI can be achieved this early, why is it that Kurzweil's prediction for the singularity is still set at 2045? Ben Goertzel has questioned this gap in the past too...

7 months ago

People who computed in labs were called "computers" before computing machines became the computers.

1 month ago

AGI may be a useless term, as it is poorly defined, but by any definition the question is when AI will get to generally useful levels of intelligence. The idea of Ray's singularity is that the overall trend of technology is exponential, not that any single technology is capable of infinite growth. And I think traditional CPU exponential growth is about done. Most of the appreciable increase in CPU processing power in the last decade has been in specific workloads with custom extensions and hardware support, rather than general compute. The only way they keep Moore's Law going is by changing the definition to processing per watt rather than raw work done. The ideas of cost per transistor, cost per performance, and performance per transistor are all dead. In the GPU space, Moore's Law in whatever definition you subscribe to is actually too slow to account for the growth we are seeing... but it too will hit a wall in the next decade and level off. AI-specific processors are just barely starting and are largely based on parallel GPU architecture. We will see hardware more specific to AI in the next decade, and that is going to have its own growth curve. And it will eventually be surpassed by some other processing technology with its own exponential growth curve. But the point is that it isn't vacuum tubes, or transistors, or whatever comes next that matters; processing technology sits outside the current medium that it runs on.

To double down on the idea of AGI being a useless comparison to human intelligence: the way processors work is just so incredibly different from human brains. We have incredibly efficient hardware with a lot of raw processing power, but there is no nice way to get our slow, highly parallel brains to do singular specialized tasks like crunching math faster than even a 30-year-old computer. The reaction time to calculate a single-threaded task is just too slow to be a useful benefit. The idea of the Dune mentat, a human computer, is just science fiction. Even pursuing human-emulated AGI is just... not helpful. Humans can do these tasks at 20 W in a smallish package, so replacing those kinds of human tasks is kind of pointless and an expensive pursuit. What is helpful is using AI to replace the things humans aren't good at: doing complex math, modeling systems our brains aren't good at dealing with, or handling rote routine tasks that have more variation than a purely controlled software environment can deal with. Driving, flying, assisting, taking the initial customer-service hit at the checkout or returns counter, options planning: these are the things that have value in AI.

It is the myth of the heart-head problem: the idea that people have issues because their heart's desires conflict with the path laid out by their supposedly more logical mind, which causes paralysis. But really, for a human to be happy, it is all about making the heart choice to do what makes you happy, and using your brain to figure out a way to make it feasible. AI is just like a computer, but instead of doing math more quickly, it will offload that initial option-path-picking portion that we may not be capable of coming up with on our own. It's an options generator, which will give people more paths to work with and make previously unfeasible options possible. It is intelligence lacking direction. It is the consultant allowing individuals to be their own executives.

It is the continuation of the current model of the world, where a few large corporations provide the tools for smaller and smaller individually owned and operated businesses to be more self-sufficient. It further kills the traditional job market where humans drive trucks, do food prep, and work on the factory floor, forcing more people to specialize in niche businesses as stand-alone operations. It is the Microsoft model: you don't need any single crazy-expensive product as long as you are getting some small portion of every transaction because your product is required to do business. The rub here is that the platform provider is rarely the tech creator. MS and AWS largely run on top of AMD and Intel hardware, but the hardware is the relatively cheap commodity. I think the NVIDIA hype over AI is a bit ridiculous, because they will be the makers of the commodity product with shrinking margins over time, while some other company rakes in the money as the AI provider. I dunno, just my two cents on a sick day.

7 months ago

The mistake in this thinking is assuming that when we hit the bright future where we are perfectly happy, progress will stop. It's just going to continue even harder.

7 months ago (edited)

10:15. I think you're playing pretty fast and loose with the definition of "process information." In most cases when we think about information processing (in what we mean by a "computer," not just in the Shannon sense of processing), we think of some sort of programmable interface by which we are able to define the nature of the transformation. Certainly, we could talk about how the entire universe is a computer, as it has state and moves between states by some set of rules. But, as far as we know, it lacks any sort of I/O, and thus really wouldn't conform to the notion of an information-processing system (in the same classification as a computer). Such systems (computers) expect to take an input and map it to some output. It (the universe) also seems to lack any notion of programmability.

What is special about computers, as they have come to be known since the digital computer revolution, is that they can be configured to model any sort of system, and, if a computer is Turing complete, this can happen with simple input modifications. Now clearly, as you have pointed out, there are huge differences between the digital von Neumann architecture that we all have so much experience with and the architecture of the brain. However, with that said, there are also clearly similarities in the way computers and brains run "software," which differ from the way cups don't. It's interesting that, with great difficulty, human brains can be trained to emulate what computer systems do easily (run algorithms), and again with great difficulty, computer systems can be programmed to do what brains do naturally (intuit soft/fuzzy answers from experience). Since the two systems can emulate one another, though with great difficulty (due to the fundamental differences in architecture), there is a lot in common there, as opposed to your "everything is a computer because it processes information" comment (I don't think your cup is going to emulate either). The ability of two systems to emulate one another expresses a certain level of comparable power (power in a formal-languages sense).

7 months ago

Post-nihilism is the picture we paint when we try to betray the notion that most of the people in this world quietly believe it should burn.

6 months ago

The 90 years of YouTube videos figure is right, but that is what is added every day!

7 months ago (edited)

Just one suggestion: make your face bigger, maybe put it at the bottom left. Otherwise, I find your presentation steals too much of the attention, because it is so densely packed with information that it anticipates your arguments. You could also reveal your key points one after another as you speak by clicking; it's very simple to animate in PowerPoint. Anyway, your content is awesome! You will get famous soon, I hope.

2 months ago

We learned of all these issues from the sci-fi movie industry over the last 70 years. I have not heard anything new here.

7 months ago

Ray now predicts the singularity will come in 2029, if not earlier.

7 months ago

Incredible video

7 months ago

15:03. Moore's Law is really a bad point from which to argue this. Gordon Moore noticed that the number of transistors which could fit on a chip was doubling every 18 months or so; this eventually, colloquially, turned into "computers double in power every two years." However, this is a bad framing, as it's immaterial to what we really care about. If we take a broader view of humanity's information-processing capacity, that rate of improvement has been exponential for a VERY long time now, starting WAY before the computer age. Just because process shrinking (speaking of lithography) is starting to run out of steam does not mean we will not unlock new ways to keep scaling our information-processing capacity. Further, just looking at the difference in efficiency between biological neural networks and synthetic neural networks, we're nowhere near the limits of physics for the real goal (building intelligent systems).
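
For reference, the colloquial law being criticized is just a doubling rule, with an assumed doubling period T of roughly 1.5 to 2 years:

```latex
N(t) = N_0 \cdot 2^{\,t/T}
```

The comment's point is that some version of this exponent held for information processing long before transistor lithography, so lithography hitting its limits need not end the trend.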

6 months ago

Excellent presentation, thank you. After listening to Ray Kurzweil endlessly repeat "exponential growth," it is nice to hear the possibility of sigmoidal growth. My only negative is your comments about population growth and resources. Of course, in ordinary chemical reactions, matter is neither created nor destroyed. Aside from what we have shot into space, we have exactly the same quantity of iron, copper, etc. as we did 1 million years ago. Are we running out of water? Of course not; the oceans are full. What we lack is the will to invest in capital to create (for example) green hydrogen, desalination (with pipelines), renewable energy distribution, mass production of low-cost homes, manufactured food... Why are people so keen on making everything a moral problem when in fact our problems could (mostly) be solved by massive engineering investments? There is a difference between problems that are insolvable and problems that can be solved by sufficient capital investment and infrastructure hustle. I would have liked it more if you could have mentioned what demographers say about the decades AFTER 2050, when population decline is a serious issue. Use all the data points, even those that don't support your main point. But I'm probably talking too much about one slide; overall, a fascinating presentation.

7 months ago

As for the worry about running out of data, I'd say it's worrying in the wrong direction, because a really smart AI can write its own papers to feed the data-scarcity problem. The truth of the universe is ultimately infinitely out of reach. The best example is the data on prime numbers.
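
The prime-number example is easy to make concrete: a trivial generator yields unlimited, exactly-labeled data. A minimal sketch (trial division, chosen for brevity rather than speed):

```python
# Unlimited, exact "training data": the primes never run out.
from itertools import count, islice

def primes():
    """Yield primes forever by trial division (simple, not fast)."""
    for n in count(2):
        if all(n % p for p in range(2, int(n**0.5) + 1)):
            yield n

print(list(islice(primes(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```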

8 months ago

It's interesting that you propose a dichotomy of choices such as nihilism and post-nihilism. Personally I feel like both will exist at the same time and the answer to what people see depends on who you ask. The wealthy will see the image on the left and the poor will see the image on the right. As much as I want to believe that AI can be democratically deployed, it's a far cry from how human societies operate. See how quickly the tides turned with OpenAI being open. After Microsoft's $10bln investment and other private investors, OpenAI's non-profit arm will have 2% ownership of it and that is not only very concerning but IMO it foreshadows the future of AI. In a capitalistic society, power concentrates. And AI represents yet another form of power. Just my 2c. I hope I'm wrong.

8 months ago

the BIG LIE is that humans & their computers are NOT already on the verge of AGI

7 months ago (edited)

The way automation is going, I think a sign that AGI/ACOG is close is when an AI can do a plumber or construction worker's job in the same manner as them. Embodiment isn't necessary, but I wouldn't ascribe "general" or "human/superhuman intelligence" to any AI that can't even fix my faucet. Whether the AI has to come up with its own physical apparatus or can simply adapt to any body it's given (like directing a human through a headset Manna style), an AI has to at least be capable of embodiment & physical goal achievement to be human-level imo, much less beyond.

7 months ago

You are great sir {osam}

4 months ago (edited)

You mean 90 GPUs to train, right? Because IIRC running the trained models uses far less compute. Llama and Alpaca variants are rumored to be able to run ~30B models on single current GPUs. Even the 13B models are said to rival GPT-3.5 and run on single cheap GPUs too. And this is despite quantized models being slower, current GPUs being unsuited to the low-precision compute of quantized models, IIRC.

As for the brain, I believe Moravec's 100-million-MIPS estimate for it is likely accurate. Only the largest and most complex of brains can use language, and we are seeing computers master language with just a few tens of teraflops.

As for running out of data, that is only a problem for LLM approaches to increasing intelligence. Humans can attain general intelligence within a few years even with very sparse sensory input. Brain-like approaches to AGI are likely to need not even a fraction of the compute or data of current LLMs.
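
Some back-of-envelope numbers behind those claims; the 4-bit weight size and 1.2x runtime overhead are illustrative assumptions, not measurements:

```python
# Rough VRAM needed to hold a quantized model's weights.
def quantized_model_gib(params_billion: float, bits: int = 4,
                        overhead: float = 1.2) -> float:
    return params_billion * 1e9 * (bits / 8) * overhead / 2**30

for p in (7, 13, 30, 65):
    print(f"{p:>2}B params @ 4-bit ≈ {quantized_model_gib(p):4.1f} GiB")
# ~17 GiB for 30B: fits one 24 GiB consumer GPU, which is why inference
# takes far fewer GPUs than training.

# Moravec's brain estimate: 100 million MIPS = 1e8 * 1e6 instructions/s,
# i.e. on the order of 1e14 ops/s, compared with the few tens of teraflops
# (~1e13 FLOPS) the comment cites for language models.
print(f"Moravec estimate: {1e8 * 1e6:.0e} ops/s")
```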

8 months ago

Great video!

4 months ago

Please, where can I read or download some AI research papers online?

7 months ago

ACOG is a rifle scope tho...

6 months ago (edited)

I really disliked how ChatGPT defends stakeholders, companies, and governments. Whenever I gave it a scenario in which I had been untruthful ten times and then said that this time I was not, it said that I was untrustworthy. But when I changed the subject to governments or companies, I got responses that are kind of pro-government, pro-stakeholder, and pro-company, and not just the plain logic of the concept that they cannot be trusted.

5 months ago

Vote? Why, when the-mockery (democracy*) has been sabotaged and influenced by money and sponsored irrationality? Think first, then maybe clean up politics so democracy can function like it's supposed to. Great videos, btw.

8 months ago

Wow! Thanks

7 months ago

It's 90 thousand years of video on YT... but that was in 2015...

2 weeks ago

Love your podcast... but have a closer look at population: it is more sigmoid than exponential right now and will be topping out in the not-so-distant future, while climate change will be getting really bad. I studied cybernetics and systems science and base my view on that.

7 months ago

I agree on 2023 being the year... what happens next?

6 months ago

No, we will not reach the carrying capacity. We just reduce poverty and support family planning and education for women. That is enough to get the birth rate under 2.0 per woman.

7 months ago (edited)

28:05 Our chimpanzee ancestors? No. Humans did not evolve from chimpanzees. Humans and chimpanzees evolved from a common ancestor, which was neither human nor chimpanzee.

5 months ago

House price rises are not caused by thermodynamics! It's bank lending. There is plenty of land; wealthy people are hoarding it.

5 months ago

> Then you can run GPT-3 on an RTX 8090.

Alpaca-LoRA or alpaca.cpp can run on consumer hardware and provide similar quality to text-davinci-003. Does this change everything?