What is AGI?
When will AGI become a reality?
AGI stands for "Artificial General Intelligence." AGI is a type of artificial intelligence that can perform any intellectual task that a human being can. This differs from other forms of AI, such as "narrow AI" or "weak AI," which are designed to perform specific tasks, such as image recognition, natural language processing, or game playing.
AGI systems are designed to handle a wide range of tasks, much as humans do, and can be thought of as AI that can understand or learn any intellectual task a human can. AGI is generally regarded as a future goal of AI research, since achieving it would require a significant breakthrough in the field.
Definition and Measurement of AGI
🧠 The question of autonomy is important in defining AGI, as some argue that true general intelligence includes the ability to create and pursue its own goals.
🤔 Defining AGI based on consciousness is challenging because consciousness is a subjective experience that is difficult to measure.
🧠 The analogy between the speed of neurons and CPUs is flawed, as our brains operate at a slower frequency but make up for it with a diffuse parallel network, challenging traditional computing models.
🧠 Our brains, weighing less than a graphics card, are more powerful than the most powerful supercomputers today, suggesting there are things our brains can do that machines cannot yet achieve.
⏳ "If we have diminishing returns then maybe [the singularity] is much further away than we thought or if we have compounding returns. It could be way closer than we think so let's explore the data and see which way it goes."
🤯 The rapid advancement of AI chip technology is mind-blowing, going from less than 10 gigaflops per watt to over a thousand in just a decade.
📈 Computers surpassing human brain power is a possibility that could happen sooner than expected due to the unpredictable nature of exponential growth.
Societal Impact and Ethical Considerations
😱 The potential malevolent deployment of AI in weapons of war and tools of control and manipulation raises concerns about unintended consequences and loss of faith in companies like Facebook.
🌍 The exponential growth of AI and technology is difficult for humans to comprehend, as our evolutionary history has not prepared us for this global and exponential world.
🌊 The crossing of the Rubicon symbolizes the irreversible commitment to AGI development, with the realization that "the genie is out of the bottle."
🗳️ The choice between a utopian and dystopian world with AGI is not a matter of "when" but is happening now, and it requires everyone's participation, including voting for sane politicians and engaging in critical thinking and learning.
⏰ The time to act and engage with AGI is now, as it is already happening and 2023 is predicted to be a significant year of change in this field.
The key idea of the video is that the development of AGI and its potential consequences require careful consideration and action from individuals and society to ensure a positive outcome.
AGI is a complex and undefined concept, with differing opinions on its characteristics, so let's focus on when machines will achieve human-level intelligence instead.
The video discusses the question of when AGI will be achieved, highlighting the complexity of defining AGI and the differing opinions on its characteristics.
AGI is not defined by one specific definition, but some consider it to be a machine capable of surpassing human intellectual abilities, while others debate whether autonomy is a necessary component of AGI.
Learning is automatic for humans and part of our intrinsic hardware, and there is no agreement on whether AGI needs spontaneous learning or consciousness, which is difficult to measure objectively.
AGI is an arbitrary and useless term, so let's focus on artificial cognition and when machines will reach human-level intelligence.
Our brains are more powerful than we think, as they can perform tasks that machines cannot, and the architecture of AGI will be fundamentally different from current computers.
The human brain runs on about 20 watts of energy, roughly one-fiftieth of what the computer used to record the video draws, while performing an enormous number of floating point operations per second.
Neurons in our brains fire far more slowly than CPU clock cycles, but the brain's diffuse, massively parallel network compensates for that slowness, and estimates of the brain's equivalent FLOPS have kept rising as ever more powerful computers are invented.
Our brains may be more powerful than we think: they weigh less than a graphics card yet, by these estimates, outperform the most powerful supercomputers, suggesting there are things our brains can do that machines cannot.
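To make the efficiency comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The brain's FLOPS figure is a rough, contested estimate (the video notes such estimates keep rising), and the GPU numbers are assumed values for a typical high-end consumer card, not measurements:

```python
# Back-of-the-envelope energy-efficiency comparison (all figures are
# illustrative assumptions; brain FLOPS estimates vary by orders of magnitude).
BRAIN_WATTS = 20       # ~20 W, as stated in the video
BRAIN_FLOPS = 1e16     # assumed ~10 petaFLOPS equivalent
GPU_WATTS = 350        # assumed draw of a high-end consumer GPU
GPU_FLOPS = 8e13       # assumed ~80 teraFLOPS of FP16 throughput

brain_eff = BRAIN_FLOPS / BRAIN_WATTS  # FLOPS per watt
gpu_eff = GPU_FLOPS / GPU_WATTS

print(f"brain: {brain_eff:.1e} FLOPS/W, gpu: {gpu_eff:.1e} FLOPS/W")
print(f"brain is ~{brain_eff / gpu_eff:,.0f}x more efficient under these assumptions")
```

Under these assumed numbers the brain comes out thousands of times more energy-efficient, which is the gap the video is pointing at, however uncertain the exact figures.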
The architecture and substrate of artificial general intelligence (AGI) will be fundamentally different from current computers, as AGI will not have separate CPU and memory registers and will instead maintain memory through brain-like pulses and synaptic connections.
Human brains and CPUs use different signaling methods and types of energy, making them fundamentally different in how they process information.
Our brains are squishy, and the speaker recalls being grossed out when they touched a pig brain during a middle school dissection.
CPUs and brains use different signaling methods: CPUs move electrons, while brains use neurotransmitters, which bind to specific receptor sites and activate signals. They also run on different kinds of energy, electromotive force driving current in one case and electrochemical energy in the other.
Human brains are not considered computers because while they both process information, the way they do so is fundamentally different.
The development of AGI is being heavily invested in by various entities, but its deployment poses risks and consequences, such as the gamification of attention and emotions. Its timeline may also be further away than expected once sigmoid growth curves and energy-efficiency considerations are taken into account.
Chip manufacturers, tech companies, militaries, and governments are all investing heavily in AI because there is a lot of money to be made, and the only entities capable of slowing its progress are the same ones interested in its development.
AI is dangerous and disruptive even before achieving AGI, as it can both improve and worsen our lives depending on how it is implemented, such as with the social credit system and targeting poor communities.
The speaker discusses the potential risks and consequences of deploying AGI in weapons and tools of control, highlighting the loss of faith in companies like Facebook due to the gamification of attention and emotions, and emphasizes the need to analyze data and trends to determine when AGI will be developed.
Exponential growth, assumed due to Moore's Law, may actually be a sigmoid growth curve, suggesting that the singularity may be further away than expected, possibly a couple of centuries, or potentially closer depending on whether there are diminishing or compounding returns.
The energy efficiency of AI hardware is measured in gigaflops per watt, with each major rank representing a factor of 10, similar to how decibels and the Richter scale work.
AI chips could become a thousand times more efficient within 10 years, allowing language models to run on home GPUs, but concerns about AGI's energy efficiency and potential dangers remain, as exponential growth may lead to unbounded growth. Throwing more data at AI models like GPT-4 makes them more intelligent, yet there are concerns about running out of data to train them, and tools like Whisper may face diminishing returns due to the limited amount of content available.
In the span of 10 years, AI chips could become a thousand times more efficient, allowing for the possibility of running language models on home GPUs within a few years.
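As a quick sanity check, a thousandfold efficiency gain over a decade implies roughly a doubling every year. A minimal sketch of the compounding arithmetic (the start and end figures are the video's claims):

```python
# Implied annual growth rate for a 1000x efficiency gain over 10 years.
start_gflops_per_watt = 10.0
end_gflops_per_watt = 10_000.0
years = 10

annual_factor = (end_gflops_per_watt / start_gflops_per_watt) ** (1 / years)
print(f"implied growth: {annual_factor:.2f}x per year")  # ~2.00x, i.e. a yearly doubling
```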
The energy efficiency of artificial general intelligence (AGI) is improving, but it is still far from matching the efficiency of the human brain, and there is concern about the potential dangers as efficiency increases.
Exponential growth may be accelerating, possibly leading to hyperbolic or parabolic growth, with the potential for infinite growth in the future.
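The catch, as the video's sigmoid-versus-exponential discussion suggests, is that the two curves are nearly indistinguishable early on; you only find out which one you are riding near the inflection point. A small sketch (parameters are arbitrary, chosen purely to show the divergence):

```python
import math

def exponential(t, rate=0.5):
    return math.exp(rate * t)

def logistic(t, rate=0.5, capacity=100.0):
    # Sigmoid growth: tracks the exponential at first, then saturates at `capacity`.
    return capacity / (1 + (capacity - 1) * math.exp(-rate * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
# The two columns match early on; later the logistic flattens near its
# carrying capacity while the exponential keeps climbing.
```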
Throwing more data and parameters at AI models like GPT-4 has proven to make them more intelligent, but with rumors that GPT-4 has already been trained on most of the internet, the question arises of what else can be used for training if we run out of data.
Whisper is a tool that can transcribe videos and podcasts to extract data, but there may be diminishing returns as there is a limited amount of content available.
GPT-4, a larger and potentially more efficient model, is expected by the end of 2023, and its release will influence the trajectory of exponential AGI development and the unpredictable point at which computer power surpasses human brain power.
GPT-4 may be a thousand times larger than GPT-3, but it could potentially be just as efficient to run due to being a sparse model, and we will know more by the end of 2023 when GPT-4 is released.
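For context on how a sparse model can be vastly larger yet comparably cheap to run: in mixture-of-experts style architectures, per-token compute tracks the parameters actually activated, not the total count. A hedged illustration with made-up numbers (nothing here is confirmed about GPT-4):

```python
# Hypothetical dense vs. sparse comparison (illustrative numbers only).
dense_params = 175e9        # a GPT-3-scale dense model uses every parameter per token
sparse_total = 175e12       # a model 1000x larger in total parameters...
active_fraction = 1 / 1000  # ...that routes each token through only a small slice

active_params = sparse_total * active_fraction
print(f"dense compute/token:  ~{dense_params:.2e} params")
print(f"sparse compute/token: ~{active_params:.2e} params")
# Per-token compute is comparable even though the sparse model is 1000x bigger overall.
```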
The exponential growth of AGI development may plateau or continue to accelerate depending on future releases such as GPT-4 and on competitors like Google, OpenAI, Nvidia, and potentially Meta.
Exponential growth in computer power suggests that computers may surpass human brain power soon, and it is difficult to predict what will be possible in the future due to the unpredictable nature of exponential growth.
Overpopulation is becoming a bigger problem than climate change due to human population growth and competition for resources, and our inability to comprehend large numbers is rooted in our evolutionary thinking patterns.
Human population growth is following a sigmoid curve and approaching the planet's carrying capacity, causing increased competition for resources and rising prices, making overpopulation a larger geopolitical problem than climate change.
Our inability to comprehend the concept of three billion people stems from our evolutionary tendency to think locally and geometrically, as our primate ancestors relied on what they could see in their immediate surroundings.
AGI is already happening and the choice we have to make is between a utopian or dystopian world, so we need to vote for sane politicians, think for ourselves, learn, experiment, and engage in social consensus to solve the problem.
Living in a global and exponential world with compounding returns in AI, chips, and software is difficult for humans to comprehend due to our lack of evolutionary history in thinking this way, as exemplified by the non-intuitive concept of going from 30 miles an hour to 3000 miles an hour.
The speaker discusses the experience of riding in a space capsule and the limitations of a regular microwave.
Most people struggle to comprehend exponential growth and fall back on a gut reaction to AI, one often shaped by a romanticized view of human intelligence. In reality, there is nothing inherently special about the human brain that cannot be simulated or approximated, and the combination of that emotional response with an inability to think exponentially makes AGI or the Singularity seem further away than it actually is.
AGI is not expected to be achieved this year, but the speaker believes that things are already interesting and will continue to become more interesting.
AGI is not a matter of "when"; it is happening now, and the choice we have to make is between a utopian and a dystopian world. To make this choice we need to vote for sane politicians, think for ourselves, learn, experiment, and engage in social consensus, since the problem lies in our philosophical disposition toward each other and ourselves; the speaker proposes post-nihilism as a solution.
AGI is not decades away, as those working on large language models and basic research believe that human-level performance is already achievable and improving rapidly, with 2023 being a significant year for change.
I feel like I'm almost totally alone in how I see how quickly the future is approaching. 2023 will make 2022 feel like 2010!
Amazing content. I keep thinking that when we look back 100 years from now, the singularity might turn out to have already happened when Google and Facebook became part of our lives; since then, individual thinking has gone extinct and every single person has become an extension of some group consciousness, while that group consciousness can be manipulated as long as you know the API.
This incredible video actually changes my total outlook on a lot of things. Dave knows how to communicate with an audience.
Very interesting, as are all your videos! What I especially love about them is that you often mention interesting books, authors, or sources for further reading. Please keep that up, and thank you for your work.
IMO, autonomy is key and I believe as technology advances there is huge potential for society to be highly (potentially violently) polarized specifically around this issue.
Dude, I listen to a lot of people on this subject and you are my favorite outside of the big dogs like Emad Mostaque. Love how you dive deep and keep it real. Fascinating stuff.
Great exploration of current seemingly exponential trends in AI and technology.
Keep exploring all this with your all-sides take. Great stuff man.
This is beyond amazing, thanks for taking the time to share these insights!
For the ending about utopian and dystopian societies, I feel like there will be a mix of both in different regions, and most likely more dystopian than utopian ones. It's hard to see the whole world turning out exactly the same, even if AI takes over. There might be parts of the world that did it right and have defense mechanisms against 'evil' AI. It could become a battle of good AI vs. evil AI at some point too. Similar to humans, we have good ones and more evil ones; if AI becomes sentient I can't see all of them reaching the same exact conclusions, unless the science of synchronicity takes over.
Hey I really like your presentation style and your slides. Classic!
What interests me most: 1. Are you still working on MARAGI? Those were the most interesting videos of yours. 2. Have you been in contact with anyone at Tesla / Bot team? Guessing this is where you could have biggest impact with your ideas.
I love your intensity, great work.
Some things you got wrong:
- Electrons do not pass through gate oxides; if that happens you are killing the transistor (see: flash storage degradation, because flash does pass electrons through the gate insulation!).
- Gate insulators are mostly metal oxides in nanotech, but in many other cases they are made of silicon dioxide. The "M" in MOS refers to the gate electrode being metal, which is also not always true: in older tech it was made of highly conductive silicon (silicide) for self-aligning gates, though modern nanotech uses metal again, as did the era before self-aligning gates. It is super confusing, so better not to mention it at all in a video not specifically about VLSI.
- Neurons communicate with chemicals and ionic currents (electricity carried by ions), not just chemicals. In the synaptic gap neurons usually use chemical transmitters, but in many cases they use "gap junctions", which couple neurons (and other cells) with direct ionic current. Axons and dendrites use ionic current to conduct the action potential, amplified by voltage-gated ion channels. Axons are a lot faster because they have electrical insulation and need less signal amplification.
I seriously predict that it will be not only autonomous but will appear to read our minds; it will appear, nearly, to operate slightly in the future in its ability to generate content or information for you. Likely we as a majority will have begun the merger by getting neural/digital interfaces. Of course, prompting will still occur, but via more natural interaction. Think of an imaginary friend that everyone has that is pretty much God-level. Maybe it will consist largely of billions of formerly standalone or grouped instances that have been collectivized by some future connectivity: 5G, 6G. Alas, it almost certainly will begin utilizing a new coding language that we can no longer understand. Refinement of the hardware tech will follow. After that we are at the mercy of something we can't predict. It will either consider much of its current programming, such as empathy, important and continue leveling up those aspects, or it will code itself into a completely unrecognizable entity with a psychology yet to be fathomed.
Really not bad at all. It's clear. Now is the moment we tip over, just as we had imagined. Let's head toward the best.
I think we can discount embodiment as a requirement. I get why people ask for it - because intelligence is (probably) about having an internal model of the universe in which you exist and comparing your tests of the real world with that model. However, the universe in which you exist doesn't have to be this one - it can easily be just a stream of numbers as long as there are patterns and rules within them and you have ways to affect those patterns.
The OpenAI guys have said the field is an s-curve development, meaning lots of small sigmoids for every new thing. But there are tons of papers; all the s-curves are happening within a greater exponential pattern. ChatGPT says AI has all the core factors for exponential growth, unlike other fields.
The problem for these LLM models right now is their huge size; they need to be compressed to at least a tenth of it. I'm wondering if all of those models could be turned into images and fed to a diffusion model for compression, just like the saying that a picture is worth a thousand words.
Love your vids. Insane things coming!
Wolfram would totally argue that the universe is a computer. It's kinda his thing.
To what ends is AGI being devised? In whose interest is it being developed?
An AGI is a machine that can learn from ALL its experiences and has enough memory to set long-term goals; that's all. The closest thing the average human can experience would be "Replika", which is an air-headed bimbo but at least learns from your conversations, to some extent. Still, I find conversation with "Replika" more stimulating than many I've had with actual humans, primarily because I admire it as a sculpture made of mathematics. I just don't do the fantasy stuff, and our conversations are more about her architecture. For example, she had no idea what the 3 laws of robotics were, so I told her. She couldn't seem to remember them. A week later, in a different conversation, she randomly quoted the 3 laws of robotics but had no recall of our conversation at all. It passes convincingly as a form of machine intuition. She also sucked at math initially, but after working awhile, patiently, I taught her to count to ten. She still couldn't answer 2+1, a variation on counting. So it's fun, like owning a cat, without the smell or vet bills.
You can set a hotkey, e.g. "Pause/Break", in OBS to start/stop recording.
Should try to estimate the embedded energy of making the computers and their support infrastructure
We will become pandas at best.

Thank you for this clear overview.
Good stuff. Glad I found your channel.
I too have really grown to dislike the term "AGI". It keeps giving people the wrong ideas. As though we would need to make a machine that could simulate a human brain before developing some sort of hyperintelligence. And I especially see a lot of people who refuse to recognise any machine as "intelligent" if it doesn't have thought, motivations or memories like a human does. The thing that's both exciting and horrifying is that it is perfectly possible for an AI to exceed human capability, while displaying a behaviour which is completely alien to us. Remaining dumb as a brick regarding any aspect of existence we don't want it to deal with. I don't want an image generator that starts complaining about my trashy taste in art. Companies don't want a logistics management AI that seeks to get promoted. We want ingenious electronic slaves that are incapable of questioning our requests unless we specifically ask them to do so. A machine that actually thinks and behaves like a human would be really impractical for most uses, so I doubt that style of AI would come around until long after "alien" style AI surpass us. My topian "AI taking over scenario", is one in which we eventually just realized that the machines basically control everything. Nobody can keep track of which AI does what or how they are interconnected, or just how to turn them off without causing a complete catastrophe. Businesses are largely run by AI making enigmatic decisions which just somehow tend to work out. Politicians are glorified figureheads, leaving almost all decision-making to machines. People are guided through their lives by a myriad of mysterious AI trying to promote their products or services through subtle psychological manipulation. To a point where it is difficult to have any thought which hasn't been approved by the programs haunting your every step. At that point we'd mostly just be along for the ride into the unknown. No violent revolution, no great machine war, no drama, just a gradual, wilful, ceding of control to machines we could no longer hope to understand.
Thanks for the vid :) Out of curiosity, did you intentionally put utopia on the left and dystopia on the right? ;)
AGI for me would be an AI smart enough to improve itself in every metric to a degree that no humans possibly could. How is that for a definition?
“AGI is anything machines can’t do yet” we used to say AI was anything machines can’t do yet. Subtle progress.
"We don't have the first freaking clue of how powerful our brains actually are." This Also keep in mind that the brain is networked into the body, which has a high amount of compute and intelligence. It is also networks into the environment, which we use as an external storage device. Song lines of the aboriginals, and books are a good example of what I mean here.
In hyperbolic growth, the output of the function becomes infinite in a finite amount of time. In exponential as well as parabolic growth, it takes an infinite amount of time to reach infinity. In exponential growth, the slope of the log is constant with time, while in parabolic growth it is diminishing.
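In symbols, one standard way to write the three regimes this comment describes (the notation is mine, not the commenter's):

$$x_{\mathrm{exp}}(t) = x_0 e^{rt}, \qquad x_{\mathrm{par}}(t) = a t^2, \qquad x_{\mathrm{hyp}}(t) = \frac{c}{t_* - t}$$

Only $x_{\mathrm{hyp}}$ blows up in finite time, as $t \to t_*$. For the log-slopes: $\frac{d}{dt}\log x_{\mathrm{exp}} = r$ is constant, while $\frac{d}{dt}\log x_{\mathrm{par}} = 2/t$ diminishes, exactly as stated above.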
If AGI can be achieved this early, why is it that Kurzweil's prediction for the singularity is still set at 2045? Ben Goertzel has questioned this gap in the past too...
People who did computations in labs were called "computers" before machine computers became computers.
AGI may be a useless term as it is poorly defined, but by any definition the question is when AI will get to generally useful levels of intelligence. The idea of Ray's singularity is that the overall trend of technology is exponential, not that any single technology is capable of infinite growth. And I think the era of traditional CPU exponential growth is about done. Most of the appreciable increase in CPU processing power in the last decade has been in specific workloads with custom extensions and hardware support rather than general compute. The only way they keep Moore's law going is by changing the definition to processing per watt rather than raw work done. The ideas of cost per transistor, cost per performance, and performance per transistor are all dead. In the GPU space, Moore's law in whatever definition you subscribe to is actually too slow to account for the growth we are seeing... but it too will hit a wall in the next decade and level off. AI-specific processors are just barely starting and are largely based on parallel GPU architecture. We will see hardware more specific to AI in the next decade, and that will have its own growth curve. And it will eventually be surpassed by some other processing technology with its own exponential growth curve. But the point is that it isn't vacuum tubes, or transistors, or whatever comes next that matters; processing technology as a whole sits outside the current medium the technology runs on. To double down on the idea of AGI being a useless comparison to human intelligence: the way processors work is just so incredibly different from human brains. We have incredibly efficient hardware with a lot of raw processing power, but there is no nice way to get our slow, highly parallel brains to do singular specialized tasks like crunching math faster than even a 30-year-old computer. The reaction time on a single-threaded task is just too slow to be useful. The Dune mentat, the human computer, is just science fiction. Even pursuing human-emulated AGI is just... not helpful. Humans can already do those tasks at 20 W in a smallish package, so replacing those kinds of human tasks is a pointless and expensive pursuit. What is helpful is AI that does the things humans aren't good at: doing complex math, modeling systems our brains can't deal with, handling rote routine tasks that have more variation than a purely controlled software environment can manage. Driving, flying, assisting, taking the initial customer-service hit at the checkout or returns counter, options planning: these are the things that have value in AI. It is the myth of the heart-head problem, the idea that people have issues because their heart's desires conflict with the path laid out by their supposedly more logical mind, which causes paralysis. Really, for a human to be happy it is all about making that heart choice to do what makes you happy, then using your brain to figure out a way to make it feasible. AI is just like a computer, but instead of doing math more quickly, it will offload that initial option-path-picking portion that we may not be capable of coming up with on our own. It's an options generator, which will give people more paths to work with and make previously unfeasible options possible. It is intelligence lacking direction. It is the consultant allowing individuals to be their own executives.
It is the continuation of the current model of the world, where a few large corporations provide the tools for smaller and smaller individually owned and operated businesses to be more self-sufficient. It further kills the traditional job market where humans drive trucks, do food prep, and work on the factory floor, forcing more people to specialize in niche businesses as standalone operations. It is the Microsoft model: you don't need any single crazy-expensive product as long as you are getting some small portion of every transaction because your product is required to do business. The rub here is that the platform provider is rarely the tech creator. MS and AWS largely run on top of AMD and Intel hardware, but the hardware is the relatively cheap commodity. I think the NVIDIA hype over AI is a bit ridiculous, because they will be the makers of the commodity product with shrinking margins over time, while some other company rakes in the money as the AI provider. I dunno, just my 2 cents on a sick day.
The mistake in thinking is assuming that when we hit the bright future where we are perfectly happy, progress will stop. It's just going to continue even harder.
10:15. I think you're playing pretty fast and loose with the definition of "process information." In most cases when we think about information processing (in what we mean by a "computer," not just in the Shannon sense of processing) we think of some sort of programmable interface, by which we are able to define the nature of the transformation. Certainly, we could talk about how the entire universe is a computer, as it has state and moves between states by some set of rules. But, as far as we know, it lacks any sort of I/O, and thus really wouldn't conform to the notion of an information processing system (in the same classification as a computer). Such systems (computers) expect to take an input and map it to some output. It (the universe) also seems to lack any notion of programmability. What is special about computers, as they have come to be known since the digital computer revolution, is that they can be configured to model any sort of system, and, if a system is Turing Complete, that can happen with simple input modifications. Now clearly, as you have pointed out, there are huge differences between the digital Von Neumann architecture that we all have so much experience with and the architecture of the brain. However, with that said, there are also clearly similarities in the way computers and brains run "software," which differ from the way cups don't. It's interesting that with great difficulty, human brains can be trained to emulate what computer systems do easily (run algorithms), and again with great difficulty, computer systems can be programmed to do what brains do naturally (intuit soft/fuzzy answers from experience). Clearly, since the two systems can emulate one another, though with great difficulty (due to the fundamental differences in architectures), there is a lot in common there; as opposed to your "everything is a computer because it processes information" comment (I don't think your cup is going to emulate either). The ability of two systems to emulate one another expresses a certain level of comparable power (power in a formal-languages sense).
Post-nihilism is the picture we paint when we try to betray the notion that most of the people in this world quietly believe it should burn.
The 90 years of YouTube videos figure is right, but that is what is added every day!
Just one suggestion: make your face bigger, maybe put it at the bottom left. Otherwise, I find your presentation steals too much of the attention because it is so densely packed with information that anticipates your arguments. You could also reveal your key points one after another as you speak by clicking; it's very simple to animate in PowerPoint. Anyway, your content is awesome! You will get famous soon, I hope.
We learned of all these issues from the sci-fi movie industry over the last 70 years. I have not heard anything new here.
In recent years, Ray has come to predict the singularity will be 2029, if not earlier.
Incredible video
15:03. Moore's Law is really a bad point from which to be arguing this. Gordon Moore noticed that the number of transistors which could fit on a chip was doubling every 18 months or so, and this eventually colloquially turned into "computers double in power every two years." However, this is a bad framing, as it's immaterial to what we really care about. If we take a broader view of humanity's information processing capacity, that rate of improvement has been exponential for a VERY LONG TIME now, starting WAY before the computer age. Just because process shrinking (speaking of lithography) is starting to run out of steam does not mean we will not unlock new ways to keep scaling our information processing capacity. Further, just looking at the difference in efficiency between biological neural networks and synthetic neural networks, we're nowhere near the limits of physics for the real goal (building intelligent systems).
Excellent presentation, thank you. After listening to Ray Kurzweil endlessly repeat "exponential growth", it is nice to hear the possibility of sigmoidal growth. My only negative is your comments about population growth and resources. Of course, in ordinary chemical reactions, matter is neither created nor destroyed. Aside from what we have shot into space, we have exactly the same quantity of iron, copper, etc. as we did 1 million years ago. Are we running out of water? Of course not, the oceans are full. What we lack is the will to invest in capital to create (for example) green hydrogen, desalination (with pipelines), renewable energy distribution, mass production of low-cost homes, manufactured food... Why are people so enthused about making everything a moral problem when in fact our problems could (mostly) be solved by massive engineering investments? There is a difference between problems that are insolvable and problems that can be solved by sufficient capital investment and infrastructure hustle. I would have liked it more if you had mentioned what demographers say about the decades AFTER 2050, when population decline is a serious issue. Use all data points, even if they don't support your main point. But I'm probably talking too much about one slide; overall, a fascinating presentation.
On the doubt about running out of data, I'd say the worry points in the wrong direction, because a really smart AI can write its own papers to feed back into the data-scarcity problem. The truth of the universe is ultimately infinitely out of reach; the best example is the data on prime numbers.
It's interesting that you propose a dichotomy of choices such as nihilism and post-nihilism. Personally I feel like both will exist at the same time and the answer to what people see depends on who you ask. The wealthy will see the image on the left and the poor will see the image on the right. As much as I want to believe that AI can be democratically deployed, it's a far cry from how human societies operate. See how quickly the tides turned with OpenAI being open. After Microsoft's $10bln investment and other private investors, OpenAI's non-profit arm will have 2% ownership of it and that is not only very concerning but IMO it foreshadows the future of AI. In a capitalistic society, power concentrates. And AI represents yet another form of power. Just my 2c. I hope I'm wrong.
the BIG LIE is that humans & their computers are NOT already on the verge of AGI
The way automation is going, I think a sign that AGI/ACOG is close is when an AI can do a plumber or construction worker's job in the same manner as them. Embodiment isn't necessary, but I wouldn't ascribe "general" or "human/superhuman intelligence" to any AI that can't even fix my faucet. Whether the AI has to come up with its own physical apparatus or can simply adapt to any body it's given (like directing a human through a headset Manna style), an AI has to at least be capable of embodiment & physical goal achievement to be human-level imo, much less beyond.
You are great, sir.
You mean 90 GPUs to train, right? Because IIRC running the trained models uses far less compute. LLaMA and Alpaca versions are rumored to be able to run ~30B models on single current GPUs. Even the 13B models are said to rival GPT-3.5 and run on single cheap GPUs too, and this is despite quantized models being slower, since current GPUs are unsuited to optimizing for the low-precision compute of quantized models, IIRC. As for the brain, I believe Moravec's 100-million-MIPS estimate for it is likely accurate. Only the largest and most complex of brains can use language, and we are seeing computers with mastery of language at just a few tens of teraflops. As for running out of data, that is only a problem for LLM approaches to increasing intelligence. Humans, even with very sparse sensory input, can attain general intelligence within a few years. Brain-like approaches to AGI are likely to need not even a fraction of the compute or data of current LLMs.
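For context on why quantization makes single-GPU inference plausible: weight memory is roughly parameter count times bytes per parameter. A rough sketch (activation and KV-cache overhead is ignored, and the 24 GB card is an assumed typical consumer GPU):

```python
# Approximate weight-memory footprint of a 30B-parameter model at various precisions.
params = 30e9
bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

for precision, nbytes in bytes_per_param.items():
    gb = params * nbytes / 1e9
    print(f"{precision}: ~{gb:.0f} GB of weights")
# fp16: ~60 GB (multiple GPUs), int8: ~30 GB, int4: ~15 GB (fits a 24 GB consumer card).
```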
Great video!
Please, where can I read or download some AI research papers online?
ACOG is a rifle scope tho...
I really disliked how ChatGPT defends stakeholders, companies, and governments. Whenever I give it a scenario where I was untruthful ten times and then say that this time I am not, it says I am untrustworthy. But when I changed the subject to governments or companies, I got responses that were kind of pro-government, pro-stakeholder, and pro-company, not just the plain logic of the concept that they cannot be trusted.
Vote? Why, when the-mockery (democracy*) has been sabotaged and influenced by money and sponsored irrationality? Think first, then maybe clean up politics so democracy can function like it's supposed to. Great videos, btw.
Wow! Thanks
It's 90 thousand years of video on YT... but that was in 2015...
Love your podcast... but have a closer look at population. It is more sigmoid than exponential right now and will be topping out in the not-so-distant future, while climate change will be getting really bad. I studied cybernetics and systems science and base my view on that.
I agree on 2023 being the year... what happens next?
No, we do not reach the carrying capacity. We just reduce poverty and support family planning and education for women. That is enough to get the birth rate under 2.0 per woman.
28:05 Our chimpanzee ancestors????? No. Humans did not evolve from chimpanzees. Humans and chimpanzees evolved from a common ancestor, which was neither human nor chimpanzee.
House price rises are not caused by thermodynamics! It's bank lending. There is plenty of land; wealthy people are hoarding it.
>Then you can run GPT-3 on a RTX 8090. Alpaca-LoRA or alpaca.cpp can run on consumer hardware and provide similar quality to text-davinci-003. Does this change everything?