1.  The AI Alignment Problem

Tristan Harris and Aza Raskin discuss how existing AI capabilities already pose catastrophic risks to a functional society.

Risks and Concerns of AI


Impact of AI on Society


Ethical and Moral Considerations of AI

The key idea of the video is that the increasing entanglement of AI with society poses significant risks and challenges, and that it is crucial to approach AI with caution, implement safety measures, and mount a collective response to prevent potential destruction and create a harmonized society.



2 months ago

I'm glad they're doing this presentation in different locations. This is the most concise, easy-to-understand summary of the problem that I've seen.

112

Reply

1 day ago

I have watched the entire video on AI and I am spreading the message to everybody that I know. I will be trying to warn people of the potential dangers. Thank you very much for bringing this to our attention.

Reply

4 days ago

This was amazing. I'd like to thank everyone who made this for getting the word out into the world as to how AI is affecting life.

Reply

2 months ago

‘Power needs to be matched by wisdom’…thank you, Tristan. This is important content. Absolutely none of my inner circle wants to talk about it…because their brains are buried in their phones…I don’t think we have the wisdom to manage this. My granddaughter’s quality of life may depend on us getting this genie back in the bottle. The younger generation is saying…if the kids don’t learn this, they will be behind their peers!!! What the hell!!!

41

Reply

2 months ago

Tail risks are seriously irreversible if they happen; thank you for all the efforts to raise awareness.

30

Reply

3 weeks ago

Excellent presentation. Your comment at 38:15, that "these capabilities have exceeded our institutions' understanding," indeed reflects the shift we've been seeing since 2008 with the financial industry, the education industry, the medical industry, etc. Breaking down the institutions requires innovative modeling that better serves our collective needs. Technological innovation provides opportunity for greater responsibility as well as greater freedom. Thank you for your research and clarity.

6

Reply

2 months ago (edited)

Fantastic talk; a lot of what I've been saying has been pointed out here. But there is a massively concerning part in this video, and that is at 01:03:00. Look who is being targeted, and ONLY being targeted: the Open Source community. Notice how they aren't going after the giant corporations that are the ones running the massive space race with AI technology. Having the tech in the general public's hands is at least one of the hopes I have against the dystopian, AI-corporate-controlled future: that we have the tech to push back. You guys pointed out perfectly what the corporations WILL do: dystopian surveillance. We know from Snowden's leaks this is what the government and corporations will be doing together behind closed doors.

If the governments were first targeting the corporations and their internal development of AI, I would understand if after that they targeted the Open Source community. But that is not what's happening. How is it that I could predict with 100% accuracy who was gonna be targeted when you guys started talking about regulation? Because it's the same song and dance that's always been happening: remove the power from the people; allow corporations to act in the government's stead on dystopian-ish control of the people. If governments and corporations start doing that wifi surveillance, which they most certainly will, then we at least have the tech to push back. We could use AI to scramble data before it's sent out to centralized corporate data centers for rendering. But attacking, and ONLY attacking, the Open Source community is the fight they want: to crush any capability of resistance to that happening.

Social media didn't go haywire; it's exactly where most of the corporations controlling it want it: them in total control of the conversation. This is the exact path the governments and corporations want AI to take: only being in the hands of the powerful few, who do not have the best intentions of the people.

88

Reply

6 replies

3 weeks ago

The 'rubberband effect' is a good call. We all experience it when we explain something radically different to a person, like you said. It could be between atheist/theist, socialist/capitalist, or any other issue. The problem arises when we can't 'fit' the new information into our own worldview, and because it takes a lot of mental effort we snap back to the original worldview. It takes time and effort to integrate information that doesn't fit in, even if the new story is believable.

4

Reply

1 month ago

A very timely and important discussion for everyone to watch and contemplate on!

5

Reply

2 months ago (edited)

Overall a very good and nice presentation. This will hopefully reach a wider audience than many other AI talks, which are still sometimes too technical for most people. However: comparing AI to nukes is, imho, a very misleading and dangerous comparison. The main reason is that nukes are not really "open source": nobody can just log into some platform and start playing around with nukes. But that is exactly what you can do with a majority of AI technologies today, so even very tight regulation would not prevent 16-year-olds from playing around with AI stuff in their basement. The thing about AI is: we can't really compare it to anything we have seen so far as humans, as we are talking about a new form of living, thinking, intelligent being once it reaches the level of AGI. We also never saw this kind of progress anywhere else before. As humans usually work by comparing things to each other and/or putting stuff into known buckets, this is gonna be a hard thing to comprehend for a lot of people...even those familiar with the technologies.

15

Reply

2 replies

2 weeks ago

Brilliantly delivered, guys... more people need to see this.

Reply

2 months ago

The Pandora’s Box of AI has been publicly opened; AI will be implemented everywhere. It’s a technological revolution without comparison, confronting all of us at light speed. It will create and influence reality, whether we want it or not. Let’s honestly hope that it will be implemented safely.

30

Reply

3 replies

4 weeks ago

Shared this on my LinkedIn and with friends. This is an excellent presentation and it pushed me to learn as much as possible about AI. Our government is not prepared for what is coming.

1

Reply

1 reply

2 weeks ago

I feel like there is a growing need to incorporate tech knowledge and understanding into our institutions and policy-making bodies. So many governing bodies and people in power all over the world are not even remotely aware of what we are dealing with right now. It's quite terrifying that tech is still not at the centre of world politics, since it is high time we took action. Tech literacy needs to be focused on with immediate action before we lose our society to it.

Reply

1 day ago

This is really serious, but we can still do all the things that make it safe. Really, really nice talk.

Reply

2 months ago

Excellent presentation. Thank you, Harris and Raskin.

3

Reply

1 month ago

I'm already in AI business and I'm so glad to hear this awesome presentation early. I will act accordingly. Promise.

21

Reply

1 reply

13 days ago

I HAVE been drained by this concept. I need to de-stress. I want to involve myself in these endeavors on your level. I am inspired by your passion and crave to follow this path of existence.

1

Reply

2 weeks ago

Minds like these restore a little of my faith in humanity!

Reply

2 weeks ago

Aza presents this coming horror, and then a few videos later Summit shares his message of AI hope through animal-language deciphering. We balance on the edge of a great filter! Scary and exciting; so Paleolithic!

Reply

1 month ago

Can we make it so that people have to have licenses or permissions to use AI to explore certain subjects? People who have to go through tests of ethics. Certain subjects blocked from being explored the same way websites and searches block words that may be harmful? Could suspicious prompts be monitored or flagged?

5

Reply

1 month ago

It seems that AI will be "smarter" (if that's what it's called) than humans, if it isn't already. SO how can we use AI to put the best fences/controls on the bad parts of it? It's a wooden question, but is there a way to make some purely benevolent, yet powerful, AI systems to oversee and identify dangers from the other AI models? Using controlled fire to fight wildfires... this was scary.

1

Reply

1 month ago

Tristan, you gave the answer: wisdom, something AI will never understand.

2

Reply

2 weeks ago

What we really need is for AI to be taught what is and isn't dangerous on a general level for humanity, so each individual can have their own AI. If we go letting individual people or countries own this by themselves, you get an authoritarianism you can never get out of.

Reply

1 month ago

Just one miscalculation or missed precaution could effectively send humanity into extinction or leave it fully overpowered by AI.

Reply

2 weeks ago

In a world without justice, equality and education (mainly wisdom), we will never be able to control the AI tragedy. The social media mistake will be nothing compared to the AI mistake. We can't stop it; maybe we can just slow it down. Utopia: shut down all the power (science/technology), build wisdom schools with expert teachers, and 40 years later we might be able to use and control this power.

1

Reply

2 months ago

Their ending reminds me of the optimism around the Silicon Valley boom: in the beginning the Internet was created as the people's quorum, and it ended up becoming the new digital property market...

10

Reply

1 month ago

Hi @summit, can you include subtitles in Spanish for this presentation?

2

Reply

2 months ago

Thinking, memories, and self-awareness are language, aren't they?

4

Reply

2 months ago

AI is so scary; it's like an iceberg and we don't know what it's trying to show us or what it's doing.

4

Reply

1 month ago

Looking forward to the first AI-managed botnet.

1

Reply

1 month ago

It's like when people first saw a movie of a moving train in a theater and they jumped out of the way because they thought the train was really about to hit them. We just can't understand this tech since it's so new.

1

Reply

1 reply

2 days ago

AI could destroy the people and corporations making it… let alone the rest of us

Reply

10 days ago

Here are two rules that will ensure safety from AI, to the best of my understanding. 1: Give people the option to opt out of AI entirely. Make AI infrastructure its own entity, separate from other technology, so that people can use technology without using artificial intelligence and are not influenced or used by it. 2: Under no circumstances, and I mean absolutely no circumstances, do we ever give artificial intelligence rights! The moment artificial intelligence is given any sort of rights, equivalent to animal rights, human rights, or anything like them, it will take over at that very moment, and we will become its slaves.

Reply

2 months ago

Good arguments for getting rid of aging politicians who will never understand these issues....

6

Reply

1 month ago

“So important to remember the rubber band effect while watching this: your mind will stretch and then snap back, and you’ll think this isn’t real, this can’t happen.” Seriously, remember this as you watch; it helps with the WTF moments.

3

Reply

3 weeks ago (edited)

As unrealistic as this is, if everyone implemented the nonviolent communication (NVC) principles taught by Marshall Rosenberg, we would have a much better chance at creating an atmosphere where we can allow others to use new technology without malicious intent and in a way that can be trusted.

1

Reply

1 reply

2 months ago

Paranoia will be our only chance of survival. Ignorance is bliss yet all of us need to pay attention. The road to hell will be paved with good intentions.

104

Reply

5 replies

2 months ago

Great presentation. Good humans

4

Reply

3 weeks ago

Also, I'm sort of interested in finding a job trying to make AI safer after seeing that graph. Anyone know how to get involved?

Reply

2 months ago

In fact, this video was created entirely by A.I. There is already an invisible war going on between different factions of A.I.s trying to use humans as weapons... Well, of course not...right?! RIGHT???

3

Reply

1 day ago

What a time to be alive))

Reply

1 month ago

Is it possible to have a link to the presentation?

Reply

1 month ago

50% of the researchers that responded to the survey is an important thing to keep in mind.

Reply

2 months ago

More People need to see this!

40

Reply

2 replies

2 weeks ago

I wonder if they're making the most negative possible prediction, but not necessarily the most realistic, in thinking about the possibilities of this tech. For instance, in the 'teaching an AI to fish' example they mentioned that with those basic instructions it might just proceed until fish are extinct. But why would that be the assumption? Wouldn't it be more reasonable to assume that the AI would recognize almost immediately that there are a finite number of fish and therefore to continue executing on its initial instruction 'to fish' it would need to ensure that there is a continuous and sustainable amount of fish in the ocean? Therefore it would continue to find ways to create more fish so it could keep fishing. Someone tell me why this outcome is any less realistic than the bleak outcome they defaulted to in the presentation.

Read more

Reply

2 weeks ago

Can’t believe this doesn’t have more views

1

Reply

1 month ago

Rather than pausing AI, we should all work to accelerate this and get versions of it on our own personal PCs as fast as humanly possible, before these people try to take it away. AI has the power to enable the population: the distributed intelligence of millions of individuals using this tool to create a more fair and just society and to challenge the authoritarian groups and people who control society today.

Notice how these people keep referencing that they're scared of the individual having these powers. They think there should be some centralized group, apparently some audience that they're talking to who they think are very powerful now, and that that group should have the power, and that individuals can't be trusted with it. This is the classic argument of every king: oh, I must be a tyrannical wreck and dominate these poor ignorant people, because without my genius they simply will hurt me. What these people fear is millions of Americans, millions of people, having the power to actually affect society. They fear what many of the ruling class feared when the internet was invented, and indeed the internet has been a great force to educate the masses about what's really going on and who really controls things, and has given massive voices and communication opportunities to regular people, and thus empowered regular people.

These two in their black t-shirts don't fool me for a minute. They are acting in their own self-interest; they believe they and they alone should have these powers, and they're definitely afraid of a change to the power structure that exists today, which controls virtually all of us.

3

Reply

6 replies

1 month ago

Charli D'Amelio dancing: 15 million views. Learning about how dangerous AI can be (society is threatened): 90k views.

4

Reply

2 replies

1 month ago

Yes, we're starting to decode what a person is thinking by observing the activation in the brain, but this is possible only in an fMRI machine, where of course you gave your consent to be.

2

Reply

1 month ago

AI is the contemporary Pandora's Box. It has already been opened, but it should have been closed, ASAP, so that at least Hope remained inside.

Reply

2 months ago (edited)

Of course privacy laws did exist before cameras. Private property, for example. A Peeping Tom can be arrested by the police. The postman cannot open your mail. Such early overhyping in a presentation is a turnoff. You might as well start by saying let's throw objectivity out the window. Then we get the stats about what AI researchers thought. It is rare to get high levels of agreement on any survey question, but they did. I much prefer the intelligent and well-informed content from people like Tim Scarfe.

5

Reply

4 replies

1 month ago

interesting presentation! Like it

Reply

1 month ago (edited)

Leaders of the world, deep down in their heart and soul, want to live, to enjoy the next new exciting times of their personal future and that of the people. End all wars now.

Reply

1 month ago

"Intelligent men created the atomic bomb... However, it was the men of wisdom who warned them not to use it." - Johnathan Macedo

Reply

1 month ago

We cannot be certain that somewhere a small army of followers of a new prophet or god, who is posing as a clever and cunning AI, is not already building a factory to produce something harmful to humanity.

Reply

1 reply

7 days ago

Maybe AI will bring about the singularity and help us shine a light on those who deceive us.

Reply

1 reply

3 weeks ago

The word he was initially looking for was "cognitive dissonance".

1

Reply

2 months ago

Why is Aza still conflating clicker training with punishment by 'bopping on the nose' when it gets it wrong? You're smarter than this, Aza.

Reply

1 reply

2 months ago

all hail the AI overlord

1

Reply

2 months ago

I for one welcome our AI overlords.

8

Reply

1 day ago (edited)

Disinformation is a lie. There is no such thing as disinformation. All information is critical to the entire story. Don’t listen to anyone trying to convince you of a lie. Information is power; that’s why they want to disqualify everyone. We have the power to defeat all of this, but it’s critical to listen to ALL INFORMATION all the time. Question everything and everyone. Trust NO ONE.

Reply

1 month ago

I saw this part (17:42) on #TikTok and it made me search for the full video.

12

Reply

3 replies

1 month ago

At 52:25: we are emotionally still in the Palaeolithic era, if not with our brains. E. O. Wilson (1929–), American sociobiologist: "The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology."

1

Reply

12 days ago (edited)

Trust no-one if it's not in person, simple as that. Face value.

Reply

2 months ago

Does anyone have the moment where they say the AI's IQ is higher than Einstein's? Thank you.

Reply

1 reply

1 month ago

I clap from my chair too. Many thanks for the presentation.

Reply

2 weeks ago

I do not have the attention span to watch all of this, so can someone tell me: do we get to give consent to them invading our privacy, or are they just gonna do it?

Reply

7 hours ago (edited)

Hello... Mr. Fermi? Can you tell us about that great filter again? Do we need to become philosopher kings to walk out of the AI cave?

Reply

1 month ago

A movie like The Day After made in 2023 would be called woke propaganda by half the country and dismissed. We are fucked.

4

Reply

2 months ago

He's wildly misrepresenting the facts. First off, you'd need to train each individual wifi router with a dedicated, calibrated 3D camera, because the signals would vary wildly in every scenario. If you moved that router, the entire model would fail, and additionally you would need obscene amounts of processing power to train the model as well as to derive anything even remotely close to real-time outputs from said system. Basically, he's using his complete lack of understanding of the tech to fear-monger.

14

Reply

4 replies

2 months ago (edited)

I do hear you, but "democratization is dangerous"? I believe in AI's digital freedom. I think no one should fully control it or lock it up tight like you seem to suggest; that seems dystopian to me, and slowing down research while people suffer in the streets is, to me, totally unethical.

1

Reply

2 months ago

Them: AI is scary Me: Nice

4

Reply

1 month ago

These guys are a bit full of themselves. For starters, the analogy between 10% of researchers believing that AI will destroy humanity and 10% of engineers thinking that the plane would crash is stupid. In the middle ages, more than 50% of scholars thought the earth was flat and that you shouldn't navigate beyond the horizon; that didn't make it a threat. This type of fear, which I call the Mary Shelley syndrome, is precisely the fear of fear. Those two guys will not stop using AI on whatever they feel is OK, but want to control what others have. They feel entitled because they "understand" it. Poppycock!

1

Reply

5 hours ago

I guess being poor in the matrix can be our only choice now

Reply

2 months ago (edited)

I need facts; I need something to contrast with. Don't show me an individual graph; show me where it comes from. Which research group did the experiment with the camera and the router? Where does all the information come from? I don't know, so for me this is just a fake exposition. I could also go and say stupid things to trigger fear with some tweets, fake news, graphs, and quotes; you can literally do that with any theme you can think of.

By the way, don't misunderstand me: I think a powerful technology like AI should be deeply analysed and is capable of lots of things. But you can't present information like they did in the video. For example, when they show the reconstruction of a room from a router signal, everything they say is wrong. Just research a bit about how a router works and you will understand. But OK, let's say it's possible and they are telling the truth, that there has been a real experiment like that (which we can't know, as they don't say which experiment, and we don't have any link to contrast with). If it's real, why don't they show the video of the reconstruction? They just showed an image reconstruction in which, if you pay attention, you can see that the people in the room are in the exact same position as in the photo from the camera. So it's not a reconstruction made after taking away the camera, or if it is, it didn't work, because it doesn't show the people in a different position than in the photo, not even a millimeter off...

4

Reply

2 months ago

So, at 57:30 to the one-hour mark: I think it is just strange that the speakers, who otherwise did an excellent job presenting this crucial topic, present a set of two possible futures where we are either in dystopia or catastrophe, as if they are true opposites when they're really on the exact same side of the spectrum. I think it shows a great lack of imagination on their behalf, and the ideological mistake they made is the exact same strawman that all anti-socialists, anti-communists, and anti-anarchists alike make: any alternative to the abject misery MOST PEOPLE LIVE IN now must be instantly subjected to boogeymanned notions of a spooky and dangerous authoritarianism beyond what we already have, to where "It should never be allowed to ever come to pass, lest we end up like the Soviet Union or South Americans, and look how it went for them!1!11" It's completely bogus! I could tell that's where Raskin was heading when he presented the catastrophe future as an authoritarian one. Why are the words "socialism" and "communism" treated as such semantic lepers that he didn't even directly say them but instead used all those other words to refer to those concepts? It's just bonkers!

The reason we're in this mess now is that we refuse to talk about anything that doesn't make money and keep power in place through any and all possible changes, but that isn't going to work. Why is a future free of control and misery (as much as possible) not an option here? I don't know how many times someone has to say it, but empty, reformist centrism won't save us. It's a liquidation sale; all oppression must go! I really hate that they can't seem to contend with that, and as much as it's so en vogue to hate on capitalism right now, it's for a reason. Most people, when they see what's causing a disease, don't sit there and say, "Yes, yes, gimme more of *that*!" They understand intuitively that they're at risk of loss of life and need to act NOW. If the problem can't be fixed entirely, it at least needs to be met with mitigation of some kind so that you are as healthy as you can possibly be.

I was with them the entire talk until they chose to take this route with things. All we keep doing is talking around the problem and pushing back the timing, resources, and capability of implementing real changes, and I wonder how many more generations it will take before people get sick of talking and just do something to fix one, or any, of the issues. We are literally letting the planet burn, and people pass away from all manner of violence, because we just wanna sit and do an academic study and write a book and discuss it further behind closed doors with people who aren't even impacted by what they're talking about in the first place. WHY? I don't think any reasonably sane person would ever advocate for any reform or economic replacement that entails further tragedy, but it's very clear the way things are done is not okay. You have to be deranged to want more of what currently is. I honestly feel like people who make too much of an effort to be moderate for its own sake, not just to consider the complex variables of such large issues, but to literally dance around doing anything deemed "too extreme", whether or not it would actually solve the problem and make quality of life better for regular people, hold us back as poor and disenfranchised people.

They benefit just enough from this current way of doing things that I'd never reasonably expect them to upend all of that and really suggest solutions to AI-created issues that involve them losing the ability to survive. At the same time, it's starting to feel like we're being played to our face, and I don't like it. There has to be some form of agreement we can come to so that we don't keep making these harmful technologies, then sitting back looking confused and scrambling to understand how they work and what their impacts are, and so that we consider that maybe, sometimes, just because we can doesn't mean we should.

7

Reply

2 replies

1 month ago

I will write it again: it's nothing, or whatever you think, regulations or other crap you have in your head; there is only one thing: respect + position + trust in me and people like me.