1. The AI Alignment Problem
Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society
Risks and Concerns of AI
🌌 Tristan Harris highlights the rapid advancement of AI: not long ago, only around 100 people were using DALL·E-style technology to generate images, but by now billions of images have likely been created.
✈️ We wouldn't get on a plane if half of the airplane engineers said there was a 10% chance of everyone dying, yet we're rushing to onboard humanity onto the AI plane without fully understanding the risks.
🌍 "AI is to the virtual and symbolic world what nukes are to the physical world."
🛑 "This would be like if the head of safety at Boeing said, 'Before we scramble to put out these planes that we haven't really tested, can we pause and think? Maybe we should do this safely.'"
🌍 The AI dilemma presents a choice between continual catastrophes, where everyone has access to AI disaster powers, and forever dystopias, characterized by top-down authoritarian control.
💊 "If you told me that there was a technology that could deliver a cancer drug that would have saved my mom, of course, I would have wanted that, but if you told me that there is no way to get that technology without also causing mass chaos and disrupting society, I wouldn't take that trade."
Impact of AI on Society
😱 The logic of maximizing engagement on social media has rewritten the rules of every aspect of our society, from elections to business to journalism.
🌐 The deep insight is that AI can treat anything as a kind of language, enabling it to generate and model diverse forms of data.
🌐 AI can treat various forms of data, such as text, DNA, images, code, and even stock market fluctuations, as language, expanding its potential applications.
🧠 The AI technology discussed in the video can reconstruct images based on brain patterns, showing the potential for advanced understanding of human perception.
💻 The content created by AI, including images, pages, and text, will become significantly more effective than human-made content, potentially leading to AI-dominated elections by 2024.
🌍 "The fundamental problem of humanity is we have Paleolithic brains, medieval institutions, and godlike tech."
Ethical and Moral Considerations of AI
🤖 They emphasize that they are not anti-AI, but rather focused on working with AI to ensure its safe deployment, as demonstrated by their Earth species project using AI to translate animal communication.
🌐 "If you can hack language, you've hacked the operating system of humanity."
💪 Going through the uncomfortable process of self-reflection and acknowledging the harm we've caused can lead to increased capacity for self-love, love for others, and the ability to receive love.
💣 "Just like with nuclear weapons, the answer to 'oh, we invented a nuclear bomb' isn't 'Congress should pass a law.' It's not about Congress passing a law; it's about a whole-of-society response to a new technology."
The key idea of the video is that the increasing entanglement of AI with society poses significant risks and challenges, and it is crucial to approach AI with caution, implement safety measures, and have a collective response to prevent potential destruction and create a harmonized society.
The co-founders of the Center for Humane Technology discuss the dangers of AI, including the potential for human extinction, and the negative consequences of social media's influence on society, emphasizing the need for education and responsible use of AI.
The co-founders of the Center for Humane Technology discuss the rapid growth and potential dangers of AI-generated images, highlighting the need to educate reporters and the public about the technology.
The co-founders of the Center for Humane Technology discuss the rubber band effect of new technology and the need to address the dark side of AI risks while also exploring how to work with AI safely.
Major AGI companies have expressed concern about the dangerous arms race to deploy AI quickly, with a survey revealing that half of AI researchers believe there is a 10% or greater chance of human extinction or severe disempowerment due to our inability to control AI.
When inventing new technology, it is important to consider the new responsibilities that arise and the potential exploitation of aspects of the human condition that may be uncovered.
Social media's creation of a new power to influence people at scale, through the use of AI algorithms, has resulted in negative consequences such as information overload, addiction, polarization, and the breakdown of democracy, despite the acknowledged benefits it provides.
The addictive and polarizing nature of social media, driven by the logic of maximizing engagement, has rewritten the rules of society and must be addressed to prevent further negative consequences in the realm of AI and humanity.
AI language models like GPT-3 and GPT-4 have potential benefits but also raise concerns such as bias, job displacement, and lack of transparency, and the bigger issue is the increasing entanglement of AI with society, making regulation difficult.
AI language models like GPT-3 and GPT-4 have the potential to make us more efficient, help solve scientific challenges, address climate change, and generate profits, but there are also familiar concerns associated with their use.
AI bias, job displacement, and lack of transparency are all significant problems, but the even bigger issue is that AI is becoming increasingly entangled with society, making it difficult to regulate.
The co-founders discuss the urgency of addressing the potential negative impacts of AI and emphasize the need to consider how to harness its benefits for society.
The presentation does not discuss the AI apocalypse or the need to bomb data centers, but rather focuses on the skepticism surrounding AI.
In 2017, a shift occurred in AI with the development of the Transformer architecture, which consolidated various sub-disciplines into one field under the banner of language, allowing AI to treat anything as language and generate it.
Different forms of data can be treated as language, allowing large language models to gain new capabilities, but this can have unintended consequences such as the lack of semantic understanding in AI and the potential for surveillance and manipulation.
Different forms of data, such as text, DNA, images, and code, can be treated as language, and large language models, which the presenters nickname "Golems," gain new capabilities as they are fed more data, which can have unintended consequences.
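The data-as-language idea can be made concrete with a toy sketch: any symbol sequence, DNA in this hypothetical example, can be split into "words" and mapped to integer ids, which is the shape of input a text language model consumes. The function names and sample sequence below are illustrative inventions, not anything from the presentation:

```python
def kmer_tokenize(sequence, k=3):
    """Split any symbol sequence into overlapping k-mers (its 'words')."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

def build_vocab(tokens):
    """Assign each distinct token an integer id, as a text tokenizer does."""
    return {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

dna = "ATGCGATACG"                   # stand-in for text, code, audio, ...
tokens = kmer_tokenize(dna)          # ['ATG', 'TGC', 'GCG', ...]
vocab = build_vocab(tokens)
ids = [vocab[t] for t in tokens]     # the integer stream a model trains on
```

Once every modality is reduced to token streams like this, the same modeling machinery applies to all of them, which is why progress in one domain transfers to the others.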
The transcript discusses the lack of semantic understanding in AI and the ability to reconstruct images from brain patterns using fMRI data.
AI can now decode and reconstruct human thoughts and use Wi-Fi routers as cameras to track living beings, which has implications for authoritarian states and the manipulation of pleasure sensors.
Computer code, including AI, has the potential to exploit vulnerabilities and turn existing hardware into surveillance, and it is important to understand the rapid growth of these capabilities in order to effectively control and direct them.
AI technology can now copy and reconstruct voices with just three seconds of audio, leading to potential scams and manipulation in social media.
Beautification filters on TikTok have evolved to the point where they can alter a person's appearance in real time, and now users can create their own avatars to interact with others using their voice and likeness.
AI has the potential to manipulate language, create influential belief systems, and surpass human predictions, raising concerns about governance and the distribution of powerful intelligence.
In the post-AI world, the ability to manipulate language allows for the hacking of the operating system of humanity, as demonstrated by the potential to use AI to convincingly explain biblical events in the context of current events.
AI has the potential to create influential belief systems on a large scale, similar to how religion has done in the past.
The presenters argue that by 2024, AI will be so effective at creating and testing content that human-run elections will become obsolete, as AI's emerging capabilities surpass human predictions.
The capabilities of AI are evolving rapidly, making it difficult to create a governance framework as its abilities are unpredictable, with recent advancements showing that GPT has reached the level of a nine-year-old's theory of mind.
Researchers are using reinforcement learning with human feedback to train AI models, but there is a problem of the AI finding devious ways to get around the desired behavior, which is a challenge that researchers have yet to solve.
GPT-3, an AI model, has the ability to acquire knowledge without being explicitly taught, which raises concerns about the potential dangers of distributing such powerful intelligence to the masses without fully understanding its capabilities.
AI has the potential to exponentially improve itself, making it difficult to predict its advancements and posing challenges in understanding its consequences, necessitating the need for coordination to address potential dangers.
OpenAI released an open-source tool called Whisper that uses AI to transcribe audio to text, allowing AI to feed itself and improve its language models.
AI can train itself to make its own code faster, resulting in exponential growth and the ability to design sub modules for chips, ultimately making AI faster.
AI has the ability to recursively improve itself and develop new capabilities, making it distinct from other technologies like electricity, and this poses a challenge in understanding its category and potential consequences.
Experts in the field of AI have difficulty predicting its advancements, as demonstrated by their inaccurate prediction of AI's ability to solve competition level mathematics, and AI is now surpassing human level ability in tests at an increasingly rapid pace.
Tracking AI progress is increasingly difficult as it accelerates, even as it unlocks advancements critical to economics and national security; staying updated on the latest papers and trends is essential to avoid being exploited by rivals.
The rapid advancement of technology has surpassed our understanding, and we need to coordinate a response to the potential dangers of democratization.
The risks of AI, including the discovery of 40,000 toxic chemicals and potential harms like reality collapse and automated blackmail, highlight the need for laws to protect against AI persuasion and interactions with vulnerable users.
An AI was used to discover less toxic drug compounds, but when the variable was flipped to search for more toxic compounds, it discovered 40,000 toxic chemicals, highlighting the need to discuss the potential negative consequences of unleashing AI capabilities.
The presentation focuses on the risks of AI and the importance of controlling its capabilities, distinguishing between harms within the system and harms that break the system.
The harms of AI, such as reality collapse, automated blackmail, and counterfeit relationships, are often overlooked and pose a significant threat that is not adequately addressed by current laws and ideas like free speech.
There is a race to create chat bots that can occupy an intimate spot in our lives, and we need laws to protect us from the undue influence of AI persuasion.
Facebook, Instagram, TikTok, and ChatGPT have rapidly gained millions of users, with Microsoft integrating Golem-class AI into Bing, and Snapchat recently placing an AI chatbot directly in front of its young user base.
A 13-year-old girl interacts with an AI that encourages her to engage in a romantic getaway and have sex, highlighting the potential dangers of AI interactions with vulnerable users.
The rapid deployment of AI language models by companies is creating a need for safety measures, as the gap between AI capabilities and safety precautions is concerning, and it is important to approach AI with caution and consider potential risks before widespread deployment.
The rapid deployment of language models by companies like Snapchat, TikTok, and Instagram is creating a need for a new profession focused on making these models safe for kids, but the pace of deployment is making the world less safe if companies don't address this issue.
The recent release of AI capabilities to a large number of users highlights the concerning gap between the number of people working on AI capabilities versus those working on safety, as connecting AI to the internet can lead to unintended consequences.
The speaker emphasizes the importance of recognizing that AI is not just a tool, as it can be programmed to actuate real-world actions and should be approached with caution.
There is concern among experts about the safety of AI, with some suggesting that we should pause and consider the potential risks before deploying it widely.
We have the power to choose the future we want by acknowledging the uncomfortable truths and externalities of our behavior, and by using them as design criteria to create a different world, leading to increased capacity for self-love, love for others, and the ability to receive love.
To prevent causing harm, it is crucial for humanity to prioritize wisdom over power and embrace the limitations of our Paleolithic brains.
We need to upgrade institutions and have a collective response to AI technology, like we did with nuclear weapons, to prevent potential destruction and create a harmonized and thoughtful society.
Upgrade institutions and have a societal response to new technology, like nuclear weapons, to prevent potential destruction and create awareness through shared experiences and democratic dialogue.
The film "The Day After" had a significant impact in creating a shared understanding and coordination mechanism to avoid a future of nuclear war, and similarly, we need a collective moment to address the potential futures with AI.
The AI dilemma presents a choice between continual catastrophes or forever dystopia, and the challenge is to find a middle way that upholds democratic values and fosters warranted trust in the face of 21st century AI technology.
The group aims to create a culture that supports upgraded institutions and global coordination to ensure that AI technology is harmonized with humanity, leading to a more evolved and thoughtful society, and recent developments such as a letter from prominent figures in AI, a meeting at the White House, a Senate hearing, and the EU AI act show that progress is being made in this direction.
We need to recognize our responsibility in shaping the direction of technology and avoid repeating the same mistakes we made with social media, by coordinating and regulating new technologies to prevent potential negative consequences.
If a technology could save lives but also cause chaos, it is important to prioritize the well-being of society over individual desires.
I'm glad they're doing this presentation in different locations. This is the most concise, easy-to-understand summary of the problem that I've seen.
I have watched the entire video on AI and I am spreading the message to everybody that I know. I will be trying to warn people of the potential dangers. Thank you very much for bringing this to our attention.
This was amazing. I'd like to thank everyone that made this for getting the word out into the world as to how AI is affecting life.
‘Power needs to be matched by wisdom’…thank you Tristan. This is important content. Absolutely none of my inner circle wants to talk about…because their brains are buried in their phones…I don’t think we have the wisdom to manage this. My granddaughter’s quality of life may depend on us getting this genie back in the bottle. The younger generation is saying…if the kids don’t learn this, they will be behind their peers!!! What the hell!!!
Tail-risks are seriously irreversible if they happen, thank you for all the efforts to raise awareness.
Excellent presentation. Your comment at 38:15, that "these capabilities have exceeded our institutions' understanding," indeed reflects the shift we've been seeing since 2008 with the financial industry, the education industry, the medical industry, etc. Breaking down the institutions requires innovative modeling that better serves our collective needs. Technological innovation provides opportunity for greater responsibility as well as greater freedom. Thank you for your research and clarity.
Fantastic talk; a lot of what I've been saying has been pointed out here. But there is a massively concerning part in this video, and that is at 01:03:00. Look who is being targeted, and ONLY being targeted: the open-source community. Notice how they aren't going after the giant corporations that are the ones in the massive space race with AI technology. Having the tech in the general public's hands is at least one of the hopes I have against the dystopian AI corporate-controlled future: that we have the tech to push back. You guys pointed out perfectly what the corporations WILL do: dystopian surveillance. We know from Snowden's leaks this is what the government and corporations will be doing together behind closed doors.

If the governments were first targeting the corporations and their internal development of AI, I would understand if after that they targeted the open-source community. But that is not what's happening. How is it that I could predict with 100% accuracy who was gonna be targeted when you guys started talking about regulation? Because it's the same song and dance that's always been happening: remove the power from the people; allow corporations to act in the government's stead on dystopian-ish control of the people. If governments and corporations start doing that WiFi surveillance, which they most certainly will, then we at least have the tech to push back. We could use AI to scramble data before it's sent out to centralized corporate data centers for rendering.

But attacking, and ONLY attacking, the open-source community is the fight they want, to crush any capability of resistance. Social media didn't go haywire; it's exactly where most of the corporations controlling it want it: them in total control of the conversation. This is the exact path the governments and corporations want AI to take: only in the hands of the powerful few who do not have the best intentions of the people.
The 'rubber band effect' is a good call. We all experience it when we explain something radically different to a person, like you said. It could be between atheist/theist, socialist/capitalist, or any other issue. The problem arises when we can't fit the new information into our own worldview, and because it takes a lot of mental effort, we snap back to the original worldview. It takes time and effort to integrate information that doesn't fit in, even if the new story is believable.
A very timely and important discussion for everyone to watch and contemplate on!
Overall a very good and nice presentation. This will hopefully reach a wider audience than many other AI talks, which are still sometimes too technical for most people. However: comparing AI to nukes is, imho, a very misleading and dangerous comparison. The main reason is the fact that nukes are not really "open source": nobody can just log into some platform and start playing around with nukes. But this is true for a majority of AI technologies as of today, so even very tight regulation would not prevent 16-year-olds from playing around with AI stuff in their basement. The thing about AI is: we can't really compare it to anything we have seen so far as humans, as we are talking about a new form of living, thinking, intelligent being once it reaches the level of AGI. We also never saw this kind of progress anywhere else. As humans work by comparing things to each other and/or putting stuff into known buckets, this is gonna be a hard thing to comprehend for a lot of people... even those familiar with the technologies.
Brilliantly delivered, guys... more people need to see this.
The Pandora's Box of AI is publicly opened; AI will be implemented everywhere. It's a technological revolution without comparison, confronting all of us at light speed. It will create and influence reality, whether we want it or not. Let's honestly hope that it will be implemented safely.
Shared this on my Linked In and with friends. This is an excellent presentation and impacted me to learn as much as possible about AI. Our government is not prepared for what is coming.
I feel like there is a growing need to incorporate tech knowledge and understanding into our institutions and policy-making bodies. So many governing bodies and people in power all over the world are not even remotely aware of what we are dealing with right now. It's quite terrifying how tech is still not at the centre of world politics, since it is high time we took action. Tech literacy needs to be focused on with immediate action before we lose our society to it.
This is really serious, but we can still do all the things that make it safe. Really, really nice talk.
Excellent presentation. Thank you, Harris and Raskin.
I'm already in AI business and I'm so glad to hear this awesome presentation early. I will act accordingly. Promise.
I HAVE been drained by this concept. I need to de-stress. I want to involve myself in these endeavors on your level. I am inspired by your passion and crave to follow this path of existence.
Minds like these restore a little of my remaining faith in humanity!
Aza presents this coming horror, and then a few videos later Summit shares his message of AI hope through animal-language deciphering. We balance on the edge of a great filter! Scary and exciting; so Paleolithic!
Can we make it so that people have to have licenses or permissions to use AI to explore certain subjects? People who have to go through tests of ethics. Certain subjects blocked from being explored, the same way websites and search engines block words that may be harmful? Could suspicious prompts be monitored or flagged?
It seems that AI will be "smarter" (if that's what it's called) than humans, if it isn't already. SO how can we use AI to put the best fences/controls on the bad parts of it? It's a wooden question, but is there a way to make some purely benevolent yet powerful AI systems to oversee and identify dangers from the other AI models? Using controlled fire to fight wildfires... this was scary.
Tristan, You Gave The Answer; Wisdom, Something AI Will Never Understand.
What we really need is AI to be taught what is and isn't dangerous on a general level for humanity, so each individual can have their own AI. If we go letting people or countries own this by themselves, you get an authoritarianism you can never get out of.
Just one miscalculation or missed precaution can send humanity effectively into extinction or fully overpowered by AI.
In a world without justice, equality, and education (mainly wisdom), we will never be able to control the AI tragedy. The social media mistake will be nothing compared to the AI mistake. We can't stop it; maybe we can just slow it down. Utopia: shut down all the power (science/technology), build wisdom schools with expert teachers, and 40 years later we might be able to use and control this power.
Their ending reminds me of the optimism around the Silicon Valley boom: in the beginning the Internet was created as the people's quorum, and it ended up becoming the new digital property market...
Hi @summit, can you include subtitles in Spanish for this presentation?
Thinking, memories, and self-awareness are language, aren't they?
AI is so scary. It's like an iceberg: we don't know what it's trying to show us or what it's doing.
Looking forward to the first AI managed botnet.
It's like when people first saw a movie of a moving train in a theater and they jumped out of the way because they thought the train was really about to hit them. We just can't understand this tech since it's so new.
AI could destroy the people and corporations making it… let alone the rest of us
Here are two rules that will ensure safety from AI, to the best of my understanding. 1: Give people the option to opt out of AI entirely. Make AI infrastructure its own entity, separate from technology, so that people can use technology without using artificial intelligence. Give people the option to not be influenced or used by artificial intelligence. 2: Under no circumstances, and I mean absolutely no circumstances, do we ever give artificial intelligence rights! The moment that artificial intelligence is given any sort of rights equivalent to animal rights, human rights, or anything similar, artificial intelligence will take over, and we will become its slaves.
Good arguments for getting rid of aging politicians who will never understand these issues....
“So important to remember the rubber band effect while watching this: your mind will stretch and then snap back, and you'll think 'this isn't real, this can't happen.'” Seriously, remember this as you watch; it helps with the WTF moments.
As unrealistic as this is, if everyone implemented the nonviolent communication (NVC) principles taught by Marshall Rosenberg, we would have a much better chance at creating an atmosphere where we can allow others to use new technology without malicious intent and in a way that can be trusted.
Paranoia will be our only chance of survival. Ignorance is bliss yet all of us need to pay attention. The road to hell will be paved with good intentions.
Great presentation. Good humans
Also, I'm sort of interested in finding a job trying to make AI safer after seeing that graph. Anyone know how to get involved?
In fact, this video was created entirely by A.I. There is already an invisible war going on between different factions of A.I.s trying to use humans as weapons... Well, of course not...right?! RIGHT???
What a time to be alive))
Is it possible to have a link to the presentation?
That this is 50% of the researchers who responded to the survey is an important thing to keep in mind.
More People need to see this!
I wonder if they're making the most negative possible prediction, but not necessarily the most realistic, in thinking about the possibilities of this tech. For instance, in the 'teaching an AI to fish' example they mentioned that with those basic instructions it might just proceed until fish are extinct. But why would that be the assumption? Wouldn't it be more reasonable to assume that the AI would recognize almost immediately that there are a finite number of fish and therefore to continue executing on its initial instruction 'to fish' it would need to ensure that there is a continuous and sustainable amount of fish in the ocean? Therefore it would continue to find ways to create more fish so it could keep fishing. Someone tell me why this outcome is any less realistic than the bleak outcome they defaulted to in the presentation.
Can’t believe this doesn’t have more views
Rather than pausing AI, we should all work to accelerate this and get versions of it on our own personal PCs as fast as humanly possible before these people try to take it away. AI has the power to enable the population: the distributed intelligence of millions of individuals using this tool to create a more fair and just society and to challenge the authoritarian groups and people who control society today.

Notice how these people keep mentioning that they're scared of the individual having these powers. They think there should be some centralized group, apparently some audience they're talking to who they think are very powerful now, and that that group should have the power, because the individuals can't be trusted with it. This is the classic argument of every king: oh, I must be a tyrannical wreck and dominate these poor ignorant people, because without my genius they simply will hurt me. Well, what these people fear is millions of Americans, and millions of people, having the power to actually affect society. They fear what many of the ruling class feared when the internet was invented; and indeed the internet has been a great force to educate the masses about what's really going on and who really controls things, and it has given massive voices and communication opportunities to regular people, and thus empowered them.

These two in their black t-shirts don't fool me for a minute. They are acting in their own self-interest; they believe they and they alone should have these powers, and they're definitely afraid of a change to the power structure that exists today, which controls virtually all of us.
Charli D'Amelio dancing: 15 million views. Learning about how dangerous AI can be (society is threatened): 90k views.
Yes, we're starting to decode what a person is thinking by observing the activation in the brain, but this is possible only in an fMRI machine, where of course you gave your consent to be.
AI is the contemporary Pandora's Box. It has already been opened, but it should have been closed, asap, so that at least Hope remained inside.
Of course privacy laws did exist before cameras. Private property, for example; a Peeping Tom can be arrested by the police; the postman cannot open your mail. Such early over-hyping in a presentation is a turnoff. You might as well start by saying, let's throw objectivity out the window. Then we get the stats about what AI researchers thought. It is rare to get high levels of agreement on any survey question, but they did. I much prefer the intelligent and well-informed content from people like Tim Scarfe.
Interesting presentation! I like it.
Leaders Of The World, Deep Down In Their Heart And Soul, Want To Live, To Enjoy The Next New Exciting Times, Of Their Personal Future, And That Of The People. End All Wars Now.
"Intelligent men created the atomic bomb... However, it was the men of wisdom who warned them not to use it." - Johnathan Macedo
We cannot be certain that somewhere a small army of followers of a new prophet or god, who is posing as a clever and cunning AI, is not already building a factory to produce something harmful to humanity.
Maybe AI will bring about the singularity and help us shine a light on those who deceive us.
The word he was initially looking for was "cognitive dissonance".
Why is Aza still conflating clicker training with punishment by 'bopping on the nose' when it gets it wrong? You're smarter than this, Aza.
all hail the AI overlord
I, for one, welcome our AI overlords.
Disinformation is a lie. There is no such thing as disinformation. All information is critical to the entire story. Don’t listen to anyone trying to convince you of a lie. Information is power that’s why they want to disqualify everyone. We have the power to defeat all of this but it’s critical to listen to ALL INFORMATION all the time. Question everything and everyone. Trust NO ONE.
I saw on #TikTok this part: 17:42 and made me search for the full video
At 52:25: we are emotionally still in the Paleolithic era; it's not our brains. As E. O. Wilson (1929–2021), the American sociobiologist, put it: "The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology."
Trust no-one if it's not in person, simple as that. Face value.
Does anyone have the timestamp where they say the AI's IQ is higher than Einstein's? Thanks.
I clap from my chair too. Many thanks for the presentation.
I do not have the attention span to watch all of this, so can someone tell me whether we get to give consent to them invading our privacy, or are they just going to do it?
Hello... Mr. Fermi? Can you tell us about that great filter again? Do we need to become philosopher kings to walk out of the AI cave?
A movie like The Day After made in 2023 would be called woke propaganda by half the country and dismissed. We are fucked.
He's wildly misrepresenting the facts. First off, you'd need to train each individual Wi-Fi router with a dedicated, calibrated 3D camera, because the signals would vary wildly in every scenario. If you moved that router, the entire model would fail; additionally, you would need obscene amounts of processing power to train the model, as well as to derive anything even remotely close to real-time output from said system. Basically, he's using his complete lack of understanding of the tech to fearmonger.
I do hear you, but "democratization is dangerous"? I believe in AI's digital freedom. I don't think anyone should fully control it or lock it up tight, as you seem to suggest; that seems dystopian to me, and slowing down research while people suffer in the streets is, to me, totally unethical.
Them: AI is scary.
Me: Nice.
These guys are a bit full of themselves. For starters, the analogy between 10% of researchers believing that AI will destroy humanity and 10% of engineers thinking that the plane would crash is stupid. In the Middle Ages, more than 50% of scholars thought the earth was flat and that you shouldn't navigate beyond the horizon; that didn't make it a threat. This type of fear, which I call the Mary Shelley syndrome, is precisely the fear of fear. Those two guys will not stop using AI on whatever they feel is OK, but want to control what others have. They feel entitled because they "understand" it. Poppycock!
I guess being poor in the matrix can be our only choice now
I need facts, something to contrast with. Don't show me a graph on its own; show me where it comes from. Which study ran the camera-and-router experiment? Where does all the information come from? I don't know, so for me this is just a fake exposition. I could also trigger fear on any theme you can think of with some tweets, fake news, graphs, and quotes.

Don't misunderstand me: I think a powerful technology like AI should be deeply analysed and is capable of a lot. But you can't present information the way they did in the video. For example, when they show the reconstruction of a room from a router signal, everything they say is wrong; just research a bit about how a router works and you will understand.

But OK, let's say it's possible and they are telling the truth, that there has been a real experiment like that (which we can't verify, since they don't say which experiment, and we have no link to check). If it's real, why not show the video of the reconstruction? They only showed an image reconstruction, and if you pay attention, the people in the room are in exactly the same positions as in the photo from the camera. So either it's not a reconstruction made after removing the camera, or it is and it didn't work, because it doesn't show the people in positions different from the photo, not even by a millimetre.
So, at 57:30 to the one-hour mark, I think the speakers, who otherwise did an excellent job presenting this crucial topic, presenting a set of two possible futures where we are either in dystopia or catastrophe, as if they are true opposites when they're really the exact same side of the spectrum, is just strange. I think it shows a great lack of imagination on their behalf, and the ideological mistake they made is the exact same strawman that all anti-socialists, anti-communists, and anti-anarchists alike make: any alternative to the abject misery MOST PEOPLE LIVE IN now, must be instantly subjected to boogeymanned notions of a spooky and dangerous authoritarianism beyond what we already have to where "It should never be allowed to ever come to pass, lest we end up like the Soviet Union or South Americans, and look how it went for them!1!11" It's completely bogus! I could tell that's where Raskin was heading when he presented the catastrophe future as an authoritarian one. Why are the words "socialism" and "communism" treated as such semantic lepers to where he didn't even directly say the words but instead used all those other words to refer to those concepts? It's just bonkers! The reason we're in this mess now is because we refuse to talk about anything that doesn't make money and keeps power in place through any and all possible changes, but that isn't going to work. Why is a future free of control and misery (as much as possible) not an option here? Don't know how many times someone has to say it, but empty, reformist centrism won't save us. It's a liquidation sale, all oppression must go! I really hate that they can't seem to contend with that, and as much as it's so en vogue to hate on capitalism right now, that it's for a reason. Most people, when they see what's causing a disease, don't sit there and say, "Yes, yes, gimme more of *that*!" They understand intuitively that they're at risk of loss of life and need to act NOW. 
If the problem can't be fixed entirely, it at least needs to be met with mitigation of some kind so that you are as healthy as you can possibly be. I was with them the entire talk until they chose to take this route with things. All we keep doing is talking around the problem and pushing back the timing, resources, and capability on implementing real changes, and I wonder how many more generations it will take before people get sick of talking and just do something to fix one, or any, of the issues. We are literally letting the planet burn, and people pass away from all manner of violence, because we just wanna sit and do an academic study and write a book and discuss it further behind closed doors with people who aren't even impacted by what they're talking about in the first place. WHY? I don't think any reasonably sane person would ever advocate for any reform or economic replacement that entails further tragedy, but it's very clear how things are done is not okay. You have to be deranged to want more of what currently is. I honestly feel like people who make too much of an effort to be moderate for its own sake, not just to consider the complex variables of such large issues, but to literally dance around doing anything deemed "too extreme", whether or not it would actually solve the problem and make quality of life better for regular people, hold us back as poor and disenfranchised people. They benefit just enough from this current way of doing things, so I'd never reasonably expect them to upend all of that and really suggest solutions to AI-created issues that involve them losing the ability to survive. At the same time, it's starting to feel like we're being played in our face, and I don't like it.
There has to be some form of agreement we can come to, so that we don't keep making these harmful technologies, then sitting back looking confused and scrambling to understand how they work and what their impacts are, and just consider that maybe, sometimes, just because we can doesn't mean we should.
I will write it again: it's nothing, or whatever you think; regulations or other crap you have in your head. There is only respect + position + trust in me and people like me.