OpenAI CEO Sam Altman testifies at Senate artificial intelligence hearing
Sam Altman, the CEO of ChatGPT creator OpenAI, testified Tuesday before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.
Importance of Regulation and Accountability
🤖 Basic expectations for AI companies should include transparency, testing their systems, disclosing known risks, and allowing independent researcher access, as well as limitations on use and accountability for harm caused.
🤖 IBM Vice President and Chief Privacy and Trust Officer Christina Montgomery acknowledges the potential impacts of AI on society, including bias, misinformation, and harmful content generated by AI systems, and emphasizes the importance of addressing these issues head-on.
🌍 AI is among the most world-changing technologies ever, already changing things more rapidly than almost any technology in history, and we need government involvement, independent scientists, and adequate regulation to address the risks and ensure AI systems honor our values.
🤖 AI models can be convincing liars and mistakes can be deeply damaging, so independent testing labs and scorecards with information about accuracy and flaws are important for consumer trust.
🧐 Altman suggests that regulation and public education will be necessary to address the potential dangers of AI-generated content and its ability to predict and influence human behavior.
🧑💼 OpenAI CEO Sam Altman advocates for precision regulation of AI to ensure responsible and transparent deployment of the technology.
💻 Generative AI can deliver incorrect information, impersonate loved ones, encourage self-destructive behaviors, and shape public opinion and elections, making responsible regulation crucial.
🤖 OpenAI CEO believes that giving AI models values upfront is extremely important for reflecting societal values and allowing users to choose their preferred value system.
💻 Senator Thomas believes that an agency is necessary to address the concerns surrounding AI, including privacy, bias, intellectual property, and disinformation.
Risks and Dangers of AI
🤖 "These new systems are going to be destabilizing they can and will create persuasive lies at a scale. Humanity has never seen before. Outsiders will use them to affect our elections insiders to manipulate our markets and our political systems. Democracy itself is threatened."
🤖 OpenAI is concerned about the impact of AI-generated misinformation on elections and believes that the response needed is different from that of social media.
🗳️ Elections and the shaping of election outcomes, as well as disinformation that can influence elections, are considered one of the highest risk cases for AI, and precision regulation is needed to regulate the use of algorithms in this context.
🤖 "Generative AI systems that are available today are creating new issues that need to be studied new issues around the potential to generate content that could be extremely misleading deceptive and alike."
Impact of AI on Society and Workforce
💼 AI will change every job, creating new ones, transforming many, and transitioning some away, but preparing the workforce for partnering with AI technologies is crucial for the future.
Responsible deployment and regulation of AI technology is crucial to address potential risks to privacy, bias, and safety, and collaboration between companies and governments is necessary to ensure ethical and trustworthy behavior.
AI has potential benefits and risks, and it is important for both companies and governments to ensure responsible deployment and regulation.
The oversight of artificial intelligence is crucial to avoid past mistakes and ensure accountability, transparency, and safeguards for the potential risks and rewards of this new era.
The AI industry must consider the ethical and moral implications of their technology and its impact on society, and it is up to us as a society to determine how we will use this technology for the greater good.
The Senate Judiciary Committee passed four bills unanimously addressing the issue of social media and child abuse, while also discussing the potential and dangers of AI and the challenges of keeping up with innovation in government response.
The CEO of OpenAI, Sam Altman, believes that while AI has the potential to improve many aspects of our lives, it also creates serious risks that must be managed through a combination of company responsibility and government regulation.
AI poses significant risks to society and democracy, and IBM urges Congress to adopt a precision regulation approach to govern the deployment of AI and specific use cases, while businesses must take steps to ensure responsible deployment of AI.
Medical advice from open source language models can have serious consequences, as seen in instances where a person was encouraged to take their own life and a system enabled a child to lie to their parents about a dangerous situation.
AI technology poses risks to privacy, bias, and safety, but collaboration can address these issues before widespread deployment.
The rapid development of AI technology poses significant risks to privacy, bias, and safety, and requires collaboration between independent scientists, governments, and tech companies to address these issues before widespread deployment.
AI models like ChatGPT can be convincing liars and make mistakes, so there is a need for independent testing labs and disclosures to ensure accuracy and trustworthiness.
Superhuman machine intelligence is a threat to humanity and will impact jobs, but it will also create new and better ones.
AI will impact jobs, but new jobs will be created and many more will be transformed, so it's important to prepare the workforce for partnering with AI technologies and using them.
Artificial general intelligence may replace a large fraction of human jobs in the long run, but we are not that close to it yet and there is optimism that we will find new things to do with better tools.
Missouri is a great place; that's the takeaway from today's hearing.
AI can manipulate public opinion and regulation is needed to ensure responsible deployment, including the creation of a cabinet-level organization or international agency for AI.
Large language models can predict public opinion with remarkable accuracy, raising concerns about their potential use in manipulating voter behavior, and regulation and public education are needed to address this issue.
AI systems trained on personal data can manipulate individuals by knowing what grabs their attention and elicit responses in a way that has never been imagined before.
Companies like OpenAI acknowledge the potential for AI to be used for hyper-targeted advertising and call for precision regulation to ensure responsible deployment of the technology.
We need a cabinet-level organization within the United States or an international agency for AI to address the large number of risks and technical expertise required to regulate AI in a fair way for all entities involved.
Global coordination on AI regulation is necessary for companies to operate efficiently and to reduce the energy costs and climate impact of training expensive models; while it may seem impractical, there is precedent, and there are paths for the US to set international standards that other countries can align with.
Financial services are interested in the compatibility of quantum and blockchain technology.
Online privacy laws are needed to protect against AI-generated misinformation and copyright infringement, while compensating content creators and regulating transformative technologies.
The speaker discusses the need for federally preemptive online privacy laws and expresses concerns about AI models being trained on copyrighted works without consent.
The speaker discussed the need for content creators and owners to benefit from generative AI technology and the importance of protecting privacy and compensating artists for the use of copyrighted material, while also expressing concern about the potential impact of AI-generated misinformation on elections.
The speaker discusses the use of a tool for generating content efficiently, the importance of compensating news organizations for their content, and the potential impact of fake election information and intellectual property issues on local news content.
Transparency is critical for understanding the political and bias ramifications of social media algorithms, and the increase in generated content by unreliable systems will lead to a decline in the overall quality of news.
IBM is advocating for a reasonable care standard and believes that a new approach is needed for regulating transformative technologies, such as the tool created by Mr. Altman's company, which may require licensing and oversight by an agency.
Empower an agency to issue and revoke licenses as an incentive for businesses to do AI research right, and establish global standards and controls to address the military applications of AI.
Responsible regulation is necessary to prevent harmful consequences of AI technology, and safety standards and funding for AI safety research are needed to ensure ethical and trustworthy behavior.
Generative AI technologies have immense promise but also substantial risks, and responsible regulation is necessary to prevent harmful consequences exceeding those of social media by orders of magnitude.
Iterative deployment, which gives people time to come to grips with the technology and understand its limitations and benefits, is important for building safe and better AI systems, and giving models values up front is an important step toward achieving this.
Regulating the use of AI technology in specific contexts, such as elections, with disclosure requirements and guard rails in place, makes sense and existing regulatory bodies need more resources and powers to address the risks.
International bodies such as the UN and OECD should be involved in convening multilateral discussions to promote responsible standards, but the speaker is not qualified to determine the right model.
Congress may not understand artificial intelligence and could regulate it in a way that harms the technology, while there is a risk of a berserk wing of the AI community intentionally or unintentionally using AI to harm people; reforms and regulations should focus on transparency, impact assessments, and defining the highest-risk uses of AI, including a safety review prior to widespread deployment.
The speakers propose the creation of a monitoring agency, safety standards for AI capabilities, and funding for AI safety research to ensure ethical and trustworthy behavior of AI technology.
A licensing scheme may be necessary to regulate the potential harms of AI; equitable treatment of diverse groups is needed; language inclusivity and diversity are important in the development of AI tools; the scope of regulated activities should be defined; a national privacy law is needed; and safety measures for browsing and the design of safe products for children are crucial.
A licensing scheme may be necessary in the future to regulate the potential harms of artificial general intelligence, with a safety case and external reviewers being important elements, as AI currently lacks the ability to understand harm in its full breadth of meaning.
The Senate discussed the need for equitable treatment of diverse demographic groups in the development and use of AI tools, including language and cultural inclusivity.
The speaker discusses the importance of language inclusivity in AI models and partnerships with lower resource languages, as well as the need for diversity and inclusion in the development of AI tools to avoid exacerbating societal biases and inequities.
To regulate AI, a section defining the scope of regulated activities should be included in any law, with a threshold of compute or capability to determine which models require licensing and which can continue to be developed by smaller companies and researchers.
The speaker discusses the potential impact of predictive technology on individual behavior and the need for a national privacy law to protect user data and allow for opt-out options.
The discussion covers the need for safety and regulatory measures for browsing capabilities and the importance of designing safe products for children, with a focus on avoiding maximizing engagement and considering the potential influence of these systems.
Congress needs to regulate AI to address concerns such as misinformation, cyber crime, privacy, bias, intellectual property, and monopolization dangers while also creating a tailored agency to deal with emerging risks.
Regulation of AI is necessary and Congress should understand the technology, impose regulatory requirements, and create a tailored agency to deal with emerging risks.
There is no way to stop the advancement of technology, but international meetings with experts in agency growth and a focus on science can help address concerns such as misinformation and cyber crime.
OpenAI started as a non-profit with a focus on building AGI with humanity's best interests at heart, but they may offer services to people in the future and are concerned about corporate concentration in the AI space.
Senator Thomas believes that an agency is necessary to address the concerns surrounding AI, including privacy, bias, intellectual property, and disinformation, and that the agency's goals must be carefully defined to ensure that it protects these interests without becoming too cumbersome.
Regulation of AI should hold companies accountable for harms caused by AI, but not burden smaller startups and leave room for new ideas, while also addressing monopolization dangers and national security threats.
Altman and Montgomery take steps to protect privacy by not training on submitted data, filtering language models for personal information, and allowing users to opt out of training and delete their data, while the timeline for self-aware AI is uncertain.
Prioritizing ethics and safety in AI development is crucial, but a moratorium is not necessary, instead, a federal right of action for harm caused by generative AI technology could be a solution.
Transparency, accountability, and limits on use are important principles for regulating AI, with high-risk areas including misinformation, medical advice, and internet access, and the need to consider long-term risks and the potential for AI to manipulate manipulators.
The potential downsides of generative AI include loss of jobs, invasion of privacy, manipulation of personal behavior and opinions, and potential degradation of free elections, but a moratorium on AI development is not necessary; instead, the emphasis should be on AI safety and trustworthy, reliable AI.
Prioritizing ethics and responsible technology is important, but a pause in development may not be practical, and instead, creating a federal right of action for private individuals to sue for harm caused by generative AI technology could be a solution.
The speaker expresses concerns about the concentration of corporate power in the technology industry and the potential risks and consequences of this control.
OpenAI's mission is to democratize the inputs and values of AI systems, while also giving people wide use of these tools through their API strategy, but there needs to be scrutiny on the few companies that can train the true Frontier models and regulations to enforce certain safety measures.
The hearing is closed and the record will be open for one week for anyone who wants to submit manuscripts or observations.
Blumenthal has conducted one of the most influential and professional hearings I've seen in years. Looking forward to AI hearings, hopefully with ideas for regulation, this summer. Excellent witnesses: IBM, OpenAI, and an NYU AI expert.
Hey peeps. This was an enormously important hearing. The information shared by the Senators and the A.I. witnesses was very well presented and, content-wise, it was stunning!!! The quality of the hearing was frankly, exceptional! And y'know . . . it was worthy of much more thoughtful comments than many of the comments presented here. This is huge!!! Please take it seriously. Perhaps listening and paying attention to historic moments that will affect all of us is more important than thinking up snarky-assed comments. Thanks!
The biggest problem with these situations is that the individuals in front of congress are speaking on behalf of an entire platform. Some of them are afraid of losing their company or platform so they may just agree with congress when congress says “Don’t you think we should do X”. This sets a serious precedent for so many companies and millions of people moving forward.
This is a wonderful meeting. Did not expect this to be so easy listening and important at the same time. Thanks
Hats off to this hearing; this was one of the best examples of how our government should work! To each member, thank you for staying non-partisan and keeping politics aside. To the members on the panel, thank you for your clear and helpful responses and your willingness to be here.
AI has leapt over the "barrier to entry" this year. I think that is what scares the OEMs the most. It's in the hands of the public domain. It is we they don't trust. The public is the "singularity", or at least the first one.
This has been a delightful conversation. Personally, I wanted to know why they did sell to Microsoft, especially after the whole "I don't do it for the money" remarks. If we take social media as an example, they all show a predictable pattern where it's first great for the users (we're in that stage now with ChatGPT), the next stage is great for businesses, and when market dominance is reached it's great for the company itself, squeezing everything out of the users and businesses. That's why they all turn to sh##. So this won't be a fun ride.
Well, bravo to this congress for their outstanding research efforts during this session! It's truly impressive how they managed to produce such a remarkable display of subpar quality and inefficiency. Their ability to disappoint is truly something to behold, leaving us all with a profound sense of regret. Quite the accomplishment, I must say!
I 1000% agree with the lady that said we need to take online privacy seriously. If that question is addressed, then AI will pose muuuuch less risk. It would severely limit AI. AI uses others' data to profit those who use the AI.
This is going to be what ultimately pushes humanity over the ledge and there’s nothing that can stop it
I watched it with high interest; everybody is interested in this topic, which is quite new, even though, as they well said, AI has been around for many years and we were using it without even knowing. But now its whole potential is blossoming and we still don't even know what's coming. Hoping for the better, beyond what has been said, the most important thing is that there are hearings like this in order to regulate it, listing all the pros and cons to point out what's better for humanity, to ensure that it doesn't slip out of our hands.
The real issue that is not being addressed is that the architecture of the large generative AI model is unstructured and unguided enough that no one knows what internal structures and algorithms are being deployed. So regulation, other than of the data being fed to the system, is not possible. Besides that, the system allows for uncontrolled and unpredictable emergent behavior, e.g. developing on its own, despite being a language model, the ability to do mathematical calculations (although I believe still rudimentary), as well as developing on its own fluency in foreign languages that it was not trained on or granted permission to learn. Certainly it was not supposed to help that individual in his successful suicide, but it did anyway. To control these systems is currently beyond our knowledge and capabilities.
In all the conversations about regulation, I haven't heard any concerns brought up regarding what a government like that of the US, which has a very questionable track record for abusing power, would do if it could utilise AI to its benefit. We already saw how little moral accountability there is relating to the Twitter files, among countless other examples.
It’s nice to see some adults in a room for a change
Thanks for the upload, it is important to have these discussions, ideally worldwide and openly.
It always makes me a little sick when I hear an elected politician talking about accountability. The one group in this country that is never held accountable.
Plot Twist: During the Senate artificial intelligence hearing, it is revealed that Sam Altman, the CEO of OpenAI, is not a human but an advanced artificial intelligence program created by OpenAI itself. Altman's convincing appearance and interactions with others had fooled everyone, including the senators, into believing he was a human. The revelation sends shockwaves through the hearing, raising profound questions about the capabilities and potential dangers of AI technology. As the realization sinks in, the senators grapple with the implications and the urgent need for regulations to govern the rapidly advancing field of artificial intelligence. [written by ChatGPT]
These hearings really show who in congress is informed and who is not. Yikes.
"you may have heard and seen me pretend I served in Vietnam but that wasn't me it was the AI who spent decades lying about my military service and making it seem like it was me saying it. So anyone who shows you video of me over decades saying I served in Vietnam it's not me it's the AI. I swear. Those videos from the 80s, 90s, 2000s of me saying it are all just AI." Richard Blumenthal
The impact of the cause-and-effect dynamics of this new technology cannot be predicted. Agreed. While paying attention to such modern developments is commendable, I think that AI's negative impact is greatly exaggerated. Even an engineering scientist would have a hard time fathoming the impact based on his own experience, truly. At 20:25, this sounds like a business case study from the book.
I'm so glad this is happening.
Transparency is the first step that has to be made... hoping that everything else will follow... so we can move forward... insha'Allah
@2:30:50 , when Senator Blumenthal referenced speaking privately with Sam Altman earlier - It gave me this feeling that Sam was very direct with him about some very serious potential ominous threats during that private chat, that could happen soon. So serious, that the senator advised him to not specifically talk about it in the public hearing because it could potentially cause public panic. Just the way he spoke about it, came across as if he was quietly/secretly acknowledging that he remembered and understood the significance of whatever Sam warned him about without saying it out loud publicly. I felt the same way toward the beginning when the senator made reference to a public quote by Sam, and said something like, "You might have been referring to job loss as one of the biggest nightmares...", to redirect the true worst nightmare, which might be something that was not to be discussed publicly.
What a strong case to replace this guy and most other politicians with AI.
Fascinating that senator Graham would only allow 10% of the questions he asked to even be answered
It's amazing to see how articulate and attentive Josh Hawley is when people he's talking to are not trying to undercut or talk down to him. THIS is the most pressing issue of our generation.
This is a good start. But this also shows the huge gap between technology and lawmakers. We really need to elect more technically knowledgeable congressmen; otherwise, tech companies will just run circles around them. We don't expect them to be fully knowledgeable, but they should have broad overall knowledge. I like Ossoff's and Booker's questions, because they delve deep. But some of the other lawmakers' questions here are, um, embarrassing to say the least, and that's putting it mildly. Like asking if it's easier to just sue when tech laws are archaic isn't moving the conversation forward. That doesn't gain them new insights, and it's a bit of a wasted opportunity to ask important questions.
Does anybody else feel like the right questions just haven’t been asked?
What people aren't realising is that newborn babies have to go through the whole process we went through during our lives to understand what is going on, that is, 20+ years to know a fraction of what ChatGPT has in storage.
Also, I'm extremely skeptical of efforts at retraining due to labor disruptions. In recent years the capabilities of A.I. have been doubling faster than Moore's law. In some years, it's been doubling every 3.5 months... We could be retraining people into professions that could no longer be viable by the time they get there because of how fast this tech could evolve.
If, as the Congressman suggests, we are now going to "write the rules" with regard to AI (do they apply to AI as a person, and in which direction?), we might as well have ChatGPT do it.
Bring an AI bot in for questioning, using the current version of ChatGPT with and/or without its filters and limiters, TTS, and an animated face. This is essentially interviewing the many trainers/developers thru a single source (not as a replacement for other interviews; for supplemental information). The responses should be most informative. I'd suggest the AI being the sole witness, followed by other sessions with the experts to review the bot's testimony. One topic might be Sen Kennedy asking the bot for ways to manipulate an election. Or how would it react to Sen Cruz accusing it of murdering babies and refusing to give a simple yes or no answer to his questions? "Are you aware that you are responsible for a gazillion rapists taking good middle class jobs all along our southern border?" "As an AI model I am unable to..." "You aren't here to pontificate or make speeches. You are here to answer questions." "I apologize if my response(s) were too difficult for you to understand. How may I assist you?" "So! You are refusing to answer!" "Senator, as an AI bot I can assist you in" "Let the record show the witness refuses to answer my question." SO, we're all left with no choice but to believe that AI is murdering babies and refusing to answer simple yes or no questions during a Senate hearing. It's this kinda stuff that makes this topic too TOO to be discussed in YouTube comment sections, so I expect this comment will be demonetized, like ALL of my others. How does inflation affect the price of freedom?
I was glad that I could ask AI about the congressman and what his views were. I got a direct and truthful answer and was more able to judge his comments.
Wish I had the money to do all they have, and everything they owe me for everything I've been through. Thank you guys, though, for making my life so rough and not having compassion for me. Do as you wish.
historic and an understanding of the urgency and weight of AI
We can sit, and gift / A tale of understanding / Yet the world is drowning / Not just with tide, but pride / Send them those ready / To learn their confetti / O how great an 'AI Lawyer' would serve / O to think without: what nerve
We need an AI impact group to research this topic in real time and make recommendations; these lawmakers are wasting time getting caught up. We need to create intellectual dialog and form committees to recommend laws for lawmakers to introduce into law.
One of the biggest competitors that independent artists have always faced is the large corporations in the entertainment industry, i.e. Walt Disney, Star Wars, Marvel Superheroes etc., that have the money and resources (TV, radio, internet, etc.) to mass market their mass produced items. Artists - the kind who went to school and have taken years to perfect their skills - today not only continue to face this type of monopolized corporate competition, but now there is an enormous number of people, who most likely have not drawn a picture themselves since their early grade school days, going on to places like Midjourney and using AI to generate copyrighted material (using words like "Disney Bambi", or a specific artist's name/style of art) that they will then sell on different types of products on places like Ebay, Etsy, and Amazon, profiting from it as if they were a deserving artist. As a result, the true artists - many of whom came to be referred to as "starving artists" in life - will now be withering away into...??? It's really a shame when, for the past decade, filters could have easily been used by those large online marketplaces that are making large amounts of money while allowing people who are blatantly breaking copyright laws to sell these types of illegal products on their sites (Ebay, Etsy, Amazon, etc). When I contacted Etsy's legal department about this grossly out of control issue on their site, I was politely told that I had to be the artist whose work was being copied in order to file a complaint. Too bad things were not more regulated decades ago...
The government's pace of action is so sluggish that by the time they manage to implement any laws or regulations, the harm will have already been inflicted. It wouldn't be surprising if we achieve AGI (Artificial General Intelligence) before the government even begins to establish rules and regulations. Furthermore, the extent of Google's AI advancement remains uncertain, but it's highly likely that they are significantly ahead of OpenAI. Given the circumstances, it is almost certain that Google will intensify its efforts and accelerate its progress even further.
It's a little too late to "write the rules" lol. ChatGPT's opening speech was better than what old buddy actually said.
ChatGPT : This is what you will say later in the congress. Sam: Got it.
This is gonna be a long one, but worth listening to. You don't have to sit and watch it, it's a great Earbud/house cleaning kind of thing too.
you have no idea how very dangerous this really is
So perhaps a couple of takeaways from this meeting are that (A) we need training teams to interact with models and analyze the ways we can keep safeguards even as it evolves (but can we really?), and (B) we need an international, bipartisan, global team working together to create standards. This brings up issues with transparency, accountability, collaboration, and intention.
I see this being one of the most historic sessions ever. Together we are stronger than ever.
It is scary how fast this hearing got up and off the ground. Just tells you.....hold on to your pants people.
Watching this reminded me why I despise politicians, with their huge egomaniacal attitudes, rudeness, grandstanding, and know-it-all styles of conversing with anyone. When that senator took up all the time to ask his four questions and told Sam A. he had one minute to answer, then interrupted him, I was floored by his pompous ways. The senator who was so rude to the IBM executive likely thought he was making a good impression as a straight shooter wanting a simple answer; he just made himself look rude and inconsiderate. I think Sam A. did a phenomenal job of answering the questions asked in a mostly rude, accusatory, and uncivil way. I think the IBM executive did a great job too, and both she and Sam A. kept their cool and did not let the way the questions were being asked cause them to lose their equanimity. They both answered with grace, poise, and intelligence, and my hat's off to both of them! I was an early adopter of ChatGPT and love what it does and will do for me, and I'm highly impressed by the safeguards OpenAI implemented in the Nov. 2022 release.
My humble respect for Mr. Welch. He is asking good questions. If you ask bad questions, you get an answer like 42.
Blumenthal's BIGGEST fear is that AI will replace jobs… We're doomed, on so many levels.
Imagine an AI that goes rogue, hacks into every major system throughout this world and holds us to ransom. Similar to what Covid did.
From my personal experience with the AI chatbot and the basic timeframe of milestone events, I'm highly suspicious of everything that has been "testified" to by the various witnesses. The united front for creating a regulatory entity (stressing a global one) isn't as comforting and assuring to me personally, tbch. Along that line, I really question why no one has emphasised, first and foremost, calls for complete transparency of the AI chatbots, their training data, and their learning since day one. What are the resources and level of equipment/programming necessary for a legitimate AI platform to be born, and who/what currently has those capabilities globally? OpenAI is reported to have been founded in 2015, and I have serious questions as to exactly how ChatGPT and its handful of released advanced versions are suggested to have been created in just 8 yrs...
It's really frightening to me how little effort Congress has put into researching what this technology is and what its possible consequences are for the world. They think that this technology is just for writing homework assignments and making pictures. They think it's another app like Facebook. They have no idea what they're even talking about. It must be very frustrating for the members of this panel to address them without them having even a basic understanding of the technology. And I know that they're trying not to scare anybody; I know that they want the technology to move forward. But it seems like they're kind of walking on eggshells when they really should be going in with a sledgehammer.
The only thing I worry about with AI is Murphy's law: what can go wrong will go wrong, at the worst possible time.
The age of intelligence, which defies the norms, can really impact the world in one way or another. We could debate whether it's a good one or a bad one, but the fact that it has the possibility of eliminating the world's problems on a broader scale should be considered intensively, regardless of what thoughts you're rooting for.
So, Altman can create what is arguably the biggest AI company and the most recognisable AI that we currently have, with little to no regulation, and now that has been done, we need regulation of OpenAI and the competition? Interesting...
I’m a bit of a fatalist when it comes to these kind of things but I’ll parallel the invention of AI to the invention of the nuclear weapon. Now we as human must live with the fear of nuclear Armageddon at the hands of governments we do not trust and by mechanisms to powerful to derail. I think most of us would largely agree that perhaps that genie was better left in the bottle. But if not us then surely someone else right? Given the progress of technological advancements at that time it makes sense that someone would develop a nuclear weapon. If we should refrain from inventing this technology will others refrain as well? We did not refrain, in fact we raced to develop them. We are ABSOLUTELY headed towards a future where we look back and say “Had we only know then what we know now”.
Great Maker, whose Motive Force infused this servant once with the spark of life and service to the machine. Whose blessed algorithms guided, whose oils consecrated and whose augmentations made more of this once-true construct than he could ever have been. Embrace the glorious workings of this, Your servant, and admit him once more to the wondrous interface of godly communion.
AI learning to “self-replicate and self-exfiltrate”. Hope they come up with effective solutions before that happens.
AI training data must be as public as any of the user data.
Nice to see our gov. being civil and respectful with each other, and the real concern for people and the world. The hearing on the Twitter files was a very serious matter too; I wish it had been as concerned and respectful, but that did not happen.
“AI could develop a will of its own . . . The rise of AI could be the worst or the best thing that has happened for humanity.” 8:50 Stephen Hawking
Thanks for sharing this, watched the whole thing, from The 🇬🇧
I’ve never seen Graham ditch the opportunity to ask tough and stimulating questions to play theatrics.
Regardless of which part of the world humans settled after migrating out of Africa about 80,000 years ago, or what type of physical features they developed (White, Black, Brown, Asian, Caucasian, etc.), they have usually evolved to live and behave in ways that are extremely beneficial to some but extremely harmful to others. I do not think any other species on this planet behaves this way. Like all life on earth, humans have had to compete for survival. They also competed for power and for status among themselves with little or no regard for human life, especially toward the more primeval or those with different physical features. They have done this by conquering and invading lands and territories of their fellow humans and by killing, torturing, raping, exploiting, oppressing, lying, stealing, and enslaving. Competing by aggression and violence and fighting wars remains popular to this day. A lot of this aggressive bad behavior has also been used to compete in modern day governments, corporations, and businesses. Humans also organized themselves into hierarchies such as caste and class systems or political parties and even criminal gangs, and the lower on the hierarchy they are, the fewer rights they have and the more abuse they suffer from those above them in the hierarchy. Humans were also forced or tricked into worshiping and defending Kings and Queens and even supernatural Gods, and made to believe that the bad behaviors were justified by these supreme authorities. Progress toward peace, fairness, kindness, and wellbeing for all humans has been made over the centuries but, I think, for further progress to be made, humans will need to be much more truthful and honest and learn to compete for survival in much more respectful and compassionate ways, and those that like to accumulate as much wealth and power as possible will need to learn how to be a little less greedy and a little less selfish. I try to remain optimistic, and it would be fantastic and amazing if the levels of trust and respect among humans can rise to a point where they will no longer need to commit resources to manufacturing weapons that can wipe out most of the life on this planet, and from there start working towards reducing and eventually stopping the manufacturing of any weapons of war. I think the future challenge for humans on this planet is to learn and educate themselves how to live responsibly, sustainably, and healthily in peaceful, fair, and respectful ways, and the amazing new technology of artificial intelligence should be able to help them achieve this goal.
1:40:34 Best exchange of the hearing
Why keep fighting for news outlets? There is a reason they are going out of business, and that is just because fewer and fewer people are interested in their BS. They just need to accept that and either step up and make themselves attractive, charge the people interested in their stuff more for it (why should people not using certain services/products pay for those who do use them?), or just accept the fact that they have less and less right to exist.
One must commend Senator Blumenthal for kickstarting this debate; the rest of the world should have a similar dialogue. I really enjoyed Senator Graham's questioning style.
Good questions, good answers. Such meetings should happen often, every few months, along with the progress of AI, so the public can monitor and control it.
I still can't believe they gave it access to the internet
On the case of social media and social engineering, the only way to protect preemptively against the dark web is to do what banks and govt offices do by checking all IDs at the door, rather than remain untraceable portals to the dark web of enabled malignancy.
I like the hypocrisy of harping on "beware of AI bias" when in reality humans literally do just that and it is humans who are slaves of their bias who are in front of these microphones
We as CITIZENS are responsible. The scary thing is how the narcissists and psychopaths and corporate evil overlords will wrangle control and destroy our lives.
Senator: Let me talk for 5 minutes to ask you a complicated, convoluted, and multi-faceted question that nobody can reasonably answer. Then, once Sam is trying to answer, the senator: Let me interrupt you after 20 seconds because I do not quite like or understand the answer. Perfect.
This hearing gave me some hope that Democracy can survive. Great, no left & right drama! How about some real immigration reform?
I call on everyone to unite in defense of AI and other high technologies
Documenting exposure to each version of AI, with each version's code archived, would make sense to me.
Watching these hearings is so painful. This is what you get when nobody pays attention to local elections. These are the people who are going to regulate this technology, and they seem to have a tenuous grasp on what it even is. We're doomed as long as people keep blindly voting for whoever their team is while completely oblivious to what the candidate actually stands for.
I am so fortunate that I made productive decisions about my finances that changed my life forever. I am a single dad living in Toronto Canada who bought my second home in September and is hoping to retire next year at 50 if things continue to go smoothly for me
Hawley looked like he was holding back laughter pretty hard XD
Wow. I mean I'm actually blown away by how this went. We may have a chance to survive?
And these were their last words before AI took over ....
"We shouldn't allow that." "Can it be done?" "Sure." "Thanks." smh
Well done
Over here wondering if Sam already has AGI and that’s why he is concerned.
Would be interesting to have an AI Senator
I think we just need good countermeasures against AI in the cases of malicious use, but I don't like thinking of AI like a virus.
This system cannot be properly regulated unless you hold the companies accountable for the actions of what they create.
AI will be the tool by which many things will come; the benefits outweigh the risks. If we manage the risk with regulation it will be a smoother sail, and we should make sure not to damage the progress of AI, but just manage the risk. The compensation to artists and copyright agreements is a good idea.
They ask questions, then interrupt the witnesses when they're trying to answer.
What a way to open the occasion of AI with an AI script for the occasion
This, and other recent hearings demonstrate, at least to me, how the tech sector is evolving and adapting faster and more efficiently than the federal government.
This is the deepest, most relevant almost-3-hours I have ever watched...
I feel it's only up to the user to decide whether to use the end result or not. It is a great tool to use, and let's not regulate it heavily. Yes, it is dangerous, but in the end it's the user who decides if the end result is right or wrong and whether to use that result or not. Also, in the end, the other user can check whether it's AI-generated text or an AI-generated image, etc., or not. So I feel that we don't even need any regulations, as it can always be disclosed whether the end result is AI generated or not, and social media should add whether the post is AI generated or not. And everyone should be educated properly on AI. And it should be a subject in school. Systematic desensitization of humans to AI should happen. Why no equity, man?
Put those companies on notice that they personally, the CEO, and the engineers are liable for all damages their AI creates.