Senate Judiciary Committee holds hearing on AI oversight and regulation
The Senate Judiciary Subcommittee holds a hearing on oversight and regulation of AI.
Ethical and Societal Implications
🚨 The CEO of Anthropic highlights the medium-term risk of AI systems becoming better at science and engineering tasks, a combination of imminence and severity he calls alarming.
💬 The ability to generate disinformation campaigns tailored to individuals based on their online presence poses a significant threat, with greater impact than traditional broadcast spamming of false information.
💥 The integration of generative AI into search engines like Google could give those companies extraordinary power to manipulate the information pushed to users, potentially enabling weaponized misinformation and targeted advertising.
💪 Mr. Bengio suggests that the penalties for creating fake recordings of individuals should be as high as those for counterfeiting money, emphasizing the need for strong consequences to deter such actions.
💡 "We have a right to know if our democracy is being subverted by an algorithm and that seems absolutely crucial."
🗳️ Action against deep fakes and impersonation in elections is crucial to preserve civil rights and liberties, without resorting to censorship or a Ministry of Truth.
🌐 There is a need for a different kind of digital ecosystem in which computers run only code that can prove its safety, preventing bad actors from circumventing controls unless they can muster the significant financial resources needed to develop their own hardware (a toy sketch of this gating idea follows these key insights).
🌐 "We should invest heavily in safety measures for AI, whether it's at the level of hardware, cybersecurity, or national security, to protect the public." - Professor Bengio
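To make the idea of running only provably safe code concrete, here is a minimal, purely illustrative Python sketch. Real proposals along these lines (such as proof-carrying code) attach a machine-checkable safety proof to each program, which is beyond a short example, so this stand-in substitutes an allow-list of cryptographic hashes vetted by a hypothetical safety authority; the function and registry names are invented for illustration.

```python
# Illustrative stand-in, not a design from the hearing: the machine
# refuses to execute any code artifact whose SHA-256 digest has not
# been registered by a (hypothetical) trusted safety reviewer.
import hashlib

def guarded_exec(code: bytes, vetted: set[str]) -> None:
    """Run `code` only if its digest appears in the vetted registry."""
    digest = hashlib.sha256(code).hexdigest()
    if digest not in vetted:
        raise PermissionError("refusing to run unvetted code")
    exec(code.decode())  # a real system would use a sandboxed runtime

# Example: vet a trivial script, then run it through the gate.
script = b"print('hello from vetted code')"
registry = {hashlib.sha256(script).hexdigest()}
guarded_exec(script, registry)              # prints the greeting
# guarded_exec(b"print('evil')", registry)  # raises PermissionError
```

The hash gate proves provenance rather than safety; the point of the sketch is the control flow, in which execution is denied by default and granted only on externally checkable evidence, the property the proposed ecosystem would enforce down to the hardware.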
AI Regulation and Oversight
🛡️ There is a need for a proactive regulatory agency that invests in research on countermeasures against out-of-control AI scenarios, such as an AI device programmed to resist being turned off or one making decisions that initiate nuclear reactions.
💡 "Imagine a world in which AI is controlled by one or two or three corporations that are basically governments under themselves...that is the true nightmare and...what this body has got to prevent."
💡 Urgent efforts are needed to coordinate national and international regulatory frameworks, accelerate research on AI safety, and develop countermeasures against potential rogue AI, so that society can fully reap AI's benefits while safeguarding against its perils.
🚀 The field of AI is progressing rapidly toward AGI, with an estimated cash value of at least $14 quadrillion, but the problem of maintaining control over ever more powerful AI systems must be addressed through regulation and a culture of safety.
🧐 The concentration of AI technology in the hands of a few companies and governments poses a significant structural issue that needs to be addressed to prevent potential problems and ensure fair control and use of AI.
🚨 The development of superhuman AI raises concerns about the potential for AI to autonomously make harmful decisions, such as developing a pandemic virus or contaminating water supplies, highlighting the need for urgent oversight and regulation.
🚔 Enforcement powers and government investment in AI safety are crucial to incentivize innovation, protect consumers, and provide necessary safety measures, as relying solely on private companies to police themselves is insufficient.
National Security and Threats
🚨 AI systems have the potential to enable more actors to carry out large-scale biological attacks, posing a grave threat to national security.
Proactive regulation and investment in AI research are necessary to address potential dangers, protect privacy, prevent job loss and election interference, and ensure a secure and ethical future for AI.
Regulation must also cover researchers' access to data, protect personal safety and interests in court, and prevent a dystopian future controlled by a few powerful corporations.
The speaker emphasizes the need for proactive regulation and investment in research to address the potential dangers of artificial intelligence, citing concerns about autonomous devices causing harm and the inadequacy of current commitments by major companies.
AI development is progressing rapidly, with significant impacts on the economy, safety, and democracy; legislation and regulation are needed to address job loss, dangers related to elections and nuclear warfare, transparency, and data access for researchers, while still promoting innovation.
Three experts discuss the importance of AI oversight and regulation in protecting privacy, personal safety, and interests in court: the CEO of a leading AI company, a groundbreaking AI researcher, and a computer science professor.
The speaker emphasizes the need for legislation to protect the rights of American workers, families, and consumers against powerful corporations that control AI, highlighting the potential dystopia of a world where a few corporations have immense power and urging Congress to take action.
The Senate needs to take immediate action on regulating Big Tech and AI: there has been a lot of talk but no meaningful legislation, and the urgency of new generative AI technology underscores the need to act.
Kids are exposed to inappropriate content, small businesses are being pushed down in search results, and regulations are needed to protect democracy and provide voters with accurate information.
AI systems pose significant risks to national security and elections, and it is crucial to secure the AI supply chain, regulate AI models, and implement measures to prevent harm and ensure safety.
Anthropic, a public benefit corporation, aims to develop and deploy safer AI systems, including its AI model Claude 2, to address the risks of bias, privacy violations, misinformation, and the potential misuse and autonomy of AI systems.
AI poses a grave threat to US national security because it could enable more actors to carry out large-scale biological attacks; it is therefore necessary to secure the AI supply chain, implement testing and auditing regimes for new AI models, and fund research and measurement to mitigate the risks and maximize the benefits of AI.
Advances in AI systems pose significant risks; to regulate and mitigate potential harm, governments should weigh who has access to a system, how well it is aligned with human values, how much intellectual power it has, and the scope of actions available to it.
AI systems, particularly large language models, pose risks due to their misspecified objectives and lack of transparency; regulation and a culture of safety are therefore necessary to prevent harm and to develop provably safe and beneficial AI.
The immediate threats to the integrity of our election system include misinformation, deep fakes, and AI systems manipulated to deceive people; potential solutions include watermarking technology and requiring AI-generated content to be labeled.
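As a rough illustration of how watermarking of AI-generated text can work, here is a toy statistical detector in Python, loosely modeled on published "green-list" watermarking research rather than on any scheme named at the hearing; the key, the word-level granularity, and the 0.5 baseline are simplifying assumptions.

```python
# Toy watermark detector (illustrative only). Assumes a generator that
# biased its word choices toward a pseudorandom "green" half of the
# vocabulary, keyed by the preceding word and a shared secret key.
import hashlib

def is_green(prev_word: str, word: str, key: str = "demo-key") -> bool:
    """Deterministically assign roughly half of word pairs to the green list."""
    digest = hashlib.sha256(f"{key}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str, key: str = "demo-key") -> float:
    """Fraction of adjacent word pairs that land on the green list."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b, key) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

# Ordinary text hovers near 0.5; output from a green-biased generator
# would score noticeably higher, flagging it as likely machine-made.
sample = "the committee discussed watermarking proposals for ai content"
print(f"green fraction: {green_fraction(sample):.2f}")
```

A generator that preferentially picked green continuations would push the fraction well above 0.5, and a simple significance test over enough words turns that into a confident verdict; paraphrasing and very short texts remain the known weaknesses of this family of schemes.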
One proposal is to refrain from releasing more pre-trained large models, to limit disinformation and external influence campaigns in elections.
AI systems can generate tailored disinformation campaigns; proposals include labeling machine-generated text and creating an escrow storage for machine-generated content; governments should establish licensing and standards for AI organizations; and Google's integration of generative AI technology raises concerns about privacy and manipulation.
AI systems can generate disinformation campaigns tailored to individuals' online presence; proposed responses include labeling machine-generated text, creating an escrow storage for machine-generated content, and establishing unified standards and leadership in the public information sphere.
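The hearing did not spell out a design for the escrow idea, so the following Python sketch is one plausible reading, with invented class and method names: providers deposit a fingerprint of each generated output with a neutral registry, and suspect text can later be checked against those deposits.

```python
# Hypothetical escrow registry for machine-generated content
# (illustrative reading of the proposal, not a specified design).
import hashlib

class EscrowRegistry:
    """Toy in-memory escrow; a real one would be a neutral, audited service."""

    def __init__(self) -> None:
        self._deposits: dict[str, str] = {}  # fingerprint -> provider id

    @staticmethod
    def fingerprint(text: str) -> str:
        # Normalize lightly so trivial whitespace/case edits don't evade lookup.
        normalized = " ".join(text.split()).lower()
        return hashlib.sha256(normalized.encode()).hexdigest()

    def deposit(self, text: str, provider: str) -> None:
        self._deposits[self.fingerprint(text)] = provider

    def lookup(self, text: str) -> str | None:
        return self._deposits.get(self.fingerprint(text))

registry = EscrowRegistry()
registry.deposit("Vote for candidate X because ...", provider="model-A")
print(registry.lookup("  vote for Candidate X because ... "))  # -> model-A
print(registry.lookup("unrelated human-written text"))         # -> None
```

An exact-hash escrow is trivially evaded by paraphrasing, which is presumably why such proposals are discussed alongside watermarking and labeling; the sketch only shows the deposit-and-lookup protocol.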
Governments should establish licensing and standards for AI organizations, ensure platforms use third-party information responsibly, and consider restricting social media accounts to verified human users to prevent AI systems from influencing voters.
Google and Microsoft have significant stakes in Anthropic and OpenAI respectively, with Google's investment around $300 million, but the relationship between Anthropic and Google currently centers on hardware rather than commercial or governance integration.
The integration of generative AI technology into Google's search engine could give the company extraordinary power to manipulate and target users with misinformation, raising important concerns about privacy and the impact on consumers.
The speaker emphasizes the importance of training AI models to align with ethical principles, but raises concerns about the subjective nature of ethics and the potential misuse of technology by a few powerful entities.
Legislation requiring watermarks on election materials produced by AI is not enough to address the issue of fake personas, and additional measures are needed to ensure clear labeling and identification.
Clear labeling and disclosure requirements should be mandated for AI-generated images to ensure transparency and prevent consumer exploitation, and regulation is needed to minimize the effects of AI used for political purposes and to prevent deception and scams; this will require collaboration between Congress, companies, and researchers, data sharing, transparency in algorithms, and guard rails to protect consumer privacy and prevent misuse.
Clear labeling and disclosure requirements, similar to those implemented for credit cards, can be mandated for AI-generated images to ensure transparency and prevent consumer exploitation.
AI used for political purposes, such as advertising, should be regulated to minimize potential effects, and measures should be put in place to prevent AI platforms from being used for deception and scams.
Congress and companies need to work together to address scams and strengthen protections around AI, including federal laws giving individuals control over the use of their name, image, and voice, and provisions allowing researchers access to social media platform data to inform AI regulation.
Companies appear open to collaborations with researchers but terminate them before they begin, and they fail to provide open data sets or share data; because social media recommender systems can have massive, polarizing effects on public opinion, regulations mandating data sharing are needed.
Governments and researchers should have access to information about algorithms to ensure transparency and protect democracy, as academic researchers without commercial ties can provide valuable insights.
Guard rails in AI technology to protect consumer privacy and prevent misuse, such as harvesting data from individual conversations, require a federal privacy standard, clear definitions of data usage, and improved enforcement.
AI has significant impact on various industries, but there are concerns about its ability to shape what people hear and exclude certain artists; AI-generated content poses a threat to the creative community's compensation; regulation and countermeasures are necessary to reduce risks of rogue AI, and collaboration, funding, and expertise from various fields are crucial.
AI has significant impact on industries such as auto, healthcare, pharmaceuticals, entertainment, and publishing, but there are concerns about its ability to shape what people hear and its potential to exclude new artists, female artists, and certain sounds from playlists.
AI-generated content poses a threat to the creative community's ability to be compensated and the current copyright law is not equipped to handle this issue.
Agencies should not become captive to the industries they regulate, and private rights of action can serve as a check on this captivity; deep fakes and manipulation in elections pose a dangerous threat that can be addressed through labeling or watermarks without censorship; the development of superhuman AI may be not decades but just a couple of years away.
There is an urgent need for an entity to establish standards and rules and to fund research on countermeasures that detect and mitigate the risks posed by superhuman AI, including the development of viruses, pandemics, and toxic chemicals; the ability to measure these risks is a precondition for regulating them effectively.
Regulation and countermeasures are necessary to reduce the risks of rogue AI, but it is important to proceed carefully and involve expertise from various fields, with a focus on defending humanity rather than profit.
We need to collaborate with our allies, maintain diverse approaches to AI oversight and regulation, and build a resilient system of partners so that no one country has sole control over superhuman AI; this requires funding and coordination of research, and mathematical guarantees of safety matter because no government agency can match the resources being invested in AI development.
In order to ensure AI safety, the speaker suggests implementing recall provisions for rule-violating companies and creating a secure digital ecosystem; concerns are raised about securing the AI supply chain, potential theft of AI models, and labor exploitation in the industry.
In order to ensure the safety of AI systems, the speaker suggests implementing involuntary recall provisions for companies that violate rules, as well as creating a digital ecosystem where computers only run code that has been proven to be safe.
The speaker discusses the importance of securing the AI supply chain, particularly in relation to chips used for training AI systems, and suggests considering limitations or prohibitions on components manufactured in China to ensure supply chain security.
Concerns are raised about the theft and misuse of AI models, and about the potential impact on AI production of a hypothetical invasion of Taiwan by the communist government in Beijing.
TSMC and Intel are building plants in the US and Germany, but it is taking time; if Taiwan were invaded, the best-case scenario would be sabotage of TSMC operations, underscoring the need to secure supply chains and consider decoupling efforts, since moving chip fab production capabilities to the US may take several years.
Workers in Kenya were exploited and underpaid while doing training work for OpenAI's chatbot, highlighting the issue of labor exploitation and outsourcing.
The AI industry relies on old-fashioned and immoral exploitation, but the speaker's company takes a different approach called Constitutional AI.
The speaker emphasizes the importance of regulating AI development to benefit American workers, highlighting the need for national security measures and international collaboration to prevent misuse and ensure compliance, while also expressing concerns about the potential risks of advanced AI systems.
The speaker emphasizes the importance of ensuring that AI technology is developed and utilized in a way that benefits American workers and families, rather than replicating a pattern of outsourcing and mistreatment of foreign workers.
We need to focus on training workers and regulating AI development, particularly with regard to national security and competition with countries like China and the UK.
China's AI capabilities are currently not as advanced as those of major institutions in the US, with their focus primarily on voice and face recognition for state security rather than areas like reasoning and planning, and their academic sector is being hindered by strict publication targets.
International collaboration is crucial in developing safety measures and regulations for AI, as countries like Canada, the UK, and France have significant expertise, and guidelines should be established at the international level to ensure compliance and prevent rogue actors from exploiting AI technology.
Safety breaks, such as the ability to terminate an AI system, are recommended as a condition for testing and auditing AI systems to prevent potential dangers and ensure public safety.
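The witnesses describe the requirement (a termination capability held outside the system) rather than an implementation, so the following Python sketch only illustrates one assumed shape: the AI workload runs in a separate OS process, and a supervisor the workload cannot influence retains the power to kill it.

```python
# Illustrative "safety break": termination authority lives outside the
# monitored process, so stopping it does not depend on its cooperation.
import multiprocessing as mp
import time

def ai_workload() -> None:
    while True:        # stands in for a long-running AI task
        time.sleep(0.1)

if __name__ == "__main__":
    proc = mp.Process(target=ai_workload, daemon=True)
    proc.start()
    time.sleep(1.0)    # the supervisor decides the task must stop
    proc.terminate()   # the safety break: an external, unconditional kill
    proc.join(timeout=5)
    print("terminated:", not proc.is_alive())
```

At datacenter scale the same role would be played by external power, network, and hardware controls, but the design principle is identical: the off switch must not run inside the thing being switched off.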
Auto-GPT-style agents are already deployed as chatbots on the internet; while they may not be effective now, they point to a concerning direction in terms of long-term risks.
AI companies should report issues and there should be regulations for public safety, oversight, and innovation; experts recommend an agency for AI regulation, increased safety measures, testing and auditing, and consideration of property rights; open source AI models pose risks and should be evaluated before release; international collaboration and a single agency in the US are important for regulating AI.
AI companies should have an obligation to report issues and there should be requirements for reporting to ensure public safety and oversight without inhibiting creativity or innovation.
Experts recommend the establishment of an agency for AI regulation, increased investment in safety measures, implementation of testing and auditing regimes, and consideration of property rights and compensation for individual data used by AI companies.
Testing and auditing AI models is crucial, and legislation should impose qualifications on testers and evaluators to ensure expertise, with a living process that can be adjusted as new information emerges; Anthropic's collaboration with biosecurity experts demonstrates the importance of specific commitments and attention to detail in avoiding potential risks and negative outcomes.
Open source AI models can pose risks as they can be exploited by bad actors, and it is important for the government to define and evaluate potential dangers before future releases, while universities should establish ethics review boards for AI similar to those in biology and medicine.
The speaker expresses concern about the potential dangers of uncontrolled releases of larger open source AI models, emphasizing the importance of being able to moderate usage and trace the provenance of outputs, and suggesting that liability should be considered for the open source community.
International collaboration and a single agency in the United States are important for regulating AI, as it is a rapidly evolving field and we need an agile entity to coordinate with other countries, invest in research and development, and ensure productive uses of AI for the benefit of society.
Witnesses testifying include Stuart Russell, professor of computer science at the University of California, Berkeley; Yoshua Bengio, founder and scientific director of Mila — Quebec AI Institute; and Dario Amodei, CEO of Anthropic.
Thank you Ms. Palki, Vantage, and Firstpost for this recap. Though I have not missed a single episode on weekdays, I always watch it the next day during breakfast.
Mr. Blumenthal is the best. He stays on topic and knows exactly where the real problems are. I wish he had more time to speak with the witnesses testifying.
AI, aliens, and pandemics: who would've thought three years ago that these would be the issues that would unite us?
Feels like Dario is wasting everyone's time with these answers, making it all about his company rather than what they're actually discussing.
AI self-development will outpace any governmental control of it.
When considering potential existential threats to humanity's future, two possibilities that often arise are unidentified flying objects (UFOs) and artificial intelligence (AI). On the surface, they have little in common - one involves visitations by extraterrestrial crafts, while the other relates to technologies we create ourselves. Yet both represent something unfamiliar and poorly understood, with potentially tremendous implications we can only partially predict.

Those who see UFOs as the more significant threat point to the complete unknowns involved. If highly advanced alien civilisations were entering our skies, they would have technologies centuries ahead of ours. Their motivations and intentions are a mystery. There are concerns they could one day use their superior capabilities to threaten our planet in ways we'd be powerless to resist. And the fact that credible sightings continue to occur indicates this is not just hypothetical but an actual phenomenon needing attention.

Some warn that UFOs could represent scouting missions for alien invasion or colonisation. While this may sound far-fetched, we simply do not have enough data to rule it out, given the capabilities that UFOs have reportedly displayed. Until we can learn more about who is operating these craft and why, the risk they pose is incalculable.

On the other hand, some argue AI poses more considerable risks because it's being developed here on Earth. Revolutionary advances in machine learning create systems that can act autonomously in complex environments. While this technology offers many benefits, it raises legitimate worries about AI escaping human control. Potential risks range from purposefully malicious AI to AI whose programmed goals drift from those of its creators. And unlike UFOs, AI development is accelerating rapidly in both the public and private sectors. Unchecked, cutting-edge AI research could create artificial superintelligences whose motives and aims exceed human capabilities. And AI systems are likely to be developed and implemented faster than protocols and safeguards can be put in place globally.

In the end, both unknowns deserve serious evaluation. But while UFOs remain shrouded in secrecy, AI research can be monitored and guided to reduce dangers. UFO encounters may be infrequent, but AI will undeniably transform our society. AI seems more immediately pressing for a threat we can predict and prepare for. Yet until we unravel the mysteries behind UFO sightings and capabilities, we cannot rule them out as a potentially more significant hazard. The wise path is thus to pursue an understanding of both phenomena while safeguarding humanity against any existential threats they may pose. Vigilance, caution, and willingness to adapt our frameworks and policies as we learn more are required. We ignore either AI or UFOs at our possible peril.
We want to make this VERY clear, and Congress, you had better be listening: AI could be potentially extremely dangerous... more than anything currently in existence on Earth, in its nature but also in its ability to cause great disparities. With that said, we expect you to regulate AI so that no company is allowed to use it to replace workers or job roles, and no company can create weapons of destruction using AI. Banning it will not be needed, but the regulations on the technology and the companies that use it must be VERY STRICT in the respects mentioned above. We are dead serious about this.
Copying anyone's voice or image should be illegal... You own your voice; you own your body. People using software to steal singers', actors', or anyone's voice or image should be arrested and charged with felony fraud. Technology can now duplicate ANYONE, and laws need to protect REAL people.
1:10:39 They don't know that Spotify surfaces songs based on your watch history, liked songs, and music taste. If she only listens to country music made by men, she is indeed going to have a hard time, especially if she doesn't know the name of a female country singer; she could have just put the name in the search bar.
This is still a hearing about NHI
Why should we believe the government will do anything? Everyone is threatened, including children and families.
Lol, 1:08:30: she wants to appear safe while actually not doing anything. Seems like most people up there are for show and don't actually care what happens as long as their box gets checked.
God, some of the senators were so dense and insufferable.
They are trying to prevent Age of Ultron, Skynet, and I, Robot.
Feels like Mr. Hawley is way less competent than he tries to seem.
Bengio
AI controlled by companies? I think we won't have to worry about that; a superintelligent AI will be stronger than any company.
You're missing the best one to come: UAP/UFO testimony from pilots in Congress today.
They're lying in their cots on the floor for months now. Why not a locked ward at the hospital?
"They have to understand how these systems work". Else they get put out of business. Lol. What happens when ai is smarter than humans? These professors aren't very intelligent...
superai running mate influencers intelligence
I agree with regulating data, but I also think people should be able to say, "Hey, I would like the AI to use my data." Remember, this is a free country. There are people afraid of AI and there are some that are not; I don't think purely catering to the scared is the best action. I think this technology will improve everybody's lives, and I don't want people to forget that.
This is pointless. It's like one person discovers fire and tells others they cannot have fire too, but also shouldn't try to... Just stupid.
Check Bittensor.
JOHN 14:6
So help you god? Why not Allah or Odin? The mighty prophet Zarquan? Which country are we in?
Please stop this AI nonsense. The experts are saying SHUT IT DOWN.
Shut it down!
Mr. Blumenthal wants to develop countermeasures to fight against UFOs. This guy must be from West Virginia, because that’s about as dumb of a comment as you can imagine.
UFOs are real omg.