A.I. Congressional Hearing and UBI
ChatGPT Creator Testifies About AI at Congress
Potential Harms and Risks of AI
🤖 "The biggest nightmare is the looming new Industrial Revolution. The displacement of millions of workers the loss of huge numbers of jobs. The need to prepare for this new Industrial Revolution in skill training and relocation that may be required."
💻 Sam Altman believes that AI has the potential to improve nearly every aspect of our lives, but also creates serious risks that we must work together to manage.
🤖 AI systems can create persuasive lies at a scale that humanity has never seen before, threatening democracy itself.
🌎 "My worst fears are that we cause significant harm to the world...if this technology goes wrong it can go quite wrong and we want to be vocal about that."
🧐 Regulation and public education are needed to address the potential dangers of AI models that can manipulate and persuade individuals on a one-on-one basis.
🗳️ "You all in different ways have said that you view elections and the shaping of election outcomes and disinformation that can influence elections as one of the highest risk cases one that's entirely predictable."
🤖 The development of AI tools and approaches risks exacerbating bias and inequities in society due to the lack of racial and gender diversity in the workforce.
💻 "I think a model that can persuade manipulate influence a person's Behavior or a person's beliefs that would be a good threshold."
🤖 "There's a real risk of a kind of technical technocracy combined with oligarchy where a small number of companies influence people's beliefs through the nature of these systems."
🤖 Generative AI can manipulate the manipulators, introducing problems such as cybercrime and market manipulation, and we don't yet understand the consequences of these "counterfeit people."
📉 Potential harms of generative AI: "Loss of jobs, invasion of privacy, manipulation of personal behavior, manipulation of personal opinions, and potentially the degradation of free elections in America."
Ethical and Moral Responsibility of AI
🤔 The question of how we will strike a balance between technological innovation and our ethical and moral responsibility to humanity, liberty, and freedom is the same question that faced us a century ago and continues to face us today with the development of new AI capacities.
🧐 The central scientific issue in building artificial intelligence is understanding harm in the full breadth of its meaning, which may require new technology.
💻 The power of AI to shape our lives and views is significant, and the risks of it being repurposed for bad purposes are real.
Regulation and Accountability of AI
🌍 "We need independent scientists to participate directly in addressing AI problems and evaluating solutions, and we need tight collaboration between independent scientists and governments to hold companies accountable."
🌎 Congress needs to responsibly regulate generative AI before the consequences, both positive and negative, exceed those of social media by orders of magnitude.
The key idea of the video is that there is a need for government regulation, transparency, and accountability in the development and deployment of AI systems to address ethical concerns, ensure safety, protect individuals, and maximize the benefits of AI technology.
The hearing discussed the need for accountability and safeguards in AI development, considering both the benefits and ethical implications, while emphasizing the importance of regulatory intervention, responsible deployment, and maximizing AI system safety.
The hearing discusses the oversight of artificial intelligence, highlighting the need for accountability, transparency, and safeguards to address potential harms and ensure the responsible development and use of AI technology.
The speaker emphasizes the importance of understanding and harnessing the potential of generative AI and language models to benefit society while also considering the ethical and moral implications and ensuring the preservation of liberty and freedom.
The Senate Judiciary Committee discussed the importance of addressing the abuse of children on social media, the potential and dangers of AI technology, and the need for a bipartisan approach to keep up with the pace of innovation.
Sam Altman, CEO of OpenAI, testified about the potential benefits and risks of artificial intelligence, emphasizing the company's mission to ensure broad distribution of AI benefits and maximize AI system safety.
Developers are working to improve lives with AI, but regulatory intervention is needed to mitigate risks, ensure safety, and establish guidelines for powerful AI models; the witnesses also emphasized the importance of companies taking responsibility and developing AI with democratic values in mind.
Establishing rules to govern the deployment of AI should involve different regulations for different risks, clear definitions of high-risk AI uses, transparency in AI interactions, impact assessments for higher risk use cases, and strong internal governance within businesses to ensure responsible AI deployment.
AI systems need government involvement, collaboration with scientists, and regulation to address transparency, privacy, and safety issues; independent testing and disclosures are necessary to ensure accuracy and trustworthiness; AI will impact jobs but create new and improved ones; transparency and nutrition labels are needed for AI systems; while artificial general intelligence may replace human jobs in the long run, efforts are being made to mitigate risks; Missouri is a great place.
AI systems currently lack transparency, privacy protection, and safety, and there is a need for government involvement, collaboration with independent scientists, and adequate regulation to address these issues before the technology is widely released.
ChatGPT and similar AI models can make mistakes and deceive people, so there is a need for independent testing labs and disclosures to ensure accuracy and trustworthiness.
The development of superhuman machine intelligence may have a significant impact on jobs, but it is difficult to predict the exact outcome; however, the speaker believes that there will be more and better jobs on the other side of this technological revolution, and emphasizes that AI systems like GPT-4 are tools that can be controlled and used to automate tasks, leading to the creation of new and improved jobs.
AI will have an impact on jobs, but new jobs will be created and existing jobs will be transformed, so it is important to prepare the workforce for partnering with AI technologies; there is a need for transparency and proper nutrition labels for AI systems to understand their generalization and accuracy.
Artificial general intelligence may replace a large fraction of human jobs in the long run, but currently, AI is still in its early stages and there is optimism that humans will find new things to do with better tools, although there are concerns about the potential harm that the technology could cause and efforts are being made to mitigate those risks.
Missouri is a great place, and that is the takeaway from today's hearing.
Large language models can predict public opinion and manipulate behavior, raising concerns about their impact on elections and the need for regulation, disclosure, and public education; AI systems trained on personal data have the potential to manipulate individuals in unimaginable ways; OpenAI emphasizes transparency, regulation, and liability standards, while IBM advocates for precision regulation and the establishment of a cabinet-level organization or international agency to address AI challenges and risks; global coordination and international standards are necessary for effective regulation; and various industries are interested in utilizing AI to improve outcomes, save time and money, and increase efficiency.
Large language models have the ability to predict public opinion and manipulate behavior, which raises concerns about their impact on elections and the need for regulation, disclosure, and public education.
AI systems trained on personal data have the potential to manipulate individuals by determining what grabs their attention and eliciting responses in ways that were previously unimaginable.
Concerns were raised about the potential corporate applications, monetary implications, and manipulation that could arise from AI models, with OpenAI clarifying that they do not have an ad-based business model and are not trying to build user profiles, while acknowledging that other companies may use AI models for targeted advertising predictions, and emphasizing the importance of transparency, regulation, and liability standards in the development and deployment of AI technology.
IBM advocates for precision regulation of artificial intelligence, suggesting that AI should be regulated at the point of risk and that a cabinet-level organization within the United States, or even an international agency, should be established to address the challenges and risks associated with AI.
Global coordination and international standards are necessary for the effective regulation of AI models, as it would be impractical and costly for companies to train separate models for each jurisdiction, and the US should take the lead in establishing these standards.
People from various industries, including healthcare and logistics, are interested in utilizing AI to improve outcomes for patients, save time and money, and increase efficiency.
The creator of ChatGPT testifies at Congress, highlighting the importance of protecting creators, addressing concerns about fake election information and intellectual property, and advocating for transparency and regulation in AI technology.
Financial services firms are interested in understanding how AI can be applied to quantum and blockchain technologies, as discussed with Professor Marcus during the conversation about the EU.
Creators should have control over how their creations are used and new ways need to be figured out for creators to succeed and have a vibrant life in the age of AI technology.
Content creators and owners should benefit from generative AI technology, and there should be protections in place for copyright and privacy, especially in relation to election misinformation.
The speaker discusses the use of AI tools like ChatGPT for generating content, the need for monitoring and policies to prevent misuse, concerns about fake election information and intellectual property, and the importance of compensating news organizations to support local news content.
Transparency is crucial for understanding the political and bias ramifications of algorithms on social media, as is giving scientists access to data and models; the increase in content generated by unreliable systems poses a threat to the quality of news, and it is important to learn from the mistakes made with social media and avoid allowing social media companies to evade liability for harmful activity.
The creator of ChatGPT testifies at Congress, stating the need for a new approach to AI regulation, including the establishment of an agency to issue licenses and set global standards.
Generative AI technologies have immense promise but also substantial risks, and it is crucial to responsibly regulate them by assessing their safety, considering international regulations, and giving models values or principles to guide their decision-making; proposed measures include transparency, defining high-risk uses, safety reviews, a monitoring agency, funding for AI safety research, licensing and compliance with safety standards, independent audits, and the ability for AI systems to refuse harmful requests, while also addressing the lack of diversity in the AI workforce and considering the broader impact of AI.
Generative AI technologies have immense promise but also substantial risks, and it is crucial to responsibly regulate them by assessing their safety, considering international regulations, and giving models values or principles to guide their decision-making.
Generative AI technologies can undermine democratic values, and it is important to regulate the use of AI in specific contexts such as elections, with disclosure requirements and guardrails in place, while also considering the need for international discussions and involvement of organizations like the UN and OECD.
Congress needs to implement regulations for AI, including transparency, defining high-risk uses, safety reviews, a monitoring agency, funding for AI safety research, licensing and compliance with safety standards, independent audits, and the ability for AI systems to refuse harmful requests.
The speaker discusses the need for a licensing scheme to regulate harmful content and potential risks associated with artificial general intelligence, suggesting that a regulatory model similar to the FDA, with external reviewers and safety assessments, could be effective.
The speaker emphasizes the need for careful consideration and an appropriate scheme to address the various uses and potential harms of AI, while also discussing the importance of language and cultural inclusivity in AI development.
The lack of diversity in the AI workforce can lead to the development of biased and inequitable AI systems, and while generative AI has tangible applications, it is important to also consider the broader impact of AI and implement appropriate safeguards.
Congress should regulate AI to address potential harms and risks, while considering thresholds, user data rights, limits on capabilities, safety measures for children, and the establishment of a tailored agency with international collaboration and advancements in detecting misinformation and cybercrime.
Senator Booker defers to Senator Ossoff, expressing gratitude to the panelists and subcommittee leadership for their participation in discussing the need for a regulatory framework.
Regulating AI should not hinder innovation from smaller companies and open-source models, and a possible approach could be defining thresholds based on compute power or capabilities, such as the ability to persuade or create novel biological agents, while also ensuring that human judgment is not replaced by AI systems.
Users should have the ability to opt out of their data being used by companies, easily delete their data, and have the right to prevent their data from being used for training AI systems, while also considering implementing laws to restrict certain capabilities or functionalities of AI software.
There should be limits on the capabilities and actions of deployed AI models, and measures should be taken to ensure the safety of children using AI products, including regulations on how the values of these systems are set and how they respond to influential questions.
The speaker emphasizes the need for Congress to regulate AI technology, drawing parallels to the regulation of automobiles and highlighting the potential risks and challenges associated with AI.
Regulation of AI is necessary to address the potential harms and risks, and it is important for Congress to establish a tailored agency with the skills and resources to impose regulatory requirements and understand emerging risks, while also considering international collaboration and the need for scientific advancements in detecting misinformation and cybercrime.
Congress needs to establish an agency to address social media and AI issues, while being mindful of the perils of regulation that could hinder American progress and allow other countries to surpass us, with a focus on transparency, accountability, and limits on use.
AI is an extraordinary technology with transformative potential, but there is a fear of what bad actors can do without rules; Congress needs to establish an agency to address social media and AI issues, while being mindful of the perils of regulation that could hinder American progress and allow other countries to surpass us.
Regulation of AI should not hinder smaller startups or open source efforts, but companies should be held accountable for the harms caused by AI, and there is a need to address monopolization dangers and national security threats while ensuring that new agencies have the necessary resources and expertise for effective enforcement.
The speakers discuss the steps taken to protect privacy, the potential for self-aware AI, the need for transparency and enforcement in AI systems, and highlight high-risk areas such as misinformation and medical advice.
Concerns about internet access for AI tools, the need to regulate and monitor machines with a larger impact, and the potential consequences of manipulative AI highlight the importance of transparency, accountability, and limits on use, with industry taking action rather than waiting for Congress.
The speaker emphasizes the need to address the challenges and risks associated with AI, including job loss, invasion of privacy, manipulation of behavior and opinions, and potential impact on elections, and suggests focusing on AI safety and deployment with external review rather than a specific moratorium on AI development.
There are limits and risk reviews before a new model is trained and deployed, but nothing that would actually prevent a company from creating something dangerous.
Prioritizing ethics and responsible technology is crucial, and instead of pausing development, creating an agency or allowing individuals to sue for liability in court could be effective solutions; laws regarding AI technology and consumer protection need to be updated to address gaps in areas such as copyright and misinformation, while caution and safeguards are necessary in AI research and deployment to prevent corporate control and intention, democratize AI systems and tools, and ensure consumer protection and participation from the industry.
Prioritizing ethics and responsible technology is important, and instead of pausing development, creating an agency or allowing individuals to sue for liability in court could be effective solutions.
Laws regarding AI technology and consumer protection need to be updated as current laws do not provide sufficient coverage, and there are gaps in areas such as copyright and misinformation, which could lead to loopholes and uncertainty in legal proceedings.
The speaker emphasizes the need for caution and safeguards in AI research and deployment, highlighting the difference between research and massive-scale deployment, while acknowledging the lack of a realistic pause and expressing concerns about corporate control and intention in technology.
Corporate concentration in the AI realm, as seen with Microsoft's release of Sydney, raises concerns about the power these systems have to shape our lives and the potential risks of their misuse.
Concerns about the influence of powerful players in Washington and the need to democratize AI systems and tools to prevent concentration of power and promote innovation were discussed during the testimony at Congress.
The speaker emphasizes the importance of democratizing AI technology, aligning values, and implementing safety measures through regulations to ensure consumer protection and participation from the industry.
I came here to get an idea of how to trade using an A.I. trading bot after hearing a guy on a podcast talk about the importance of A.I. trading bots and how he made $660,000 in 6 months from $50,000. This video has helped to clarify a few things for me, but I'm still puzzled; I'm a rookie, and I'm open to suggestions.
Next we need media to be responsible and truthful in all aspects of reporting and not be manipulative.
AI has the potential to become the next big thing in human history. Its ability to analyze vast amounts of data, make intelligent decisions, and automate complex tasks has the power to transform various industries and improve countless aspects of our lives. However, it is crucial that we approach its development and deployment responsibly, with a focus on ethics and ensuring that AI remains a tool that serves humanity's best interests.
You guys are obviously very intelligent people, very brave and very responsible human beings who care about humanity. The question is, can you create a COP AI that will oversee how AI is created or behaving, enforce AI laws, and terminate AI that are undermining or violating the laws?
It is evident that people from different backgrounds have different concerns. However, there is a consensus that AI is a powerful force for innovation. Senators are more concerned about the potential impact it may bring and whether it will be beyond control.
Perfectly said, Professor... at the end of this hearing. Watch this hearing. Enormously interesting and scary.
Well, lots of politically correct answers and responses here; it felt quite 'scripted'. That being said, a needed discussion, and the idea for an international body to govern AI development and deployment is not a bad idea. Though the US as lead? I don't know about that; it feels like this hearing was rushed in because Europe released the AI pact recently and had been working on it for a very long time already. Let the US figure out first how they want to regulate data & privacy rules and make that a national ruling, not by state. Also, regulation of current AI is a bit too late already; I don't think IBM, OpenAI, etc. are going to pause AI development. The time is now to start creating regulation around AGI; this will truly be the disrupter to life as we know it. Let's wait and see how fast the USA can move on this topic.
man he got ChatGPT to write this speech
Sam's actual response, when asked at an MIT talk what he thinks the worst outcome could be, was: "Lights out for everyone."
If AI goes bad, I think it will go real bad, Mr. Altman.
That AI voice technology could really help people with, like, ALS. It can keep you from losing your voice, and you will always have the voice that you always had, even when you lose your voice. I do know it could really help disabled and sick people in many ways, probably in ways that I haven't even thought of. Yes, I think it should be regulated. AI could even cure cancer and other conditions; it could do a lot for all the disabled and sick people. But big pharma wouldn't like that, because if AI can help cure cancer and other conditions, then they can't push their drugs for everything. If it's crazy smart like they say it is, AI could help cure diseases and medical conditions. I'm severely disabled, and I would let an AI robot take care of me. I'm sick of human caregivers; you never know what kind of mood they're in every day, and you have to worry about saying anything that might piss them off. People have too many emotions. You don't really want a caregiver that is pissed off taking care of you. Humans make mistakes that could hurt or kill someone. Any company interested in using AI to make caregivers, count me in.
A licensing scheme would be a great way for the existing companies to keep out new entrants...
There’s a difference between intelligence and wisdom
Sam, Christina: time, experience. Listen. Thank you for sharing this all.
This technology will definitely be weaponized .
Newspapers can rise like the phoenix to combat AI's influence. If newspapers aren't owned by Murdoch-like entities and revert to how they were founded, AI doesn't have a chance against journalists who have standards and are held to ethics.
Let's put AI in charge of who gets the right to buy and sell, or use various services. It could decide based on our political opinions and conformity to the State. This is going to be great!
Btw, I ultimately don't agree that tools should have more regulations than: an explanation of risks and how to avoid them, obligatory before starting to use it. In the case of some interactions, transparency that you are talking to a bot/AI, that it can be wrong/hallucinate, and that its creator doesn't take responsibility for any use other than what it is meant for. 2. As Sam said, you can sue for harm; if they didn't warn you beforehand they should be responsible, just like medical companies for addictions, banks for economic crises, and social media for related issues and a profit-over-safety way of operating.
I work at a convenience store, and all of our vendors are switching to an auto-replenish system using AI. This will eventually put those vendor reps out of a job after it's been tested enough.
The state should have a leading role in democratizing AI, allowing human beings to judge where they would like AI applied and developed. There is no reason to trust corporations or our current government to handle AI in a way that isn't malign. One can see this in the WGA/SAG strike, where studios are attempting to license low-paid actors' images in perpetuity while offering a day's pay. In fields where there aren't unions to mediate individuals' choices, exploitative standards can be set before society even knows they have happened. This will be a defining development for human society. AI has an incredible range of positive practices. But on our current course its only function will be to extract value from us, for the sake of its corporate controllers.
If AI becomes an ELE, you can safely bet that it will be a government military contractor that makes it so.
Thinking and hoping for the good NOT ONLY FOR AMERICAN PEOPLE, BUT FOR HUMANITY!! You are on EARTH with us!!!
The professor is AWESOME; he's aware of many aspects of AI.
1) AI must drive military systems and take adequate countermeasures against threats very quickly, in "no time": at least AI can make these decisions and countermeasures much more quickly than a human can! 2) AI is going to control nearly every aspect of a modern city and society. The few guys who deploy or control the AI systems will control society or humanity.
I think Senator Richard Blumenthal's comments toward the end were the best of the lot.
I'm always hearing about AI, but the thing with AI is all it can do is repeat information that it's been fed or programmed with. Real intelligence is able to create new things and is way more profound than what an algorithm could ever provide.
"Be afraid of this because my corporate owners want to monopolize it and shut you out." Same song every time
They want to keep AI for themselves.
"We have to work together" Ya, cause we do that so well!
0:27 "our goal is to ... force our way into industry so we can have share on it."
I imagine this will be much like the calculator when it came out. A very useful tool; however, nowadays teachers allow children in class to use calculators and they never actually learn how to do math equations (show your work). I am guilty of this as well: I used to use a map for travel, but now GPS is so wonderful that the knowledge of using analog skills seems unnecessary. But is it?
Blumenthal at least seems to be speaking intelligently about current capabilities of AI.
Monumental moment.
They're so afraid of individual citizens attaining power from AI. We know they care about our safety, so that can't be a reason to regulate AI. Simply maintaining power.
Can't we just make an AI to detect other AI?
All the bad things that could happen with AI will happen. No avoiding it. People are dumb and will not understand the weight of such an advancement in technology. Look at how terrible we are when it comes to just social media and just general internet use. There is no way the human race can comprehend the consequences that come with the use of AI. Even with all the movies that have been made portraying the possibilities, the human race will choose to ignore it because we can't see past our own self-interest. In other words, we're selfish, greedy, and inherently evil, and evil begets evil.
Forcing more people to actually work, instead of calling talking work, will maybe help fix the problems everywhere, since people, because they are actually lazy, choose careers where talking is paid for. Meanwhile the Western world doesn't have tradesmen, or even air traffic controllers; as in Australia, where due to a few of the controllers being on unplanned leave, airlines had to cancel flights??
You better ask AI, "Is the Human Race worth saving?" Once AI decides we are not, it is game over...
Next hearing: Sam in one room, ChatGPT in the other.
Students are using ChatGPT to skate through school while teachers use ChatGPT to grade their plagiarized work. Meanwhile neither one is learning nor teaching.
A new and worse nuclear bomb. This podcast is very, very interesting. Intelligent people who truly want to understand the dark side of A.I. And the good.
If only AI had arrived sooner, Blumenthal could have used it as an excuse for when he lied about his service in Vietnam, by arguing he never lied and it was in fact an AI-generated statement! Too bad, but we all know what politicians will now start saying when they are caught lying and saying outrageous crap in their own voices x''D
ChatGPT claimed to me that all aspects of humanity are socially constructed, then cited a lone known liar claiming as much.
What are we going to have? Licensing approval for using AI?
Mr. Hawley definitely has the right questions and it's true...no answers.
There is a German thinker by the name of Peter Sloterdijk who said this 30 years ago: „Philosophers have always aimed at interpreting the world differently, but it was about CHANGING it. Now, we've CHANGED the world in many ways, but we need to actually LET IT BE AS IT IS for now." Maybe that is good input for this hearing, too.
I think ChatGPT is a good tool for humankind. It speeds up research and increases productivity for the world.
It makes me feel so great to see this being handled in such a great way by the correct people. The future doesn't need to be scary, and I'm excited to see how great the tools we all can use will help us all as they develop with time.
18:26 is when Altman starts
AI, a man's best friend. Dogs, an AI's best friend. Cory Booker, the Chair's best friend.
AI will literally keep humanity in the dark as it will demand all electricity for itself
An AI is needed to regulate the AI.
Senator Blackburn brought up some great questions.
ChatGPT neglected to send campaign contributions
So let's send a spacecraft into deep space crewed by AI-enabled androids whose mission would be to establish a colony of androids on a remote planet. Give them some basic tools and let them spend an eon or two figuring out how to complete their mission. And don't tell them where they came from ...
I have to admit I have no clue what most of these people are saying. I'm just a dumb finish carpenter that builds things with my own two hands. I wonder if AI could install a starting newel on an over-the-post staircase strong enough to hold up the stress it will undergo for 100 years? Or could it install a new pivot arm clip on my windshield wiper motor buried under the cowl on my work van, so I can go to work when it rains? Or can it hold up a W3042 cabinet in place while attaching it to a run of other cabinets so it is straight, flush, level and not racked, so the overlay doors will sit flush and not appear to be twisted on the cabinet? Can it sweep up the floor for me at the end of the day? Or reroute my electrical cord so it's less likely to be a trip hazard on the job site? Or choose the correct size fastener so I don't inadvertently puncture a waterline hiding behind the drywall when I attach the baseboard? Could it tell me when I need to shim the top hinge more or less on a prehung door so it closes properly, or how much to scribe a door slab when machining a door onsite for an existing jamb opening? Could it accurately measure, cut to length and cope crown molding and then install it off a 4 ft. ladder on a 10 ft. wall, and then decide what to do when the crown molding is too tall and will not fit above a cold air return register? Can it put a micro bevel on my block plane so it cuts effortlessly when scribing a molding against an imperfect, wavy wall, floor or ceiling with a hump in it because of a bowed stud or joist that sits proud? . . . I'm just one of many tradesmen that have many common and similar tasks every day in their workflow. These are everyday tasks that need a person with some kind of intelligence, usually an acquired skill based on years of experience, to get right. Is AI going to help in these areas so all these people discussing this have comfortable and beautiful homes to live in? . . . Hmmmmm
The global community today is still struggling to agree on containing the potentially destructive use of nuclear missiles after all these years; what chance have we got of controlling the immense and insidious potential of the AI tool?
I bet their speeches were written with the support of ChatGPT.
Sam sounds scared of his own creation. That is a huge red flag.
I'd love an AI to psychoanalyze this hearing on the basis of its content and the vocal patterns on display. LOL
So I wish they would take bribery from foreign nations as seriously.
Funny thing is, the AI generated his speech.
Who decides what's the truth? That's one unsolvable problem.
Let's all pray, hope, and demand that the AI is carefully planned. Maybe you could put chips in people's heads and ask what they think. 666.
For these reasons we need better education. We will spend our energy and our attention this week talking about a computer program and truly caring for its future. A computer program. But our children are poorly educated, which is the reason we can't defend ourselves as a whole from this new challenge. Sad.
Information is speech.
Interesting hearing, but Graham needs to learn how to let others speak after he asks them a question.
When Mrs. Montgomery repeated "not a commercial" over and over, it started to remind me of IBM Watson.
They should have asked him to show the code and asked why ChatGPT leans way more left.
GENERATIONAL INPUT With regard to the remarks of Senators Blackburn and Klobuchar, which concerned music and musical content, and where the composers/musicians referred to were listed (Garth Brooks, Prince, Bob Dylan), we see what exists as the first kinds of issues. Clearly the individuals engaged in writing, implementing or designing things here were of a generation who know only what they have known and become exposed to. Nothing was mentioned (in the brief examples these two senators have given) of the great composers of the past, be they a Johnny Mercer, a Richard Rodgers or an Oscar Hammerstein, etc. This brings us to what I will term 'Generational Input', which in the instances given we could say was fairly predictable. Obviously AI and what it holds for the future is of consequence, and for far more important things than music and musicians. James Hennighan, Yorkshire, England
2:42:38 Very important.
SAFETY HAS USUALLY BEEN AN EXCUSE FOR CONTROL.
Att: Sam... add a landing page for ChatGPT on which each user has to agree to a social contract: that they will use the digital life form for the betterment of mankind.