PCAST Working Group on Generative AI Invites Public Input
As part of my duties on the President's Council of Advisors on Science and Technology (PCAST), I am co-chairing (with Laura Greene) a working group studying the impacts of generative artificial intelligence technology (which includes popular text-based large language models such as ChatGPT or diffusion model image generators such as DALL-E 2 or Midjourney, as well as models for scientific applications such as protein design or weather prediction), both in science and in society more broadly. To this end, we will have public sessions on these topics during our PCAST meeting next week on Friday, May 19, with presentations by the following speakers, followed by an extensive Q&A session:
- AI enabling science:
- AI and society:
The event will be livestreamed on the PCAST meeting page. I am personally very much looking forward to these sessions, as I believe they will be of broad public interest.
In parallel with this, our working group is also soliciting submissions from the public on how to identify and promote the beneficial deployment of generative AI, and on how best to mitigate risks. Our initial focus is on the challenging topic of how to detect, counteract, and mitigate AI-generated disinformation and "deepfakes", without sacrificing the freedom of speech and public engagement with elected officials that is needed for a healthy democracy to function; in the future we may also issue further requests centered around other aspects of generative AI. Further details of our request, and how to prepare a submission, can be found at this link.
We also encourage submissions to some additional requests for input on AI-related topics by other agencies:
- The Office of Science and Technology Policy (OSTP) Request for Information on how automated tools are being used to surveil, monitor, and manage workers.
- The National Telecommunications and Information Administration (NTIA) request for comment on AI accountability policy.
Readers who wish to learn more about current or ongoing federal AI policy efforts may also be interested in the following resources:
- The White House Blueprint for an AI Bill of Rights lays out core aspirational principles to guide the responsible design and deployment of AI technologies.
- The National Institute of Standards and Technology (NIST) released the AI Risk Management Framework to help organizations and individuals characterize and manage the potential risks of AI technologies.
- Congress created the National Security Commission on AI, which studied the opportunities and risks ahead and the importance of guiding the development of AI in accordance with American values around democracy and civil liberties.
- The National Artificial Intelligence Initiative was launched to ensure U.S. leadership in the responsible development and deployment of trustworthy AI, and to support coordination of U.S. research, development, and demonstration of AI technologies across the Federal government.
- In January 2023, the Congressionally mandated National AI Research Resource (NAIRR) Task Force released an implementation plan for providing computational, data, testbed, and software resources to AI researchers affiliated with U.S. organizations.