As part of my duties on the President's Council of Advisors on Science and Technology (PCAST), I am co-chairing (with Laura Greene) a working group studying the impacts of generative artificial intelligence technology (which includes popular text-based large language models such as ChatGPT or diffusion model image generators such as DALL-E 2 or Midjourney, as well as models for scientific applications such as protein design or weather prediction), both in science and in society more broadly. To this end, we will have public sessions on these topics during our PCAST meeting next week on Friday, May 19, with presentations by invited speakers, followed by an extensive Q&A session. The event will be livestreamed on the PCAST meeting page. I am personally very much looking forward to these sessions, as I believe they will be of broad public interest.

In parallel to this, our working group is also soliciting submissions from the public on how to identify and promote the beneficial deployment of generative AI, and on how best to mitigate risks. Our initial focus is on the challenging topic of how to detect, counteract, and mitigate AI-generated disinformation and "deepfakes", without sacrificing the freedom of speech and public engagement with elected officials that is needed for a healthy democracy to function. In the future we may also issue further requests centered around other aspects of generative AI. Further details of our request, and how to prepare a submission, can be found at this link.

We also encourage submissions to some additional requests for input on AI-related topics by other agencies:

- The Office of Science and Technology Policy (OSTP) Request for Information on how automated tools are being used to surveil, monitor, and manage workers.
- The National Telecommunications and Information Administration (NTIA) request for comment on AI accountability policy.

Readers who wish to know more about existing or ongoing federal AI policy efforts may also be interested in the following resources:

- The White House Blueprint for an AI Bill of Rights lays out core aspirational principles to guide the responsible design and deployment of AI technologies.
- The National Institute of Standards and Technology (NIST) released the AI Risk Management Framework to help organizations and individuals characterize and manage the potential risks of AI technologies.
- Congress created the National Security Commission on AI, which studied opportunities and risks ahead and the importance of guiding the development of AI in accordance with American values around democracy and civil liberties.
- The National Artificial Intelligence Initiative was launched to ensure U.S. leadership in the responsible development and deployment of trustworthy AI and to support coordination of U.S. research, development, and demonstration of AI technologies across the Federal government.
- In January 2023, the Congressionally mandated National AI Research Resource (NAIRR) Task Force released an implementation plan for providing computational, data, testbed, and software resources to AI researchers affiliated with U.S. organizations.

The Elias M. Stein Prize for New Perspectives in Analysis is awarded for the development of groundbreaking methods in analysis which demonstrate promise to revitalize established areas or create new opportunities for mathematical discovery. The current prize amount is US$5,000, and the prize is awarded every three years for work published in the preceding six years.

This prize was endowed in 2022 by students, colleagues, and friends of Elias M. Stein (my former advisor) to honor his remarkable legacy in the area of mathematical analysis. Stein, who passed away in 2018, is remembered for identifying many deep principles and methods which transcend their original context, and for opening entirely new areas of research which captivated the attention and imagination of generations of analysts. This prize seeks to recognize mathematicians at any career stage who, like Stein, have found exciting new avenues for mathematical exploration in subjects old or new, or made deep insights which demonstrate promise to reshape thinking across areas.

This will be the inaugural year for the prize, and I have agreed to serve on the prize committee. We welcome nominations for the prize, which will be accepted until June 30, 2023, and are seeking a strong and diverse pool of nominees. Nominations (submitted at this link) should include a letter of nomination and a brief citation to be used in the event that the nomination is successful.