US announces the creation of AI Safety Institute to enhance consumer protection and guide AI policy

Following President Joe Biden's new executive order reshaping how the federal government approaches AI development, Vice President Kamala Harris, speaking at the AI Safety Summit in the UK, unveiled a set of initiatives in the AI domain. Chief among them are the establishment of the United States AI Safety Institute and the release of the first draft of policy guidance on the federal government's use of AI.

VP Harris and President Biden have expressed a unified stance, emphasizing that leaders across government, civil society, and the private sector bear a responsibility to society for the safe deployment of AI. Harris particularly highlighted the dangers posed by insufficiently regulated AI, including large-scale cyber-attacks and the development of AI-enabled biological weapons.

"President Biden and I believe that all leaders, from government, civil society, and the private sector, have a moral, ethical, and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits," Harris said.

Under this initiative, Harris announced the creation of the US AI Safety Institute, a body that will be entrusted with developing guidelines, standards, and methodologies for testing and evaluating potentially hazardous AI systems. The announcement comes close on the heels of President Biden's sweeping executive order reorienting the federal government's approach to AI development.

Harris also pointed to an upcoming public comment period on the first draft of policy guidance for AI use within the federal government. The guidance is expected to help agencies put safeguards in place when applying AI across public services.

Furthermore, Harris noted that 30 countries have backed the Political Declaration on the responsible military use of AI. She also announced a virtual hackathon aimed at countering phone and internet fraud that leverages AI.

A focal point of the Biden-Harris administration's agenda is content authenticity verification. Collaboration with C2PA (Coalition for Content Provenance and Authenticity) and other industry associations is planned as part of this endeavor.

Harris reiterated that the voluntary commitments from companies represent a step towards a secure future in the AI realm, affirming that the administration's work in this area is ongoing. She also underscored the importance of legislation in addressing these challenges while fostering innovation in AI.

"These voluntary [company] commitments are an initial step toward a safer AI future, with more to come. As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the security of our communities, and the stability of our democracies."

With these moves, the US is building a structured framework to protect consumers and ensure the responsible development of AI, setting a precedent for a collaborative approach to navigating the complexities and potential risks of advancing AI technology.
