VP Kamala Harris Unveils “Safe, Secure & Responsible” AI Guidelines For Federal Agencies

Hollywood and other industries are still grappling with how to integrate AI into their workflow, but the Biden administration today started putting up concrete guardrails around the technology that could literally change everything.

“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” said Vice President Kamala Harris on Wednesday, ahead of the Office of Management and Budget’s just-released Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence memorandum.


“Last year at the AI Safety Summit in London, I laid out our nation’s vision for the future of AI,” the Vice President said. “A vision for a future where AI is used to advance the public interest, and I’m going to emphasize that: advance the public interest,” the Veep added. “As a follow-up to that presentation months ago now, I am proud to announce three new binding requirements to promote the safe, secure and responsible use of AI by our federal government.”

Building on recent policy initiatives and the comprehensive Executive Order 14110 issued in October, the memo sent out early this morning to “heads of executive departments and agencies” from OMB boss Shalanda Young spotlights Strengthening AI Governance, Advancing Responsible AI Innovation, and Managing Risks from the Use of AI.

To blunt the risk of unleashing Matrix or Skynet futures, the OMB memo the VP is pitching lays out three specific sets of proposals:

Strengthening AI Governance. Managing AI risk and promoting AI innovation requires effective AI governance. As required by Executive Order 14110, each agency must designate a Chief AI Officer (CAIO) within 60 days of the date of the issuance of this memorandum. This memorandum describes the roles, responsibilities, seniority, position, and reporting structures for agency CAIOs, including expanded reporting through agency AI use case inventories. Because AI is deeply interconnected with other technical and policy areas including data, information technology (IT), security, privacy, civil rights and civil liberties, customer experience, and workforce management, CAIOs must work in close coordination with existing responsible officials and organizations within their agencies. 

Advancing Responsible AI Innovation. With appropriate safeguards in place, AI can be a helpful tool for modernizing agency operations and improving Federal Government service to the public. To that end, agencies must increase their capacity to responsibly adopt AI, including generative AI, and take steps to enable sharing and reuse of AI models, code, and data. This memorandum requires each agency identified in the Chief Financial Officers Act (CFO Act) to develop an enterprise strategy for how they will advance the responsible use of AI. This memorandum also provides recommendations for how agencies should reduce barriers to the responsible use of AI, including barriers related to IT infrastructure, data, cybersecurity, workforce, and the particular challenges of generative AI.

Managing Risks from the Use of AI. While agencies will realize significant benefits from AI, they must also manage a range of risks from the use of AI. Agencies are subject to existing risk management requirements relevant to AI, and this memorandum does not replace or supersede these requirements. Instead, it establishes new requirements and recommendations that, both independently and collectively, address the specific risks from relying on AI to inform or carry out agency decisions and actions, particularly when such reliance impacts the rights and safety of the public. To address these risks, this memorandum requires agencies to follow minimum practices when using safety-impacting AI and rights-impacting AI, and enumerates specific categories of AI that are presumed to impact rights and safety. Finally, this memorandum also establishes a series of recommendations for managing AI risks in the context of Federal procurement.

“The head of each covered agency must also consider the financial, human, information, and infrastructure resources necessary for implementation, prioritizing current resources or requesting additional resources via the budget process, as needed to support the responsibilities identified in this memorandum,” reads the OMB memo, dated March 29, 2024. “CAIOs bear primary responsibility on behalf of the head of their agency for implementing this memorandum and coordinating implementation with other agencies.”

“President Biden and I intend that these domestic policies will serve as a model for global action,” VP Harris stated as the policy rollout began early this AM. The international audience may be the long-term goal, but the administration has been focused for the past year on AI in America.


Back in October, the Biden-inked Executive Order 14110 had among its aims that AI companies like ChatGPT owners OpenAI LP and the nonprofit OpenAI Inc, or Alphabet, would share their safety test results with the U.S. government. Executive Order 14110 didn’t require AI companies to identify AI-generated content, but it did direct the Department of Commerce to develop standards for authentication and watermarking. Though enforcement remains vague, Biden’s order also declared the federal government would develop best practices for companies of all stripes to “mitigate the harms and maximize the benefits” of AI.

It’s no coincidence that such a labor-friendly administration would adopt such a stance.

AI was a major sticking point in the WGA and SAG-AFTRA negotiations and strikes against the studios last year, disputes that Biden gave the nod to from a distance. AI is also a major factor in the ongoing talks between the AMPTP and IATSE and the Teamsters. At the same time, the studios have already been taking early steps with AI on the small and big screens. Since the beginning of the year, various AI companies have taken meetings with the studios or streamers over further incorporating the tech into movies and TV series, sources say.

That’s all happening behind closed doors in Tinseltown, but today’s initiative out of DC intends to let the sunlight in, at least in bureaucratic circles.

“Each agency (except for the Department of Defense and the Intelligence Community) must individually inventory each of its AI use cases at least annually, submit the inventory to OMB, and post a public version on the agency’s website,” the Young memo notes. “Embracing innovation requires removing unnecessary and unhelpful barriers to the use of AI while retaining and strengthening the guardrails that ensure its responsible use,” the correspondence says.

While not quite the AI Bill of Rights long speculated to be coming from the administration, and something of an election wild card, the new policy of self-described “concrete safeguards” aims to be in place across much of the federal government by December 1. The memo out of Washington DC does acknowledge that some agencies or groups could receive a one-time, one-year extension if they “cannot feasibly meet the minimum requirements.” Still, to move as many agencies as possible toward that December 1 goal, the government plans to hire at least 100 AI professionals by this summer.

However practical those goals prove to be in the fast-changing world of AI and in an election year, the Vice President has her eye on the big picture, and we’re not talking 2028.

“I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm, while ensuring everyone is able to enjoy its full benefit,” Harris said on Wednesday.
