WH’s AI EO is BS

An executive order was just issued from the White House regarding “the Use of Trustworthy Artificial Intelligence in Government.” Leaving aside the meritless presumption of the government’s own trustworthiness, and the implication that it is the software that has the trust issues, the order is almost entirely hot air.

The EO is like others in that it is limited to what a president can peremptorily force federal agencies to do — and that really isn’t very much, practically speaking. This one “directs Federal agencies to be guided” by nine principles, which gives away the level of impact right there. Please, agencies — be guided!

And then, of course, all military and national security activities are excepted, which is where AI systems are at their most dangerous and oversight is most important. No one is worried about what NOAA is doing with AI — but they are very concerned with what three-letter agencies and the Pentagon are getting up to. (They have their own, self-imposed rules.)

The principles are something of a wish list. AI used by the feds must be:

- lawful;
- purposeful and performance-driven;
- accurate, reliable, and effective;
- safe, secure, and resilient;
- understandable;
- responsible and traceable;
- regularly monitored;
- transparent; and
- accountable.

I would challenge anyone to find any significant deployment of AI, anywhere in the world, that is all of these things. Any claim by an agency that an AI or machine learning system it uses adheres to all these principles, as they are detailed in the EO, should be treated with extreme skepticism.

It’s not that the principles themselves are bad or pointless. It’s certainly important that an agency be able to quantify the risks when considering using AI for something, and that there be a process in place for monitoring its effects. But an executive order doesn’t accomplish this. Strong laws, likely starting at the city and state level, have already shown what it looks like to demand AI accountability, and though a federal law is unlikely to appear any time soon, an EO is no replacement for a comprehensive bill. It’s just too hand-wavy on just about everything. Besides, many agencies adopted “principles” like these years ago.

The one thing the EO does in fact do is compel each agency to produce a list of all the uses to which it is putting AI, however it may be defined. Of course, it’ll be more than a year before we see that.

The schedule stacks up like this:

- Within 60 days of the order, the agencies will choose the format for this AI inventory.
- 180 days after that, the inventory must be completed.
- 120 days after that, the inventory must be reviewed for consistency with the principles.
- Within a further 180 days, the agencies must “strive” to accomplish plans to bring noncompliant systems in line with the principles.
- Meanwhile, within 60 days of an inventory being completed, it must be shared with other agencies.
- And within 120 days of completion, it must be shared with the public (minus anything sensitive for law enforcement, national security, etc.).
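Chaining those deadlines end to end is the quickest way to see where the “year and a half” comes from. Here’s a back-of-the-envelope sketch; the day counts come from the order itself, but the assumption that each clock starts the moment the previous one stops is mine:

# Day counts from the EO; sequential chaining is an assumption.
choose_format = 60                         # agencies pick an inventory format
inventory_done = choose_format + 180       # day 240: inventories completed
review_done = inventory_done + 120         # day 360: consistency review finished
compliance_plans = review_done + 180       # day 540: "strive" deadline for fixes

shared_with_agencies = inventory_done + 60   # day 300: shared across government
shared_with_public = inventory_done + 120    # day 360: public release (redacted)

print(f"Public inventories due: day {shared_with_public} (~{shared_with_public / 365:.1f} years)")
print(f"Compliance plans due:   day {compliance_plans} (~{compliance_plans / 365:.1f} years)")
# Public inventories due: day 360 (~1.0 years)
# Compliance plans due:   day 540 (~1.5 years)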

In theory we might have those inventories in about a year, but in practice we’re looking at about a year and a half before the process fully plays out, at which point we’ll have a snapshot of AI tools from the previous administration, with all the juicy bits taken out at the agencies’ discretion. Still, it might make for interesting reading depending on what exactly goes into it.

This executive order is, like others of its ilk, an attempt by this White House to appear to be an active leader on something that is almost entirely out of its hands. AI should certainly be developed and deployed according to common principles, but even if those principles could be established in a top-down fashion, a loose, lightly binding gesture that kind-of, sort-of makes some agencies pinky-swear to think real hard about them isn’t the way to do it.