
When asked last week during its Q3 2022 earnings call about how the company could assist the DoD and its allies in Ukraine, C3.ai CEO Thomas Siebel did not hesitate to highlight the opportunities for enterprise AI software companies.

“We are very actively engaged with the Department of the Army, the Department of the Air Force and with some of the intelligence agencies in some very large projects,” Siebel said.

Fast-tracking government AI purchases will be key to getting more of those projects, he said. Siebel pointed to modifications made in the 2022 National Defense Authorization Act that streamline procurement of tech from commercial AI companies by the Joint Artificial Intelligence Center, which leads integration of data- and AI-related work at the DoD.

“This change in federal procurement policy is very significant, basically mandating the Secretary of Defense put in procedures in place to ensure that commercial software is considered first,” Siebel said, referring to Defense Department work as “a big opportunity.” He added his company recently “achieved a new production deployment with the Defense Counterintelligence and Security Agency and secured additional business with the US Space Force.”

C3.ai, which also works with customers in industries including health care, financial services and telecommunications, already has defense contracts in place. In 2020 it signed a deal with the DoD’s Defense Innovation Unit to help predict system failures and malfunctions, and optimize the performance of aircraft such as the E-3 Sentry, a plane used to surveil and assess situations on the ground during battle. The agreement allowed the DoD or other federal agencies to spend up to $95 million on the company’s technology for aircraft predictive maintenance.

But C3.ai’s software does not just help the military maintain its equipment. The company also works with the Missile Defense Agency, which uses its software to enable data integration, operations and security to simulate the performance of non-ballistic and hypersonic missiles.

“This solution can produce tens of thousands of missile trajectories in a matter of minutes — a 100-fold increase in model generation capacity and speed — enabling the Department to swiftly assess various operational scenarios, as needed, against adversarial missile systems,” wrote the DIU in a 2021 annual report. Alluding to missile technology advancements by China and Russia, the agency said it hoped to improve and accelerate DoD’s ability to simulate hypersonic missiles while in flight.

More recently, in December 2021, the DoD awarded a five-year agreement to C3.ai, making it easier for defense customers to use up to $500 million worth of the company’s software without having to go through a formal bidding process.

Growing the ranks of AI tech draftees

C3.ai is just one of many tech companies joining the ranks of Pentagon AI suppliers.

Announced in February, up to $249 million of Joint Artificial Intelligence Center funds for AI testing and evaluation are now readily available to several AI tech providers, including cloud AI companies DataRobot, Figure Eight Federal, Scale AI and Veritone, computer vision companies CrowdAI and Image Matters, and Arthur, which helps monitor AI models to avoid bias and inaccuracy.

“One of the ways to avoid the valley of death and one of the ways to get this technology into the warfighters’ hands is to be able to at least readily connect those vendors, those industry partners that have the technology, to the warfighter,” said Jane Pinelis, then-JAIC head of test and evaluation, at a 2021 event regarding a JAIC request for proposals to AI vendors.

The people in charge of the DoD haven’t always wanted to wait for rigorous testing of AI tech. “There’s a lot of conversation, I would say, somewhat erroneously, about test and evaluation delaying fielding,” said Pinelis, who is now chief of AI Assurance at JAIC, during a 2021 webinar, noting that test and evaluation of AI used by the DoD was traditionally thought of as “a hoop that somebody has to jump through before deployment.”

But she said attitudes within the DoD are shifting toward accepting the need to test and evaluate AI as a way to ensure that systems are more robust, and that by working with tech partners, the DoD can assess compliance with the ethical AI principles it established in 2020.

C3.ai targets the ‘scholar-statesmen warriors’ of the Pentagon

If recent additions to C3.ai’s board are any indication, its military presence is by design. Last year, the company added two former military leaders to a board already home to former US Secretary of State Condoleezza Rice: retired Lieutenant General and former National Security Adviser H.R. McMaster, and retired Lieutenant General Ed Cardon, who also served as director of the US Army Office of Business Transformation.

In a 2019 Forbes interview, Siebel declared his admiration for people working at the Pentagon, calling them “scholar-statesmen warriors.”

He also referenced Vladimir Putin, recalling that the Russian president had once said, “Whoever wins the war in AI will be the ruler of the world.” Siebel continued, “I believe that is true, but I do not believe Russia will win. It is either going to be China or the US. I believe we are currently in a state of non-kinetic warfare with China.”

Good for business?

Many of the AI vendors awarded JAIC contracts last month published press releases or blog posts celebrating them.

When AI tech suppliers tout their military work, “I’m pretty darn sure those companies think it’s good for business,” said Anthony Habayeb, CEO of Monitaur, which sells monitoring software to help ensure AI is fair and accurate, but has not engaged with the Defense Department. The company is focused on selling to insurance providers.

Habayeb said when tech vendors announce they are working with the federal government, other potential customers might think, “Working with the federal government is not easy, and if you got that done, you might be able to work with me.”

As AI companies seek military business, many also aim to appeal to more mainstream business decision-makers in other industries. Both C3.ai and Dataiku — which has also sought defense sector work — advertise to public radio listeners as providers of technology that helps everyday people solve business problems with AI (sometimes to the ire of prominent leaders in AI law and policy).

“We’re focused on working with all sectors of the US government, increasing scalability across agencies with technology that will upskill the federal workforce, provide trustworthy AI and enhance speed,” said Mark Elszy, regional vice president of Federal at Dataiku. The company said it could not say whether it has done any work with the DoD.

“If you can help the federal government to be more fair in their use of AI, that’s pretty important,” Habayeb said, but added, “Serving the DoD is going to take a lot of resources and energy and focus, and if you’re doing that, how effectively can you serve other sectors?”

Working on AI-related military projects, though controversial, does not necessarily narrow a company’s pool of potential partners, either. Parity, a company that conducts audits of machine-learning algorithms to find and fix fairness and accuracy problems, is led by CEO Liz O’Sullivan. O’Sullivan co-founded Arthur — one of the companies awarded a recent JAIC contract — which she has since left.

While O’Sullivan said Parity currently has no plans to work with Arthur on defense contracts, Parity will partner with Arthur on non-defense work. “We remain open to working [in conjunction with Arthur] with any clients who are in our target markets of finance, insurance, health care and/or HR,” she said.

Killer robots and bad optics

Still, the mere mention of AI in relation to military work doesn’t sit well with some business leaders. “Our software doesn’t have anything to do with killer robots. We use AI to scan invoices, not blow stuff up,” said the CTO of a software company that provides AI technologies, who declined to be named for this story.

When big tech companies have taken on defense industry work, it has spurred opposition from consumers and company employees. For instance, in 2018, Google employees protested the company’s Project Maven effort to develop drone AI technology with the military, prompting employee resignations.

Google dropped the project, which was reportedly taken over by data and AI tech provider Palantir. Palantir, which also works with companies in the energy, telco, media and financial services industries, reportedly won an $823 million contract last year from the US Army.

“Reputational harm can be quite considerable” when AI tech providers work with the military, said Michael Connor, executive director of Open MIC, a nonprofit that has helped shareholders pressure tech companies including Microsoft and Amazon to establish ethical practices.

But these issues are complicated, Connor said. Because AI is used even for basic administrative purposes like automating invoices, it is important to consider DoD contracts with AI vendors on a case-by-case basis, he said.

“Just because you’re working with the defense industry doesn’t mean you’re doing something unethical,” Connor said. “It depends on what’s being used and when it’s being used. Context is critical.”
