For months, business buzz has centred on white-collar jobs disappearing in the wake of the expanding use of Artificial Intelligence (AI) across many sectors. What came as a surprise last week was the stance taken by AI companies in certain instances.
Anthropic, an AI firm, refused to sell its AI package to the Pentagon, the United States' Department of Defence. Anthropic's CEO Dario Amodei made public his refusal to let the company's technology be used for public surveillance.
It is the business world. When one company leaves, another steps in. Markets hate a vacuum. So OpenAI, the company that set the world aflame, as it were, with its ChatGPT, has agreed to work with the Pentagon. OpenAI will deploy its technology to organise the Pentagon's classified information. The implications remain unknown. This might make the Pentagon's working more efficient, but there is the danger that the technology could be misused. That is why AI tech companies are embedding restrictions in the models they are churning out. The sales talks between Anthropic and the Pentagon broke down over the issue of restricted usage.
OpenAI has now entered talks with NATO, the 32-member military alliance and a vestige of the Cold War era. OpenAI's Sam Altman let it be known in a boardroom meeting that the company would be applying its systems to classified NATO information; a company spokesperson later clarified that the work was meant only for NATO's unclassified database.
These can be considered teething problems. Ultimately, tech companies want to sell their products to organisations that can pay, and perhaps also fund their further research and development. AI, at least in America, is a costly enterprise, requiring billions of dollars in research investment. Who can be a better funder than the government and its departments?
Of course, governments are not private organisations. They are accountable to the people, to the taxpayers. But security wings of government have a tendency to maintain secrecy in the name of national security. There are valid reasons for such secrecy, but secrecy is not always confined to valid uses. The people who run these organisations, and who operate their systems, wield enormous power. If a powerful tool like AI is placed at their disposal, there is a real danger of the technology being misused.
Surprisingly, the AI companies themselves are aware of the challenges that AI poses. After ChatGPT came on the market in November 2022, it was Sam Altman, CEO of OpenAI, who testified before the US Senate that there was a need for regulation and a real danger that AI could be misused. Heads of other tech companies, including Microsoft and Google, agreed that the use of AI needed to be regulated. So there is need for safeguards when the Pentagon, or NATO, wants to use it.
As these are public organisations sustained by taxpayers, there should be some kind of legislation on how a ubiquitous technology like AI is to be used. The Pentagon, for its part, should get clearance from Congress, and the issue should be discussed in public. NATO, too, should get a green signal from the legislatures of its member countries.
Is this kind of public scrutiny possible in the case of technology use? It may be difficult for purely military technologies, but it can be applied to dual-use technologies like AI, which serve both civilian and military spheres. It is also necessary to restrict the use of AI in military systems themselves, because no weapons system should become an automaton. Weapons should remain under human command.