
The military wants AI to replace human decision-making in battle

The development of a medical triage program raises a question: When lives are at stake, should artificial intelligence be involved?

When a suicide bomber attacked Kabul International Airport in August 2021, the death and destruction were overwhelming: The violence left 183 people dead, including 13 U.S. service members.
This kind of mass casualty event can be particularly daunting for field workers. Hundreds of people need care, the hospitals nearby have limited room, and decisions on who gets care first and who can wait need to be made quickly. Often, the answer isn’t clear, and people disagree.
The Defense Advanced Research Projects Agency (DARPA) — the innovation arm of the U.S. military — is aiming to answer these thorny questions by outsourcing the decision-making process to artificial intelligence. Through a new program, called In the Moment, it wants to develop technology that would make quick decisions in stressful situations using algorithms and data, arguing that removing human biases may save lives, according to details from the program’s launch this month.
Though the program is in its infancy, it comes as other countries try to update a centuries-old system of medical triage, and as the U.S. military increasingly leans on technology to limit human error in war. But the solution raises red flags among some experts and ethicists who wonder if AI should be involved when lives are at stake.
“AI is great at counting things,” Sally A. Applin, a research fellow and consultant who studies the intersection between people, algorithms and ethics, said in reference to the DARPA program. “But I think it could set a [bad] precedent by which the decision for someone’s life is put in the hands of a machine.”
Founded in 1958 by President Dwight D. Eisenhower, DARPA is among the most influential organizations in technology research, spawning projects that have played a role in numerous innovations, including the Internet, GPS, weather satellites and, more recently, Moderna’s coronavirus vaccine.
But its history with AI has mirrored the field’s ups and downs. In the 1960s, the agency made advances in natural language processing and in getting computers to play games such as chess. During the 1970s and 1980s, progress stalled, notably because of the limits of computing power.
Since the 2000s, as graphics cards have improved, computing power has become cheaper and cloud computing has boomed, the agency has seen a resurgence in using artificial intelligence for military applications. In 2018, it dedicated $2 billion, through a program called AI Next, to incorporate AI in over 60 defense projects, signifying how central the science could be for future fighters.
“DARPA envisions a future in which machines are more than just tools,” the agency said in announcing the AI Next program. “The machines DARPA envisions will function more as colleagues than as tools.”
To that end, DARPA’s In the Moment program will create and evaluate algorithms that aid military decision-makers in two situations: small unit injuries, such as those faced by Special Operations units under fire, and mass casualty events, like the Kabul airport bombing. Later, they may develop algorithms to aid disaster relief situations such as earthquakes, agency officials said.
The program, which will take roughly 3.5 years to complete, is soliciting private corporations to assist in its goals, as is typical of early-stage DARPA research. Agency officials would not say which companies are interested or how much money will be slated for the program.
Matt Turek, a program manager at DARPA in charge of shepherding the program, said the algorithms’ suggestions would model those of “highly trusted humans” who have expertise in triage. But the algorithms would also be able to draw on information to make shrewd decisions in situations where even seasoned experts would be stumped.
For example, he said, AI could help identify all the resources a nearby hospital has — such as drug availability, blood supply and the availability of medical staff — to aid in decision-making.
“That wouldn’t fit within the brain of a single human decision-maker,” Turek added. “Computer algorithms may find solutions that humans can’t.”
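
To make Turek’s example concrete, here is a minimal sketch of the kind of cross-facility resource lookup he describes. Everything in it, from the field names to the `route_casualty` helper, is a hypothetical illustration, not anything DARPA has published:

```python
from dataclasses import dataclass, field

@dataclass
class HospitalStatus:
    """Hypothetical snapshot of one facility's capacity (illustrative fields only)."""
    name: str
    open_beds: int
    blood_units: int                                  # matching blood product on hand
    surgeons_on_duty: int
    drug_stock: dict = field(default_factory=dict)    # drug name -> doses available

def route_casualty(needs: dict, hospitals: list):
    """Pick a facility that can actually meet one casualty's needs.

    Scanning every facility's live status at once is exactly the lookup
    that, as Turek puts it, would not fit within the brain of a single
    human decision-maker.
    """
    def can_treat(h):
        return (h.open_beds > 0
                and h.blood_units >= needs.get("blood_units", 0)
                and all(h.drug_stock.get(drug, 0) >= doses
                        for drug, doses in needs.get("drugs", {}).items()))

    viable = [h for h in hospitals if can_treat(h)]
    # Prefer the viable facility with the most surgical capacity to spare.
    return max(viable, key=lambda h: h.surgeons_on_duty, default=None)
```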
Sohrab Dalal, a colonel and head of the medical branch for NATO’s Supreme Allied Command Transformation, said the triage process, whereby clinicians go to each soldier and assess how urgent their care needs are, is nearly 200 years old and could use refreshing.
Similar to DARPA, his team is working with Johns Hopkins University to create a digital triage assistant that can be used by NATO-member countries.
The triage assistant NATO is developing will use NATO injury data sets, casualty scoring systems, predictive modeling, and inputs of a patient’s condition to create a model to decide who should get care first in a situation where resources are limited.
“It’s a really good use of artificial intelligence,” Dalal, a trained physician, said. “The bottom line is that it will treat patients better [and] save lives.”
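
Neither NATO nor Johns Hopkins has published the assistant’s internals, but a system of the shape Dalal describes typically reduces to a priority score: combine an injury-severity score with the predicted benefit of immediate care, then rank patients against the limited treatment slots. A minimal sketch, with the scoring rule, weights and field names all assumed for illustration:

```python
def priority(patient: dict) -> float:
    """Higher score = treat sooner. Multiplies injury severity by a crude
    estimate of how much immediate care improves survival odds."""
    benefit = patient["p_survive_if_treated"] - patient["p_survive_if_delayed"]
    return patient["injury_severity"] * benefit

def rank_for_care(patients: list, slots: int) -> list:
    """Return the `slots` patients to treat first when resources
    stretch only to `slots` simultaneous cases."""
    return sorted(patients, key=priority, reverse=True)[:slots]

casualties = [
    {"id": "A", "injury_severity": 50, "p_survive_if_treated": 0.70, "p_survive_if_delayed": 0.20},
    {"id": "B", "injury_severity": 30, "p_survive_if_treated": 0.95, "p_survive_if_delayed": 0.90},
    {"id": "C", "injury_severity": 60, "p_survive_if_treated": 0.40, "p_survive_if_delayed": 0.35},
]
print([p["id"] for p in rank_for_care(casualties, slots=2)])  # -> ['A', 'C']
```

The real assistant would feed its predictive component with the NATO injury data sets mentioned above; the sketch only shows the overall shape: score, rank, allocate.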
Despite the promise, some ethicists had questions about how DARPA’s program could play out: Would the data sets they use cause some soldiers to get prioritized for care over others? In the heat of the moment, would soldiers simply do whatever the algorithm told them to, even if common sense suggested different? And, if the algorithm plays a role in someone dying, who is to blame?
Peter Asaro, an AI philosopher at the New School, said military officials will need to decide how much responsibility the algorithm is given in triage decision-making. Leaders, he added, will also need to figure out how ethical situations will be dealt with. For example, he said, if there was a large explosion and civilians were among the people harmed, would they get less priority, even if they are badly hurt?
“That’s a values call,” he said. “That’s something you can tell the machine to prioritize in certain ways, but the machine isn’t gonna figure that out.”
“We know there’s bias in AI; we know that programmers can’t foresee every situation; we know that AI is not social; we know AI is not cultural,” Applin said. “It can’t think about this stuff.”
And in cases where the algorithm’s recommendations lead to a death, that poses a number of problems for the military and for a soldier’s loved ones. “Some people want retribution. Some people prefer to know that the person has regret,” she said. “AI has none of that.”

US Military Aims to Replace Human Commanders With AI to Remove Bias, Accelerate Decision Making on the Battlefield

Fast and timely decision-making is an integral part of the military, whether in combat, medical care or disaster relief. MailOnline reported that scientists at the US military’s Defense Advanced Research Projects Agency (DARPA) believe that artificial intelligence (AI) can arrive at sound decisions faster than human commanders, who may have biases that slow down decisions.

Scientists at DARPA said that algorithms could be trained with lessons based on best practices. The technology is still in its early stages, but DARPA hopes to release it soon as the US military increasingly leans on technology to reduce human error.

In the Moment: DARPA’s New Technology Could Make Difficult Decisions on the Battlefield

DARPA has named its latest initiative “In the Moment” (ITM) because it is meant to make decisions quickly and without the biases that often affect human commanders. As a research and development agency of the US military, DARPA aims to remove human bias from decision-making and save more lives through technology.

DARPA said the system will take about two years to train and another 18 months to prepare before it can be ready for real-world scenarios. Sally A. Applin, an expert in the interaction of AI and ethics, told The Washington Post that AI could set a bad precedent by putting life-or-death decisions on the battlefield in the hands of a machine.

DARPA announced ITM earlier this month, explaining that its first task will be to work with trusted human decision-makers to explore the best options in situations where no best or agreed-upon correct answer exists. ITM program manager Matt Turek said this approach differs from typical AI development, which requires human agreement on the right outcomes.

As MailOnline reported, the team takes inspiration from medical imaging analysis, where techniques have been developed for evaluating systems even when skilled experts disagree. Turek said ITM will build a quantitative framework from those medical imaging insights to evaluate algorithmic decision-making in difficult situations.

He added that the technology will be tested against realistic and challenging decision-making scenarios, with its responses mapped and compared against those of human decision-makers.
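
DARPA has not published that framework, but the comparison Turek describes can be pictured as grading an algorithm’s choices against the spread of trusted human choices on the same scenarios, rather than against a single “correct” answer. A sketch under that assumption (the metric here is invented for illustration):

```python
from collections import Counter

def alignment_score(algo_choices: list, human_panels: list) -> float:
    """Grade each algorithm decision by the fraction of trusted human
    decision-makers who made the same call on that scenario. Because
    skilled experts can legitimately disagree, choices are scored against
    the distribution of human answers, not one ground-truth label."""
    scores = []
    for choice, panel in zip(algo_choices, human_panels):
        votes = Counter(panel)
        scores.append(votes[choice] / len(panel))
    return sum(scores) / len(scores)

# Scenario 1: five experts split 3-2; scenario 2: they are unanimous.
panels = [
    ["evacuate", "evacuate", "evacuate", "treat_onsite", "treat_onsite"],
    ["treat_onsite"] * 5,
]
print(alignment_score(["evacuate", "treat_onsite"], panels))  # -> 0.8
```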


AI Can Amplify Human Effectiveness, Threaten Human Autonomy

AI systems rely on artificial neural networks (ANNs), which can be trained, somewhat like neurons in the brain, to recognize patterns in information. ANNs have become the basis for many AI developments over the years, helping humans with many tasks that were once impossible.
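
As a toy illustration of that training process, the sketch below fits a small two-layer network to the XOR pattern, the classic example of a relationship a single artificial neuron cannot represent but a small network can learn:

```python
import numpy as np

# A tiny two-layer network learning XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network prediction
    # Backpropagation: nudge every weight in the direction that shrinks error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())         # typically converges toward [0, 1, 1, 0]
```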

According to Pew Research Center, experts believe that AI will amplify the effectiveness of humans. However, it also could threaten their autonomy and capabilities.

They have discussed the possibility of computers exceeding human intelligence, pointing to systems now being developed to make decisions and drive vehicles. These smart systems aim to save time, money and lives, and to offer a more customized future.

Despite this, they have expressed concerns about the long-term impact of these new tools on the essence of being human. Many people shared deep worries and suggested pathways toward solutions, such as living in the Metaverse and leaving physical bodies behind to live indefinitely.


AI could replace human military commanders in making life or death decisions on the battlefield, DARPA suggests

  • DARPA is working with decision-makers to train algorithms to make decisions
  • The idea is that humans have biases and can disagree, slowing down decisions
  • AI can be trained from the start, based on best practice, to make fast decisions
  • The technology is still at an early stage, but DARPA hopes for a wide rollout

Modern military operations, whether in combat, medicine or disaster relief, require complex decisions to be made very quickly, and AI could be used to make them.

The Defense Advanced Research Projects Agency (DARPA) launched a new program aimed at introducing artificial intelligence into the decision making process.

This is because, in a real-world emergency requiring instant choices about who does and doesn’t get help, the answer isn’t always clear and people disagree over the correct course of action; AI could make a quick decision.

The latest DARPA initiative, called ‘In the Moment’, will involve new technology that could take difficult decisions in stressful situations, using live analysis of data, such as the condition of patients in a mass-casualty event and drug availability.

It comes as the U.S. military increasingly leans on technology to reduce human error, with DARPA arguing removing human bias from decision making will ‘save lives’.

The new AI will take two years to train, then another 18 months to prepare, before it is likely to be used in a real-world scenario, according to DARPA.

‘AI is great at counting things,’ Sally A. Applin, an expert in the interaction of AI and ethics, told The Washington Post, adding: ‘I think it could set a precedent by which the decision for someone’s life is put in the hands of a machine.’

According to DARPA, the technology is only part of the problem when it comes to switching to AI decision-making; the rest is building human trust.

‘As AI systems become more advanced in teaming with humans, building appropriate human trust in the AI’s abilities to make sound decisions is vital,’ a spokesperson for the military research organization explained.

By Ryan Morrison
