Researchers from the University of Washington discovered that algorithms taught to identify COVID-19 from chest X-rays frequently rely on confounding variables, such as a patient’s age, rather than medically relevant features of the images themselves, a phenomenon known as shortcut learning.
“A physician would generally expect a finding of COVID-19 from an X-ray to be based on specific patterns in the image that reflect disease processes,” said Alex DeGrave, a medical science student at the university and co-author of a paper published this week in Nature Machine Intelligence.
“But, rather than relying on those patterns, a system using shortcut learning might, for example, judge that someone is elderly and, thus, infer that they are more likely to have the disease because it is more common in older patients. The shortcut is not wrong per se, but the association is unexpected and not transparent. And, that could lead to an inappropriate diagnosis.”
Shortcut learning makes models less robust and less reliable, which may explain why their performance typically declines in clinical settings. Code from the research project is available on GitHub.
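None of the following code comes from the paper; it is a minimal, hypothetical sketch of shortcut learning on synthetic data. A plain logistic-regression classifier is given a weak “disease signal” feature and a shortcut feature (standing in for patient age) that correlates more strongly with the label, and a permutation test then shows the model leans on the shortcut.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, n).astype(float)

# Hypothetical features: a noisy "disease signal" that agrees with the
# label 70% of the time, and a shortcut (e.g., age) that agrees 95%.
signal = np.where(rng.random(n) < 0.70, y, 1 - y)
shortcut = np.where(rng.random(n) < 0.95, y, 1 - y)
X = np.column_stack([signal, shortcut])

# Plain logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.5 * (X.T @ grad) / n
    b -= 0.5 * grad.mean()

def accuracy(features):
    p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
    return ((p > 0.5) == y).mean()

# Permutation importance: shuffle one feature and watch accuracy change.
X_no_signal, X_no_shortcut = X.copy(), X.copy()
X_no_signal[:, 0] = rng.permutation(X_no_signal[:, 0])
X_no_shortcut[:, 1] = rng.permutation(X_no_shortcut[:, 1])

print(f"full: {accuracy(X):.2f}, "
      f"signal shuffled: {accuracy(X_no_signal):.2f}, "
      f"shortcut shuffled: {accuracy(X_no_shortcut):.2f}")
```

Shuffling the shortcut feature degrades accuracy far more than shuffling the disease signal does: the classifier has learned the confound, not the pathology. This is also why such models falter in the clinic, where the shortcut correlation need not hold.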
Have self-driving killer drones actually hunted their first humans in war?
You may have read headlines claiming that self-driving killer robots have attacked, and possibly killed, their first humans in a conflict. Where did that claim come from?
A 550-page UN report released in March, documenting the end of the Second Libyan Civil War, has piqued the interest of the machine-learning community, policy specialists, and journalists over the past several days. A brief passage in the dossier outlines an attack on soldiers carried out with a combination of remote-controlled drones and autonomous weapons.
“Logistics convoys and retreating [Haftar-affiliated forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 and other loitering munitions,” the UN’s dossier stated. “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”
The STM Kargu-2 is a lethal dive-bombing drone made by the Turkish defense firm STM, and Haftar-affiliated troops are fighters aligned with renegade military commander Khalifa Haftar, who assaulted Libya’s capital Tripoli in early 2020. Like other loitering munitions, the device can be thought of as a patient cruise missile: it lingers over a specific area for an extended period and attacks anything that comes close to the target it has been programmed to eliminate.
These pro-Haftar forces were “subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems” and “suffered significant casualties,” the report added.
The report does not, however, state explicitly that soldiers were killed by autonomous weapons acting entirely on their own, without human direction.
Furthermore, loitering munitions of this kind have been in use for years, so is this really the first time “killer robots” have attacked humans? “It seems to me that what’s new here isn’t the event, but that the UN report calls them lethal autonomous weapon systems,” said Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations.
To put it another way, more information is required. This may not be the first time autonomous weapons have attacked humans on a battlefield; rather, it is the first time the UN has applied the “lethal autonomous weapons systems” label to loitering munitions.
DARPA Seeks To Fund AI Research Into Information Warfare
DARPA, the Pentagon’s research and development arm, is soliciting proposals for a project to develop an open-source machine-learning-based system that can measure how authoritarian regimes control their corners of the internet.
The Measuring the Information Control Environment (MICE) system “will measure how countries censor, block, or throttle specific internet-based activities [and] will also try to determine the technical capabilities that countries use to enable such repressive activities,” according to the proposal.
Algorithms will be expected to track changes to content transmitted online in the form of text, photos, audio, and video, as well as to inspect signals such as packet routing, IP address filtering, and domain name resolution to determine who controls the data.
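The proposal itself includes no code, but as a hedged illustration of one of the signals it names, domain name resolution, the sketch below compares a domain’s answers from a local resolver against a trusted reference resolver. The function name and the disjoint-answer heuristic are my own, not drawn from the DARPA solicitation; fixed stand-in IPs replace real lookups so the example is self-contained.

```python
from typing import Callable, Iterable


def dns_discrepancy(domain: str,
                    resolve_local: Callable[[str], Iterable[str]],
                    resolve_reference: Callable[[str], Iterable[str]]) -> dict:
    """Compare a local resolver's answers with a trusted reference.

    Completely disjoint answer sets are one crude indicator that the
    local resolver may be returning tampered records; partial overlap
    is expected even for honest resolvers, since CDN-hosted domains
    rotate IP addresses.
    """
    local = set(resolve_local(domain))
    reference = set(resolve_reference(domain))
    return {
        "domain": domain,
        "local": sorted(local),
        "reference": sorted(reference),
        "suspicious": local.isdisjoint(reference),
    }


# In real use, resolve_local might wrap socket.gethostbyname_ex and
# resolve_reference might query a remote resolver; here we inject
# fixed stand-in answers to keep the sketch self-contained.
report = dns_discrepancy(
    "example.org",
    resolve_local=lambda d: ["10.10.0.1"],
    resolve_reference=lambda d: ["93.184.216.34"],
)
print(report["suspicious"])
```

A fielded measurement system would combine many such probes (routing paths, throttling measurements, content comparisons) rather than rely on any single heuristic like this one.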
According to MeriTalk, a federal IT blog, DARPA plans to spend $1 million on the initiative. The contract opportunity was announced this month, and MICE proposals must be submitted by June 30th.