Background: yesterday there was a heated discussion in the thread "military-industrial complex is a supervillain of causing the climate crisis" (link).
Among other things, the thread creator posted a comment linking to the Guardian article "The climate costs of war and militaries can no longer be ignored", introducing it thus:
If you want more context or won’t take my word on how militarism will kill us all, you can read this article.
I replied; a copy of my reply is below for your judgement. My reply was removed by a moderator with the reason "Comment does not address intent of original post and promotes weapons industry / war in Ukraine."
I think my comment did address the topic. It did not promote the weapons industry but rather helping Ukraine defend itself (ironically, the tools of military self-defense come from the weapons industry), and it did not promote the war (in fact, I noted that war is expensive, resource-intensive and stupid); what it did was explain the dynamics of war and revolutions.
I consider this moderator misconduct, likely motivated by the moderator's political views, and I have asked a server administrator to talk with the moderator involved, to ascertain whether they can refrain from using moderator powers as a political club to hit people with, or else to secure their demotion from the moderating role.
The removed post, for your judgement:
The article is fine, and I second the recommendation to read it, but there is no logical path from the article to the slogan you present.
Yes, war is an incredibly expensive activity (diverting money that could be used elsewhere), a resource-intensive activity (the money goes into actual materials, which almost surely destroy something or get destroyed) and an incredibly stupid activity (and it can snowball)...
...but the problem is that successful unilateral disarmament during a war tends to result in a situation called "defeat". If it is not the attack that is defeated but the defense, that is called a "conquest". A successful conquest leaves the conqueror with more experience at conquest and more resources to conquer with, which has, several times in history, led to another conquest or a whole series of conquests. A regional war in Ukraine that ends with Ukraine being taken over by Russia has a high probability of producing:
- a bigger regional war later, in which Russia, using its own resources and those of Ukraine, moves on to another country, comes into direct conflict with NATO, and then there is indeed a risk of a global war
- an encouraging effect, after which China, noting that international cooperation against the aggressor was ultimately insufficient, and deeming itself better prepared than Russia, decides that it can take Taiwan by military force
However, a war that ends with no victory to show tends to produce a revolution in the invading country. For example, World War I produced a revolution in Russia and subsequently a revolution in Germany, with several smaller revolutions in between, empires collapsing and a brief bloom of democracy in Europe, before the Great Depression and the rise of fascism ate all the fruits. The Falklands War brought down the military junta in Argentina. The Russo-Japanese War produced the 1905 near-revolution in Russia.
It is better for Ukraine not to get conquered. It is better for Russia to be unable to conquer Ukraine. That result is also better for everyone around them. It is even better globally, because it sets a precedent of large-scale cooperation defeating an aggressive superpower, discouraging aggressive superpowers from undertaking similar wars until memory starts fading again.
Unfortunately, until we see indications that Russian society is getting ready to stop the war (this could involve starting negotiations on terms palatable to Ukraine, a change of leadership, a withdrawal, a revolution, etc.), the path to achieving that outcome remains wearing down the aggressor: producing enough weapons and delivering them to Ukraine.
Ultimately, both sides in a war wear each other down. The soldiers most eager to fight are killed soonest. The people most unwilling to be mobilized or recruited, and the soldiers most unwilling to fight, remain alive. If they are pressed forever, some day they will make the calculation: there are fewer troops blocking the way home than in the trenches of the opposing side. After that realization, they eventually tend to mutiny. Invading troops tend to reach that point a bit sooner than defending troops, because they sense less purpose in their activity. In the long run, if nothing else happens, that is what will happen. There is just (probably, regrettably) no particularly quick shortcut to getting there.
The concept is new to me, so I'm a bit challenged to give an opinion. I will try, however.
In some systems, software can be isolated from the real world in a nice sandbox with no unexpected inputs. If a clear way of expressing what one really wants is available, and it is more convenient than a programming language, I believe a well-trained and self-critical AI (capable of estimating its probability of success at a task) will be highly qualified to write that kind of software, and to tell when things are doubtful.
The coder may not understand the code, though, which is something I find politically unacceptable. I don't want a society where people don't understand how their systems work.
It could even contain a logic bomb and nobody would know. Even the AI that wrote it may fail to understand it tomorrow, after the software has become sufficiently unique through customization. So there is a risk that the software lacks even a single qualified maintainer.
Meanwhile, some software is mission-critical: if it fails, something irreversible happens in the real world. Software of this kind usually must be understood by several people. New people must be able to come to understand it through review. They must be able to predict its limitations, give specifications for each subsystem and build testing routines to detect the introduction of errors.
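To make that last point concrete, here is a minimal sketch, in Python, of the kind of testing routine I mean. The subsystem under test, clamp_setpoint, and its safe-range specification are hypothetical stand-ins, not taken from any real project:

```python
# Hypothetical subsystem: clamp a requested temperature setpoint
# to the range the hardware is specified to tolerate.
SAFE_MIN_C = 5.0   # assumed anti-freeze floor
SAFE_MAX_C = 80.0  # assumed boiler ceiling

def clamp_setpoint(requested_c: float) -> float:
    """Return the requested setpoint, limited to the safe range."""
    return max(SAFE_MIN_C, min(SAFE_MAX_C, requested_c))

# Regression tests: anyone who changes clamp_setpoint reruns these,
# and an introduced error shows up as a failed assertion.
def test_within_range_passes_through():
    assert clamp_setpoint(21.0) == 21.0

def test_never_exceeds_safe_limits():
    for requested in (-40.0, 0.0, 100.0, 1e9):
        assert SAFE_MIN_C <= clamp_setpoint(requested) <= SAFE_MAX_C
```

The example is trivial on purpose: the point is that the specification lives somewhere executable, so a later change that breaks it fails loudly in review rather than silently in the field.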
Mission-critical software typically has a close relationship with hardware. It typically has sensors reading from the real world and effectors changing the real world. Testing it resembles doing electrical and physical experiments. The system may have undocumented properties that an AI cannot be informed about. It may be impossible to code successfully without actually doing those experiments and finding out the limitations and quirks of the hardware, and thus it may be impossible for an AI to build such a system from a prompt.
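To illustrate what "sensors and effectors" looks like in code, here is a deliberately simplified sketch in Python, with a simulated plant standing in for real hardware drivers. Everything here is hypothetical; on a real system the sensor and relay bindings would each carry their own quirks, and a value like the hysteresis band is usually found by experiment rather than in a datasheet:

```python
import random

TARGET_C = 21.0
HYSTERESIS_C = 0.5  # dead band, tuned by experiment on the real hardware

class FakePlant:
    """Stands in for the real sensor and relay. A real driver would
    talk to an ADC and a relay board, each with its own quirks."""
    def __init__(self):
        self.temp_c = 18.0
        self.heating = False

    def read_temperature(self) -> float:
        # Real sensors are noisy; the dead band exists because of it.
        return self.temp_c + random.gauss(0.0, 0.1)

    def set_relay(self, on: bool) -> None:
        self.heating = on

    def tick(self) -> None:
        # Crude physics: heat while the relay is on, cool while it is off.
        self.temp_c += 0.2 if self.heating else -0.1

def control_step(plant: FakePlant, heating_on: bool) -> bool:
    """One sensor-to-effector step. The hysteresis keeps sensor noise
    near the target from making the relay chatter on and off."""
    temp = plant.read_temperature()
    if temp < TARGET_C - HYSTERESIS_C:
        heating_on = True
    elif temp > TARGET_C + HYSTERESIS_C:
        heating_on = False
    plant.set_relay(heating_on)
    return heating_on

plant, heating = FakePlant(), False
for _ in range(200):
    heating = control_step(plant, heating)
    plant.tick()
print(f"settled near {plant.temp_c:.1f} C")
```

The dead band is there because of a physical quirk (noise near the target would otherwise make the relay chatter), which is exactly the kind of detail that is not in any prompt and only surfaces when the code meets the hardware.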
I'm currently building a drone system and I'm up to my neck in undocumented hardware interactions, but even a heating controller will encounter some. I don't think people will have much success in the near future with letting an AI build such systems for them. In principle, it can be done. In principle, you can let an AI teach a robot dog to walk, and the learning itself will take only a few hours. But this will likely require giving the AI control of said robot dog and letting it run experiments and learn from the outcomes, which may take a week in total, while writing the code by hand might also have taken a week. In the end, one code base will be maintainable and the other likely not.