To quote a classic: “War… War never changes… Only the weapons are new.” These words succinctly describe what war is like – always the same – brutal, bloody, barbaric, and savage. It is no big surprise that technology has always advanced most rapidly during wartime.
Yet now we have reached a new stage in the development of weapons. Weapons no longer even need to be operated by a person or a group of people, because they will know their targets and their enemies very well, and will devise their approach in real time to win as efficiently and as quickly as possible.
Many of you have probably seen videos of self-flying drones, robots performing tactical maneuvers, or robot dogs with rifles mounted on them. In addition, there are also many intangible weapons – programs that hack systems, steal data, or mislead opponents. This is no longer a distant future; investment and research into this field have been going on for many years. But is it a step in the right direction?
[Image: a SWORD robot dog with a rifle mounted on it]
Before I list the “pros” and “cons” of this solution, I would like to point out that war is always evil, no matter who fights in it or why.
Honestly, the only “advantage” I see is that perhaps people would not die in such huge numbers, although even that is not a certainty.
Moving on to the downsides of the robotization of armies and weapons: firstly, artificial intelligence may have serious trouble distinguishing hostile combatants from unarmed civilians or children. After all, artificial intelligence is a relatively new technology that we have not fully mastered, and mistakes happen daily. But these mistakes cannot be reversed.
Another downside is that this technology – like any other – will sooner or later be stolen by unscrupulous people or terrorists who will exploit it. There is also the danger of hacking: the equipment could be taken over and controlled remotely, or its targets reprogrammed. Worst of all, in both cases it may be impossible to locate the criminals, because they could just as well be on the other side of the world, controlling everything remotely.
Another issue is who will be held accountable for the consequences of the actions – the mistakes – of artificial intelligence. On the one hand, no one will control it in the course of an action or mission – it will do what it thinks is right. On the other hand, it cannot be held accountable, because it is just an AI and not a real person or group of people.
The last downside is more a matter of priorities: should we put so much money and so many resources into the military and the development of military technology, instead of into peaceful technologies that bring prosperity to people?

In summary, in my opinion, creating such military projects is not a good idea. There are too many unknowns and too wide a margin for error. And the consequences can be terrible.
Sources:
https://www.theverge.com/2021/10/14/22726111/robot-dogs-with-guns-sword-international-ghost-robotics
https://www.yahoo.com/news/killer-robots-arent-science-fiction-153546704.html
An interesting take on the matter – it feels like something out of a Terminator movie :). Even now we can see that current wars are partly fought by robots; in the Middle East, various unmanned drones have been used. I don’t think we should worry about AI killing people left and right just yet, though. I fully agree with your last remark about the financing of wars.
Jokes aside, the Geneva Conventions would likely have to be updated.