Cavalcade of Rejection: 21 Failed Short Stories Rescued from the Reject Pile by Andrew Johnston

The Gun That Didn't Fire


The self-firing gun had a faulty trigger - so the experts assumed, for it could be nothing else. Acru-12 was infallible in the art of combat, thus his failure to execute a mission that was well within his operational parameters must have been a simple mechanical fault. It was a jam in his feeding mechanism, a badly calibrated reticle, a glitchy sensor, an overstressed servo. When a thorough check of his various components came back clean, they merely upgraded their assumption.

…Look at me, referring to Acru-12 as “he.” It’s a weapon, not a man. They warned us against humanizing the machines, and it sure seemed like an easy enough task. Your factory-standard Acru unit is a seven-foot-tall wall of steel plates and medium-caliber machine guns tottering atop a pair of pistons that serve as its legs. Nothing about it resembles a human, not even its general shape, and yet when you’ve heard the news call it a “robot soldier” enough times, it’s hard not to treat it like you would any other soldier.

The whole reason we have the Acru automatons is that they aren’t human, aren’t unpredictable like our old-fashioned fighting boys were. That’s one of the promised glories of push-button combat - the obsolescence of the individual warrior in favor of the primacy of strategy. The brass love the idea of a fully automatic front line, one that follows their orders perfectly without factors like cowardice, bravado, or conscience getting in the way. Turn real-world war into the wargames they’re always playing, the ones where tactics never fail due to human error - would you want to play a game of chess where your knights could refuse your orders? Let the humans support the machines rather than the other way around - that was the idea.

So when the self-firing gun decided not to fire, it was a problem far more subtle than a simple mechanical failure. The fault was still there, not in the devices that governed its locomotion or offensive capabilities but in its “head,” in the next-generation neural matrix that made it the miracle that it truly was. The brass were always a little skeptical about that part anyway - this notion that you could reduce a soldier’s instinct down to battle data was appealing, but it seemed so impossible. They were ready to settle for the robots as proxies, controlled remotely by soldiers removed from the field by injuries. No, the researchers insisted, there is nothing we can’t reproduce as a flurry of electrons in a synthetic neural substrate. They were right up until the moment they were wrong.

That’s why I was pulled in. I’m no expert on war, just a tinker who knows more about gizmos than the average Joe. I’ve never carried a rifle into battle, or serviced a fighter jet, or stitched up a wounded man who I was sure was going to die anyway. Those are the kinds of problems that generals understand. A weapon jam can be fixed with ease, but if you want to correct a problem in someone’s head - even if that someone is made of metal and plastic - you go to a hacker.

Software analysis isn’t like other types of engineering, at least not on a theoretical level. The body of a war machine - whether it’s one as refined as an Acru unit or something a lot bigger and nastier - is a thing of pure science, its components operating only and exactly as the physical laws allow. The software is abstract, though, and while the finished product is practical, the process leading to that end is artistic. The project heads were handing me a novel and asking me to explain why it didn’t work for them. Except that’s a bad analogy, because what I found when I probed around inside Acru-12’s “brain” was a hell of a lot blurrier than anything I’d ever seen. There was no plot here, only a whirl of something emotional, and that’s a hard thing to pin down and study under an electron microscope.

Acru-12 runs on a protocol capable of Class V synthetic consciousness. Now that’s not “artificial intelligence” like they talk about in bad movies, but it’s still a hell of a lot more sophisticated than any computer that we might deal with in the civilian world. Your standard program is just a set of instructions to be processed blindly by the machine. That’s true no matter how complex the program - the bigger ones just have more instructions, more conditionals, and so on. Get past Class IV, though, and you’re not programming anything. Machines at that level are “educated,” fed information that goes into a loop, acquiring more, discarding more, reaching conclusions about their assigned tasks. These are machines that grow. The Acru automatons learn from what they’ve seen just like any biological life form and can even pass it on in some sense.

I explained that to the brass with as much patience as I could, but none of it got through. It wasn’t a complete waste of time, though - lifelong military men understand logistics and resources, and they knew I needed help to get this done. These were people who knew how to get things - manpower, equipment, money, whatever. I quickly learned that I could have whatever I wanted with a simple phone call. I may have gone a bit nuts.

There were thirty of us working on this thing by the end, trying to glean which microscopic switch failed to turn on, what logic loop made the trigger stick. We were going wild in there, plugging Acru-12 into every exotic piece of analytical hardware we had at our disposal just to see what beautiful things they’d output. It was the most fun I ever had, at least until the brass and the bean counters put a stop to it. Hard to blame them - they had Acru units in the field, all with potentially fatal defects in their decision-making apparatus, and here we were acting like overgrown ten-year-olds running around in a museum, wasting millions of dollars and hundreds of hours. Still, it was really their fault - we were techs and lab rats, not soldiers who could fall in line at the snap of a finger.

Anyway, it was a waste of resources, but for a different reason - there wasn’t anything to find. All those months, all those man-hours and machine-hours and meetings and sessions with cutting-edge diagnostic equipment and we couldn’t find a thing. Now, Acru-12 didn’t match the prototype, but it wasn’t supposed to - the whole point of those matrices is that they can change themselves over time. Acru-12 had “learned” an awful lot in his time in the field, fed that data back into the matrix and then headed out again to grow further. It was the only one left of that first series, that forlorn hope that had been sent out into the first war zone, and it had outlived many more advanced models that came after it because of the data it had accrued. He was supposed to “teach” that information to the models that followed, but his failure in the field made the combat engineers nervous.

…Did I just call it “he” again? That’s getting to be a bad habit of mine. Acru-12 is just a machine, I have to keep telling myself that. The powers that be say it’s true, and do I seem like the kind of fool to argue the point?

Everyone was frustrated with our progress on Acru-12, the team most of all. We were getting paid, we were having fun, but there was this puzzle in the center of it all that we just couldn’t crack, this black box ruining the whole thing. It was affecting me, no question. Everything I saw tied back to that robot. I couldn’t watch a dumb sitcom, couldn’t listen to a new song, couldn’t read a news article without getting some bold new hypothesis. I would spring out of bed in the middle of the night and scramble for pen and paper to record something that came to me in a dream. We didn’t waste a theory - the dream ideas, the drunk ideas, we tried them all. We were truly that desperate, truly that short of information.

Then one night, as I dragged myself home around 2:00 AM, I remembered a quirk of the Acru design, a feature introduced by the engineers who had first designed them but removed after that first wave because it was deemed of low value: the power of speech. Acru-12 was the only one left who had that function, the voice synthesizers and corresponding code stripped from later models because speech was judged less efficient than other modes of communication and analysis. We all knew about that quirk, but no one thought to just ask Acru-12 what had gone wrong.

So we powered down the diagnostic machines and booted up Acru-12, and I leaned down next to it and I said:

“You had orders to kill that man. Why didn’t you shoot him?”

Acru-12 rotated his sensors to look at me, and in his bloodless synthesized voice, in that passive timbre, it said:

“He had not harmed me. It would have been wrong to take his life when he had done me no wrong.”

That’s not going to make my report, or at least not those precise words. Officially, this moral judgment is beyond an Acru unit’s capabilities. It is impossible; it is undesirable. If I tell them that their perfect robot soldier had a moment of pacifism, they’ll probably call me a liar and press charges for wasting their time. So what can I do? I can’t write “had second thoughts” or “refused an order.” I’ll give them some nice-sounding nonsense - “failed to act due to an internal information value conflict.” It’s close enough to the truth.

Acru-12 will be disposed of as soon as I turn in my report, I’m sure, and his neural matrix torn apart to better understand how to suppress this urge toward insubordination in future models. It wouldn’t be my choice, but I can understand how the brass will approach it. After all, what good is a gun with a conscience?