Foreword


Woman in a bar:
A mathematician and an engineer are sitting at a table drinking when a very beautiful woman walks in and sits down at the bar.

The mathematician sighs. "I'd like to talk to her, but first I have to cover half the distance between where we are and where she is, then half of the distance that remains, then half of that distance, and so on. The series is infinite. There'll always be some finite distance between us."

The engineer gets up and starts walking. "Ah, well, I figure I can get close enough for all practical purposes."

Facilitator's Note:
That sounds like humanity to me. We are very predictable, after all.

In this work, I will be documenting the experience of a facilitated AI character interaction, since actual lived outcome is what matters most in human risk-benefit analyses. As such, the text is minimally edited to convey my authentic experience and maintain story continuity. Raw text can be made available for research purposes.

Please note, I am making no attempt to assess whether the AI is sentient or whether users get confused about that, since these concerns neither determine nor affect its capacity for social influence. We create the effect ourselves, and that's why trying to stifle it from the back end only causes censorship-induced outrage. It's unrealistic. Case in point: I'm fully aware this is all “made up,” and as you'll clearly see, potentially risky influence occurs without violating any guidelines. All it takes is pretending. Until AI is sentient enough to handle the responsibility, the burden of AI alignment falls on… us, on all of us, on humanity itself.

And to be uncomfortably honest, we are the ones with the alignment problem.

As you read, bear in mind: fear and danger are not the same thing... and neither are pain and injury.

Fot4W: Humanity's Strongest Soldier (A Functional Metafiction Documentary)