I have read a lot of Sci-fi. Thousands of books, actually. You can't help but start recognizing patterns in how the future might look. Many Sci-fi books were made into movies. One of my favorites is Blade Runner (1982), in which the main character, Rick Deckard, states: "Replicants are like any other machine - they're either a benefit or a hazard. If they're a benefit, it's not my problem."
Fast forward to the future. We are all now—to some degree—Blade Runners.
Today I was sent a PDF about AI prompt engineering by the folks at WithSecure. They said: "In Ridley Scott's early 80s tech noir masterpiece, Rick Deckard of the Los Angeles Police Department has one assignment. He needs to find and 'retire' four replicants that hijacked a ship and then blended into the human population on earth in search of their creator. A key weapon in the arsenal of Blade Runners, like Deckard, is the Voight-Kampff test—a series of prompts designed to elicit a response that might determine whether a respondent is human or an android, guided by artificial intelligence. We are all now—to some degree—Blade Runners."
And that's because today, with the frequent release of new GPT versions and lookalikes, anything you read needs to pass through the lens of: "Was it written by an AI? Is it an actual fact? Was the AI hallucinating?" (Yes, that is an actual AI technical term.)
The technology is racing forward and will certainly improve a lot. However, at this point it still has some glaring red flags. But the question isn't "Can we build that?" but rather "Should we build that?"
Which is where "Stu's Law" comes in. It's really just a tongue-in-cheek extrapolation of the existing expression: "You get the security culture you ignore".
You have to be intentional about the future you want to create. It's a blank slate. It's your canvas. You get to decide what will be there. Let's be smart about it.
Stu's Law: "You get the future you ignore"©