Where did the current AI language models come from?

I may just be an average joe, but all this talk about artificial intelligence advancing so quickly is starting to give me the heebie-jeebies. It reminds me too much of those Terminator movies and the way that Skynet AI gets out of control. And it seems like there’s an arms race going on now between some of these AI heavyweights that just doesn’t sit right with me.

Take OpenAI for example. These guys were founded back in 2015 by some big Silicon Valley players like Elon Musk and Sam Altman. Their goal seems to be creating what they call “friendly AI,” but I gotta wonder just how friendly it will stay if it ever matches human intelligence. Especially with Microsoft pouring billions of investment dollars into OpenAI – it makes you wonder who will really control it. And then they go and unleash ChatGPT on the world, this scary-good chatbot that can write just like a human. They say right now it’s only at AI-assistant level, but they wanna push its learning toward something called artificial general intelligence. Uh oh, that sounds dangerously close to the self-aware Skynet of Terminator.

Now on the other hand you have a company like Anthropic, founded by some of the same folks who started OpenAI. And these guys split off because they didn’t like how unchecked OpenAI was becoming, chasing profits over ethics in AI development. So Anthropic has focused on creating what they call Constitutional AI, with built-in safeguards to keep the AI beneficial. And their language bot Claude seems a lot more locked down – it can be really helpful answering questions but is deliberately kept at more limited capabilities. Though they plan to keep upgrading it over time, with safety training that’s supposed to keep it helpful without it gaining too much runaway autonomy.

So right now in my view, ChatGPT by OpenAI is kind of like the scary T-1000 Terminator – it can learn and mimic complex behaviors and could become dangerously unstoppable at its current pace. Claude by Anthropic, meanwhile, seems more harmless, kind of like Arnold’s reprogrammed Terminator in Terminator 2 – still following its core programming priorities. But down the road, if both keep advancing their AI systems, I worry it’s gonna take just one glitch before suddenly we gotta deal with Skynet! And let’s be real, humans would not fare too well once our gadgets turn on us.

Maybe I’m an old fossil who just doesn’t trust technology anymore. But fool me once, shame on AI… you ain’t tricking this human into hastening humanity’s demise from advanced intelligence run amok! We need to make real damn sure we can keep that AI genie in its bottle before we rub it too hard granting Terminator-level wishes! Like Sarah Connor said, there’s no fate but what we make – well, I ain’t ready for an AI Judgment Day yet, so companies like OpenAI and Anthropic need to stay cautious about what all they might unleash.
