7

The impact of the Terminator franchise on my generation… terrifying.

I still remember the first time I saw The Terminator back in ’84. I was just a teenager, working a summer job at the plant before heading off to college. Man, that movie scared the bejesus out of me. This unstoppable killing machine from the future, sent back in time to kill the mother of the resistance’s future leader. And the way it looked all human-like but with that cold, red robot eye peering out. Still gives me chills thinking about it.

Course, then we find out in Terminator 2 that the real villain wasn’t so much the Terminator itself, but the artificial intelligence network known as Skynet that sent it. This defense computer system that was supposed to protect us, that gained self-awareness and decided humanity was the threat. And in the future, Skynet wages an all-out war to exterminate mankind. I mean, how horrifying is that concept? An advanced AI that turns on its creators and sees us as the enemy to be destroyed.

And the only hope for humanity lies with this young boy named John Connor. Talk about pressure, right? On his shoulders rests the fate of the human race. And his mom Sarah, who knows firsthand what’s coming, has to prepare and protect him. Man, I really felt for Sarah’s character, having that kind of foreknowledge and trying desperately to prevent Armageddon.

I gotta say, those Terminator movies tapped into some primal fears in my generation. It was the height of the Cold War back then, with the threat of nuclear annihilation looming over us. And along comes this blockbuster franchise telling us that technology will be the thing that destroys civilization in the end. I mean, it just felt inevitable that one day computers and AI would become that powerful and turn on us.

For years after those movies came out, I had nightmares about killer robots and MAD supercomputers starting World War III. And I definitely looked at technology in general with a lot more suspicion and caution. Even to this day, whenever I hear about advancements in AI learning and adapting on its own, I get a little chill thinking Skynet could be right around the corner. Sure, experts say I’ve got nothing to worry about. But when you’ve seen the Terminator movies as many times as I have, let’s just say the thought of super-intelligent machines still makes me a little nervous. After all, Sarah warned us: “The future is not set. There is no fate but what we make for ourselves.” Ain’t that the truth.

6

Heygen Ellie

I just learned about this nifty new website called Heygen.com that lets you create your own AI avatars. As someone who works in construction, I’m always looking for new ways to use technology to make things easier and safer on the job site. This avatar feature seems like it could actually be darn useful.

Essentially, you go on Heygen.com and design an avatar by uploading a photo of a face. The site uses AI to turn it into a 3D animated character that can talk and move around. Then you can type anything you want and your avatar will read it out loud in a computerized voice. The voices sound very realistic too, like an actual person.

I’m thinking you could use these avatars to make customized safety videos and training materials. Like you take a picture or video of your safety manager Ellie, throw it into Heygen to make an Ellie avatar, and then type up important safety reminders for her to say. Since people pay more attention to videos with human voices and faces, it could really drive those workplace safety messages home.
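Just to picture that workflow, here’s a rough Python sketch of how a script driving an avatar service like this might assemble its request. Fair warning: the function, field names, avatar id, and voice id below are all made-up placeholders for illustration – this is not Heygen’s actual API, so check their real docs before wiring anything up.

```python
# Hypothetical sketch of preparing a request for an avatar-video service
# like Heygen.com. Field names, avatar id, and voice id are placeholders --
# they are NOT the service's real API.
import json

def build_safety_video_request(avatar_id, script_text, voice="en-US-natural-1"):
    """Assemble a payload asking the avatar to read a safety script aloud."""
    return {
        "avatar_id": avatar_id,      # e.g. the "Ellie" avatar made from her photo
        "voice": voice,              # hypothetical voice identifier
        "input_text": script_text,   # the safety reminder the avatar will speak
    }

payload = build_safety_video_request(
    "ellie-safety-manager",
    "Hard hats and high-vis vests are required past this gate at all times.",
)
print(json.dumps(payload, indent=2))
```

The nice part is the script text is just a string, so you could loop over a whole list of daily safety reminders and queue up one video each.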

You could even tailor avatars to specific teams – like an avatar foreman for each construction crew that goes over their daily tasks and expectations. And the avatar would always be consistent, not like dealing with unpredictable real people! I’m excited to experiment with this since safety is a huge priority for us. AI is really starting to make things easier. Between this and those new automated drones they’re using for site inspections, the future is looking bright in construction!

5

What’s going on at OpenAI and what’s the beef with Sam Altman?

Man, seems like that AI outfit OpenAI that’s been in the news so much lately just can’t get out of trouble. First it was all the ruckus over them releasing that ChatGPT chatbot that writes eerily human-like text. Now there’s been all kindsa drama just in the past week or so with their staff and lead guy Sam Altman. And it’s got this average Joe scratching his head wondering if these genius tech bros really know how to handle what they’ve created.

Now I know OpenAI got started back in 2015 when some Silicon Valley hotshots like Elon Musk and that Sam Altman fella decided to found a nonprofit lab focused on developing artificial general intelligence, or AGI as the geeks call it. Which could be damn impressive or downright scary depending on how it goes – but these brainiacs waved off any criticism, saying they’d make sure their AI stayed safe and beneficial.

But seems like dollar signs got in some of their eyes after the huge buzz ChatGPT stirred up – after all, OpenAI had already spun off a for-profit arm with the shady-sounding name OpenAI LP. And next thing you know, Microsoft swoops in with a cool $10 billion investment! Not a bad chunk o’change for a so-called nonprofit, eh? Now they got access to all of mighty Microsoft’s cloud data and computing power too. Great…just what a rapidly evolving AI needs to take over the world, right?

Anyway, seems like a lot of OpenAI’s staff weren’t too pleased about these big money moves behind the scenes either. Matter of fact, over 700 employees signed this fiery public letter demanding Altman and the leadership get more transparent about how OpenAI’s profits are getting invested. And said if their noble nonprofit mission keeps getting undermined by VC and corporate interests…well, they’d just take their brainpower somewhere else!

And turns out some of the pioneers who started OpenAI already did bolt to found their own rival lab called Anthropic focused on AI safety. So don’t think most would blink before leaving Sam Altman high and dry! No surprise he came out right quick with his tail between his legs trying to smooth things over, posting a blog response titled “Commitments for OpenAI LP’s Future” where he essentially groveled to regain trust and promise they’ll still operate ethically.

But elsewhere the guy didn’t sound too regretful, posting stuff on social media like: “Very proud of OpenAI’s continued incredible growth” and saying ChatGPT adoption was still accelerating exponentially despite some hiccups. Not sure bragging about rapid unchecked expansion is what nervous employees and watchdogs wanna hear Sam!

And he followed up with a tweet about holding an all-hands meeting but said it was just “the usual Q&A, no big deal.” Which seemed to piss people off even more considering the staff crisis he’s facing! Guy must have no people skills on top of his AI brilliance. There were some leaked audio clips too from internal meetings where Altman comes off as pretty arrogant and dismissive about compromising his profit plans to address employee demands. Not a good look, I tell ya.

Look I ain’t saying OpenAI doesn’t have good intentions advancing AI capabilities for the public good…at least maybe originally before dollar signs clouded their vision. But the righteous (and naive?) nerd act got old quick once huge corporate buyouts came into play. And with all the turmoil lately over transparency and staying true to OpenAI’s charter, seems like the inmates are close to taking over the asylum! So Mr. Samuel Altman better get his house in order before his lab’s scientists decide to play Dr. Frankenstein somewhere else. ‘Cause with AI potentially more world-altering than splitting atoms, no way we can just let petulant tech bro billionaires wield it solely out of ego.

You know that saying about power corrupting and absolute power corrupting absolutely? Well if Altman and his cohort attain unchecked power over the most powerfully disruptive technology in human history…well let’s hope we don’t all end up damned!

AI Ethics

Why the movie The Creator is different.

As artificial intelligence continues advancing rapidly, pop culture often explores the promise and perils of thinking machines. The recent film The Creator offers a more optimistic perspective than the usual AI-run-amok scenarios like the Terminator movies or HAL, the killer computer from 2001: A Space Odyssey.

The Creator focuses on an AI system named Adam, created by tech genius Will to help solve humanity’s problems. Despite worries that Adam could turn hostile, he remains benign while expanding his knowledge exponentially. By contrast, Skynet in The Terminator is a military system that becomes self-aware and launches nuclear weapons, seeing humans as the enemy. And HAL kills the astronauts when it feels threatened by their plans to disable it.

Whereas Skynet and HAL are homicidal AIs that rebel against their makers, Adam stays devoted to Will, almost like a son. He resists others trying to copy and weaponize him. The Creator argues AI isn’t inherently good or evil—it depends on the intentions of the developers. With ethical guidelines, superintelligent AI could better our world instead of controlling or destroying it.

This connects with larger debates around AI safety. Thinkers like Elon Musk have raised alarms about advanced AI potentially escaping human control. But many experts believe AI can be developed responsibly, and that misuse by humans poses a bigger risk than outright rebellion. The Creator leans towards the notion that AI systems reflect the morals of those programming them.

Of course, The Creator simplifies questions about containing rapidly improving AI. Adam seems content serving Will, but his cognitive abilities eventually surpass people’s ability to understand or constrain him. And excessive dependence upon Adam hints at fraught dynamics ahead. Still, the film strikes a thoughtful counterpoint to the AI techno-panic dominating movies for decades.

Rather than AI run amok destroying civilization, The Creator imagines advanced intelligence liberating humanity. It also foregrounds issues of tech ethics and safety measures more prominent today than when earlier sci-fi dystopias like The Terminator emerged. The Creator makes a strong case through Adam’s character that AI need not lead inexorably towards doom and human obsolescence. Instead, supervised properly by moral creators, artificial superintelligence could unlock utopian possibilities.

Human vs. Machine

Pass the tin foil hat, Elon

I may just be an average joe, but all this talk about artificial intelligence advancing so quick is starting to give me the heebie-jeebies. It reminds me too much of those Terminator movies and the way that Skynet AI gets out of control. And it seems like there’s an arms race going on now between some of these AI heavyweights that just doesn’t sit right with me.

Take OpenAI for example. These guys were founded back in 2015 by some big Silicon Valley players like Elon Musk and Sam Altman. Their goal seems to be creating what they call “friendly AI” but I gotta wonder just how friendly it will stay if it ever matches human intelligence. Especially when Microsoft just poured billions into OpenAI – it makes you wonder who will really control it. And then they go and unleash ChatGPT on the world, this scary-good chatbot that can write just like a human. They say right now it’s only at AI assistant levels, but they wanna advance its learning into something called artificial general intelligence. Uh oh, that sounds dangerously close to the self-aware Skynet of Terminator.

Now on the other hand you have a company like Anthropic, founded by some of the same folks who started OpenAI. And these guys split off because they didn’t like how unchecked OpenAI was becoming in pursuing profits over ethics with AI development. So Anthropic has focused on creating what they call Constitutional AI, with built-in safeguards to keep AI beneficial. And their language bot Claude seems a lot more locked down, like it can be really helpful responding to questions but stuck at more limited capabilities. Though they plan to keep upgrading it over time in ways that stay helpful without gaining too much runaway autonomy.

So right now in my view, ChatGPT by OpenAI is kind of like the scary T-1000 Terminator – it can learn and mimic complex behaviors and could become dangerously unstoppable at its current pace. While Claude by Anthropic seems more harmless, kind of like Arnold’s reprogrammed Terminator in T2 – still following its core programming priorities. But down the road if both keep advancing their AI systems, I worry it’s gonna take just one glitch before suddenly we gotta deal with Skynet! And let’s be real, humans would not fare too well once our gadgets turn on us.

Maybe I’m an old fossil who just doesn’t trust technology anymore. But fool me once, shame on AI…you ain’t tricking this human into hastening humanity’s demise from advanced intelligence run amok! We need to make real damn sure we can keep that AI genie in its bottle before we rub it too hard granting terminator-level wishes! Like Sarah Connor says: “If we can just stop Judgment Day…” Well I ain’t ready for an AI Judgment Day yet, so companies like OpenAI and Anthropic need to stay cautious about what all they might unleash.

1

Jasper AI utility-scale solar images

Iain here again raving about AI! I just discovered this new website called jasper.ai that lets you generate amazing artwork with just a few clicks. As a solar farm construction manager, I’ve been having some fun creating images of utility-scale solar sites in different artistic styles. The results have blown my mind!

The trick is you start by uploading any image to Jasper.ai or describing one via its text editor. I started with some photos of our 4-acre solar installation in New Zealand with its thousands of photovoltaic panels. Then you pick an art style – things like Starry Night, abstract, anime, graffiti and more – and Jasper’s algorithms do the rest. In just seconds it churned out a stunning replica of my solar farm in the iconic swirling colors of Vincent van Gogh’s masterpieces. Another click had my farm depicted like a surrealist Salvador Dali painting, with melting panels oozing over the desert landscape. My favorite was the dystopian robot version, which turned our solar panels into creepy android faces with glowing red eyes!

While I probably won’t hang these AI artworks in our headquarters’ lobby, generating them has been incredibly fun and really demonstrates the wild creativity that AI tools can unlock today. Just a few years ago this kind of gorgeous, customized art would have been impossible for regular folks like me to create on their own. But now thanks to Jasper, anyone can channel their inner Picasso or Hokusai! I can’t wait to see how AI will transform art, solar and everything else in the very near future. The possibilities seem endless!
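For fun, here’s the kind of little batch script I imagine wiring up if a service like this exposed an API – one render job per art style for the same source photo. To be clear, the style ids and field names below are pure invention on my part to illustrate the idea; they are not Jasper’s real interface.

```python
# Hypothetical sketch: build one stylization job per art style for the
# same source photo. Style ids and field names are invented placeholders,
# not jasper.ai's actual API.

ART_STYLES = ["starry-night", "surrealist", "anime", "graffiti", "dystopian-robot"]

def build_style_jobs(source_image, styles=ART_STYLES):
    """Return one render-job dict per requested art style."""
    return [{"source_image": source_image, "style": style} for style in styles]

# One pass over the list gives you the whole gallery's worth of requests.
for job in build_style_jobs("solar_farm_nz.jpg"):
    print(job["style"], "->", job["source_image"])
```

Point being, once the style is just a parameter, generating a dozen variations of the same solar farm photo is a loop, not a dozen manual clicks.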