Leslie, when that wasn't important right now.
ZeroGravitas
Painted wall? That's high tech shit.
I got a Tesla from my work before Elon went full Reich 3, and get this, it will:
- brake on bridge shadows on the highway
- start wipers on shadows, but not on rain
- brake for cars parked on the roadside if there's a bend in the road
- disengage autopilot and brake when driving towards the sun
- change set speed at highway crossings because fuck the guy behind me, right?
- engage the emergency brake if a bike waits to cross at the side of the road
To which I'll add:
- moldy frunk (short for fucking trunk, I guess?): no ventilation whatsoever, water comes in, water stays in
- "pay attention" noises for fuck-all reasons, masking my podcasts and forcing me to rewind
- the fucking cabin camera nanny - which I admittedly disabled with some chewing gum
- the worst MP3 player known to man (the original Winamp was light years ahead) - won't index, won't search, will reload the USB drive and lose its place on almost every car start
- bonkers UI with no Android Auto or Apple CarPlay integration - I'm playing podcasts via low-bitrate Bluetooth codecs; at least it doesn't matter much for voice
- unusable A/C in auto mode - insists on blowing cold air in your face
Say what you want about European cars, at least they got usability and integration right. As did most of the auto industry. Fuck Tesla, never again. Bunch of Steve Jobs wannabes.
I think you nailed it. In the grand scheme of things, critical thinking is always required.
The problem is that, when it comes to LLMs, people seem to use magical thinking instead. I'm not an artist, so I oohed and aahed at some of the AI art I got to see, especially in the early days, before we were flooded with all this AI slop. But when I saw the coding shit it spewed? Thanks, I'll pass.
The only legit use of AI in my field that I know of is a unit test generator, where the generated tests were measured for stability and code coverage increase before being submitted for dev approval. But actual non-trivial production-grade code? Hell no.
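For what it's worth, the gating idea is simple enough to sketch. This is a hypothetical illustration, not the actual tool - every function name in it is invented, and it assumes a pytest + coverage.py setup:

```python
import subprocess

# Hypothetical sketch of gating LLM-generated tests on stability and
# coverage increase. All function names here are made up for illustration.

def is_stable(test_path: str, runs: int = 5) -> bool:
    """Call a generated test 'stable' only if it passes several runs in a row."""
    for _ in range(runs):
        result = subprocess.run(["pytest", test_path], capture_output=True)
        if result.returncode != 0:
            return False
    return True

def coverage_pct(test_paths: list[str]) -> float:
    """Run the tests under coverage.py and parse the TOTAL percentage."""
    subprocess.run(["coverage", "run", "-m", "pytest", *test_paths],
                   capture_output=True)
    report = subprocess.run(["coverage", "report"],
                            capture_output=True, text=True)
    # The report's last line looks like: "TOTAL    123    45    63%"
    return float(report.stdout.strip().splitlines()[-1].split()[-1].rstrip("%"))

def accept_generated_test(candidate: str, existing_tests: list[str]) -> bool:
    """Forward a candidate for dev approval only if it's stable and adds coverage."""
    if not is_stable(candidate):
        return False
    return coverage_pct(existing_tests + [candidate]) > coverage_pct(existing_tests)
```

The point being: the humans stayed in the loop, and the machine's output had to earn its way in with measurable numbers.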
You know, I was happy to dig through 9yo StackOverflow posts and adapt answers to my needs, because at least those examples did work for somebody. LLMs for me are just glorified autocorrect functions, and I treat them as such.
A colleague of mine recently had Copilot hallucinate a few Python functions that looked legit, ran without issue, and did fuck all. We figured it out in testing, but boy was that a wake-up call (the colleague in question has what you might call an early-adopter mindset).
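The nasty part is that code like that parses, runs, and returns something, so nothing short of a behavioral test complains. A made-up toy version of the failure mode (not his actual code):

```python
# Invented example of the failure mode: plausible name, plausible docstring,
# runs without error - and silently does nothing useful.
def normalize_scores(scores: list[float]) -> list[float]:
    """Supposedly rescales scores to the 0..1 range."""
    result = []
    for s in scores:
        result.append(s)  # oops: passes every value through unchanged
    return result

# Merely calling it proves nothing; a test that asserts on the actual
# output catches it immediately.
def test_normalize_scores():
    assert normalize_scores([0.0, 5.0, 10.0]) == [0.0, 0.5, 1.0]
```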
He's a treasure. Loved him in Discovery too; I stopped watching when he left.