

I used the mold.
Walter Brimley from the accounts payable department was only pre-diabetic before he died.
Like diabetes
This guy clamps
Or it’s just the classic Apple “launch some weird shit with a cool interaction model or form factor, but we don’t really know how people will -actually- use this.”
AppleTV, AppleWatch, FireWire iPod, HomePod, etc. They kick it out, people complain about it, Apple learns from the users who adopted it, then they focus the feature set once they better understand the market fit.
IMHO, it seems like that’s the play here. Heck, they even started with the “pro” during the initial launch, which gives them a very obvious off-ramp for a cheaper / more focused non-pro product.
At least one of those guys is able to ship a product that does what it was advertised to do.
The problem with the Vision Pro is that no one wants to pay $4000 for what it does.
The Vision Pro is a cool solution in search of a user need.
Voice control is a user need that Apple struggles to deliver solutions for.
I think enterprise needs will ensure that people develop solutions to this.
Companies can’t have their data creeping out into the public, or even into other parts of the org. If your customer, roadmap, or HR data got into the wrong hands, that could be a disaster.
Apple, Google, and Microsoft will never get AI into the workplace if AI is sharing confidential enterprise data outside of an organization. And all of these tech companies desperately want their tools to be used in enterprises.
Yeah, a lot of those studies are about stupid stuff like an LLM in-app to look at grammar, or a diffusion model to throw stupid clip art into things. No one gives a shit about that stuff. You can easily just cut and paste from OpenAI’s experience, and get access to more tools there.
That said, being able to ask an OS to look at a local vectorized DB of texts, images, and documents, recognize context, then compose and complete tasks based upon that context? That shit is fucking cool.
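Under the hood that’s mostly nearest-neighbor search over embeddings. A toy sketch of the retrieval half in Python (the bag-of-words embed() is a stand-in for a real on-device model, and none of these names are an actual OS API):

    import numpy as np

    # Toy bag-of-words "embedding" via the hashing trick. A real system
    # would use a learned on-device model; this stub just puts texts with
    # shared words near each other so the example runs end to end.
    def embed(text: str) -> np.ndarray:
        v = np.zeros(256)
        for tok in text.lower().split():
            v[hash(tok.strip(".,:?!")) % 256] += 1.0
        n = np.linalg.norm(v)
        return v / n if n else v

    # The "local vectorized DB": documents plus their precomputed vectors.
    docs = [
        "Flight confirmation: SFO to JFK, Friday 9am",
        "Mom: can you pick up milk on the way home?",
        "Receipt from the hardware store, $42.10",
    ]
    index = np.stack([embed(d) for d in docs])

    def recall(query: str, k: int = 1) -> list[str]:
        # Vectors are unit length, so a dot product is cosine similarity.
        scores = index @ embed(query)
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    print(recall("when is my flight?"))  # -> the flight confirmation

Swap in a learned embedding model and a proper ANN index and that’s the retrieval layer; the OS then hands the retrieved snippets to a model as context for whatever task it composes.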
But a lot of people haven’t experienced that yet, so when they get asked about “AI,” their responses are framed by what they’ve experienced.
It’s the “faster horse” analogy. People who don’t know about cars, buses, and trains will ask for a faster horse when you ask them to envision a faster mode of transport.
To be fair, Rivian is selling as many trucks as they can produce. Rivian could sell more vehicles if they had the line capacity.
Tesla is an older company with more mature manufacturing lines, and they can make more trucks, but no one wants them.
Why can’t it work?
I work on AI systems that integrate into other apps and make contextual requests. That’s the big feature that Apple hasn’t launched, and it’s very much a problem that others have solved before.
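For context, the common pattern is: prompt the model to emit a structured intent, validate it, then route it to whichever app registered a handler. A minimal sketch, with made-up handler names and JSON shape (nothing here is Apple’s actual API):

    import json

    # Hypothetical app-side handlers. On a real platform these would be
    # registered through an intents / app-actions framework, not a dict.
    def create_event(title: str, when: str) -> str:
        return f"created event '{title}' at {when}"

    def send_message(to: str, body: str) -> str:
        return f"sent '{body}' to {to}"

    HANDLERS = {"create_event": create_event, "send_message": send_message}

    def dispatch(model_output: str) -> str:
        # The model is prompted to emit a JSON intent; the OS layer
        # validates it and routes it to the app that owns the action.
        intent = json.loads(model_output)
        handler = HANDLERS[intent["action"]]
        return handler(**intent["args"])

    # What a model might emit after reading that flight confirmation:
    print(dispatch('{"action": "create_event", '
                   '"args": {"title": "Flight SFO to JFK", "when": "Friday 9am"}}'))

The hard parts are permissioning and context, not the dispatch itself.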
The new models are being fixed by “nut clamping”
Some of the newer auto manufacturers do that. Tesla, Rivian, etc. Those companies all have good in-house software developers. Almost everyone else farms this stuff out, which is why it’s never updated.