this post was submitted on 10 Jun 2024
97 points (100.0% liked)

Technology


Actually, really liked the Apple Intelligence announcement. It must be a very exciting time at Apple as they layer AI on top of the entire OS. A few of the major themes:

Step 1 Multimodal I/O. Enable text/audio/image/video capability, both read and write. These are the native human APIs, so to speak.

Step 2 Agentic. Allow all parts of the OS and apps to inter-operate via "function calling"; a kernel-process LLM that can schedule and coordinate work across them given user queries.

Step 3 Frictionless. Fully integrate these features in a highly frictionless, fast, "always on", and contextual way. No copy-pasting information around, no prompt engineering, etc. Adapt the UI accordingly.

Step 4 Initiative. Don't just perform a task given a prompt; anticipate the prompt, suggest, initiate.

Step 5 Delegation hierarchy. Move as much intelligence as you can on device (Apple Silicon very helpful and well-suited), but allow optional dispatch of work to cloud.

Step 6 Modularity. Allow the OS to access and support an entire and growing ecosystem of LLMs (e.g. ChatGPT announcement).

Step 7 Privacy. <3

We're quickly heading into a world where you can open up your phone and just say stuff. It talks back and it knows you. And it just works. Super exciting and as a user, quite looking forward to it.

https://x.com/karpathy/status/1800242310116262150?s=46
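The "agentic" idea in Step 2 can be sketched as a dispatcher that maps a model-emitted, structured function call onto app functions registered with the OS. This is a minimal illustration with invented tool names, not Apple's actual API:

```python
import json

# Hypothetical registry of app functions the OS could expose to a model.
TOOLS = {
    "calendar.create_event": lambda title, when: {"created": title, "when": when},
    "messages.send": lambda to, body: {"sent_to": to, "chars": len(body)},
}

def dispatch(call_json: str) -> dict:
    """Route a model-emitted function call to the registered app function."""
    call = json.loads(call_json)
    return TOOLS[call["name"]](**call["arguments"])

# A function-calling model emits structured JSON rather than free text:
model_output = json.dumps({
    "name": "calendar.create_event",
    "arguments": {"title": "Dentist", "when": "2024-06-12T09:00"},
})
print(dispatch(model_output))  # {'created': 'Dentist', 'when': '2024-06-12T09:00'}
```

The key point is that the model never executes anything itself; it only names a registered function and its arguments, and the OS decides whether and how to run it.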

top 50 comments
[–] [email protected] 105 points 9 months ago (3 children)

Yikes. Just hit em with the ol' "<3" for privacy. Does not inspire confidence.

[–] [email protected] 51 points 9 months ago

#trustmebro

<3

[–] [email protected] 33 points 9 months ago

I thought the original post was satire - list all of the privacy issues, then throw in "Privacy <3" at the end. Seriously, almost every one of those points has a potential privacy issue.

Guess I was being too generous.

[–] [email protected] 14 points 9 months ago (3 children)

How so? Many people want to use AI privately, but it's currently too hard for most people to set that up themselves.

Having AI tools at the OS level, so you can use them in almost any app with processing guaranteed to happen on device, will be very useful if done right.

[–] [email protected] 44 points 9 months ago (2 children)

You think your iPhone isn’t collecting data on you? Is that what you’re saying?

[–] [email protected] 12 points 9 months ago (2 children)

Unless you are designing and fabricating your own chips for processing, networking, etc., privacy today is about trust, not technology. There's no escaping it. I know the iPhone and Apple are collecting data about me; I currently trust them the most with how they use it.

[–] [email protected] 24 points 9 months ago

Running FOSS and taking control of your network will get you a far better privacy-vs-convenience trade-off than most people imagine.

[–] [email protected] 14 points 9 months ago* (last edited 9 months ago) (9 children)

There are degrees of trust though. You can trust the developers and people who audited the code if you have no skill/desire to audit it yourself, or you can trust just the developers.

And even closed systems' behavior can be monitored and analyzed.

[–] [email protected] 5 points 9 months ago (2 children)

The phone is, Apple isn’t. They outline everything in the keynote if you are interested.

[–] [email protected] 11 points 9 months ago

Because Apple has never lied or misled before

[–] [email protected] 8 points 9 months ago* (last edited 9 months ago)

Their keynotes are irrelevant; their official privacy policies and legal disclosures take precedence over marketing claims or statements made in keynotes or presentations.

Apple's privacy policy states that the company collects data necessary to provide and improve its products and services. The OS-level AI would fall under this category, allowing Apple to collect data processed by the AI to improve its functionality and models. Apple's keynotes and marketing materials carry no legal weight when it comes to their data practices.

With the AI system operating at the OS level, it likely has access to a wide range of user data, including text inputs, conversations, and potentially other sensitive information.

[–] [email protected] 22 points 9 months ago (5 children)

Yeah just like Microsoft Recall right? An AI that has access to every single thing you do (and would also be recording, otherwise how does it know "you") can never be private by design. Its literal design is to know everything about you, your actions, and your habits. I wouldn't trust anyone to be able to create an actually secure piece of software that does the above. It will always be able to be stolen/sold/abused.

[–] [email protected] 11 points 9 months ago

But how can we best sell your data to advertisers otherwise?

[–] [email protected] 10 points 9 months ago* (last edited 9 months ago) (2 children)

> you can use it in almost any app

> if done right

How are you going to be able to use it in "almost any app" in a way that is secure? How are you going to design it so that the apps don't abuse the AI to get more information on the user out of it than intended? Seems pretty damn inherently insecure to me.
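One conceivable answer to "how do apps not abuse it" is per-app scoping: each app's requests to the assistant only ever see the context categories the user granted it. Nothing public describes Apple's design, so the identifiers below are entirely invented; this is just a sketch of the idea:

```python
# Hypothetical per-app grants: which context categories each app may read.
GRANTS = {
    "com.example.todo": {"calendar"},
    "com.example.game": set(),
}

# Context the OS-level assistant holds (illustrative data only).
CONTEXT = {
    "calendar": ["Dentist, Wed 9:00"],
    "messages": ["hi mom"],
}

def context_for(app_id: str) -> dict:
    """Return only the context slices the app was granted; default to none."""
    allowed = GRANTS.get(app_id, set())
    return {k: v for k, v in CONTEXT.items() if k in allowed}

print(context_for("com.example.todo"))  # {'calendar': ['Dentist, Wed 9:00']}
print(context_for("com.example.game"))  # {}
```

Whether such a gate can actually be enforced against prompt-injection-style leaks through the model itself is exactly the open question the comment raises.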

[–] [email protected] 103 points 9 months ago (1 children)

Founding member of company that stands to make fortunes through a product endorses said product.

[–] [email protected] 14 points 9 months ago

I mean, that's fair; if you don't believe in his integrity, then this news has very little value to you.

[–] [email protected] 59 points 9 months ago* (last edited 9 months ago) (3 children)

> kernel process LLM

God I hope not. That sounds extremely insecure. Definitely do not do this in the kernel.

[–] [email protected] 23 points 9 months ago (1 children)

Why not just have the LLM replace the kernel?

[–] [email protected] 37 points 9 months ago (1 children)

Why not have the LLM replace the user?

[–] [email protected] 17 points 9 months ago (1 children)

This could really cut down on those pesky bug reports....

[–] [email protected] 10 points 9 months ago

I'm imagining a world where advertisers have to try to raise engagement from AIs in their ads

[–] [email protected] 7 points 9 months ago

Why do I feel like this is making me Recall another recent awful idea?

[–] [email protected] 59 points 9 months ago (1 children)

The amount of corporate speak makes me sick. Especially the buzzwords mixed with shit like "KERNEL PROCESS"; shit's cursed.

[–] [email protected] 11 points 9 months ago

Hey, I love my kernel processes! Especially my LLM kernel processes.

[–] [email protected] 46 points 9 months ago (1 children)

"and it just works"

has he even used an llm before?

[–] [email protected] 9 points 9 months ago (1 children)

He sort of invented it, so you have to think he’s commenting on the concept here, not the implementation.

I have tried a lot of medium and small models, and there is just no good replacement for the larger ones for natural text output. And they won't run on device.

Still, fine-tuning smaller models can do wonders, so my guess would be that Apple Intelligence is really 20+ small, fine-tuned models that kick in based on which action you take.
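That guess — many small, task-specific models selected per user action — would amount to a simple intent router at the OS level. A toy sketch of the idea, with all model names invented for illustration:

```python
# Hypothetical routing table: user action -> small fine-tuned specialist model.
SPECIALISTS = {
    "summarize": "summarizer-3b",
    "rewrite": "tone-rewriter-1b",
    "smart_reply": "reply-600m",
}

def pick_model(action: str) -> str:
    """Pick the specialist for an action, falling back to a general model."""
    return SPECIALISTS.get(action, "general-7b")

print(pick_model("rewrite"))    # tone-rewriter-1b
print(pick_model("translate"))  # general-7b (no specialist registered)
```

The appeal of this design is that each specialist can be small enough to run on device, while only unrecognized or open-ended requests would need a bigger (possibly cloud-hosted) general model.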

[–] [email protected] 10 points 9 months ago (4 children)

An LLM has no comprehension of what it says. It’s just a puppy that is really good at performing for treats. This will always yield nonsense a meaningful proportion of the time.

I don’t care how statistically good your model can be under certain constraints and inputs. At the end of the day, all you’ve done is classically condition your computer.

[–] [email protected] 45 points 9 months ago (3 children)
[–] [email protected] 9 points 9 months ago
[–] [email protected] 8 points 9 months ago (1 children)

Care to elaborate?

The suspicious parts to me were that they didn't show much of the private cloud stuff or how much it would cost, and that they still feel the need to promote ChatGPT.

[–] [email protected] 31 points 9 months ago (1 children)

All of it sounds like marketing, and I have serious doubts about their commitment to, or ability to, respect privacy when one of their previous points is that they plan to integrate third-party systems. So... I have doubts.

[–] [email protected] 12 points 9 months ago (1 children)

I mean, that's fair. I personally use Apple devices specifically because I trust them the most on privacy, but if you don't trust Apple with privacy, which is a 100% valid take to have, then of course this major selling point of their marketing becomes moot.

[–] [email protected] 16 points 9 months ago (2 children)

I would not give anyone the right to decide what is good for my privacy, including Apple. That judgement should be mine to make.

[–] [email protected] 43 points 9 months ago (1 children)
[–] [email protected] 22 points 9 months ago (1 children)

Check out OP defending Apple in every comment in this thread. It would be funny if it weren't so... yeah.

[–] [email protected] 12 points 9 months ago

I am just sitting here like... how. Am I too autistic to distinguish satire from non-satire?

[–] [email protected] 34 points 9 months ago

What the hell is the fella smoking if he thinks Apple would ever let others use their on-device LLM? Like, the company that deems it too dangerous if apps could change a wallpaper?

[–] [email protected] 20 points 9 months ago
[–] [email protected] 20 points 9 months ago (2 children)

I look forward to Apple Marketing coming up with their usual line of nonsense, like a meaningless name for an existing capability that they are claiming to have invented.

[–] [email protected] 9 points 9 months ago (1 children)

I watched an abbreviated video. Pretty much everything they announced was available on other platforms 5+ years ago.

[–] [email protected] 5 points 9 months ago* (last edited 9 months ago) (3 children)

But now it’s on your iPhone.

I agree that it’s dumb.

It will still make stock go up.

[–] [email protected] 10 points 9 months ago* (last edited 9 months ago)

Andrej Karpathy endorses Apple Intelligence

Who is this guy, and why should his opinion mean anything to me?

EDIT: never mind, searched for it and it's some guy who used to work at OpenAI.
