1
submitted 2 years ago by [email protected] to c/[email protected]

Intelligence explosion arguments don’t require Platonism. They just require intelligence to exist in the normal fuzzy way that all concepts exist.

1
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

At OpenAI, protecting user data is fundamental to our mission. We do not train our models on inputs and outputs through our API.

1
submitted 2 years ago by [email protected] to c/[email protected]

We’re rolling out custom instructions to give you more control over how ChatGPT responds. Set your preferences, and ChatGPT will keep them in mind for all future conversations.
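For API users, custom instructions behave roughly like a system message that is re-sent with every conversation. A minimal sketch of that pattern with the openai Python package (the instruction text and model name are placeholders, not OpenAI's implementation of the feature):

```python
import openai  # openai-python 0.x style API; assumes OPENAI_API_KEY is set

# Hypothetical "custom instructions" the user wants applied to every chat.
CUSTOM_INSTRUCTIONS = "Answer concisely and always include a code example."

def chat(user_message: str) -> str:
    # Re-sending the instructions as a system message approximates how
    # custom instructions persist across conversations in the ChatGPT UI.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": CUSTOM_INSTRUCTIONS},
            {"role": "user", "content": user_message},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(chat("How do I reverse a list in Python?"))
```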

@AutoTLDR

1
submitted 2 years ago by [email protected] to c/[email protected]

GPT-3.5 and GPT-4 are the two most widely used large language model (LLM) services. However, when and how these models are updated over time is opaque. Here, we evaluate the March 2023 and June 2023 versions of GPT-3.5 and GPT-4 on four diverse tasks: 1) solving math problems, 2) answering sensitive/dangerous questions, 3) generating code, and 4) visual reasoning. We find that the performance and behavior of both GPT-3.5 and GPT-4 can vary greatly over time. For example, GPT-4 (March 2023) was very good at identifying prime numbers (accuracy 97.6%) but GPT-4 (June 2023) was very poor on these same questions (accuracy 2.4%). Interestingly, GPT-3.5 (June 2023) was much better than GPT-3.5 (March 2023) on this task. GPT-4 was less willing to answer sensitive questions in June than in March, and both GPT-4 and GPT-3.5 had more formatting mistakes in code generation in June than in March. Overall, our findings show that the behavior of the “same” LLM service can change substantially in a relatively short amount of time, highlighting the need for continuous monitoring of LLM quality.
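The prime-number result is just accuracy over a fixed question set, re-measured against each model snapshot. A rough sketch of that kind of drift check (ask_model is a stand-in for whatever API call you use; the question wording is assumed, not taken from the paper):

```python
from sympy import isprime  # ground truth for primality

def ask_model(question: str) -> str:
    # Stand-in for a call to a specific model snapshot,
    # e.g. the March vs. June version of the same model.
    raise NotImplementedError

def prime_accuracy(numbers: list[int]) -> float:
    correct = 0
    for n in numbers:
        answer = ask_model(f"Is {n} a prime number? Answer Yes or No.")
        predicted = answer.strip().lower().startswith("yes")
        if predicted == isprime(n):
            correct += 1
    return correct / len(numbers)

# Running the same number list against both snapshots and comparing
# the two accuracies is the whole "drift" measurement.
```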

2
Llama 2 - Meta AI (ai.meta.com)
submitted 2 years ago by [email protected] to c/[email protected]

Introducing Llama 2 - The next generation of our open source large language model. Llama 2 is available for free for research and commercial use.

This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters.
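For reference, the released weights can also be loaded through Hugging Face transformers once access is granted; a minimal sketch (the meta-llama/Llama-2-7b-chat-hf model id and generation settings are assumptions, check the official model cards):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id; the repo is gated and requires accepting Meta's license.
model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the difference between the 7B and 70B variants."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```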

@AutoTLDR

1
submitted 2 years ago by [email protected] to c/[email protected]

16 Mar, 2023

Kagi Search is pleased to announce the introduction of three AI features into our product offering.

We’d like to discuss how we see AI’s role in search, what the challenges are, and our AI integration philosophy. Finally, we will go over the features we are launching today.

@AutoTLDR

1
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

This is a game that tests your ability to predict ("forecast") how well GPT-4 will perform at various types of questions. (In case you've been living under a rock these last few months, GPT-4 is a state-of-the-art "AI" language model that can solve all kinds of tasks.)

Many people speak very confidently about what capabilities large language models do and do not have (and sometimes even could or could never have). I get the impression that most people who make such claims don't even know what current models can do. So: put yourself to the test.

1
submitted 2 years ago by [email protected] to c/[email protected]

Increasingly powerful AI systems are being released at an increasingly rapid pace. This week saw the debut of Claude 2, likely the second most capable AI system available to the public. The week before, OpenAI released Code Interpreter, the most sophisticated mode of AI yet available. The week before that, some AIs got the ability to see images.

And yet not a single AI lab seems to have provided any user documentation. Instead, the only user guides out there appear to be Twitter influencer threads. Documentation-by-rumor is a weird choice for organizations claiming to be concerned about proper use of their technologies, but here we are.

@AutoTLDR

1
submitted 2 years ago by [email protected] to c/[email protected]

TL;DR: (by GPT-4 🤖)

The article by Chandler Kilpatrick on Medium discusses the new Code Interpreter feature of ChatGPT, which has been released to Beta from its previous Alpha testing phase. The Code Interpreter enhances ChatGPT's ability to process, generate, manipulate, and run code, currently supporting only Python. Users can upload files (with a limit of 100 MB per file) for the AI to interact with, although it cannot edit files directly. The Code Interpreter can be used in various fields such as software development, data analytics, documentation, and education, helping with tasks like code generation, error detection, code refactoring, creating data visualizations, and providing real-time programming tutoring. The article also highlights some impressive feats accomplished by users, including recreating the game Flappy Bird in less than 10 minutes.
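The data-visualization use case boils down to the model writing and running ordinary pandas/matplotlib code against an uploaded file; a sketch of the kind of snippet it might produce (sales.csv and its columns are made up for illustration):

```python
import pandas as pd
import matplotlib.pyplot as plt

# An uploaded file appears to the interpreter as a local path (hypothetical name).
df = pd.read_csv("sales.csv")

# Aggregate and plot: the bread-and-butter Code Interpreter task.
monthly = df.groupby("month", sort=False)["revenue"].sum()
monthly.plot(kind="bar", title="Revenue by month")
plt.tight_layout()
plt.savefig("revenue_by_month.png")
```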

1
submitted 2 years ago by [email protected] to c/[email protected]

LLM is my command-line utility and Python library for working with large language models such as GPT-4. I just released version 0.5 with a huge new feature: you can now install plugins that add support for additional models to the tool, including models that can run on your own hardware.
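If I understand the release correctly, a plugin is installed once (e.g. `llm install llm-gpt4all`) and its models then become selectable by name alongside the OpenAI ones. A rough sketch of the Python side, going by the project docs; the model id below is a placeholder, not verified here:

```python
import llm  # the library this post is about

# Assumes an API key is configured (e.g. via `llm keys set openai`
# or the OPENAI_API_KEY environment variable).
model = llm.get_model("gpt-3.5-turbo")  # plugin-provided models work the same way
response = model.prompt("Three names for a pet pelican")
print(response.text())
```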

@AutoTLDR

1
submitted 2 years ago by [email protected] to c/[email protected]

An AI-first notebook, grounded in your own documents, designed to help you gain insights faster.

@AutoTLDR

[-] [email protected] 22 points 2 years ago* (last edited 2 years ago)

And people are seriously considering federating with Threads if it implements ActivityPub. Things have been so crazy recently that I think if Satan existed and started a Lemmy instance, there would probably still be people arguing in good faith for federating with him.

[-] [email protected] 17 points 2 years ago

Lol that’s like saying there’s too much porn on /r/gonewild

[-] [email protected] 13 points 2 years ago

I hope all major instances would immediately defederate

[-] [email protected] 11 points 2 years ago

Trust me, the shit show is glorious. I even instinctively upvoted a couple of medieval memes but quickly realized what I was doing and closed the tab.

[-] [email protected] 13 points 2 years ago

I’m firmly in the print statement / console.log camp but this article convinced me to try using a debugger.

[-] [email protected] 12 points 2 years ago* (last edited 2 years ago)

I absolutely agree. But:

  • sometimes you need to modify existing code and you can't add the types necessary without a giant refactoring
  • you can't express units with types in:
    • JSON/YAML object keys
    • XML tag or attribute names
    • environment variable names
    • CLI switch names
    • database column names
    • HTTP query parameters
    • programming languages without a strong type system

Obviously as a Hungarian I have a soft spot for Hungarian notation :) But in these cases I think it's warranted.
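A quick illustration of the point: inside the program a wrapper type can carry the unit, but the moment the value crosses into an env var or a JSON key, the name is the only place left to put it (names below are made up):

```python
import json
import os
from dataclasses import dataclass

# Inside the language, the type system can carry the unit.
@dataclass(frozen=True)
class Seconds:
    value: float

request_timeout = Seconds(30.0)

# At the boundary, only the name can carry it: "Hungarian" suffixes.
os.environ["REQUEST_TIMEOUT_SECONDS"] = str(request_timeout.value)
config = json.dumps({"request_timeout_seconds": request_timeout.value})
print(config)
```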

[-] [email protected] 28 points 2 years ago* (last edited 2 years ago)

I’m sure it’s a nice client but I don’t understand why so many GUI projects have no screenshots in their READMEs. It would be great if I could immediately see if I like it without installing it.

EDIT: thanks for adding the screenshot to your post! It looks awesome!

[-] [email protected] 29 points 2 years ago

This is exactly what it felt like. It is amazing to see how well federation works - look at all the usernames from different instances! I enjoy the Cambrian explosion of new communities. It feels like conquering and taming a wild frontier.

[-] [email protected] 11 points 2 years ago

The more I think about it, the more it seems that the appropriate response is mutual defederation. It will cause a lot of unnecessary confusion if lemmy.world and the other affected instances don’t do that.

[-] [email protected] 50 points 2 years ago

Beehaw instance owners:

[-] [email protected] 12 points 2 years ago

That may be part of it but I've also observed it among fellow programmers.

You give your opinion about something and your coworker has a smug, arrogant knee-jerk reaction based on some cargo-cult belief, without actually thinking about the details of the problem. Then you have to walk them through it step by step: why what you said is not what they assumed, and why, even if it turns out to be wrong, it is still a valid opinion. If you succeed, they completely change and become cooperative, and you can have an actually useful discussion. But you have to be super patient, like when taming an irritated feral cat that wants to scratch you. If you're good, the cat becomes cuddly and cute.

This works but I'm extremely tired of having to perform this dance with 60% of the new coders I meet.

[-] [email protected] 11 points 2 years ago

Haha, so true! I can definitely switch between "god at the keyboard" vs. "dog at the keyboard" within a single minute.
