Smorty

joined 2 years ago
[–] [email protected] 1 points 1 day ago (2 children)

i wan to programmatically transition!!!!! lik -.,.-,. ,with the transitional programme..,- u kno?

i wana jus -.,., poof1! ! i hav transtiond programmatically!!! i wana go

await get_tree().create_timer(100000000.0).timeout;
transition(self);
 

i srri, i hav to post when i leav an-.-, i dun feel comf-.,-, ~phuckin~ tankies.-,.-- i dun lik them ;( id nevr wana do that, ew

EDIT: here svg if anyon cars

mad wif inkscape <3

[–] [email protected] 1 points 1 day ago (4 children)

i wnt transirion program <3

[–] [email protected] 2 points 1 day ago (6 children)

pls send to meeee <3

[–] [email protected] 1 points 2 days ago

nuu ;( ....

[–] [email protected] 2 points 3 days ago

:oooooo

clothing from right character is sooooo comf looking <3 <3 <3 aaaaaaaaaa

[–] [email protected] 2 points 4 days ago

:ooooo

I luv that crunchy purple kind of - but also sof colrrrr!!! <3

πŸ’œπŸ’œπŸ’œ

it looks suuuuu confi ~~~~`

[–] [email protected] 3 points 1 week ago

:ooo

i wsh i wer comf lik dis :ooooo

[–] [email protected] 2 points 1 week ago

:o where cn i steal right girls dress :ooo

[–] [email protected] 1 points 1 week ago

ew what is this...

[–] [email protected] 4 points 1 week ago (2 children)

OK what is this "ICE" people are talking about?

[–] [email protected] 2 points 1 week ago (1 children)

what was that about anyway? i felt vrri uncomfy when those messages came in...

 

that - -yes, that! is a world i wana liv in! <3

jus nice n reasonable peeps-.-, thds vrri comf reality ~ ~ ~

 

Still working away on Gopilot, my half-finished LLM-powered AI thingy for Godot.

In the example video you can see an LLM agent creating some nodes and a script to make a login screen.

This is a cherry-picked example, but I am working on improving this.

The @action command is required for it to interact with your nodes and project files and such.

The model used for this is Mistral's Codestral model, which has a free API. It's presumably a rather small model, coming from Mistral.

If you have any questions, ask right away

 

Hello!!! <3

So i'm trying to host my own lil website server! I already got httpd running on my fedora (GNU/Linux) device, forwarded port 80 on my router to it and - TADA - I can access my simple index.html site from anywhere now via the IPv4 address! I even tried it on my phone at work, and I was able to reach my home server!

I have now purchased the nice domain smorty.dev rather cheaply on Porkbun but - as you may find out when clicking on the link - it doesn't forward to my server yet ;(

I have already set up the A record thingy on Porkbun, which can even be verified by running ping smorty.dev in the terminal, as it resolves to the current IPv4 address of my router:

```
maria@fedora:~$ ping smorty.dev
PING smorty.dev (79.241.82.75) 56(84) bytes of data.
64 bytes from www.smorty.dev (79.241.82.75): icmp_seq=1 ttl=63 time=8.35 ms
64 bytes from www.smorty.dev (79.241.82.75): icmp_seq=2 ttl=63 time=6.53 ms
64 bytes from www.smorty.dev (79.241.82.75): icmp_seq=3 ttl=63 time=5.94 ms
^C
--- smorty.dev ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 5.937/6.937/8.345/1.024 ms
```
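(Just to illustrate the same check programmatically - a minimal Python sketch, using the domain from above; socket.gethostbyname asks the system resolver for the A record, the same lookup ping does before it starts sending packets:)

```python
import socket

# Ask the system resolver (which also reads /etc/hosts) for the A record,
# the same lookup `ping` does before sending packets.
print(socket.gethostbyname("smorty.dev"))
# expected: the router's current public IPv4, e.g. 79.241.82.75
```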

I searched online and found some people talking about a Windows HOSTS file, so I found the equivalent for GNU/Linux, which is /etc/hosts, and that file now looks like this:
```
maria@fedora:~$ cat /etc/hosts
# Loopback entries; do not change.
# For historical reasons, localhost precedes localhost.localdomain:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# See hosts(5) for proper format and other examples:
# 192.168.1.10 foo.example.org foo
# 192.168.1.13 bar.example.org bar
79.241.82.75 www.smorty.dev
127.0.0.1 www.smorty.dev
```

Soooo there is clearly a connection there, but the actual forwarding in the browser to my website doesn't seem to work ;(

I am *somewhat* sure I exported this correctly...

IMAGE OF ROUTER INTERFACE

EDIT: I completely forgot to mention what the address records look like... and maybe they are kinda important for my problem sooo here they are!

SCREENSHOT FROM ADDRESS RECORDS

if someone here could share some advice maybe - that would be super great :)

 

Here's the type of song I'm referring to

i dun really care for... loud aggressive music, but WOAH those piano sections - they get me... and now i wana just have like - those super cool sick awesome sections but.-. -- they're all stuck in those aggressive songs ;(

is there a genre for these or do i just have to listen to the whole song?

 

Try the model here on the huggingface space

This is an interesting way to respond. Nothing business or financial related was in my prompt and this is the first turn in the conversation.

Maybe they set some system prompt which focuses on business-related things? Just seems weird to see such an unrelated response on the first turn

 

(Linked video showcases issue quite clearly)

I am using AstroNvim, but I believe that doesn't matter too much in this instance

I am very much new to html and js and the stuffs - but this tag indenting is catching me very off guard.
When I type a new tag it gets indented all nicely, and when opening a new line with the o or O key, it nicely puts an indent if I am already inside another tag. But when I then save with :w or ZZ, it reformats the indenting again... I think this might be two formatting agents fighting one another, with different goals for the xml tag indenting?

I installed node with npm, as it kinda seems that that is a requirement for working with html stuffs smoothly... and I installed some LSP and ... stuffs with :TSInstall and :LspInstall and such... but I would expect those to not change formatting like this.

Has someone here experienced a similar issue? Is nvim in general maybe not the best for webdev? My friend uses Brackets, which seems FOSS, but Windows only >;(
Until recently, I mostly used nvim only for editing basic json and GDScript files, sometimes some cpp code even, and that worked great so far.

1
don't like em rule (lemmy.blahaj.zone)
submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]
 

i don like them links. like - tell me what’s in the link, don’t just β€œrule” and put a link.

i believe that this is not a very lemmy thing to do. especially on mobile, where it leaves the app and opens some shiddy proprietary other social media platform in your browser.

EDIT: completely forgot to add picture. now it's here!

 

How cool would that be!? Like having multiple constructors for a class or having a move method with the parameters (x:float, y:float) OR (relative:Vector2) ! That'd be super cool and useful I thinks <3
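For comparison, here's roughly what that "pick the version by parameter types" dispatch looks like when you fake it by hand - a small Python sketch with a stand-in Vector2, purely to illustrate the idea (GDScript itself doesn't have this yet):

```python
from dataclasses import dataclass
from functools import singledispatchmethod

@dataclass
class Vector2:              # stand-in for Godot's Vector2, just for this sketch
    x: float = 0.0
    y: float = 0.0

class Player:
    def __init__(self) -> None:
        self.position = Vector2()

    @singledispatchmethod
    def move(self, arg, *rest):
        raise TypeError("move() takes (x: float, y: float) or (relative: Vector2)")

    @move.register
    def _(self, x: float, y: float) -> None:
        # move(x, y): jump to an absolute position
        self.position = Vector2(x, y)

    @move.register
    def _(self, relative: Vector2) -> None:
        # move(relative): shift by an offset
        self.position = Vector2(self.position.x + relative.x,
                                self.position.y + relative.y)

p = Player()
p.move(3.0, 4.0)            # dispatches to the (x, y) version
p.move(Vector2(1.0, 0.0))   # dispatches to the Vector2 version
```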

 

image description (contains clarifications on background elements): Lots of different seemingly random images in the background, including some fries, Mr. Krabs, a girl in overalls hugging a stuffed tiger, a Mark Zuckerberg "big brother is watching" poster, two images of Fluttershy (a pony from My Little Pony), one of them reading "u only kno my swag, not my lore", a picture of Parkzer from the streamer DougDoug, and a slider gameplay element from the rhythm game osu!. The background is made light so that the text can be easily read. The text reads:

i wanna know if we are on the same page about ai.
if u disagree with any of this or want to add something,
please leave a comment!
smol info:
- LM = Language Model (ChatGPT, Llama, Gemini, Mistral, ...)
- VLM = Vision Language Model (Qwen VL, GPT4o mini, Claude 3.5, ...)
- larger model = more expensive to train and run
smol info end
- training processes on current AI systems are often
clearly unethical and very bad for the environment :(
- companies are really bad at selling AI to us and
giving them a good purpose for average-joe-usage
- medical ai (e.g. protein folding) is almost only positive
- ai for disabled people is also almost only positive
- the idea of some AI machine taking our jobs is scary
- "AI agents" are scary. large companies are training
them specifically to replace human workers
- LMs > image generation and music generation
- using small LMs for repetitive, boring tasks like
classification feels okay
- using the largest, most environmentally taxing models
for everything is bad. Using a mixture of smaller models
can often be enough
- people with bad intentions using AI systems results
in bad outcomes
- ai companies train their models however they see fit.
if an LM "disagrees" with you, that's the training's fault
- running LMs locally feels more okay, since they need
less energy and you can control their behaviour
I personally think more positively about LMs, but almost
only negatively about image and audio models.
Are we on the same page? Or am I an evil AI tech sis?

IMAGE DESCRIPTION END


i hope this doesn't cause too much hate. i just wanna know what u people and creatures think <3

 

When an LLM calls a tool, the tool usually returns some sort of value, usually a string containing some info like ["Tell the user that you generated an image", "Search query results: [...]"].
How do you tell the LLM the output of the tool call?

I know that some models like llama3.1 have a built-in tool "role", which lets u feed the model with the result, but not all models have that. Especially non-tool-tuned models don't have that. So let's find a different approach!
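For reference, this is roughly what that built-in tool role looks like in a llama3.1- / OpenAI-style message list - a hedged sketch, the exact field names differ between APIs:

```python
# Rough sketch of a chat history using a dedicated "tool" role.
# Field names vary between APIs; this is illustrative only.
messages = [
    {"role": "user", "content": "look up today's weather in new york"},
    {"role": "assistant", "content": None,
     "tool_calls": [{"name": "web_search",
                     "arguments": {"query": "weather in new york today"}}]},
    # the tool's output goes back in as its own message:
    {"role": "tool", "content": '["The temperature is 19 degrees Celsius"]'},
    # the next assistant turn can then answer using that result
]
```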

## Approaches

### Appending the result to the LLM's message and letting it continue generating

Let's say, for example, a non-tool-tuned model decides to use the web_search tool. Now some code runs it and returns an array with info. How do I inform the model? Do I just put the info after the user prompt? This is how I do it right now:

  • System: you have access to tools [...] Use this format [...]
  • User: look up today's weather in new york
  • LLM: Okay, let me run a search query
    <tool>{"name":"web_search", "args":{"query":"weather in new york today"}}</tool>
    <result>Search results: ["The temperature is 19° Celsius"]</result>
    Today's temperature in New York is 19° Celsius.

Everything in the <result> tags is added on programmatically. The message after the <result> tags is generated again. So everything within the tags is not shown to the user, but the rest is. I like this way of doing it, but it does feel weird to insert stuff into the LLM's generation like that.
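Here's a rough sketch of how that insert-into-the-assistant-message flow can look in code (Python; generate and run_tool are hypothetical stand-ins for whatever completion API and tool runner you actually use):

```python
import json

def run_turn(generate, run_tool, prompt: str) -> str:
    # generate(text, stop=[...]) returns a completion, run_tool(name, args)
    # executes a tool -- both are hypothetical stand-ins here.
    draft = generate(prompt, stop=["</tool>"])      # model writes until it finishes a tool call
    if "<tool>" not in draft:
        return draft                                # no tool call, just a normal answer

    call = json.loads(draft.split("<tool>", 1)[1])  # parse {"name": ..., "args": ...}
    result = run_tool(call["name"], call["args"])   # actually run the tool

    # splice the real result into the assistant's own message...
    draft += "</tool><result>" + json.dumps(result) + "</result>\n"
    # ...and let the model keep generating after the closing tag
    draft += generate(prompt + draft)
    return draft
```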

Here's the system prompt I use

You have access to these tools
{
"web_search":{
"description":"Performs a web search and returns the results",
"args":[{"name":"query", "type":"str", "description":"the query to search for online"}]
},
"run_code":{
"description":"Executes the provided python code and returns the results",
"args":[{"name":"code", "type":"str", "description":"The code to be executed"}],
"triggers":["run some code which...", "calculate using python"]
}
}
ONLY use tools when the user specifically requests it. Tools work with the <tool> tag. Write an example output of what the result of the tool call looks like in <result> tags.
Use tools like this:

User: Hey can you calculate the square root of 9?
You: I will run python code to calculate the root!\n<tool>{"name":"run_code", "args":{"code":"print(str(sqrt(9.0)))"}}</tool><result>3</result>\nThe square root of 9 is 3.

User can't read the result, you must tell her what the result is after the <result> tags are closed.
### Appending tool result to user message

Sometimes I opt for an option where the LLM has a **multi-step decision process** about the tool calling, then it **optionally actually calls a tool** and then the **result is appended to the original user message**, without a trace of the actual tool call:

```plaintext
What is the weather like in new york?
<tool_call_info>
You automatically ran a search query, these are the results
[some results here]
Answer the message using these results as the source.
</tool_call_info>
```

This works but it feels like a hacky way to a solution which should be obvious.

### The lazy option: Custom Chat format

Orrrr u just use a custom chat format. ditch <|endoftext|> as your stop keyword and embrace your new best friend: "\nUser: "!
So, the chat template goes something like this

User: blablabla hey can u help me with this
Assistant Thought: Hmm maybe I should call a tool? Hmm let me think step by step. Hmm i think the user wants me to do a thing. Hmm so i should call a tool. Hmm
Tool: {"name":"some_tool_name", "args":[u get the idea]}
Result: {some results here}
Assistant: blablabla here is what i found
User: blablabla wow u are so great thanks ai
Assistant Thought: Hmm the user talks to me. Hmm I should probably reply. Hmm yes I will just reply. No tool needed
Assistant: yesyes of course, i am super smart and will delete humanity some day, yesyes
[...]

Again, this works, but it generally results in worse performance, since current instruction-tuned LLMs are, well, tuned on a specific chat template, and this type of prompting deviates from it. It also requires multi-shot prompting to show the model how this new template works, and it may still generate some unwanted roles: Assistant Action: Walks out of compute center and enjoys life, which can be funi, but is unwanted.

## Conclusion

Eh, I just append the result to the user message with some tags and am done with it.
It's super easy to implement, but I also really like the insert-into-assistant approach, since it then naturally uses tools in an in-chat way, maybe being able to call multiple tools in succession, in an almost agent-like way.

But YOU! Tell me how you approach this problem! Maybe you have come up with a better approach, maybe even while reading this post here.

Please share your thoughts, so we can all have a good CoT about it.

 

Very much a reaction post to this very nice post, but this time without the spicy but instead with the comfy ~
