this post was submitted on 26 Mar 2025
561 points (100.0% liked)

Programmer Humor

[–] [email protected] 149 points 2 months ago (4 children)

all programs are single threaded unless otherwise specified.

[–] [email protected] 52 points 2 months ago (3 children)

It’s safe to assume that any non-trivial program written in Go is multithreaded

[–] [email protected] 19 points 2 months ago (1 children)

But it's still not a guarantee

[–] [email protected] 1 points 2 months ago

Definitely not a guarantee, bad devs will still write bad code (and junior devs might want to let their seniors handle concurrency).

[–] [email protected] 17 points 2 months ago (2 children)

And yet: You’ll still be limited to two simultaneous calls to your REST API because the default HTTP client was built in the dumbest way possible.

[–] [email protected] 2 points 2 months ago

Really? Huh, TIL. I guess I've just never run into a situation where that was the bottleneck.

[–] [email protected] 1 points 2 weeks ago (1 children)

The client object or the library?

[–] [email protected] 1 points 2 weeks ago

… Is this a trick question? The object, provided by the library (net/http, which is about as default as they come), sets "DefaultMaxIdleConnsPerHost" to 2. This is significant because if you finish a connection and you've got more than 2 idle, it slams that connection closed. If you have a lot of simultaneous short-lived requests to the same IP (say, a load-balanced IP), your Go programs will exhaust the ephemeral port list quickly. It's one of the most common "gotchas" I see: Go programs work great in dev and blow themselves apart in prod.

https://dev.to/gkampitakis/http-connection-churn-in-go-34pl is a fairly decent write up.

[–] [email protected] 7 points 2 months ago (1 children)

I absolutely love how easy multi threading and communication between threads is made in Go. Easily one of the biggest selling points.

[–] [email protected] 1 points 2 months ago (1 children)

Key point: they're not threads, at least not in the traditional sense. That makes a huge difference under the hood.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (1 children)

Well, they're userspace threads. That's still concurrency just like kernel threads.

Also, it still uses kernel threads, just not for every single goroutine.

[–] [email protected] 1 points 2 months ago (1 children)

What I mean is, from the perspective of performance they are very different. In a language like C where (p)threads are kernel threads, creating a new thread is only marginally less expensive than creating a new process (in Linux, not sure about Windows). In comparison creating a new 'user thread' in Go is exceedingly cheap. Creating 10s of thousands of goroutines is feasible. Creating 10s of thousands of threads is a problem.

Also, it still uses kernel threads, just not for every single goroutine.

This touches on the other major difference. There is zero connection between the number of goroutines a program spawns and the number of kernel threads it spawns. A program using kernel threads is relying on the kernel's scheduler, which adds a lot of complexity and non-determinism. But a Go program uses roughly the same number of kernel threads (assuming the same hardware and you don't mess with GOMAXPROCS) regardless of the number of goroutines it uses, and the goroutines are scheduled by the runtime (cooperatively, with asynchronous preemption added in Go 1.14) instead of preemptively by the kernel.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

Great details! I know the difference personally, but this is a really nice explanation for other readers.

About the last point though: I'm not sure Go always uses the maximum amount of kernel threads it is allowed to use. I read it spawns one on blocking syscalls, but I can't confirm that. I could imagine it would make sense for it to spawn them lazily and then keep them around to lessen the overhead of creating them in case they're needed later again, but that is speculation.

Edit: I dove a bit deeper. It seems that nowadays it spawns as many kernel threads as CPU cores available plus additional ones for blocking syscalls. https://go.dev/doc/go1.5 https://docs.google.com/document/u/0/d/1At2Ls5_fhJQ59kDK2DFVhFu3g5mATSXqqV5QrxinasI/mobilebasic

[–] [email protected] 23 points 2 months ago (3 children)

Does Python have the ability to specify loops that should be executed in parallel, as e.g. Matlab uses parfor instead of for?

[–] [email protected] 54 points 2 months ago (2 children)

python has way too many ways to do that. asyncio, future, thread, multiprocessing...

[–] [email protected] 41 points 2 months ago (1 children)

Of the ways you listed, the only one that will actually take advantage of a multi-core CPU is multiprocessing

[–] [email protected] 11 points 2 months ago (1 children)

yup, that's true. most meaningful tasks are io-bound, so "parallel" basically qualifies as "whatever allows multiple threads of execution to keep going". if you're doing number crunching in Python without a proper library like pandas that can parallelize your calculations, you're doing it wrong.

[–] [email protected] 8 points 2 months ago* (last edited 2 months ago) (1 children)

I’ve used multiprocessing to squeeze more performance out of numpy and scipy. But yeah, resorting to multiprocessing is a sign that you should be dropping into something like Rust or a C variant.

[–] [email protected] 2 points 2 months ago

Many numpy array functions (the BLAS-backed ones in particular) already utilize multiple cores, because they're optimized and written in C

[–] [email protected] 10 points 2 months ago (1 children)

I've always hated object oriented multi threading. Goroutines (green threads) are just the best way 90% of the time. If I need to control where threads go I'll write it in rust.

[–] [email protected] 7 points 2 months ago (2 children)

nothing about any of those libraries dictates an OO approach.

[–] [email protected] 4 points 2 months ago (1 children)
[–] [email protected] 2 points 2 months ago

Meh, even Java has decent FP paradigm support these days. Just because you can do everything in an OO way in Java doesn't mean you need to.

[–] [email protected] 2 points 2 months ago (1 children)

If I have to put a thread object in a variable and call a method on it to start it then it's OO multi threading. I don't want to know when the thread spawns, I don't want to know what code it's running, and I don't want to know when it's done. I just want shit to happen at the same time (90% of the time)

[–] [email protected] 4 points 2 months ago

the thread library is aping the posix thread interface with python semantics.

[–] [email protected] 13 points 2 months ago (2 children)

Are you still using matlab? Why? Seriously

[–] [email protected] 18 points 2 months ago (1 children)

No, I'm not at university anymore.

[–] [email protected] 5 points 2 months ago (1 children)
[–] [email protected] 5 points 2 months ago* (last edited 2 months ago) (1 children)

We weren't doing any resource-intensive computations with Matlab, mainly just using it for teaching FEM, as we had an extensive collection of scripts for that purpose, plus pre- and some post-processing.

[–] [email protected] 1 points 2 months ago

I don't like that they don't write their own algorithms in any other language. I was trying to understand low-pass filters a while back and so many web pages were like, "Call this MATLAB function" or "here's a code generator that puts out bad C for specific filter parameters" Like no, I want the algorithm explained to me...

[–] [email protected] 7 points 2 months ago (1 children)

I was telling a colleague about how my department started using Rust for some parts of our projects lately. (normally Python was good enough for almost everything but we wanted to try it out)

They asked me why we're not using MATLAB. They were not joking. So, I can at least tell you their reasoning. It was their first programming language in university, it's safer and faster than Python, and it's quite challenging to use.

[–] [email protected] 4 points 2 months ago

"Just use MATLAB" - Someone with a kind heart who has never deployed anything to anything

[–] [email protected] 9 points 2 months ago (1 children)
[–] [email protected] 4 points 2 months ago
[–] [email protected] 14 points 2 months ago (1 children)

I think OP is making a joke about python's GIL, which makes it so even if you are explicitly multi threading, only one thread is ever running at a time, which can defeat the point in some circumstances.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago) (1 children)

~~no, they're just saying python is slow. even without the GIL python is not multithreaded. the thread library doesn't use OS threads so even a free-threaded runtime running "parallel" code is limited to one thread.~~

apparently not!

[–] [email protected] 3 points 2 months ago (2 children)

If what you said were true, wouldn't it make a lot more sense for OP to be making a joke about how even if the source includes multi threading, all his extra cores are wasted? And make your original comment suggesting a coding issue instead of a language issue pretty misleading?

But what you said is not correct. I just did a dumb little test

import threading 
import time

def task(name):
  time.sleep(600)

t1 = threading.Thread(target=task, args=("1",))
t2 = threading.Thread(target=task, args=("2",))
t3 = threading.Thread(target=task, args=("3",))

t1.start()
t2.start()
t3.start()

And then ps -efT | grep python and sure enough that python process has 4 threads. If you want to be even more certain of it you can strace -e clone,clone3 python ./threadtest.py and see that it is making clone3 syscalls.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

is this stackless?

anyway, that's interesting! i was under the impression that they eschewed os threads because of the gil. i've learned something.

[–] [email protected] 2 points 2 months ago* (last edited 2 months ago) (1 children)

~~Now do computation in those threads and realize that they all wait on the GIL giving you single core performance on computation and multi threaded performance on io.~~

[–] [email protected] 4 points 2 months ago (2 children)

Correct, which is why before I had said

I think OP is making a joke about python's GIL, which makes it so even if you are explicitly multi threading, only one thread is ever running at a time, which can defeat the point in some circumstances.

[–] [email protected] 3 points 2 months ago

Oops, my attention got trapped by the code and I didn't properly read the comment.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (1 children)

Isn't that what threading is? Concurrency always happens on single core. Parallelism is when separate threads are running on different cores. Either way, while the post is meant to be humorous, understanding the difference is what prevents people from picking up the topic. It's really not difficult. Most reasons to bypass the GIL are IO bound, meaning using threading is perfectly fine. If things ran on multiple cores by default it would be a nightmare with race conditions.

[–] [email protected] 1 points 2 months ago

I haven't heard that that's what threading is; my understanding is that threading is about shared resources and memory space, not any special relationship with the scheduler.

Per the wiki:

On a multiprocessor or multi-core system, multiple threads can execute in parallel, with every processor or core executing a separate thread simultaneously; on a processor or core with hardware threads, separate software threads can also be executed concurrently by separate hardware threads.

https://en.m.wikipedia.org/wiki/Thread_(computing)

I also think you might be misunderstanding the relationship between concurrency and parallelism; they are not mutually exclusive. Something can be concurrent through parallelism, as the wiki page has (emphasis mine):

Concurrency refers to the ability of a system to execute multiple tasks through simultaneous execution or time-sharing (context switching), sharing resources and managing interactions.

https://en.m.wikipedia.org/wiki/Concurrency_(computer_science)

[–] [email protected] 5 points 2 months ago

I initially read this as “all programmers are single-threaded” and thought to myself, “yeah, that tracks”