New language promises to reduce compilation times by using all the threads and GPU cores available on your machine. What are your opinions on it so far?
Seems like GNU parallel with some build-chain helpers. My problem is that if you're not already writing with GNU parallel or something similar in mind, then you're just adding more complexity.
How do you compile code with GNU parallel? I mean, I'm really ignorant about parallel, and at first glance it seemed there's no way to compile separate chunks of code with it.
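For what it's worth, GNU parallel doesn't understand compilation at all; it just runs one shell command per input line (something like `ls *.c | parallel cc -c {}`), which only works because object files are independent until the link step. A rough Python sketch of that same pattern, with hypothetical file names and a process pool standing in for parallel:

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical translation units; each one "compiles" independently,
# which is the only reason farming them out blindly is safe.
SOURCES = ["main.c", "util.c", "io.c"]

def compile_unit(src: str) -> str:
    # Stand-in for one `cc -c src` invocation; a real build would
    # shell out to the compiler here and return the object file path.
    return src.replace(".c", ".o")

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        objects = list(pool.map(compile_unit, SOURCES))
    print(objects)  # ['main.o', 'util.o', 'io.o']
```

The link step still has to run serially afterwards, which is why this is "build-chain helpers" rather than a parallel compiler.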
> Bend seems like GNU parallel with some build-chain helpers.
It seems more like Fortran, except with Python syntax; and it's weakly typed, so you'll end up in type-checking hell if you use any library that tries to be fancy and creates its own types.
Outside of the syntax though: The speedups look really cool!
I’m curious to see what potential speedups would look like in a large project.
Additionally, I'm curious what the power requirements are for programs written in it, since it seems like it will aggressively parallelize every statement in the language.
I also wonder how long it will be before someone implements a Deadfish / BF / Lisp interpreter in it.
Why is it fashionable to hate curly braces? I think readability is much better served by explicit block-closing tags…
And why do we hate type declarations? I don't mind being able to omit them; it's handy for quick-and-dirty stuff. But strict type checking is a powerful tool… so much so that PHP has put a lot of effort into adding it after the fact.
There are type declarations and checking in Bend and HVM; it's just that Bend has type inference. I personally don't mind either way, although for scripting I do like dynamic types, as in Python: they make things easier to write, at the cost of needing to know exactly what you're doing, or cleaning up bugs afterwards.
Yeah, when it comes to type declarations it's mostly about an added layer of safety, especially across function boundaries and code contracts… these are useful things when you have a lot of cooks in the kitchen.
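A tiny illustration of the contract point, using Python's optional annotations (hypothetical function; a static checker such as mypy would reject the bad call before it ships):

```python
def scale(price: float, factor: int) -> float:
    # The annotations are the contract between teams: callers promise
    # numbers, and this function promises a number back.
    return price * factor

print(scale(10.0, 2))    # 20.0, as intended
print(scale("9.99", 2))  # runs anyway and prints 9.999.99: string
                         # repetition, a silent bug a checker catches
```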
> reduce compilation times
They mention nothing about compile times. This is about allowing the compiler to automagically run your code in multiple threads on CPU and GPU.
It's an interesting idea. I like the CPU/GPU abstraction, but it's going to have some learning curve to write code for it. I'm not in the niche it's aimed at, though, so I can't comment on its usefulness.
That's the interesting thing about HVM: you get zero-overhead multithreading. It happens automagically.
And the fact that it's seamless across both CPU and GPU is exciting. Most devs can probably hack out a half-decent multi-threaded CPU implementation, but GPU programming is far more complicated. Having one solution that works on both is very intriguing to me.
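For contrast, here's what the manual version of that "half-decent CPU implementation" looks like: a sketch in Python of a recursive range sum, where the programmer has to fork the split by hand (hypothetical workload; the claim in the thread is that a runtime like HVM discovers every such split on its own):

```python
from concurrent.futures import ProcessPoolExecutor

def tree_sum(lo: int, hi: int) -> int:
    # Sum the integers in [lo, hi) by recursive halving; the two
    # halves share no state, which is what makes them parallelizable.
    if hi - lo == 1:
        return lo
    mid = (lo + hi) // 2
    return tree_sum(lo, mid) + tree_sum(mid, hi)

if __name__ == "__main__":
    # Manual parallelism: only the top-level split runs on two
    # processes, and we had to write the pool plumbing ourselves.
    with ProcessPoolExecutor(max_workers=2) as pool:
        left = pool.submit(tree_sum, 0, 500_000)
        right = pool.submit(tree_sum, 500_000, 1_000_000)
        print(left.result() + right.result())  # 499999500000
```

And this is still only the CPU half; a GPU version would be a separate codebase entirely, which is the gap the CPU/GPU abstraction is aiming at.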
I wrote some math shit in it the other day. It’s cool. Waiting for more data types, more optimizations and then I’ll probably be using the hell out of it.
I like the HVM language better. It’s more functional-y and suits the model. Bend is just syntactic sugar for HVM.
Skeptical. I wrote a compiler from scratch that does this. The biggest problem is not the execution but the memory bandwidth, which becomes costly.
Automatic parallel computing is to me a pipe dream.
The concept also appears to downplay the importance of software architecture: you must design your program around parallelism. The compiler can't help you if you express your program in a serial fashion, and that is the fundamental problem that makes parallel computing hard.
I don’t mean to be a downer. By all means, give it a shot. I’m just not seeing the special ingredient that makes this attempt successful where many others like it have failed.