Given the number of Julia-related posts on the front page in the recent past, I doubt there is a need for a post linking to the generic home page for the language. That said, it would definitely be interesting to discuss something unique or specifically interesting.
The most effective way to kickstart such a discussion might be sharing a blog post discussing a handful of specific aspects (could even have many of them). Then the HN discussion is likely to pick up on some of those themes. I think the current submission has not managed to provide a nucleus around which comments can crystallize — it’s too open-ended for drive-by commenters.
One thing I've wondered about: I see it frequently written that someone new to Julia sits down and writes an algorithm, and despite the language's marketing, the code runs at a fraction of C++ speed. Eventually the code gets a speedup, but only after significant and non-obvious tuning that requires a lot of Julia experience. It concerns me that this may be the common scenario; I see it written often in articles, blog posts, forums, etc. It's the one point that has kept me from diving too deeply into the language.
That's true, but it's because Julia looks like Python/Fortran/Matlab on the surface while being a genuinely unique language that you can't really learn in a day or two. Write Julia like Python and it will be slow (dynamic languages are slow, after all); write Julia like Fortran and it will be fast (static languages are fast, but restrictive). And once you actually learn the language, you can fairly easily write extremely dynamic code that runs at around 80% of the speed of C just by following a few rules: only `const`s in the global namespace, type stability, concretely typed fields in structs and containers, heavy use of multiple dispatch/parametric types, and profiling the eventual inference failures.
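To make the "const globals / type stability" rule concrete, here's a minimal sketch (function names are my own, for illustration) of the same loop written against a non-`const` global versus taking the array as an argument:

```julia
# Non-const global: its type can change at any time, so the compiler
# must treat every access as dynamically typed.
x = rand(1000)

function sum_global()
    s = 0.0
    for v in x          # each access to `x` goes through dynamic dispatch
        s += v
    end
    return s
end

function sum_arg(xs)    # `xs` has a concrete type at call time,
    s = 0.0             # so this loop compiles to tight native code
    for v in xs
        s += v
    end
    return s
end
```

With BenchmarkTools.jl, `@btime sum_global()` is typically far slower than `@btime sum_arg(x)`, despite identical logic; marking the global `const x = rand(1000)` closes most of the gap.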
Being permissive lowers the barrier for non-CS people, one of the main targets of the language, to start using it, especially in the REPL or a notebook, even if they are not yet effective with the language.
This is a feature of Julia I like. I write in my Pythonic way: slow, but good enough. Then I tune it when I really need the speed. As you and others mentioned, there are only a few things to do to get most of the way there: put things in functions, keep code type-stable, and optimize array allocations. More here: https://docs.julialang.org/en/v1/manual/performance-tips/
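The "optimize array allocations" step usually just means switching to an in-place variant once the easy version works. A small sketch (the `relu`/`relu!` names follow Julia's bang convention but are my own example):

```julia
# Straightforward version: allocates a fresh array on every call.
relu(xs) = max.(xs, 0.0)

# Tuned version: writes into a caller-supplied buffer, zero allocations
# per call after warmup. `@.` fuses the broadcast into one pass.
function relu!(out, xs)
    @. out = max(xs, 0.0)
    return out
end

xs  = randn(10_000)
out = similar(xs)     # preallocate once, reuse across calls
relu!(out, xs)
```

Same code shape, same results; the only change is who owns the memory.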
It's important to emphasize that while this may sound like what Python people do with Numba or Cython, it's actually quite different, because whether you write "slow Julia code" or "fast Julia code", it's all seen by the same compiler.
A highly tuned kernel function can be inlined into a not so finely tuned outer function and vice versa, our metaprogramming tools see both functions the same, etc.
Plus, high-performance Julia code (in the 80%-of-C range; maybe not at the 100% range where micro-optimizations start to happen) isn't any less readable than low-performance Julia code. A common misconception is that annotating every type and making code more imperative and C-like makes it faster, but once you've internalized what makes code slow, fast code is surprisingly just as concise as it would be in Python, without any type annotations.
But since fast and slow code are so similar, and both give the same result, it becomes less obvious for someone reading the code to see what was done to make it fast (even if it's easy to see what the code does, thanks to its high-level nature). I think that's something that could improve over time with better tooling: using the meta-capabilities of the language to build linters and compiler suggestions that guide users toward the small changes that make the biggest difference.
>Eventually, the code gets a speedup only after significant and non-obvious tuning that requires a lot of Julia experience
I wouldn't say it's non-obvious: the things you have to do would be obvious to any programmer who's written code in something like C++. The problem is that MATLAB, R, and Python tend to train people to use sub-optimal programming styles, like code heavy on array allocations, even when a function with a plain loop would be perfectly optimal. If someone is always told that mapping one function at a time ("vectorization") is the optimal way to do things, they will try the same thing when they move to a faster language. So it's really just a re-education issue.
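As an illustration of that re-education point, here are the two styles side by side (my own toy example): the "vectorized" habit from NumPy/MATLAB allocates a temporary array, while a plain loop in Julia is already optimal and needs no special incantation.

```julia
# NumPy/MATLAB habit: the broadcast expression allocates a temporary
# array just to sum it.
vectorized(a, b) = sum(a .* b .+ 1.0)

# Plain loop: allocation-free and just as fast (or faster) in Julia.
function looped(a, b)
    s = 0.0
    @inbounds for i in eachindex(a, b)
        s += a[i] * b[i] + 1.0
    end
    return s
end
```

Both compute the same thing; the loop version is what MATLAB folklore says to avoid, and in Julia it's the natural fast path.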
There isn't a ton of Julia-specific tuning. Off the top of my head, the main performance gotchas are: type-unstable code, closures, globals, and (in some cases) passing functions around in ways that prevent specialization.
Most performance tips are similar across fast languages: minimize OS-managed allocations, use cache- and SIMD-friendly memory layouts, and improve algorithm design in general. I find these optimizations are typically easier in Julia than in most languages.
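For instance, the cache-friendliness advice is the same one you'd follow in Fortran: Julia arrays are column-major, so inner loops should run down columns (stride-1 access). A quick sketch with a made-up function name:

```julia
# Column-major layout: iterate rows (i) in the inner loop so memory
# access is stride-1 and cache/SIMD friendly.
function colsum_fast(A)
    s = 0.0
    for j in axes(A, 2), i in axes(A, 1)
        s += A[i, j]
    end
    return s
end
```

Swapping the loop order (columns innermost) gives the same answer but strides through memory, which shows up immediately on large matrices.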
It's hard to frame creating new programming languages as reinventing the wheel. Every new language starts with a smaller ecosystem, so someone will always have to rewrite some X that an older language already has. But new languages are also created with the knowledge of what worked well and what didn't in their predecessors, so it's not reinventing the wheel; it's inventing a wheel that attempts to correct the fundamental mistakes/compromises that were inevitable given the knowledge and tools of that time.
Julia was created with 20 years of extra knowledge from Python, Matlab, R, and other languages, and for a domain that basically didn't exist when Python was designed, resulting in a set of features that cannot be added to Python at this point. It's a different wheel that can move faster, so even if the cars with the old wheel are way ahead, it can still eventually catch up (creating libraries in Julia from scratch is easier and faster, so it can compete even 20 years late and with much less support). Should we stop trying to create improved languages, stay with the languages we have now forever, and simply create increasingly "clever" ways to compensate for any flaw that can't be directly fixed without breaking all that legacy?
Plus Julia has web frameworks like Genie, but it's true that they are nowhere near as mature (especially compared to a project that is considerably older than Julia itself).
It's easier and faster because it's a more powerful language. You can achieve Fortran/C speeds without leaving the language (for example, Tullio.jl and LoopVectorization.jl compete with highly optimized BLAS routines). Multiple dispatch means libraries can compose: Julia's main machine-learning library doesn't need to know anything about GPUs to run all of its methods on one (that library is also only a few thousand lines of high-level Julia, and the CUDA library is 100% Julia too). Even Swift for TensorFlow, a state-of-the-art approach to a TensorFlow-style interface that couldn't be done in Python (it's a fork of the Swift compiler), had a competitor implemented as pure Julia libraries (Zygote.jl, a source-to-source differentiation library) thanks to Julia's metaprogramming capabilities.
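A minimal sketch of the dispatch-driven composability claim (the function name is my own): code written once against `AbstractArray` runs on whatever array type you hand it, CPU or GPU, because the broadcasts dispatch on the array's type.

```julia
# Written once, generically; no GPU-specific code anywhere.
function unitnorm!(xs::AbstractArray)
    xs ./= sqrt(sum(abs2, xs))   # broadcast dispatches on typeof(xs)
    return xs
end

unitnorm!(rand(4))                    # plain CPU Array

# Assuming CUDA.jl is installed, the very same method runs on the GPU:
# using CUDA
# unitnorm!(CUDA.rand(4))             # CuArray dispatch, no code changes
```

This is the mechanism by which an ML library can stay GPU-agnostic: the GPU package supplies the array type and the kernels for broadcasting, and generic code composes with it for free.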
I did say "from scratch" because, of course, if your library requires another library that doesn't exist yet (and you don't want to use the FFI), it won't be faster or easier (though, as I mentioned in the previous comment, that's something every new language has to go through until it's no longer a problem).
> you get acceptable performance with an insane package repository that means you can ship so much faster.
You absolutely do not. There are a ton of holes in the Python package ecosystem that have been plugged by Julia. For example, stiff ODE solvers that mix with structured matrices, like block-banded matrices, are very common in PDE fields, but Python really just has (very slow) stiff ODE solvers that can only mix with unstructured sparse matrices, along with a lack of solid sparse AD. You can keep going down the list of applications that are wide open, where Julia libraries either (a) exist where Python libraries don't, or (b) are orders of magnitude faster. So let's be a bit more concrete: there are areas where Python packages can get you by, and areas where they cannot, and there are a lot of things in the "cannot" category (which is why we as devs exist!).
Since each language has killer libraries, you might want to mix and match ecosystems rather than make an all-or-nothing decision. Thankfully, that's also easily supported: either language can call the other via its C FFI interface. Throw in a Cython intermediary if that makes the glue easier to write.
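As one hedged sketch of that mix-and-match workflow, PyCall.jl (an assumption here: that it's installed and can find a Python with NumPy) lets Julia code call into a Python package directly and use the result as a native array:

```julia
using PyCall                     # assumes PyCall.jl is installed

np = pyimport("numpy")           # load a Python module from Julia
a  = np.linspace(0, 2π, 100)     # Python call; PyCall converts the
                                 # ndarray to a Julia Vector{Float64}
s  = sum(sin.(a))                # plain Julia broadcast over the result
```

The reverse direction (Python calling Julia) works similarly via PyJulia, so the "all-or-nothing" choice really can be deferred.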
The problem with Python is that you start writing in Python because you can ship quickly, and at some point you start needing performance and have to decide whether to rewrite in a new language or go down the potentially painful path of using another language and calling it from Python.
That's really bad advice. Reinventing the wheel is a good idea, and should definitely happen from time to time. If you can use new technology and ditch old preconceptions, you may actually be able to make a better wheel.
From your comments I gather numerical algorithms aren't really your area. It really has worked as a separate silo for decades, and its experts probably have little interest in ever writing a web server. What are the problems you are referring to? Aren't there also going to be problems with using one language for everyone (array-indexing headaches, for starters)?
I think Julia is older than Numba, by the way. Personally I haven't used it much, but I like the idea.
I gather from these comments that Julia is not a good choice for run-of-the-mill devs who do numerical computation on the side, but might be ideal for someone whose entire job is to write algorithms from scratch. Correct?
I'd say it's not a good choice if, as in your example, you want to integrate with a mature web framework or something else that only Python has (though Julia's FFI lets you call Python almost as if you were writing Python directly in your Julia code), or if you already have an infrastructure in Python.
For run-of-the-mill numerical computation, as you said, it's good at letting you write loops and code that feel natural and run fast. The most commonly used algorithms for data science and scientific computing are already written in Julia; you don't need to write them from scratch. And it's good to have libraries that you can easily inspect, understand, and extend from within your own code.
A really interesting trend in the data science community is that languages can be used in conjunction with one another. This can be seen in projects like Jupyter (Julia, Python, Latex, and R), Julia's FFI, and Apache Arrow.