Hacker News | sjkelly's comments

This is a lack of impulse response data, usually broken by motor control paradigms. I recently reread Cybernetics by Norbert Wiener, and this is one of the fundamental insights he had. Once we go from Position/Velocity/Torque down to encoder ticks, resolver ADCs, and PWM, we will have proprioception as you expect. This also requires a cycle time improvement of several orders of magnitude, plus variable-rate controllers.


It is great to see this getting more attention. One of the best routines I started was morning yoga. But it is cold here in Boston, and I have mild seasonal affective disorder, so I got some 250W IR heat lamps and a UV light therapy box to try to simulate sunlight on gloomy days while I do morning yoga. I usually also have some mild eye trouble and dryness in the winter (probably a mix of cycling in the cold and being indoors). The eye and mood symptoms have pretty much disappeared since using the lights.


What IR heaters did you purchase? TIA!


This article confuses modern with high end. The ultra high end of cycles is just like the ultra high end of automobiles: there are bespoke tools and techniques required for maintenance. The average bike sold today is probably easier to maintain than those of the past. Disc brakes don't come out of alignment as badly, aluminum frames don't rust, rims are far stronger, etc.


Spot on. This is the equivalent of OP being surprised his Ferrari has expensive and difficult maintenance. The bike he's swapping parts for retails for ~12k USD.


Indeed. 10k for a bike is what, 10-20 times the price of a 'normal' bike, just as a Ferrari is 10-20 times the price of a normal car.


Agreed, and I think a lot of people find themselves with high end cycles by using price as a buying guide. Relatively speaking it's a smaller purchase than say a car, so it's easy to end up with more bike than you needed.


Dual-band GPS on my Garmin is certainly amazing. It is mind-blowing to see accuracy down to which side of the street I was on during a run.


There are some PHYs missing, like PCIe.


This is one of the benefits of a capable JIT for numerical computing. Even including compile time, the overall execution time is way lower. Chris Elrod is an AVX2/512 whisperer.


JIT-compiled code is way faster than linear algebra kernels which are completely vectorized by GCC et al? How does that work?


If you only vectorize the linear algebra, you leave performance on the table. Vectorizing fused operations reduces the number of memory passes. Also, knowing the sizes (which are chosen at runtime) is necessary to make optimal decisions.
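To make the memory-pass point concrete, here is a toy Julia sketch (variable names are mine):

    # Three separate vectorized statements: each one reads and writes whole arrays,
    # so the data makes several trips through memory (plus temporaries).
    t1 = w .* x
    t2 = t1 .+ b
    y  = tanh.(t2)

    # The fused version does the same work in a single pass, with no temporaries.
    y = @. tanh(w * x + b)

The same idea extends beyond what a lone BLAS call can do, e.g. folding a bias add and an activation into the matmul loop itself.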


I was responding to a blanket statement alluding to AVX512 (which is rather a can of worms). I don't understand what this is on about, especially with reference to the article. Of course you account for matrix dimensions at run time, like to decide whether or not to do packing in GEMM, whether to call out to GEMM for matmul, etc.


SimpleChains does not call out to GEMM. All layers are currently implemented as naive for loops. It uses LoopVectorization.jl to compile them, which does a good job leveraging AVX512 -- much better than LLVM or GCC.

It doesn't pack, which is why column-major A' * B is slow: https://juliasimd.github.io/LoopVectorization.jl/latest/exam... It is also why SimpleChains wouldn't scale to large matrix multiplies. But with 1 MiB of L2 cache (e.g. Skylake-X), a 512x256 dense layer of Float32 is still small enough not to really need packing, so I haven't yet needed to implement it (but I will eventually, in a future version of a rewritten LoopVectorization that also adds actual dependency analysis). For an ML library, I'd implement packing by changing the data layout of the parameter vector to tile-major, to skip any runtime packing altogether (i.e., the data would be pre-packed). Only extremely large arrays benefit from a second packing level, so I don't think that's worthwhile; smaller batch sizes would avoid the need.

The benchmark plots I linked above used dynamically sized arrays. LoopVectorization.jl performed better at small sizes than MKL, and much better than everything else. Compared to that application, LV can also specialize on compile-time sizes, fuse the addition of the bias vector, and fuse the activation function.
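Concretely, a fused dense layer written as naive loops looks roughly like this (a simplified sketch, not the actual SimpleChains source; dense_relu! is just an illustrative name):

    using LoopVectorization

    # One dense layer as plain loops; @turbo vectorizes them (using AVX512 where
    # available). The bias add and ReLU happen in the same pass as the matvec,
    # so there is no extra sweep over memory for them.
    function dense_relu!(y, W, x, b)
        zed = zero(eltype(y))               # hoisted constant for the ReLU clamp
        @turbo for i in axes(W, 1)
            acc = zero(eltype(y))
            for j in axes(W, 2)
                acc += W[i, j] * x[j]
            end
            y[i] = max(acc + b[i], zed)
        end
        return y
    end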


Gate level sims would be sweet. Yosys has the CXXRTL backend that may nicely dovetail into this too.


I think even since 2014 there have been many niche technical computing companies supporting development quite successfully. Bigger players would definitely help, but I think what is needed most at this time is more public/private research money for scientific computing. Hence Julia has many academic contributors (grad students and postdocs are a likely majority), many of whom are now becoming professors and industry leaders.

The post-1.0 world in Julia has been spectacular for development stability. In the early days it was somewhat tiring trying to develop basic foundational libraries while keeping pace with language changes. 1.0 has stabilized things quite a bit, and the forthcoming LTS (sometime this year, maybe) will, I think, really start to button up some of the major issues people have with package load times and installation.


Yes, writing pre-1.0 Julia code was like balancing on a log in the water - the platform kept moving under you :D I'm glad it's gotten a lot stabler now.

And I agree about academics as a key user base. I think Julia's growth is far from over, there's a lot of organic spread yet to come. It may never replace Python in terms of global popularity, but then, why should it have to?


Generative design is almost always in SDF form. Things like point clouds, images, and 3DNN also dovetail nicely. SIMP in topology optimization is a good example as well. I believe a lot of SDF applications are still held back by mesh extraction. There is no silver bullet that can handle adaptive methods and sharp features while still generating a manifold mesh.

SDF and mesh extraction are one of my favorite areas of research. I think it is very important for additive manufacturing in particular. The value will be hybrid SDF and spline methods for complex and highly integrated applications such as fluid and heat transfer or compliant mechanisms.

Modeling a box or cylinder with SDF isn't the right application IMO. Optimized topology for a given PDE is.
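To make "SDF form" concrete for anyone unfamiliar with the representation: shapes and booleans are just real-valued functions of position. A toy Julia sketch (names are mine, and as I say above, simple primitives aren't where SDFs really shine):

    # Negative inside the shape, zero on the surface, positive outside.
    sphere_sdf(p, r) = sqrt(sum(abs2, p)) - r

    function box_sdf(p, half)          # axis-aligned box with half-extents `half`
        q = abs.(p) .- half
        return sqrt(sum(abs2, max.(q, 0.0))) + min(maximum(q), 0.0)
    end

    union_sdf(d1, d2) = min(d1, d2)    # CSG union is just a pointwise min

    p = (0.3, 0.1, -0.2)
    d = union_sdf(sphere_sdf(p, 0.5), box_sdf(p, (0.4, 0.4, 0.4)))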


> Modeling a box or cylinder with SDF isn't the right application IMO. Optimized topology for a given PDE is.

Like anything in engineering and design, IMO, it depends on who you are and what you are doing.

> SDF and mesh extraction are one of my favorite areas of research.

Why convert an SDF to a mesh to do visualization and engineering analysis? Even with FDM, which can be thought of as a 2D filling problem like scan conversion once you have an SDF, meshless may be the way forward for additive manufacturing as well. One can think of reasons you would have to do mesh extraction, but only for niche areas of manufacturing.
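As a toy illustration of the scan-conversion idea (Julia, names are mine; a real slicer would of course still need perimeters, infill, supports, etc.):

    # Rasterize one layer of a part directly from its SDF f(x, y, z),
    # with no mesh in between: a point is inside wherever the distance is <= 0.
    function slice_mask(f, z; xs = range(-1, 1; length = 256),
                              ys = range(-1, 1; length = 256))
        return [f(x, y, z) <= 0 for y in ys, x in xs]   # Bool image = fill pattern
    end

    # e.g. a sphere of radius 0.5, sliced at height z = 0.25
    layer = slice_mask((x, y, z) -> sqrt(x^2 + y^2 + z^2) - 0.5, 0.25)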


That's pretty cool. I see you are using marching cubes. How are you handling sharp geometry (I'm assuming previews use a finite difference for normals)?

Also, I see you are at Formlabs. I'm sure you know of Matt Keeter's libfive :)


Yes, I'm well aware of libfive!

I'm not doing anything special for sharp geometry. For now you just need to use sufficient resolution until it looks "good enough." Normals are just based on the triangle normals.


Take the spatial derivative for the normals? Runge-Kutta methods work wonders for that: https://en.m.wikipedia.org/wiki/Runge–Kutta_methods That works well if they are true signed distance fields.
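Even the simplest version of that works: central differences on the field already give a decent normal (a sketch; sdf_normal is just an illustrative name, and higher-order schemes refine the same idea):

    using LinearAlgebra: normalize

    # Surface normal as the normalized spatial gradient of the SDF,
    # estimated with central differences.
    function sdf_normal(f, x, y, z; h = 1e-4)
        g = [f(x + h, y, z) - f(x - h, y, z),
             f(x, y + h, z) - f(x, y - h, z),
             f(x, y, z + h) - f(x, y, z - h)]
        return normalize(g ./ (2h))
    end

For a true signed distance field the gradient already has unit magnitude, so the normalization mostly just cleans up discretization error.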


Since you have a distance, you could fairly easily implement dual contouring, I think?


Or Surface Nets. A nice compromise of speed and quality. Marching cubes needs a lot of resolution, and its imperfections are especially ugly to my eyes.

Maybe mesh resolution isn't a problem for 3D printing. But if 3D printing is the main goal, why bother with a mesh? Just go straight to voxels that can be fed directly to the printer.


The printer does not use voxels, it uses G-code, which describes tool paths for the print head.


True for FDM, but with SLA you have the potential of rendering a direct slice of the SDF.


OK. But a voxel representation has to be a closer match than a mesh. I guess overhang, solidity, etc. still need to be calculated and dealt with, which is non-trivial.

Is it that "mesh to G-code" libraries are more mature than "voxel to G-code"?

