
Isn't 96% stack usage a bit high? Especially when you're dealing with recursion?

You pay a lot for those cars; can't they at least put in better electronic hardware? They probably have less computing power than my phone from 5 years ago.



> You pay a lot for those cars; can't they at least put in better electronic hardware? They probably have less computing power than my phone from 5 years ago.

Car electronics have a very long development process. When the cars in question (models from ~5 years ago) were designed (~10 years ago), the hardware they chose was probably quite decent for that era.

When the next model of the car is designed, they will most likely end up using the same model of computer (or a successor with conservative upgrades) to avoid having to redesign the hardware and software that much.

The cost of the actual hardware is negligible compared to the cost of the redesign.


What cost? Lives? Recalls? This is probably the motivation, but it's a false, myopic accounting that makes this calculation. I have personally witnessed way too many embedded engineers cost-reduce themselves directly into this situation, and do it as a matter of pride. They will use a $2.75 part instead of a $3.80 part, forcing design trade-offs that cause errors like this. The WRT54G was released in 2002; it had 16 MB of RAM and 4 MB of flash running at 125 MHz. There is absolutely no reason the ECU couldn't have been running something similar.

I would argue that Toyota squandered billions with this failure. The ECU needs to sit behind a protocol so it can be swapped out at any time, decoupling its evolution from the rest of the machine.

It is a shame, morally and fiscally, that embedded development isn't using safe, provable, and verifiable languages.


> The WRT54G was released in 2002; it had 16 MB of RAM and 4 MB of flash running at 125 MHz. There is absolutely no reason the ECU couldn't have been running something similar.

The WRT54G also runs in a comfy corner of your living room, fails every few years, and crashes.


> Especially when you're dealing with recursion?

They shouldn't be dealing with recursion at all; coding standards like MISRA C ban it for exactly this reason. If stack corruption is what caused their failures, inadequate testing played an important role IMHO.
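
For illustration, here's a minimal sketch (hypothetical code, nothing from the actual ECU) of the standard alternative: replace recursion with an explicit, statically sized stack, so the worst-case depth is a named constant you can budget for instead of something data-dependent:

    /* Hypothetical sketch, not ECU code: walk a tree with an explicit,
     * statically sized stack instead of recursion, so worst-case stack
     * depth is a named compile-time constant. */
    #include <stddef.h>

    #define MAX_DEPTH 16u               /* assumed worst-case nesting */

    struct node {
        int value;
        struct node *left;
        struct node *right;
    };

    /* Sums all node values; returns -1 if the depth bound is exceeded,
     * instead of silently overflowing the hardware stack. */
    long sum_tree(const struct node *root)
    {
        const struct node *stack[MAX_DEPTH];
        size_t top = 0;
        long total = 0;

        if (root != NULL)
            stack[top++] = root;

        while (top > 0) {
            const struct node *n = stack[--top];
            total += n->value;
            if (n->left != NULL) {
                if (top == MAX_DEPTH)
                    return -1;          /* bound exceeded: fail loudly */
                stack[top++] = n->left;
            }
            if (n->right != NULL) {
                if (top == MAX_DEPTH)
                    return -1;
                stack[top++] = n->right;
            }
        }
        return total;
    }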

> You pay a lot for those cars; can't they at least put in better electronic hardware?

This isn't how it works for cost-sensitive designs. You don't hear people boasting about how they have a quad-core car computer and how the touchscreens from their motor control are perfect for Facebook interactions.

The way people think about this is: if half your RAM never gets used, you specified twice as much memory as you need, and your module is more expensive than it should be. CPU usage never climbs past 20%? Then the part is five times more powerful than it needs to be, and 80% of what you paid for is wasted. And so on.

"Better electronic hardware" (in the sense of "more powerful" or "faster") also introduces additional complexity. This means more difficult constraints in testing, longer and more expensive verification processes, additional non-deterministic behaviour and so on.

Not that their system wasn't at fault. It was, but throwing more hardware at it wouldn't have made it better.


> This isn't how it works for cost-sensitive designs. You don't hear people boasting about how they have a quad-core car computer and how the touchscreens from their motor control are perfect for Facebook interactions.

I work for a company that makes quad-core computers for automotive use, and they do end up being used for Facebook interactions, among other things like the dashboard. The engine management computers will be a separate entity, though. If you look at the big auto shows from the past few years, the car manufacturers clearly do think this is going to be a major differentiator in the coming years, and it's going to be in average consumer models too, not just premium sports cars like today.

But the quad-core chips we sell today will be on the road in five or more years. By the time they roll off the assembly line, the computers will not be spectacular by the standards of that day. A smartphone is 6-18 months from design to production; a car is several times that.

It's not like the car manufacturers were cheap on the hardware.


In a sense they are rather cheap on the hardware.

Instead of using stuff like netbook chipsets, they tend to gravitate towards mobile chipsets. The difference between the i.MX6 and AMD's Jaguar (just examples; you can also look at Intel's chipsets and other boards like Tegra) is like night and day. Why isn't the Jaguar used in mobile phones? Because it can draw tens of watts, compared to just a few watts for the i.MX6.

So at least to me it seems the companies wish to save a few watts of power and a few tens of dollars per car.


My car was parked outside in sunny upstate NY a few weeks ago. I started it up, warmed up the engine for 5 minutes, and drove 70 mph to work in the cold. The temperature? -5 °F. In a few months, it will be 110+ °F in the sun and humidity when the car is parked outside.

Things like engine control modules and even entertainment systems in cars operate outside of the environmental ranges that a netbook is designed to survive in. I don't want my car sidelined because of some unreliable computer.


The safety regulations, standards-compliance requirements, and mechanical, thermal, and electrical parameters are significantly more stringent for automotive ICs than for consumer devices. Does AMD even manufacture Jaguar chips that can be used in automotive applications?

There are also logistical issues at stake, such as maintenance. Unless AMD is willing to keep manufacturing a given Jaguar chip for the 5-8 years that automotive manufacturers typically require, the Jaguars wouldn't even be considered for many systems.


But having some additional capacity available gives them the ability to do field upgrades. The firm I worked for a while ago had to undertake a very expensive hardware refresh because there just wasn't any way to get any additional bug fixes into the field -- they were down to 20 bytes free. In something like a car, which you know people are going to drive for 10+ years, you need that extra space not only for bug fixes, but also to comply with new legislation (such as brake override) and to offer a few new features to your customers.


Another valuable engineering lesson: Always Have Headroom. If one designs or operates at the limit, there is no margin for error. Resilient systems can get pushed beyond their acceptable limits and recover.


+1.

IMO, you need several sets of limits: standard limits posted to the consumer, engineering limits posted to the techie/maintenance guy/developer/etc., and actual limits... each comfortably beyond the last. Know the actual limit, but design well under it if at all possible, because the system will be misused.
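
A concrete (entirely hypothetical) version of that for stack use, since that's what the article is about; the paint-and-watermark trick is common in embedded work, but every name below is made up:

    /* Hypothetical sketch of the several-limits idea applied to stack
     * use: paint a task's stack with a known pattern before the task
     * starts, then periodically compare the high-water mark against a
     * soft (engineering) limit and a hard (actual) limit. All names
     * here are assumptions, not any particular RTOS's API. */
    #include <stdint.h>
    #include <stddef.h>

    #define STACK_WORDS    1024u
    #define PAINT          0xDEADBEEFu
    #define SOFT_LIMIT_PCT 70u       /* warn the maintenance guy */
    #define HARD_LIMIT_PCT 90u       /* treat as a fault */

    extern uint32_t task_stack[STACK_WORDS]; /* assumed: lowest address of the task stack */
    extern void log_warning(unsigned pct);   /* assumed hook */
    extern void fail_safe(void);             /* assumed hook */

    void stack_paint(void)               /* call before the task runs */
    {
        for (size_t i = 0; i < STACK_WORDS; i++)
            task_stack[i] = PAINT;
    }

    /* The stack grows downward, so untouched paint survives at the
     * low end; the high-water mark is where the paint stops. */
    static unsigned stack_used_pct(void)
    {
        size_t untouched = 0;
        while (untouched < STACK_WORDS && task_stack[untouched] == PAINT)
            untouched++;
        return (unsigned)(100u * (STACK_WORDS - untouched) / STACK_WORDS);
    }

    void stack_check(void)               /* call from a housekeeping task */
    {
        unsigned pct = stack_used_pct();
        if (pct >= HARD_LIMIT_PCT)
            fail_safe();
        else if (pct >= SOFT_LIMIT_PCT)
            log_warning(pct);
    }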


It would be stupid of me to disagree. One should always leave headroom for bugfixes, future expansion, and so on; it's only mindlessly throwing hardware at problems that I disagree with, not future-proof engineering.


Actually I slightly disagree with your "You don't hear people boasting about how they have a quad-core car computer and how the touchscreens from their motor control are perfect for Facebook interactions." statement.

Of course you're right if we're talking engine management/internal stuff, but complaining about the laggy/slow/annoying performance in all things entertainment is quite a well-known first world problem in my circles.

In fact, I've been annoyed by every car I've driven over the last 10 years due to their inability to provide sensible hardware, with a huge markup charged for all the 'official' components on top. Think navigation: you can get a third-party system for a fraction of the cost of the supplier-provided one, often with better features, decent updates, and extensibility, while otherwise you're stuck with whatever your manufacturer grabbed for pennies.

So in context ("Increase RAM so that the stack doesn't grow into the area where my acceleration value is stored") you're right, this isn't an issue. In general though I haven't seen a car manufacturer that gets consumer electronics/entertainment etc. right.


Mission-critical applications tend to use working, proven components. Why buy more complexity?

What I find confusing about the article is that it describes how to avoid problems on a completely different architecture: ARM (von Neumann) vs. V850 (Harvard), hard stack exception vs. none, and so on.

These differences and resulting inapplicable recommendations confound what could be an interesting article.
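
On the "hard stack exception" point: plain Cortex-M3/M4 parts don't actually have a dedicated stack-limit exception either (that only arrived with ARMv8-M), but you can approximate one with the MPU. A minimal sketch, assuming CMSIS register names and a linker symbol I invented:

    /* Hypothetical sketch: turning a silent stack overflow into a hard
     * fault on a Cortex-M3/M4 by putting a no-access MPU guard region
     * at the lowest stack address. Assumes CMSIS register definitions
     * and a linker-provided, 32-byte-aligned symbol __stack_limit__. */
    #include "core_cm3.h"            /* assumed CMSIS header for the part */

    extern uint32_t __stack_limit__; /* assumed: lowest address of the stack */

    void stack_guard_init(void)
    {
        MPU->RNR  = 0u;                            /* use MPU region 0 */
        MPU->RBAR = (uint32_t)&__stack_limit__;    /* guard region base */
        MPU->RASR = (4u << MPU_RASR_SIZE_Pos)      /* 2^(4+1) = 32 bytes */
                  | (0u << MPU_RASR_AP_Pos)        /* AP=000: no access at all */
                  | MPU_RASR_ENABLE_Msk;
        MPU->CTRL = MPU_CTRL_PRIVDEFENA_Msk        /* default map elsewhere */
                  | MPU_CTRL_ENABLE_Msk;
        SCB->SHCSR |= SCB_SHCSR_MEMFAULTENA_Msk;   /* route to MemManage fault */
        __DSB();
        __ISB();
    }

    /* Any push into the guard now raises the MemManage handler instead
     * of quietly corrupting whatever variable lives below the stack. */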


The article is indeed a mish-mash. Not everything that uses the ARM instruction set and architecture is von Neumann, though. The Cortex-M3 and M4 MCU series are Harvard.


Good point, although it was a bit cheeky of me to generalize anything these days as von Neumann vs. Harvard. Thanks for keeping me honestish. :)


96% may look high, but it's still <= 100%, meaning that if their analysis was correct, there's no way it could overflow.
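
The catch is in "if their analysis was correct": the worst case has to cover data-dependent recursion depth and every nestable interrupt, and getting either term wrong doesn't change the comforting number. A made-up illustration of the two terms such an analysis has to get right:

    /* Hypothetical illustration: worst-case stack use is roughly
     * (deepest call path) + (deepest interrupt nesting), and both
     * terms are easy to under-model. */
    #include <stdint.h>

    /* Term 1 -- data-dependent depth: if the analysis assumes
     * messages never nest more than N levels but field data nests
     * N+1, the computed worst case is simply wrong, while the
     * report still reads "<= 100%". */
    uint32_t nesting_depth(const uint8_t *msg, uint32_t depth)
    {
        if (*msg == '(')
            return nesting_depth(msg + 1, depth + 1);   /* recursion */
        return depth;
    }

    /* Term 2 -- interrupts: every preemption level that can fire at
     * the deepest point of the call graph stacks its own frames on
     * top; a tool that misses one nestable ISR under-reports the
     * total the same way. */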


Similar situation to why you still get spacecraft using chips from the 90s. They need chips which will remain reliable for a significant lifespan, in hostile environments. Way beyond what you're going to expect for a phone or PC.

Quite a challenge when the consequences for failure could be extreme.


The failures were extreme, both in terms of lives and brand.



