Hacker News | jdfreefly's comments

I'm against the death penalty, but I'm willing to re-evaluate the position.


* Healthcare isn't a right. It's the product of someone's hard work, and as such you have no "right" to it.

* Making college education free for US citizens is a terrible idea.

* We've taken the idea of "sanctuary cities" way too far.

* Gun control is a bad idea.

These are a few conservative ideas that I think plenty of people would be afraid to express in a workplace that is disproportionately filled with ardent "liberals".


> ardent "liberals"

Yeah, I mean those aren't necessarily popular opinions. So if you express them you might find a bunch of people who will vigorously argue against you. If that's upsetting to you, I don't know what to say.

I really, really doubt you're going to get fired. Unless you're expressing these opinions in very inappropriate ways. It's a workplace, after all.


I agree with all of those assertions except the one about sanctuary cities because I am an actual liberal (now known in the US as a classical liberal).


Because they would get fired, or because they would face a unified front of "you're wrong"?

'Cause the former is a problem. The latter, meh. If you can't figure out how to censor yourself at work, that's your problem. I felt like calling out a coworker for recent conduct I thought was incredibly petty, immature, and damaging to a relationship with another team. I think he deserves to be called out, but it's just not worth it.

I do think you should be able to say some of this stuff in a casual conversation at lunch, or whatever. You have to be aware of your audience though.


Sure, but discussing any of those wouldn't get you fired from any reasonable job -- at most it might make your coworkers think less of you, but I don't really see why holding an opinion _should_ protect you from the social (i.e. not job security) consequences of holding it.

However, saying that the idea of sanctuary cities has been taken too far "because immigrants are rapists", as a random example, _should_ perhaps get you fired, especially since it's likely that some of your colleagues are in fact immigrants and saying something like that is a direct insult to them.

I think the state of politics in the US today is such that many (though certainly not all, as you've outlined) "conservative" ideas have strong ties to xenophobia and racism, whereas "liberal" ideas can be more freely discussed because they are about inclusion, rather than exclusion. And that's the way it _should_ be. Saying "I support giving more rights to group X" is not in general harmful to other groups, and is not the moral equivalent of saying "I support taking away rights from group X", which is actively harmful to group X.


> However, saying that the idea of sanctuary cities has been taken too far "because immigrants are rapists", as a random example,

Nice strawman there! I think that the idea of sanctuary cities has been taken too far, that illegal immigration should be curbed, and that Democrats are just exploiting this for political gain. I am also a non-white immigrant myself. But discussing my views openly would definitely get me into a few discussions with HR, if not outright fired.


Not intended as a straw man, but rather as an illustration that the same general opinion ("the idea of sanctuary cities has been taken too far") should be a perfectly fine (even if unpopular) thing to discuss in the workplace if your opinion and reasoning behind it are not derogatory, but not at all fine to discuss if they are.


But that's the problem. Nowadays, saying "the idea of sanctuary cities has been taken too far; nations should have full control over their borders; immigration laws should be enforced" would not be a safe thing to say at my workplace.


Then I think your workplace is in the wrong in this situation. Reasoned discussions not based in bigotry/xenophobia/etc, where both sides are respectful, should definitely be acceptable at work -- they definitely are at mine.


I would think Netflix's strong ties to AWS would be a non-starter for Apple.


How so? I thought Apple uses AWS (and Google's cloud too) for some of their services.


Have you tried Amazon Chime? We are ridiculously aware of the latency issue in high quality voip. We do not at the moment support user generated keys the way Telegram does but we do use TLS/DTLS encryption end to end for our VOIP streams.


> We are ridiculously aware of the latency issue in high quality voip. We do not at the moment support user generated keys the way Telegram does but we do use TLS/DTLS encryption end to end for our VOIP streams.

Any plans to make this available to developers as a service via AWS?


Sorry, I can't really comment on our roadmap and our priorities within it.


Boyle's law: at constant temperature, half the pressure, twice the volume.
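For anyone who wants the relationship spelled out, Boyle's law says that for a fixed amount of gas at constant temperature, P1·V1 = P2·V2. A minimal sketch in Python, with made-up numbers:

```python
def boyle_volume(p1, v1, p2):
    """Boyle's law: P1 * V1 = P2 * V2 at constant temperature.
    Returns the new volume V2 after the pressure changes from p1 to p2."""
    return p1 * v1 / p2

# Halving the pressure doubles the volume (units are arbitrary here).
v2 = boyle_volume(p1=2.0, v1=1.0, p2=1.0)
print(v2)  # 2.0
```

The function name and the specific values are illustrative, not from the comment above.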


Rails & Server developer at biba.com - San Francisco

We're building a unified communications platform that provides audio, video, screen sharing, text messaging, and group messaging on Android, iOS, OS X, and Windows. We're competing against established companies like WebEx and new disruptors like UberConference.

The Rails team provides the backend API that drives the product and enables the client and media teams to deliver a smooth collaborative experience.

Great office in downtown San Francisco (2nd & Mission). The Rails team is 5 people, supporting the demands of the 20 or so engineers on the other teams. Fast-paced weekly release cycles with room to grow as an engineer.

We work in Rails, but languages used elsewhere include C, Go, Java, C#, and C++, and we love developers who can switch-hit between teams when needed. A lot of our infrastructure is built in AWS, so experience there is a plus.

Meetings don't need to suck, and we're making that happen. Come join us. Contact john@biba.com and put [Hacker News] in your subject to make sure I get it.

Room on the team for novices and experts.

No recruiters please.


Skydiver with 3 GoPros in the family. I started with the 1, got my wife a 2, and then I upgraded to a 3. The V1 is still alive and kicking, but the image quality was so much better on the 3 that I felt like I had to upgrade.

Also, we see new jumpers every year, and the first thing they want to do is start jumping with a GoPro (even if we don't want them to for safety reasons).

I think they're doing a good enough job with new features that people like me will keep upgrading for a while.


Same here. I've seen one survive from 10,000 ft (lost helmet). You would really have to try hard to break one of these.


Dropped one without the case, lens-first onto a concrete floor, and it chipped the lens.

It was a consequence of fumbling while taking it out of the case to retrieve the SD card and charge it. Now I unmount the case, take it to a desk in a carpeted room, and open the case there.


I found mine washed up on a beach in a waterproof case (had no way of identifying the true owner). No idea how long it had been floating in the ocean before it washed up on the beach, but it works perfectly.


GoPro has a serial registry. Maybe you could inquire to find the original owner?

http://gopro.com/support/product-update/register-camera


I think it's safe to say suicide is rarely anything but mental health related. There are triggers for sure, but people in good mental health don't generally face those triggers and put suicide on the table as a potential solution.


s/Cyber //
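For readers not versed in sed, `s/Cyber //` is a substitution command: it deletes the first occurrence of "Cyber " on each line. A rough Python equivalent, using a made-up headline as input:

```python
import re

headline = "Cyber Command warns of cyber attacks"  # hypothetical example text

# s/Cyber // replaces only the first match per line; count=1 mirrors that.
# Note the match is case-sensitive, so the lowercase "cyber" survives.
cleaned = re.sub(r"Cyber ", "", headline, count=1)
print(cleaned)  # Command warns of cyber attacks
```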


It would be great if your regex was used everywhere.

Leave it to Washington to unknowingly take a euphemism for sex chat and use it both for the name of a military branch ("Cyber Command") and for political rhetoric. Their mistake is mildly entertaining, but it really makes them look dumb. I wonder what they'll do with "ASL?"


Cyber- was a prefix long before AOL.

IIRC, it was coined in '82 by Gibson and popularized in the cyberpunk genre of literature.


The "cyber-" prefix is actually much older than Gibson. In English, the use came from the term "cybernetic" and that term came from or was based on Greek.

http://dictionary.reference.com/browse/cyber-

http://dictionary.reference.com/browse/cybernetics


First off, I would say that is some pretty awesome work by this guy to chase this down. Including his work with the manufacturer to help them reliably recreate the issue.

Second, I would say that over the course of my 10-year career managing developers, I've heard many, many times that the bug was in the kernel, or in the hardware, or in the compiler, or in some other lower-level thing the developer had no control over. This has been the correct diagnosis exactly once; if I had to guess, that's about 5% of the claims.


I have been the first to trigger two CPU bugs and came across a third a few days after it was discovered, before it was published. Once errata are published, software workarounds are usually put in place quickly, and tripping over them is rare.

Compiler bugs are another story entirely. I have found dozens of them (confirmed), and I can find more whenever I feel like it.


Out of curiosity, if someone paid you to find compiler bugs for a day, how would you go about it?

(I've found several missed-optimization bugs in gcc, but I found them while working on a project where I examine assembly frequently; I have no idea how I'd go about looking for a compiler bug).


One way of actively looking for compiler bugs is using a tool like Csmith[1]. Another is to compile some known-difficult code (e.g. Libav[2]) with various combinations of (optimisation) flags until the test suite fails. Most of the bugs I've found were during routine testing of Libav.

While I don't consider missed optimisations bugs as such, they are easy to find. Simply compile some non-trivial function and look at the output. There's usually something that could be done better, especially if some exotic instruction can be used.

[1] http://embed.cs.utah.edu/csmith/ [2] http://libav.org/
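The idea behind Csmith-style testing can be sketched generically: feed randomly generated inputs to implementations that should agree (for Csmith, the same program built by different compilers or at different optimisation levels) and flag any divergence. A toy Python illustration of that principle, comparing two deliberately simple, hypothetical implementations of the same function:

```python
import random

def sum_naive(xs):
    """Reference implementation: the slow, obviously correct version."""
    total = 0
    for x in xs:
        total += x
    return total

def sum_fast(xs):
    """'Optimised' implementation under test; should always agree."""
    return sum(xs)

def differential_test(trials=1000, seed=42):
    """Generate random inputs and collect any case where the two disagree."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 20))]
        if sum_naive(xs) != sum_fast(xs):
            failures.append(xs)
    return failures

print(len(differential_test()))  # 0
```

Real compiler fuzzing replaces the random lists with random C programs and the two Python functions with differently-built binaries, but the compare-and-flag loop is the same shape.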


> While I don't consider missed optimisations bugs as such, they are easy to find. Simply compile some non-trivial function and look at the output.

Perhaps you'll give me a little credit :) if I mention that I found missed optimization bugs in extremely trivial functions. One of them involved gcc generating several completely useless stores even at -O3: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=44194


That's not "missed optimisation," that's "generating ridiculous code."


I've been a developer for over 30 years, from mainframes to micros, and a hardware problem has been responsible for a bug in my code exactly zero times.


This is because OS devs (and compiler devs) have suffered them for you. I've run into many x86 bugs, both documented and undocumented. Have you done much assembly?

Usually bugs would involve unlikely sequences of operations or operations in unexpected states. But there have been very serious bugs involving wrong math (Intel Pentium) or cache failures leading to complete crashes (AMD Phenom). These two made it to production and were show-stoppers because OS devs could do very little about them (in the Phenom bug, they could, but with a noticeable performance hit). I don't think I've seen any production CISC chip completely free of bugs. OS devs have to do the testing and the circumventing.

I mean... typically x86 chips have DOZENS of documented bugs.

http://en.wikipedia.org/wiki/Pentium_FDIV_bug http://en.wikipedia.org/wiki/AMD_Phenom


It's interesting how with significantly worse (or at the very least, comparable) complexity to manage, and uniformly horrific costs for repairing production bugs, the ASIC design industry and Intel/AMD in particular have managed to scrape by with something like <20 bugs between them in the past decade.

Perhaps we need to incentivize software developers with fear of execution, or something.


A recent x86 processor model has at least 20 "errata" that they'll tell you about; in the last decade they must have had hundreds. But most of them are just worked around so they don't affect you.


Modern x86 CPUs can trap arbitrary instructions to microcode. This means that most hardware bugs can be fixed with a firmware update that just slows down the cpu somewhat when it encounters the offending instruction.

There certainly are a lot of hardware bugs in cpus -- it's just that most of them get fixed before anyone outside the cpu company ever sees them.


The amount of effort spent on what is generally called "functional verification" is much higher for hardware than for software. Also, the specifications tend to be clearer and the source code size is smaller than you might imagine.


Get one of the original Pentiums :)

But really, I have only ever experienced one bug in a compiler (that I hadn't written), and it was such an odd experience, like the patient having lupus.


Yeah, when I find myself drifting towards the 'maybe it is a bug in the compiler/OS/debugger' territory I know it is time to take a break from the debugging as it is rarely true[1]. Nice to see it occasionally happens :)

[1] I work on Visual Studio so I have found compiler/debugger bugs as I am generally using 'in development' bits, but far more often than not the bug turns out to be mine and mine alone :)


I had a few bugs traced down to the kernel on Windows. ALL of them were in third-party antivirus packages. In fact, if a machine blue-screened after installing our stuff, you could rest assured it had an antivirus, and that it was Kaspersky.


Didn't Raymond Chen write once how most Windows crashes (of some version, during some year) were due to beta nvidia drivers installed by gamers to increase FPS?


I would go as far as to say 'finding' bugs in the Compiler / Optimizer / OS / Hardware is a warning signal of a poor programmer.

Always expect you are doing it wrong. It will so rarely be the case that this expectation is wrong that you can discount it as insignificant.


Disagree. In fact, were I to come up with a rule of thumb, I'd say the opposite is true.

Want to find bugs in Sun's Java 6 compiler for x64 Linux? Use annotations (yeah, I found one in their v30 release last week). Want to find bugs in MS's C++ compiler? Write your own templates (this was a few years ago, maybe it's better now?). The best programmers push the limits of their tools because they know what's "supposed to happen".

Poor programmers hit something that doesn't work and just try something else, 'cause, well, they're just trying shit. I would go so far as to say that poor programmers are, in fact, unable to find compiler, optimizer, OS, or hardware bugs because, by definition, they probably don't have a firm handle on what's "supposed to happen".


I think what VBprogrammer meant is that thinking you've found a bug in a compiler / OS / CPU is often a warning sign you're a poor programmer. Often, a beginner will have a bug in their code that is too subtle for them to identify, so they end up attributing it to some external factor. Actually finding a bug in a compiler / OS / CPU is, as you suggest, likely a sign you're doing something advanced or unusual and therefore are perhaps more knowledgeable than most.


Yeah, that's exactly what I meant. Sorry if the sarcasm didn't quite carry.

I know these things can and do happen. I've come across one or two of these strange ones before, but too often I've seen people jump to the conclusion that someone or something else was to blame, with no real evidence other than that they had exhausted their shallow bag of talent.


I think that's often called the "select isn't broken" rule:

http://pragprog.com/the-pragmatic-programmer/extracts/tips

However, I did once work with someone who did find a bug in select on SunOS... (this was '89 or so).


I remember scripting on a MUD that used a customized version of the standard MPROG[1] patch for ROM-based MUDs. Whoever originally "documented" it just grabbed some docs from some other ROM-based MUD that used their own customized version.

The documentation was completely wrong for several years before I started programming there. Once I realized that the documentation was lying to me, I started methodically examining how things actually worked by writing lots of very simple test programs and documenting the actual behavior.

The others didn't care why it was broken. Most of them were just trying to build cool areas, they weren't really programmers at all. They would just tweak things until it appeared to work or they were frustrated enough to give up.

[1] I'm sure a lot of HN knows this, but to save the non-gamers the hassle of looking it up, a MUD is a type of text-based online game and an MPROG is a script used to control the actions of the characters in the game.


No, templates are still buggy; I came across one last week.


Reported?


It was way too complicated to identify concisely, and we're trying to release just now; in the end, the wrong object was returned by a casting operator (a very, very wrong object). I added and called a named method to do the cast (the very same implementation) and all was well.


Depends on the compiler. When I took my "hardware for CS students" class as an undergrad, our big project for the semester was to write a CPU simulator in C, and the campus labs had just rolled out the upgrade of gcc from 2.x to 3.x. I had a bug in my program that I just couldn't isolate, and after about eight hours of chasing it, I realized that the compiler had allocated space for an integer variable right in the middle of an already-allocated array, so the two variables were stomping on each other. I changed the name of the integer variable and my problems went away.

Apparently I wasn't the only one, because within a week all the labs were back on GCC 2.95.


Actually, that sounds a lot like a bug in your build system. Did you have a custom Makefile (either hand written, or provided by a teacher)? If you didn't keep track of dependencies very carefully so that you always recompile all the .c files that depended on shared .h files when the .h files change, you can wind up with situations where different object files disagree on the layout of structures -- it could cause exactly the sort of problem you describe. Changing the name of the variable could force the file to be recompiled, thus appearing to solve the problem.


To further support your point, the rollback to gcc 2.95 happened across many linux distributions due to incompatible changes in the language that happened at the gcc 3.0 version.

Many distributions rolled back so that the default "stable" compiler matched the one they had to use to build the packages - i.e. common sense.

Once the packages were updated to deal with the gcc 3.x language changes, the compiler and packages started appearing together.


I've seen that problem also (not pretty) but no, the two variables were from the same .c file, and were only used in that file.


I find the thought and your name to be ironic.

My first programming job was doing VB programming in Access 2 programs that had to run on Windows 3.1. (Yes, this was in the last millennium.) I kept on running into bugs that I could demonstrate were in Access, not in my code. It was very frustrating.

My next job was in Perl. I went several years before I found an actual bug in the language. Which then went unfixed for years because someone might be using it. Despite the fact that in every significant Perl code base that I've seen since, there are real bugs in the code that nobody has noticed which trace back to the bug that I found. Why do you ask whether I am bitter?

So your suggestion failed glaringly for me when I was using VB, but since has worked much better.


Access 2 did not use VBA.


It did not use Visual Basic for Applications (VBA), but it did use Access Basic. Which was a dialect of Visual Basic.

Access 95 had the ability to upgrade from Access 2, and that included the ability to migrate from Access Basic to VBA. The tool was not flawless (very little from Microsoft is), but mostly worked pretty well.


"Finding" them is a warning signal of a poor programmer if and only if the scare quotes mean that they have not actually researched the problem sufficiently to prove that the problem is in one of those areas.

These legitimate bugs do exist, and some of us have a talent for finding them with annoying frequency.

Less experienced programmers often "want" to find bugs in the compiler/OS/whatever because that way it's not their fault, and they lack the skill to track down difficult problems in their own code. More experienced programmers realize that finding bugs outside of your own code is often a disaster because frequently there's nothing you can do to fix it.


Nah. Finding a bug in the dev stack is a sign that you're running on the edge. I've encountered one or two myself. I sent one in and the company wrote back and said "yep, it's a bug, fixed next build".

Now, blaming without reproduction on the dev stack is a sign of a lamer. :-)


If that were true then there would be no need to ever release new versions of these things!

I have personally found bugs in Linux (kernel, libc), Oracle, various JVMs, etc, usually cases in which algorithms optimized for "normal" loads became pathological under extreme load. It's much more common than perhaps you'd think.


When I'm working, it's always a bug in the compiler, kernel or hardware. The semicolon was implied. Just give me a minute to work around the compiler bug.


I have only twice thought one of my problems was due to a compiler bug, and I was right one of those times (and that was because my company was stuck with a 4-year-old version of the compiler; the bug had already been fixed in the latest version).

