
I was responsible for maintaining some large glusterfs clusters built on top of these drives, and they certainly added excitement to the experience.

1) At one point we were experiencing disk failures often enough that we wrote a cronjob to detect drive failures via smartctl and automatically send an email to our hosting company requesting that they replace the drive. This saved engineering time, and more importantly reduced time to drive replacement, because

2) On at least one occasion, we had a third drive in a RAID6 array fail before we had rebuilt from the initial failures, leading to loss of the array. We think the increased load of the rebuild increased the chance of subsequent failures. Needless to say, recovering from this destroyed all plans (and sleep) for the weekend it happened.
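The cron-driven check in (1) can be sketched roughly as follows. This is a minimal illustration, not their actual script: the device list and support address are hypothetical, and it assumes smartmontools is installed so `smartctl -H` is available.

```python
import re
import subprocess

# Hypothetical device list and hosting-support address; adjust for your setup.
DEVICES = ["/dev/sda", "/dev/sdb"]
HOSTING_EMAIL = "support@example-host.com"

def is_failing(smartctl_output: str) -> bool:
    """Return True if `smartctl -H` output reports a failed health check."""
    match = re.search(r"self-assessment test result:\s*(\w+)", smartctl_output)
    return bool(match) and match.group(1).upper() != "PASSED"

def check_devices() -> list:
    """Run `smartctl -H` against each device and collect the failing ones."""
    failing = []
    for dev in DEVICES:
        proc = subprocess.run(
            ["smartctl", "-H", dev],
            capture_output=True, text=True,
        )
        if is_failing(proc.stdout):
            failing.append(dev)
    return failing
```

A cron entry would then run this periodically and pipe any failures to `mail` (or an API call to the hosting provider's ticket system).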


That serial number sends shivers down my spine. I also used to work on this system, and still do! At one point Seagate admitted that the firmware was faulty. It would periodically stop responding under load, which led the RAID controller to think the drive had failed and remove it from the array.

I’ve never seen this posted publicly by them, but I’m fairly sure a revised firmware was offered. We deemed it too risky to upgrade the drives one at a time, so built a new cluster on Western Digital drives.


These things always happen in the weekend don't they? When I managed a large network of servers, it seemed that all the outages happened during parties and/or on the weekend.


Issues seem to happen as you implement the long desired upgrade/fix/repair too. Things hold on for years then crap out right at the final moment.


This makes sense though doesn’t it? Rebuilding an array is a very heavy load op, and putting components that have been around for a while under heavy load seems like a good and fast way to expose failing parts.


> These things always happen in the weekend don't they?

That's because servers know.... oh they know. Tuesday morning when you get into the office? Naw... that's too easy. Let's shit the bed on Christmas morning at 3:49am. That will let those stupid humans know who is boss around here.


Disclaimer: I work at Asana

We have an in house system (LunaDb) which is a little like this. There's a tech talk available about how it works at https://blog.asana.com/2015/10/asana-tech-talk-reactive-quer... - it's from a few years ago, but the core ideas are there. There's also some details on the caching layer we built for it at https://blog.asana.com/2020/09/worldstore-distributed-cachin...

A few properties based on your questions and the observations here:

- We don't attempt to incrementally update query results. Given the number of simultaneous queries the system handles, we've found it much more important to instead be very precise about only re-running exactly the right queries in response to data modification.

- We support joins (although not queries of arbitrary complexity). We avoid race conditions and cross-table locking issues by using the binlog as our source of changes, which imposes a linearization on the changes matching the state in the db. Correctly determining which queries to update for these requires going back to the database.

- Performance is an interesting problem. It's easy to arrange situations where total load is a function of "rate of data change" * "number of queries over that data", so being overly broad in triggering recalculations gets expensive fast.
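A toy sketch of the "only re-run exactly the right queries" idea described above (all names here are hypothetical illustrations, not LunaDb's actual API): each live query registers a predicate over (table, row), and each change read off the binlog, in commit order, triggers a re-run of only the queries whose predicate matches.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class LiveQuery:
    name: str
    # Predicate deciding whether a given row change can affect this query.
    affects: Callable[[str, dict], bool]

@dataclass
class ReactiveServer:
    queries: list = field(default_factory=list)

    def register(self, query: LiveQuery):
        self.queries.append(query)

    def on_binlog_change(self, table: str, row: dict) -> list:
        """Called for each change read off the binlog, in commit order.
        Returns the names of queries that must be re-run against the DB
        (the re-run itself goes back to the database for correct results)."""
        return [q.name for q in self.queries if q.affects(table, row)]

server = ReactiveServer()
server.register(LiveQuery("my_tasks",
                          lambda t, row: t == "tasks" and row.get("assignee") == "me"))
server.register(LiveQuery("all_projects", lambda t, row: t == "projects"))
```

The cost observation above falls out of this shape directly: a sloppy `affects` predicate multiplies every data change by the full set of registered queries.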

We're actively hiring to work on this - if you are interested my contact details are in my hn profile.


Google have a few options here:

1. Attempt to detect these scam +1 votes, and either ignore them or penalise the targeted sites (although the latter is risky as it can be exploited to destroy the ranking of competitors)

2. Focus on encouraging more genuine use of the +1 button, so any +1 results that can affordably be bought are lost in the noise.

3. The spy vs spy option. Start selling +1 votes themselves under a dummy name, and then immediately remove them as spam (whilst optionally continuing to display them in Google Analytics). Google could of course afford to undercut all other sellers, and their +1 selling services could appear at the top of all relevant searches, rather than other sites attempting the same.

Option 1 seems the most likely for Google, but if I were a small startup I'd definitely pick option 3, then blog about it a few months later.


How about just removing the sites selling them?


I think Digg tried to do that with a company in Australia and failed.


Some acronyms in the article I had to look up:

RMB: Renminbi, the currency of China. Individual denominations are 1 yuán = 10 jiǎo = 100 fēn

GFC: Global Financial Crisis


Personally, I don't enjoy receiving anything electronic or gadget related as a present - I have a reasonable idea of what is available and if there is something I want I'll buy it for myself.

I'd love to be given a voucher paying for me to do some activity that the giver thinks I will enjoy, but which I would not have thought of doing myself. Bonus points if it has a time limit to give me an incentive to get on and try it straight away.


Disclaimer: I have watched the video but not tried the product, so some of these observations may not apply to the product itself.

Overall, this looks like a great product. I know a number of people who find Google Website Optimizer complicated to use, and I would definitely recommend this to them as a simpler option. I love how slick the browser interface to edit pages is, and I think having the default 'engagement' metric so people can see results without having to set up a goal page is a brilliant idea.

There are a few things you have done which I would consider doing differently.

1. It looks like your mission is to make A/B testing really easy, but your pricing page at the moment doesn't really reflect that. Number of visitors tested is an easy metric, but one that is hard for me to interpret without lots of knowledge of A/B testing. How many tests does this mean I can run, and how quickly?

I would also reconsider the additional features you offer in premium packages. Cross-browser testing sounds complex and makes me worry that your site edits will fail in IE6. I don't want to have to test it, I just want it to work. As for uptime monitoring, what does this have to do with A/B testing? Bigger sites probably have some form of monitoring already anyway, so it looks like they are going to pay for something they don't need. I think your core product is strong enough that you don't need to offer these.

2. Showing me the percentage significance level appeals to my inner stats nerd, but I suspect the sort of people I think will benefit most from Optimizely will have difficulty interpreting this number. What level is 'ok'? Having a rank out of 5 below doesn't really address this: is 4/5 ok, or do I need 5/5? Google deal with this very well with their bars, which turn red or green when they reach significance.

3. The 'select container' option to expand the selection seems non-obvious, and isn't how multi-select works in any other interface I've seen. Maybe allow people to select multiple components and then take their deepest common parent?

There are also some additional features I personally would like to see

1. It would be great if you gave an estimate for how long until my experiment will reach an appropriate significance level (obviously based on % change and traffic seen so far).

2. I would like to be able to choose my conversion action by clicking on a form button or link in your page editor.

3. It would be amazingly useful to have some automatic suggestions for how a page could be changed - on many occasions I've seen people resist A/B testing because the options are so wide and they don't know what to do. Doing this for some simple suggestions sounds possible - e.g. making key links bigger and moving them up the page. Doing anything more sophisticated could be a good challenge though :-)
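The time-to-significance estimate suggested in (1) boils down to a standard sample-size calculation. A rough sketch under the usual normal approximation (the rates and traffic figures below are purely illustrative, not anything from Optimizely):

```python
from statistics import NormalDist

def visitors_needed(base_rate: float, lift: float,
                    alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variation to detect a relative
    `lift` over `base_rate` with a two-sided test at level `alpha`."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for the test
    z_b = NormalDist().inv_cdf(power)           # quantile for desired power
    p_bar = (p1 + p2) / 2
    n = (z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar) / (p2 - p1) ** 2
    return int(n) + 1

# e.g. detecting a 20% relative lift on a 5% baseline conversion rate:
n = visitors_needed(0.05, 0.20)
# days until significance at, say, 1,000 visitors per variation per day:
days = n / 1000
```

Given the visitor counts and observed effect size so far, the tool could plug them into something like this and surface "about N more days" instead of a raw significance percentage.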


Great feedback!

On pricing page: I agree this isn't perfect. What metric would you like us to segment our plans by ideally? We want to make this as simple as possible so folks know how to budget for this and so they can easily know which plan makes the most sense for them.

On additional features: Are there any other "value-add" services you think bigger companies might want besides just more visitors?

On 'select container': we realize this is a bit confusing and we're working on implementing a multi-select almost exactly as you described.

On choosing conversion action: right now we automatically track all reasonable conversion events on a page (clicks, form submissions, subsequent pageviews, custom events) and we want to make this even easier by allowing you to explicitly create a custom conversion event to track by specifying it when you are editing the experiment.

On automatic suggestions for how a page could be changed: this is a hard one and we hope to get there eventually. In the meantime we're going to try to do a better job blogging about best practices and lessons we've learned working with our customers.

Thanks again for all the feedback!


Another possibility for pricing would be to simply buy a one time number of visitors for a test. I think that would be a much easier first time sell in my organization.

One test, 30k visitors, x statistical significance - $100.

Perhaps with the option of only deleting variations on the fly that don't work.


TinEye (http://www.tineye.com): a 'reverse image search'. You upload or link to an image and it finds you other copies of that image on the web. This is something you absolutely can't do via Google, and is surprisingly useful, for example to find uncropped or pre-photoshopped copies of images.

