1) Appear to be the only one affected by it.
2) Don't have exact steps to reproduce it.
This makes it very difficult for anybody apart from you to work on the issue.
> nothing has been done about it.
From the bug, it looks like lots has been done about it.
> This worries me.
I think this is justified for you, but has little bearing on others. Software projects are full of bugs that only appear to affect one person. These bugs make little progress but don't necessarily reflect on the general quality of the software. Admittedly it is more of a concern on something like a filesystem, but I am still skeptical.
(a) It's easy to reproduce. There are exact steps in the bug and in the mailing list thread. In fact, the steps are so simple I'll tell you what they are: (1) Run mkfs.btrfs, (2) Run any btrfs command on the filesystem. Put (1) + (2) into a shell script so they run immediately after each other. Put a loop around it so it tries over and over again. Bang => data corruption.
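The loop described above can be sketched as a short shell script. This is an illustrative sketch, not the exact script from the bug report: it uses a file-backed scratch image rather than a real device (the original race was hit on real block devices), and `btrfs check` stands in for "any btrfs command".

```shell
#!/bin/sh
# Sketch of the reproduction loop: mkfs.btrfs immediately followed by a
# btrfs command, repeated. Uses a scratch image file so no real disk is
# touched (the original reports used real block devices).
IMG=$(mktemp) && truncate -s 256M "$IMG"

status=skipped                       # btrfs-progs may not be installed
if command -v mkfs.btrfs >/dev/null 2>&1; then
    status=ok
    for i in 1 2 3 4 5; do
        # (1) make the filesystem ...
        mkfs.btrfs -f "$IMG" >/dev/null 2>&1 || { status="mkfs failed (iteration $i)"; break; }
        # (2) ... and immediately run any btrfs command on it
        btrfs check "$IMG" >/dev/null 2>&1 || { status="btrfs failed (iteration $i)"; break; }
    done
fi
echo "$status"
rm -f "$IMG"
```

On an affected setup, step (2) eventually fails or leaves a corrupt filesystem; on a good one, the loop runs to completion.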
(b) It's a data corrupter. That should be an immediate "stop everything" red flag (for a serious filesystem at least).
Edit: Rereading this comment, I sound like I'm being very negative about btrfs. I just want to say that I really want btrfs to work, because it's based on very sound principles. I've also offered to help immediately testing any proposed patches for this bug.
Because it's a test designed to reproduce a bug. We hit the bug all the time in libguestfs which spends much of its time formatting filesystems and immediately using them, but it doesn't happen 100% of the time. It's a race condition somewhere.
> (a) It's easy to reproduce, if anyone had put any effort in at all.
That's not the impression I get from the bug! I'm sure I read "difficult to reproduce". Perhaps it would help with attention if you made this absolutely clear?
As a professional storage engineer, I read your comment with horror. A reproducible bug that causes a crash/hang/loss of data in your file system is patently unacceptable.
We labor to design a system that stores data safely to a stable medium. This is done while acknowledging that we are not perfect and cannot get everything right. HOWEVER, when we know that we have done something wrong and it can gravely affect a user, we do not ship. Ever. Period.
"Software has bugs" is the lamest excuse to ship something with bugs. Do you think that pacemaker firmware leaves the factory floor with known bugs? I doubt it.
I think that SUSE are trying to differentiate themselves by supporting Btrfs, which is a nice move on their part. Btrfs's extensive features can be very useful in quite a few scenarios, so its wider adoption is more than welcome. I know quite a few people who have been longing for ZFS's functionality for years, and Btrfs provides the most important (IMHO) ones.
For some time Btrfs has been rapidly stabilizing, and I expect the other distros to follow suit and provide support as well.
Red Hat is supposed to support it in version 7. I believe Oracle already supports it.
IIRC, SUSE isn't supporting all the features yet, such as compression.
I really tried to go ZFS for enterprise NFS at work, and have decided to wait for Red Hat 7 and btrfs. We had various problems with Solaris and x86 hardware from HP, and I wasn't getting the support I needed internally to do ZFS on Linux. btrfs still has catching up to do: last I heard, data deduplication isn't tested yet, and there's no block-level send/receive like ZFS's; btrfs send/receive is different.
I know it is not the same as a supported module, but zfsonlinux.org has been reliable for me on a CentOS 6 dev box. Of course as an RC, I would not deploy it in production, but I have high hopes.
Does anyone know the reasoning for Ext3 to be supported but not Ext4?
From the article: "A notable omission is Ext4; read-only functionality is supported for migrating to a different filesystem. Full read-write support is available with the ext4-writeable KMP kernel module from the SLES11-Extras repository, but it is not supported."
SLES decided several years ago not to support ext4. Their reasoning was that once they started supporting ext4, they would be committed to support it for ten years (since they are an enterprise distro). At the time, they didn't have the staffing levels to support another file system, and they assumed that btrfs would be coming down the pike soon.
As it turns out, btrfs took a bit longer to stabilize than they had counted on, but that's why they made the decision they did when SLES 11 was first released.
That's odd. I thought ext4 was supported in openSUSE (the article is about enterprise SUSE). In fact, I just checked: it's the default if I add a new logical volume in YaST in 12.2 (the current release).
That's not true. The changes from ext3 to ext4 are evolutionary, not revolutionary. There is full backwards compatibility, and there's still a lot of the original ext3 code in ext4 (in particular, we still support old-fashioned indirect blocks in ext4, even though it's not the default, and we do that using a copy of the ext3 code).
If you look in the git history, you can see all of the changes as we added new features into ext3 during the course of the ext4 development.
> Despite the fact that Ext4 adds a number of compelling features to the filesystem, Ts'o doesn't see it as a major step forward. He dismisses it as a rehash of outdated "1970s technology" and describes it as a conservative short-term solution. He believes that the way forward is Oracle's open source Btrfs filesystem, which is designed to deliver significant improvements in scalability, reliability, and ease of management.
I decided to run a couple of partitions on several computers with btrfs about a year or so ago. At some point I noticed that things were going really slow. I ended up replacing all my btrfs partitions with lvm + xfs a couple of months ago.
It made me sad because I loved the design ideas behind btrfs and really wanted it to be great. But right now with lvm and xfs I have a very stable and fast system (we're talking an order of magnitude faster--seriously).
I think SUSE is crazy to call it ready at this point.
This is a nice development even for those of us not using SUSE -- I like Btrfs, I'm looking forward to using it, but I won't trust it with my own data until it's been dubbed "production ready" and used that way in the wild for several years. Glad the clock's started ticking for Btrfs on a major distro.
Does anyone know what the story with file systems and SSDs is?
Googling made it clear that some tuning makes sense for SSDs, but not whether the various file systems are more or less suited to them.
Btrfs and Ext4 work fine with tuning (enabling TRIM, partition alignment, etc.[1][2]).
Samsung is working on F2FS, a log-structured file system designed with SSDs in mind[3]. There is also NILFS (also log-structured), which has been around for a while[4].
I would stick with Ext4+TRIM for now on a personal laptop.
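For reference, "enabling TRIM" usually means one of two things; a sketch follows, with the fstab device field as a placeholder:

```shell
# Option 1: continuous TRIM via the discard mount option in /etc/fstab
# (UUID is a placeholder for your actual root device):
#   UUID=xxxxxxxx  /  ext4  defaults,discard  0  1
#
# Option 2: periodic TRIM, run from a cron job rather than on every delete:
#   fstrim -v /
```

Periodic fstrim is often preferred over continuous discard, since it batches the work instead of issuing a TRIM on every file deletion.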
These solid-state-specific file systems are not intended for the types of SSDs you buy to plug into a SATA port; those are already heavily abstracted by their firmware, which handles things like wear leveling and block deallocation.
These filesystems are instead intended for when the kernel has access to the raw addressing of the flash chips themselves, such as on some cellphones and embedded devices, and they handle wear leveling and block erasure themselves.
I think you are confusing F2FS and NILFS, which work with regular block devices (including SSDs, but also cheaper flash, and also, poorly, regular hard drives), with JFFS2, which will only work on raw flash.
Modern hard drives and SSDs do a lot of work internally, SSDs especially. Probably the most important thing an SSD does is wear-leveling: keeping an internal map of sectors and remapping them to ensure equal wear (so the OS never knows which sector it's actually writing to). This makes file system choice nearly inconsequential on a modern SSD (this doesn't apply to flash drives and flash cards, though). The only important thing is TRIM, which should be enabled by default on nearly everything (just ensure that AHCI is set in the BIOS, not IDE mode).
It has notably poor performance for virtual machine disk images. These require large amounts of in-place rewriting, which simply isn't what it's good at. I've not tried databases, but I imagine it could be bad for them for the same reasons.
Presumably because Oracle wants you to use ASMlib / raw disks / O_DIRECT to partitions, and wouldn't support Oracle database on btrfs. Btrfs is for their RHEL clone OS. Big company, different departments ...
https://bugzilla.redhat.com/show_bug.cgi?id=863978