Qu Wenruo [Fri, 27 Feb 2015 08:24:28 +0000 (16:24 +0800)]
btrfs: quota: Update quota tree after qgroup relationship change.
The previous patch modified the in-memory structs, but the changes are not
written to the quota tree until the next commit.
So users will still see the old data from "btrfs qgroup show" after an
assign/remove.
This patch calls btrfs_run_qgroups() in the assign ioctl so the quota tree is
updated from the in-memory qgroups and users get up-to-date results.
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit e082f56313f374d723b0366978ddb062c8fe79ea)
Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 8ea0ec9e011eb542a3e7b1171776aa4877cf8a90)
Qu Wenruo [Fri, 27 Feb 2015 08:24:24 +0000 (16:24 +0800)]
btrfs: qgroup: Fix dead judgement on qgroup_rescan_leaf() return value.
The old qgroup_rescan_leaf() comment indicates that ret == 2 means the rescan is
complete and the INCONSISTENT flag has been cleared.
This is not true, since the function never returns 2 and no code inside it
clears the INCONSISTENT flag.
The flag clearing is done in btrfs_qgroup_rescan_work().
This caused a bug where the INCONSISTENT flag was never cleared.
So change the comment and fix the dead judgment.
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 3393168d22fd5f1be5b5429a818c10f642e88ae3)
Qu Wenruo [Fri, 27 Feb 2015 08:24:23 +0000 (16:24 +0800)]
btrfs: Don't allow subvolid >= (1 << BTRFS_QGROUP_LEVEL_SHIFT) to be created
Btrfs will create a qgroup on subvolume creation if quota is enabled, but
qgroup uses the high bits (currently 16 bits) of the qgroupid as the level,
to build the inheritance hierarchy.
However it is entirely possible for a subvolume to be created with a
subvolume id larger than 1 << BTRFS_QGROUP_LEVEL_SHIFT, so it would be
considered a level 1 qgroup and could not be assigned to other qgroups at level 1.
This patch prevents such a thing so qgroup inheritance will not be
screwed up.
The downside is very clear: the btrfs subvolume number limit decreases
from (u64 max - 256 (first free objectid) - 256 (last free objectid)) to
(u48 max - 256 (first free objectid)).
But that is still nearly u48 (15 decimal digits), so it should not
be a huge problem.
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit e09fe2d2119800e6060f9b8ba71e072a0eb0fa4d)
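For reference, a minimal userspace sketch of the qgroupid layout this limit protects. The 48-bit shift is inferred from the 16-bit level field described above (the kernel constant is BTRFS_QGROUP_LEVEL_SHIFT); the helper name is illustrative only:

  #include <stdio.h>
  #include <stdint.h>

  /* Illustrative only: mirrors the 16-bit-level / 48-bit-id split
   * described in the commit message, not the kernel's own definitions. */
  #define QGROUP_LEVEL_SHIFT 48

  static uint64_t qgroupid_make(uint64_t level, uint64_t id)
  {
          return (level << QGROUP_LEVEL_SHIFT) | id;
  }

  int main(void)
  {
          /* A subvolume id that overflows into the level bits would be
           * indistinguishable from a level-1 qgroup, which is what this
           * patch forbids at subvolume creation time. */
          uint64_t max_subvolid = (1ULL << QGROUP_LEVEL_SHIFT) - 1;

          printf("largest usable subvol id: %llu\n",
                 (unsigned long long)max_subvolid);
          printf("level-1 qgroup 1/5 packs to: 0x%llx\n",
                 (unsigned long long)qgroupid_make(1, 5));
          return 0;
  }

The largest usable subvolume id printed here is the u48 maximum mentioned above, a 15-digit decimal number.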
Qu Wenruo [Fri, 27 Feb 2015 08:24:22 +0000 (16:24 +0800)]
btrfs: Check qgroup level in kernel qgroup assign.
Although we have a qgroup level check in btrfs-progs, it's not enough,
since other programs may still call the ioctl directly without going
through btrfs-progs. For example, systemd.
But btrfs-progs is also to blame, since we don't provide a
full-featured btrfs library (like the subvolume create code) with enough
checks, and only rely on the kernel ioctl.
So add the level checks in the kernel too.
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 8465ecec9611d60cbbc8e374ecf68453e0dd5b50)
Dongsheng Yang [Mon, 24 Nov 2014 15:27:09 +0000 (10:27 -0500)]
btrfs: qgroup: allow to remove qgroup which has parent but no child.
When a qgroup has parents but no child, it should in theory be
removable. But currently we can not remove it when it has
either a parent or a child.
At this point, no subvol or qgroup depends on it.
Only qgroup 2/0 is its parent, but 2/0 can work well without
1/0. So 1/0 should be removable. But:
# btrfs qgroup destroy 1/0 /mnt
ERROR: unable to destroy quota group: Device or resource busy
This patch removes the check of qgroup->parent when removing a qgroup,
so we can remove a qgroup even when it has a parent.
Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit f5a6b1c53bdd44f79e3904c0f5e59f956b49b2c8)
Dongsheng Yang [Tue, 11 Nov 2014 12:18:22 +0000 (07:18 -0500)]
btrfs: qgroup: return EINVAL if level of parent is not higher than child's.
When we create a subvol inheriting a qgroup, we need to check their
levels. Otherwise, there is a chance that a qgroup could inherit another
qgroup at the same level.
Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 09870d2772b284d0061a5e4d1e1cdf6fb6764344)
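A hedged sketch of the level rule enforced here and by the kernel-side assign check above: the level is the top 16 bits of the qgroupid, and a relation is only valid when the parent sits at a strictly higher level than the child (the helper names below are illustrative, not the kernel's):

  #include <stdio.h>
  #include <stdint.h>
  #include <errno.h>

  #define QGROUP_LEVEL_SHIFT 48   /* assumed, see the packing sketch above */

  static uint64_t qgroup_level(uint64_t qgroupid)
  {
          return qgroupid >> QGROUP_LEVEL_SHIFT;
  }

  /* "src becomes a member of dst" is only allowed when the parent (dst)
   * is at a strictly higher level than the child (src). */
  static int check_assign(uint64_t src, uint64_t dst)
  {
          if (qgroup_level(src) >= qgroup_level(dst))
                  return -EINVAL;
          return 0;
  }

  int main(void)
  {
          uint64_t child  = (0ULL << QGROUP_LEVEL_SHIFT) | 257;   /* qgroup 0/257 */
          uint64_t parent = (1ULL << QGROUP_LEVEL_SHIFT) | 100;   /* qgroup 1/100 */

          printf("assign 0/257 under 1/100: %d\n", check_assign(child, parent));
          printf("assign 1/100 under 1/100: %d\n", check_assign(parent, parent));
          return 0;
  }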
Dongsheng Yang [Mon, 29 Dec 2014 11:23:05 +0000 (06:23 -0500)]
Btrfs: qgroup, Account data space in more proper timings.
Currently, in data writing, ->reserved is accounted in
fill_delalloc(), but ->may_use is released in clear_bit_hook(),
which is called by btrfs_finish_ordered_io(). That's too late:
between fill_delalloc() and btrfs_finish_ordered_io(),
the data is accounted twice by qgroup. It can cause some
unexpected -EDQUOT.
Example:
# btrfs quota enable /root/btrfs-auto-test/
# btrfs subvolume create /root/btrfs-auto-test//sub
Create subvolume '/root/btrfs-auto-test/sub'
# btrfs qgroup limit 1G /root/btrfs-auto-test//sub
dd if=/dev/zero of=/root/btrfs-auto-test//sub/file bs=1024 count=1500000
dd: error writing '/root/btrfs-auto-test//sub/file': Disk quota exceeded
681353+0 records in
681352+0 records out 697704448 bytes (698 MB) copied, 8.15563 s, 85.5 MB/s
We got -EDQUOT at only 698 MB, even though the limit is 1G.
This patch moves the btrfs_qgroup_reserve/free() calls for data from
btrfs_delalloc_reserve/release_metadata() to btrfs_check_data_free_space()
and btrfs_free_reserved_data_space(). The qgroup accounting is then
updated at the same time as the space_info accounting.
In this way, the unexpected -EDQUOT is eliminated.
Reported-by: Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com> Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 237c0e9f1fbfdca7287f3539f1fa73e5063156b5)
Dongsheng Yang [Fri, 12 Dec 2014 08:44:34 +0000 (16:44 +0800)]
Btrfs: qgroup: free reserved in exceeding quota.
When we exceed the quota limit while writing, we free some reserved
extents when we need to drop them, but do not free the corresponding
accounting in qgroup. That means each time we exceed the quota while
writing, some space remains in qg->reserved that we can not use
any more. If things go on like this, all the space will be eaten up.
Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Reviewed-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 804ca127fb93988c6a9d5f2bf4a8f1a780c9a2d0)
Dongsheng Yang [Sun, 18 Jan 2015 15:59:23 +0000 (10:59 -0500)]
Btrfs: qgroup: cleanup, remove an unused parameter in btrfs_create_qgroup().
Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 4087cf24ae2af17f7dd9fd34e22fde816952d421)
btrfs_limit_group uses the limit argument to override the whole old qgroup_limit
of the corresponding qgroup. However, we should only override the parts of the
old qgroup_limit whose bits are set in the limit argument.
Signed-off-by: Fan Chengniang <fancn.fnst@cn.fujitsu.com> Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 03477d945f13a284d35a757b2c2323d165d5cd81)
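A small userspace sketch of the intended behaviour: only the members whose flag bits are set in the passed-in limit replace the stored values. The struct and flag names below are local stand-ins modelled on the btrfs_qgroup_limit uapi layout, not the real definitions:

  #include <stdio.h>
  #include <stdint.h>

  /* Local stand-ins for two of the qgroup limit flags. */
  #define LIMIT_MAX_RFER  (1ULL << 0)
  #define LIMIT_MAX_EXCL  (1ULL << 1)

  struct demo_qgroup_limit {
          uint64_t flags;
          uint64_t max_rfer;
          uint64_t max_excl;
  };

  /* Override only the members selected by the new limit's flags,
   * leaving the other stored values untouched. */
  static void apply_limit(struct demo_qgroup_limit *cur,
                          const struct demo_qgroup_limit *arg)
  {
          if (arg->flags & LIMIT_MAX_RFER)
                  cur->max_rfer = arg->max_rfer;
          if (arg->flags & LIMIT_MAX_EXCL)
                  cur->max_excl = arg->max_excl;
          cur->flags |= arg->flags;
  }

  int main(void)
  {
          struct demo_qgroup_limit cur = { LIMIT_MAX_RFER | LIMIT_MAX_EXCL,
                                           1ULL << 30, 1ULL << 30 };
          struct demo_qgroup_limit arg = { LIMIT_MAX_RFER, 2ULL << 30, 0 };

          apply_limit(&cur, &arg);
          /* max_rfer is raised to 2G, max_excl keeps its old 1G value. */
          printf("max_rfer=%llu max_excl=%llu\n",
                 (unsigned long long)cur.max_rfer,
                 (unsigned long long)cur.max_excl);
          return 0;
  }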
Dongsheng Yang [Fri, 21 Nov 2014 02:04:56 +0000 (21:04 -0500)]
btrfs: qgroup: update limit info in function btrfs_run_qgroups().
When we commit_transaction(), the qgroups in the btree should be updated.
But limit info is not considered currently. This causes a problem
when the qgroup of a snapshot inherits the limit info from the srcqgroup,
leaving an inconsistency.
Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit d3001ed3a82ec2696bb13c78092d0a3460003fd7)
Dongsheng Yang [Fri, 21 Nov 2014 02:01:41 +0000 (21:01 -0500)]
btrfs: qgroup: consolidate the parameter of function update_qgroup_limit_item().
Cleanup: Change the parameter of update_qgroup_limit_item() to the family of
update_qgroup_xxx_item().
Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 1510e71c620c27ffc7540176a0689f70d6915e28)
Dongsheng Yang [Fri, 21 Nov 2014 01:58:34 +0000 (20:58 -0500)]
btrfs: qgroup: update qgroup in memory at the same time when we update it in btree.
When we call btrfs_qgroup_inherit() with BTRFS_QGROUP_INHERIT_SET_LIMITS,
btrfs updates the limit info of the qgroup in the btree but forgets to update
the qgroup in the rbtree at the same time. This obviously causes an inconsistency.
This patch fixes it by updating the rbtree at the same time.
Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit e8c8541ac379709db8d2339e1cb720469fc2cd8f)
Dongsheng Yang [Fri, 21 Nov 2014 01:14:38 +0000 (20:14 -0500)]
btrfs: qgroup: inherit limit info from srcgroup in creating snapshot.
Currently, when we snapshot a subvol, the snapshot does not copy the limits
from the srcqgroup.
This patch makes the qgroup of the snapshot inherit the limit info when
creating the snapshot.
Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 3eeb4d597efc9e068902057f1bd040cffc69e9e6)
Zhao Lei [Mon, 2 Mar 2015 11:32:20 +0000 (19:32 +0800)]
btrfs: Support busy loop of write and delete
Reproduce:
while true; do
dd if=/dev/zero of=/mnt/btrfs/file count=[75% fs_size]
rm /mnt/btrfs/file
done
Then we can see the above loop fail with NO_SPACE.
It is a long-standing problem, present since the very beginning, because
the delayed iputs after rm are not run.
We already have commit_transaction() in the alloc_space code, but it is
not triggered in the above case.
This patch triggers commit_transaction() to run the delayed iputs and
reclaim the pinned space, to make the write succeed.
It is based on the previous delayed-iput fix in commit_transaction()
and needs to be applied on top of:
btrfs: Fix NO_SPACE bug caused by delayed-iput
Zhao Lei [Thu, 26 Feb 2015 02:49:20 +0000 (10:49 +0800)]
btrfs: Fix NO_SPACE bug caused by delayed-iput
Steps to reproduce:
while true; do
dd if=/dev/zero of=/btrfs_dir/file count=[fs_size * 75%]
rm /btrfs_dir/file
sync
done
And we'll see dd fail because btrfs returns NO_SPACE.
Reason:
Normally, btrfs_commit_transaction() calls btrfs_run_delayed_iputs()
at the end to free fs space for the next write, but sometimes the work
isn't done in time, because the btrfs-cleaner thread takes the delayed
iputs off the list beforehand, but only does the iput() after the next write.
This is the log:
[ 2569.050776] comm=btrfs-cleaner func=btrfs_evict_inode() begin
Zhao Lei [Wed, 25 Feb 2015 06:17:20 +0000 (14:17 +0800)]
btrfs: Set relative data on clear btrfs_block_group_cache->pinned
Bug1:
space_info->bytes_readonly was set to a very large (negative) value in
btrfs_remove_block_group().
Reason:
Current code sets block_group_cache->pinned = 0 in btrfs_delete_unused_bgs(),
but that space was not counted into space_info->bytes_readonly.
Then in btrfs_remove_block_group():
block_group->space_info->bytes_readonly -= block_group->key.offset;
We can see the following values in the trace:
btrfs_remove_block_group: pid=2677 comm=btrfs-cleaner WARNING: bytes_readonly=12582912, key.offset=134217728
Bug2:
space_info->total_bytes_pinned grows to a value larger than the fs size.
In a 1.2G fs, we can get the following trace log:
at first:
ZL_DEBUG: add_pinned_bytes: pid=2710 comm=sync change total_bytes_pinned flags=1 869793792 + 95944704 = 965738496
after some op:
ZL_DEBUG: add_pinned_bytes: pid=2770 comm=sync change total_bytes_pinned flags=1 1780178944 + 95944704 = 1876123648
after some op:
ZL_DEBUG: add_pinned_bytes: pid=3193 comm=sync change total_bytes_pinned flags=1 2924568576 + 95551488 = 3020120064
...
Reason:
Similar to bug 1, we also need to adjust space_info->total_bytes_pinned
in the above code block.
Zhao Lei [Mon, 16 Feb 2015 10:52:17 +0000 (18:52 +0800)]
btrfs: Fix tail space processing in find_free_dev_extent()
It is another cause of the NO_SPACE case.
When we have found enough free space in the loop and saved it to
max_hole_start/size, and the tail space contains a pending extent,
the original, innocent max_hole_start/size values are reset in the retry.
As a result, find_free_dev_extent() returns less space than it could,
causing NO_SPACE in the user program.
Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit f2ab76188ec185dde84e7fe7c533ef2f5d668a32)
Zhao Lei [Sat, 14 Feb 2015 05:23:45 +0000 (13:23 +0800)]
btrfs: fix condition of commit transaction
Old code bypassed the transaction commit when we don't have enough
pinned space, but there is another case: freed block groups exist in the
current transaction, which may make alloc_chunk succeed.
This patch modifies the condition to:
if (have_free_bg || have_pinned_space) commit_transaction()
Confirmed the above behaviour with printk before and after the patch.
Chris Mason [Sat, 11 Apr 2015 12:09:06 +0000 (05:09 -0700)]
Btrfs: fix uninit variable in clone ioctl
Commit 0d97a64e0 creates a new variable but doesn't always set it up.
This puts it back to the original method (key.offset + 1) for the cases
not covered by Filipe's new logic.
Filipe Manana [Mon, 30 Mar 2015 17:23:59 +0000 (18:23 +0100)]
Btrfs: fix inode eviction infinite loop after cloning into it
If we attempt to clone a 0 length region into a file we can end up
inserting a range in the inode's extent_io tree with a start offset
that is greater than the end offset, which immediately triggers the
following warning:
So just bail out of the clone ioctl if the length of the region to clone
is zero, without locking any extent range, in order to prevent this issue
(same behaviour as a pwrite with a 0 length for example).
This is trivial to reproduce. For example, the steps for the test I just
made for fstests:
mkfs.btrfs -f SCRATCH_DEV
mount SCRATCH_DEV $SCRATCH_MNT
Filipe Manana [Mon, 30 Mar 2015 17:26:47 +0000 (18:26 +0100)]
Btrfs: fix inode eviction infinite loop after extent_same ioctl
If we pass a length of 0 to the extent_same ioctl, we end up locking an
extent range with a start offset greater than its end offset (if the
destination file's offset is greater than zero). This results in a warning
from extent_io.c:insert_state through the following call chain:
This leads to an infinite loop when evicting the inode. This is the same
problem that my previous patch titled
"Btrfs: fix inode eviction infinite loop after cloning into it" addressed
but for the extent_same ioctl instead of the clone ioctl.
CC: <stable@vger.kernel.org> Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Omar Sandoval <osandov@osandov.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 113e8283869b9855c8b999796aadd506bbac155f)
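The range arithmetic behind both of these zero-length fixes is simple enough to show in isolation: with len == 0 the end of the range to lock falls below its start, so the ioctls now bail out before locking anything (an illustration, not the kernel code):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          uint64_t destoff = 4096, len = 0;

          if (len == 0) {
                  /* The fix: treat a zero-length clone/extent_same request as
                   * a no-op before locking anything, like a zero-length pwrite. */
                  printf("len == 0, bail out without locking\n");
                  return 0;
          }

          /* Otherwise this is the range locked in the inode's extent_io tree;
           * with len == 0 it would be [4096, 4095], i.e. end < start. */
          printf("lock [%llu, %llu]\n",
                 (unsigned long long)destoff,
                 (unsigned long long)(destoff + len - 1));
          return 0;
  }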
Filipe Manana [Tue, 31 Mar 2015 13:56:46 +0000 (14:56 +0100)]
Btrfs: fix range cloning when same inode used as source and destination
While searching for extents to clone we might find one where we only use
a part of it, coming from its tail. If our destination inode is the same as
the source inode, we end up removing the tail part of the extent item and
inserting a new one after it that points to the same extent, with an adjusted
key file offset and data offset. After this we search for the next extent
item in the fs/subvol tree with a key that has an offset incremented by
one. But this second search leaves us at the new extent item we inserted
previously, and since that extent item has a non-zero data offset,
it can make us call btrfs_drop_extents with an empty range (start == end),
which causes the following warning:
Then we attempt to insert a new extent item with a key that already
exists, which makes btrfs_insert_empty_item return -EEXIST resulting in
abortion of the current transaction:
This is wrong because we should not process the extent item that we just
inserted previously, but instead the extent item that follows it
in the tree.
For example for the test case I wrote for fstests:
The second clone call fails with -EEXIST, because when we process the
first extent item (offset 262144), we drop part of it (counting from the
end) and then insert a new extent item with a key greater than the key we
found. The next time we search the tree we search for a key with offset
262144 + 1, which leaves us at the new extent item we have just inserted,
but we think it refers to an extent that we need to clone.
Fix this by ensuring the next search key uses an offset corresponding to
the offset of the key we found previously plus the data length of the
corresponding extent item. This ensures we skip new extent items that we
inserted and works for the case of implicit holes too (NO_HOLES feature).
Chris Mason [Tue, 7 Apr 2015 01:17:00 +0000 (18:17 -0700)]
Btrfs: fix use after free when close_ctree frees the orphan_rsv
Near the end of close_ctree, we're calling btrfs_free_block_rsv
to free up the orphan rsv. The problem is this call updates the
space_info, which has already been freed.
This adds a new __ function that directly calls kfree instead of trying
to update the space infos.
Chris Mason [Mon, 6 Apr 2015 14:48:20 +0000 (07:48 -0700)]
Btrfs: don't use highmem for free space cache pages
In order to create the free space cache concurrently with FS modifications,
we need to take a few block group locks.
The cache code also does kmap, which would schedule with the locks held.
Instead of going through kmap_atomic, let's just use lowmem for the cache
pages.
Josef Bacik [Tue, 24 Feb 2015 20:35:51 +0000 (12:35 -0800)]
Btrfs: don't steal from the global reserve if we don't have the space
btrfs_evict_inode() needs to be more careful about stealing from the
global_rsv. We don't want to end up aborting the commit with ENOSPC just
because the evict_inode code was too greedy.
Josef Bacik [Wed, 18 Feb 2015 21:58:15 +0000 (13:58 -0800)]
Btrfs: don't commit the transaction in the async space flushing
We're triggering a huge number of commits from
btrfs_async_reclaim_metadata_space. These aren't really required,
because everyone calling the async reclaim code is going to end up
triggering a commit on their own.
Chris Mason [Wed, 17 Dec 2014 17:41:04 +0000 (09:41 -0800)]
btrfs: actively run the delayed refs while deleting large files
When we are deleting large files with large extents, we are building up
a huge set of delayed refs for processing. Truncate isn't checking
often enough to see if we need to back off and process those, or let
a commit proceed.
The end result is long stalls after the rm, and very long commit times.
During the commits, other processes back up waiting to start new
transactions and we get into trouble.
Chris Mason [Wed, 1 Apr 2015 15:36:05 +0000 (08:36 -0700)]
Btrfs: free and unlock our path before btrfs_free_and_pin_reserved_extent()
The error handling path for alloc_reserved_tree_block is calling
btrfs_free_and_pin_reserved_extent with a spinning tree lock held. This
might sleep as we allocate extent_state objects:
This changes things to free the path first, which will also unlock the
extent buffer.
Signed-off-by: Chris Mason <clm@fb.com> Reported-by: Dave Sterba <dsterba@suse.cz> Tested-by: Dave Sterba <dsterba@suse.cz>
(cherry picked from commit dd82525956744c3a5e6b7275cd468b6180cc5b72)
Liu Bo [Tue, 17 Mar 2015 06:34:16 +0000 (14:34 +0800)]
Btrfs: Remove the check for old-style mkfs
This check was used to cope with a fresh btrfs made by an older mkfs.btrfs,
but it also allows us to mount a buggy btrfs that has the right
superblock head part but something wrong in the chunk tree part[1], and
after that we can hit BUG_ON()s set in the code to prevent something
impossible.
Since David has released "Btrfs progs v3.19-rc2", just remove the check;
anyone who wants to make a fresh btrfs should use the latest version.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Reviewed-by: Omar Sandoval <osandov@osandov.com> Reviewed-by: David Sterba <dsterba@suse.cz> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit e56a951e01bf55f49533c47ad2ce61dbd613a3f3)
Jeff Mahoney [Fri, 20 Mar 2015 18:02:09 +0000 (14:02 -0400)]
btrfs: cleanup orphans while looking up default subvolume
Orphans in the fs tree are cleaned up via open_ctree and subvolume
orphans are cleaned via btrfs_lookup_dentry -- except when a default
subvolume is in use. The name for the default subvolume uses a manual
lookup that doesn't trigger orphan cleanup and needs to trigger it
manually as well. This doesn't apply to the remount case since the
subvolumes are cleaned up by walking the root radix tree.
Signed-off-by: Jeff Mahoney <jeffm@suse.com> Reviewed-by: David Sterba <dsterba@suse.cz> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 727b9784b6085c99c2f836bf4fcc2848dc9cf904)
Tom Van Braeckel [Tue, 24 Mar 2015 15:35:49 +0000 (16:35 +0100)]
btrfs: explicitly set control file's private_data
The private_data member of the Btrfs control device file
(/dev/btrfs-control) is used to hold the current transaction and needs
to be initialized to NULL to signify that no transaction is in progress.
We explicitly set the control file's private_data to NULL to be
independent of whatever value the misc subsystem initializes it to.
Backstory:
----------
The misc subsystem (which is used by /dev/btrfs-control) initializes
a file's private_data to point to the misc device when a driver has
registered a custom open file operation and initializes it to NULL
when a custom open file operation has *not* been provided.
This subtle quirk is confusing, to the point where kernel code registers
*empty* file open operations to have private_data point to the misc
device structure.
And it leads to bugs, where the addition or removal of a custom open
file operation surprisingly changes the initial contents of a file's
private_data structure.
To simplify things in the misc subsystem, a patch [1] has been proposed
to *always* set private_data to point to the misc device instead of
only doing this when a custom open file operation has been registered.
But before we can fix this in the misc subsystem itself, we need to
modify the (few) drivers that rely on this very subtle behavior.
[1] https://lkml.org/lkml/2014/12/4/939
Signed-off-by: Martin Kepplinger <martink@posteo.de> Signed-off-by: Tom Van Braeckel <tomvanbraeckel@gmail.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit d8620958296e4fa61afde421f1de16a5c2234b28)
Chengyu Song [Tue, 24 Mar 2015 22:12:56 +0000 (18:12 -0400)]
btrfs: incorrect handling for fiemap_fill_next_extent return
fiemap_fill_next_extent() returns 0 on success, -errno on error, and 1 if this
was the last extent that fits in the user array. If 1 is returned, that value
may eventually be returned to user space, which should not happen according
to the ioctl man page.
Signed-off-by: Chengyu Song <csong84@gatech.edu> Reviewed-by: David Sterba <dsterba@suse.cz> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 26e726afe01c1c82072cf23a5ed89ce25f39d9f2)
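The contract is easy to model in userspace. The sketch below uses a mock fill helper with the same 0 / -errno / 1 return convention and shows the caller translating the final 1 back to 0 before anything can reach user space (illustrative code, not the btrfs fiemap implementation):

  #include <stdio.h>
  #include <errno.h>

  #define USER_ARRAY_SLOTS 2

  /* Mock with the same contract as fiemap_fill_next_extent():
   * 0 = stored, 1 = stored the last extent that fits, -errno = error. */
  static int fill_next_extent(int *used)
  {
          if (*used >= USER_ARRAY_SLOTS)
                  return -EINVAL;         /* must not be called again */
          (*used)++;
          return (*used == USER_ARRAY_SLOTS) ? 1 : 0;
  }

  static int do_fiemap(int nr_extents)
  {
          int used = 0, ret = 0;

          for (int i = 0; i < nr_extents; i++) {
                  ret = fill_next_extent(&used);
                  if (ret)
                          break;          /* error or user array full */
          }
          /* "1" only means "stop filling"; it must never leak to user space. */
          if (ret == 1)
                  ret = 0;
          return ret;
  }

  int main(void)
  {
          printf("ioctl result: %d\n", do_fiemap(5));   /* prints 0, not 1 */
          return 0;
  }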
ie. the missing attribute name after the namespace.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=94291 Reported-by: William Douglas <william.douglas@intel.com> CC: <stable@vger.kernel.org> # 2.6.29+ Signed-off-by: David Sterba <dsterba@suse.cz> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 3c3b04d10ff1811a27f86684ccd2f5ba6983211d)
Filipe Manana [Mon, 23 Mar 2015 14:07:40 +0000 (14:07 +0000)]
Btrfs: fix log tree corruption when fs mounted with -o discard
While committing a transaction we free the log roots before we write the
new super block. Freeing the log roots implies marking the disk location
of every node/leaf (metadata extent) as pinned before the new super block
is written. This is to prevent the disk location of log metadata extents
from being reused before the new super block is written, otherwise we
would have a corrupted log tree if before the new super block is written
a crash/reboot happens and the location of any log tree metadata extent
ended up being reused and rewritten.
Even though we pinned the log tree's metadata extents, we were issuing a
discard against them if the fs was mounted with the -o discard option,
resulting in corruption of the log tree if a crash/reboot happened before
writing the new super block - the next time the fs was mounted, during
the log replay process we would find nodes/leafs of the log btree with
content full of zeroes, causing the process to fail and requiring the use
of the btrfs-zero-log tool to wipe out the log tree (with all data
previously fsynced lost forever).
Fix this by not doing a discard when pinning an extent. The discard will
be done later when it's safe (after the new super block is committed) at
extent-tree.c:btrfs_finish_extent_commit().
Fixes: e688b7252f78 (Btrfs: fix extent pinning bugs in the tree log) CC: <stable@vger.kernel.org> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit dcc82f4783ad91d4ab654f89f37ae9291cdc846a)
Filipe Manana [Fri, 20 Mar 2015 17:19:46 +0000 (17:19 +0000)]
Btrfs: fix metadata inconsistencies after directory fsync
We can get into an inconsistency between inodes and directory entries
after fsyncing a directory. The issue is that while a directory gets
the new dentries persisted in the fsync log and replayed at mount time,
the link count of the inode that the directory entries point to doesn't
get updated, staying with an incorrect link count (smaller than the
correct value). This later leads to stale file handle errors when
accessing (including attempts to delete) some of the links if all the
other ones are removed, which also makes it impossible to delete the
parent directories, since the dentries can not be removed.
Another issue is that (unlike ext3/4, xfs, f2fs, reiserfs, nilfs2),
when fsyncing a directory, new files aren't logged (their metadata and
dentries) nor any child directories. So this patch fixes this issue too,
since it has the same resolution as the incorrect inode link count issue
mentioned before.
This is very easy to reproduce, and the following excerpt from my test
case for xfstests shows how:
# Create our main test file and directory.
$XFS_IO_PROG -f -c "pwrite -S 0xaa 0 8K" $SCRATCH_MNT/foo | _filter_xfs_io
mkdir $SCRATCH_MNT/mydir
# Make sure all metadata and data are durably persisted.
sync
# Add a hard link to 'foo' inside our test directory and fsync only the
# directory. The btrfs fsync implementation had a bug that caused the new
# directory entry to be visible after the fsync log replay but, the inode
# of our file remained with a link count of 1.
ln $SCRATCH_MNT/foo $SCRATCH_MNT/mydir/foo_2
# Add a few more links and new files.
# This is just to verify nothing breaks or gives incorrect results after the
# fsync log is replayed.
ln $SCRATCH_MNT/foo $SCRATCH_MNT/mydir/foo_3
$XFS_IO_PROG -f -c "pwrite -S 0xff 0 64K" $SCRATCH_MNT/hello | _filter_xfs_io
ln $SCRATCH_MNT/hello $SCRATCH_MNT/mydir/hello_2
# Add some subdirectories and new files and links to them. This is to verify
# that after fsyncing our top level directory 'mydir', all the subdirectories
# and their files/links are registered in the fsync log and exist after the
# fsync log is replayed.
mkdir -p $SCRATCH_MNT/mydir/x/y/z
ln $SCRATCH_MNT/foo $SCRATCH_MNT/mydir/x/y/foo_y_link
ln $SCRATCH_MNT/foo $SCRATCH_MNT/mydir/x/y/z/foo_z_link
touch $SCRATCH_MNT/mydir/x/y/z/qwerty
# Now fsync only our top directory.
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/mydir
# And fsync now our new file named 'hello', just to verify later that it has
# the expected content and that the previous fsync on the directory 'mydir' had
# no bad influence on this fsync.
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/hello
# Simulate a crash/power loss.
_load_flakey_table $FLAKEY_DROP_WRITES
_unmount_flakey
# Verify the content of our file 'foo' remains the same as before, 8192 bytes,
# all with the value 0xaa.
echo "File 'foo' content after log replay:"
od -t x1 $SCRATCH_MNT/foo
# Remove the first name of our inode. Because of the directory fsync bug, the
# inode's link count was 1 instead of 5, so removing the 'foo' name ended up
# deleting the inode and the other names became stale directory entries (still
# visible to applications). Attempting to remove or access the remaining
# dentries pointing to that inode resulted in stale file handle errors and
# made it impossible to remove the parent directories since it was impossible
# for them to become empty.
echo "file 'foo' link count after log replay: $(stat -c %h $SCRATCH_MNT/foo)"
rm -f $SCRATCH_MNT/foo
# Now verify that all files, links and directories created before fsyncing our
# directory exist after the fsync log was replayed.
[ -f $SCRATCH_MNT/mydir/foo_2 ] || echo "Link mydir/foo_2 is missing"
[ -f $SCRATCH_MNT/mydir/foo_3 ] || echo "Link mydir/foo_3 is missing"
[ -f $SCRATCH_MNT/hello ] || echo "File hello is missing"
[ -f $SCRATCH_MNT/mydir/hello_2 ] || echo "Link mydir/hello_2 is missing"
[ -f $SCRATCH_MNT/mydir/x/y/foo_y_link ] || \
echo "Link mydir/x/y/foo_y_link is missing"
[ -f $SCRATCH_MNT/mydir/x/y/z/foo_z_link ] || \
echo "Link mydir/x/y/z/foo_z_link is missing"
[ -f $SCRATCH_MNT/mydir/x/y/z/qwerty ] || \
echo "File mydir/x/y/z/qwerty is missing"
# We expect our file here to have a size of 64Kb and all the bytes having the
# value 0xff.
echo "file 'hello' content after log replay:"
od -t x1 $SCRATCH_MNT/hello
# Now remove all files/links, under our test directory 'mydir', and verify we
# can remove all the directories.
rm -f $SCRATCH_MNT/mydir/x/y/z/*
rmdir $SCRATCH_MNT/mydir/x/y/z
rm -f $SCRATCH_MNT/mydir/x/y/*
rmdir $SCRATCH_MNT/mydir/x/y
rmdir $SCRATCH_MNT/mydir/x
rm -f $SCRATCH_MNT/mydir/*
rmdir $SCRATCH_MNT/mydir
# An fsck, run by the fstests framework every time a test finishes, also detected
# the inconsistency and printed the following error message:
#
# root 5 inode 257 errors 2001, no inode item, link count wrong
# unresolved ref dir 258 index 2 namelen 5 name foo_2 filetype 1 errors 4, no inode ref
# unresolved ref dir 258 index 3 namelen 5 name foo_3 filetype 1 errors 4, no inode ref
status=0
exit
The expected golden output for the test is:
wrote 8192/8192 bytes at offset 0
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 65536/65536 bytes at offset 0
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
File 'foo' content after log replay:
0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
*
0020000
file 'foo' link count after log replay: 5
file 'hello' content after log replay:
0000000 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
*
0200000
Which is the output after this patch and when running the test against
ext3/4, xfs, f2fs, reiserfs or nilfs2. Without this patch, the test's
output is:
wrote 8192/8192 bytes at offset 0
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 65536/65536 bytes at offset 0
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
File 'foo' content after log replay:
0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
*
0020000
file 'foo' link count after log replay: 1
Link mydir/foo_2 is missing
Link mydir/foo_3 is missing
Link mydir/x/y/foo_y_link is missing
Link mydir/x/y/z/foo_z_link is missing
File mydir/x/y/z/qwerty is missing
file 'hello' content after log replay:
0000000 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
*
0200000
rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/mydir/x/y/z': No such file or directory
rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/mydir/x/y': No such file or directory
rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/mydir/x': No such file or directory
rm: cannot remove '/home/fdmanana/btrfs-tests/scratch_1/mydir/foo_2': Stale file handle
rm: cannot remove '/home/fdmanana/btrfs-tests/scratch_1/mydir/foo_3': Stale file handle
rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/mydir': Directory not empty
Fsck, without this fix, also complains about the wrong link count:
root 5 inode 257 errors 2001, no inode item, link count wrong
unresolved ref dir 258 index 2 namelen 5 name foo_2 filetype 1 errors 4, no inode ref
unresolved ref dir 258 index 3 namelen 5 name foo_3 filetype 1 errors 4, no inode ref
So fix this by logging the inodes that the dentries point to when
fsyncing a directory.
Filipe Manana [Sat, 14 Mar 2015 07:03:27 +0000 (07:03 +0000)]
Btrfs: change the insertion criteria for the qgroup operations rbtree
After looking at Liu Bo's recent patch (titled
"Btrfs: fix comp_oper to get right order") I realized the search made by
qgroup_oper_exists() was buggy because its rbtree navigation comparison
function, comp_oper_exist(), only looks at the fields bytenr and ref_root
of a tree node, ignoring the seq field completely. This was wrong because
when we insert a node into the rbtree we use comp_oper(), which takes a
decision based first on bytenr, then on seq and then on the ref_root field.
That means qgroup_oper_exists() could miss the fact that at least one
operation with given bytenr and ref_root exists.
Consider the following simple example of a 3 nodes qgroup operations
rbtree (created using comp_oper before this patch), where each node's key
is a tuple with the shape (bytenr, seq, ref_root, op):
                  [ (4096, 2, 20, op X) ]
                   /                   \
                  /                     \
  [ (4096, 1, 5, op Y) ]       [ (4096, 3, 10, op Z) ]
qgroup_oper_exists(), when called to search for an existing operation for
bytenr 4096 and ref root 10, wouldn't find anything because it would go to
the left subtree instead of the right subtree, since comp_oper_exist()
ignores the seq field completely.
Fix this by changing the insertion navigation function to use the ref_root
field right after using the bytenr field and before using the seq field,
so that qgroup_oper_exists() / comp_oper_exist() work as expected.
This patch applies on top of the patch mentioned above from Liu.
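A hedged userspace sketch of the two navigation functions after this change: insertion orders nodes by (bytenr, ref_root, seq), while the existence lookup compares only (bytenr, ref_root), so it can no longer be steered into the wrong subtree (names and layout are illustrative, not the kernel structs):

  #include <stdio.h>
  #include <stdint.h>

  struct demo_oper {
          uint64_t bytenr;
          uint64_t ref_root;
          uint64_t seq;
  };

  /* Three-way compare used when inserting into the rbtree:
   * bytenr first, then ref_root, then seq. */
  static int comp_oper(const struct demo_oper *a, const struct demo_oper *b)
  {
          if (a->bytenr != b->bytenr)
                  return a->bytenr < b->bytenr ? -1 : 1;
          if (a->ref_root != b->ref_root)
                  return a->ref_root < b->ref_root ? -1 : 1;
          if (a->seq != b->seq)
                  return a->seq < b->seq ? -1 : 1;
          return 0;
  }

  /* The existence check ignores seq, which only stays correct because
   * ref_root is now compared before seq during insertion. */
  static int comp_oper_exist(const struct demo_oper *a, const struct demo_oper *b)
  {
          if (a->bytenr != b->bytenr)
                  return a->bytenr < b->bytenr ? -1 : 1;
          if (a->ref_root != b->ref_root)
                  return a->ref_root < b->ref_root ? -1 : 1;
          return 0;
  }

  int main(void)
  {
          struct demo_oper x = { 4096, 20, 2 };    /* op X from the tree above */
          struct demo_oper z = { 4096, 10, 3 };    /* op Z from the tree above */

          /* Under the old (bytenr, seq, ref_root) order z sorted to the right
           * of x, so a lookup ignoring seq went down the wrong subtree; now
           * both functions agree on the direction. */
          printf("insert order  x vs z: %d\n", comp_oper(&x, &z));
          printf("exists lookup x vs z: %d\n", comp_oper_exist(&x, &z));
          return 0;
  }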
Filipe Manana [Thu, 12 Mar 2015 23:23:13 +0000 (23:23 +0000)]
Btrfs: add missing inode item update in fallocate()
If we fallocate(), without the keep size flag, into an area already covered
by an extent previously fallocated, we were updating the inode's i_size but
we weren't updating the inode item in the fs/subvol tree. A following umount
+ mount would result in a loss of the inode's size (and an fsync would also
miss the fact that the inode changed).
Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 3d850dd44889d3aa67d0b8007c2cdd259bff7da4)
Filipe Manana [Thu, 12 Mar 2015 16:04:50 +0000 (16:04 +0000)]
Btrfs: incremental send, remove dead code
The logic to detect path loops when attempting to apply a pending
directory rename, introduced in commit f959492fc15b (Btrfs: send, fix more issues related to directory renames)
is no longer needed, and the respective fstests test case for that commit,
btrfs/045, now passes without this code (as well as all the other test
cases for send/receive).
Filipe Manana [Thu, 12 Mar 2015 15:16:20 +0000 (15:16 +0000)]
Btrfs: incremental send, clear name from cache after orphanization
If a directory's reference ends up being orphanized, because the inode
currently being processed has a new path that matches that directory's
path, make sure we evict the name of the directory from the name cache.
This is because there might be descendant inodes (either directories or
regular files) that will be orphanized later too, and therefore the
orphan name of the ancestor must be used, otherwise we issue rename
operations with a wrong path in the send stream.
$ mkfs.btrfs -f /dev/sdc
$ mount /dev/sdc /mnt2
$ btrfs receive /mnt2 -f /tmp/1.send
$ btrfs receive /mnt2 -f /tmp/2.send
ERROR: rename data/p1/p2 -> data/n4/p1/p2 failed. no such file or directory
Directories data/p1 (inode 263) and data/p1/p2 (inode 264) in the parent
snapshot are both orphanized during the incremental send, and as soon as
data/p1 is orphanized, we must make sure that when orphanizing data/p1/p2
we use a source path of o263-6-o/p2 for the rename operation instead of
the old path data/p1/p2 (the one before the orphanization of inode 263).
A test case for xfstests follows soon.
Reported-by: Robbie Ko <robbieko@synology.com> Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 8996a48c0a8ed01c55f52e6794625c95a9977ee6)
Filipe Manana [Mon, 2 Mar 2015 20:53:53 +0000 (20:53 +0000)]
Btrfs: send, don't leave without decrementing clone root's send_progress
If the clone root was not readonly or the dead flag was set on it, we were
leaving without decrementing the root's send_in_progress counter (which we
had just incremented). If a concurrent snapshot deletion was in progress
and ended up being aborted, it would be impossible to later attempt to
delete the snapshot again, since the root's send_in_progress counter could
never go back to 0.
We were also setting clone_sources_to_rollback to i + 1 too early - if we
bailed out because the clone root we got is not readonly or is flagged as dead,
we ended up later dereferencing a null pointer because we didn't assign
the clone root to sctx->clone_roots[i].root:
for (i = 0; sctx && i < clone_sources_to_rollback; i++)
        btrfs_root_dec_send_in_progress(
                        sctx->clone_roots[i].root);
So just don't increment the send_in_progress counter if the root is readonly
or flagged as dead.
Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.cz> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 2f1f465ae6da244099af55c066e5355abd8ff620)
Filipe Manana [Mon, 2 Mar 2015 20:53:52 +0000 (20:53 +0000)]
Btrfs: send, add missing check for dead clone root
After we locked the root's root item, a concurrent snapshot deletion
call might have set the dead flag on it. So check if the dead flag
is set and abort if it is, just like we do for the parent root.
Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: David Sterba <dsterba@suse.cz> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 5cc2b17e80cf5770f2e585c2d90fd8af1b901258)
Filipe Manana [Mon, 23 Feb 2015 19:53:35 +0000 (19:53 +0000)]
Btrfs: remove deleted xattrs on fsync log replay
If we deleted xattrs from a file and fsynced the file, after a log replay
the xattrs would remain associated to the file. This was an unexpected
behaviour and differs from what other filesystems do, such as for example
xfs and ext3/4.
Fix this by, on fsync log replay, checking if every xattr in the fs/subvol
tree (that belongs to a logged inode) has a matching xattr in the log,
and if it does not, deleting it from the fs/subvol tree. This is a similar
approach to what we do for dentries when we replay a directory from the
fsync log.
This issue is trivial to reproduce, and the following excerpt from my
test for xfstests triggers the issue:
# Create our test file and add 3 xattrs to it.
touch $SCRATCH_MNT/foobar
$SETFATTR_PROG -n user.attr1 -v val1 $SCRATCH_MNT/foobar
$SETFATTR_PROG -n user.attr2 -v val2 $SCRATCH_MNT/foobar
$SETFATTR_PROG -n user.attr3 -v val3 $SCRATCH_MNT/foobar
# Make sure everything is durably persisted.
sync
# Now delete the second xattr and fsync the inode.
$SETFATTR_PROG -x user.attr2 $SCRATCH_MNT/foobar
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foobar
_crash_and_mount
# After the fsync log is replayed, the file should have only 2 xattrs, the ones
# named user.attr1 and user.attr3. The btrfs fsync log replay bug left the file
# with the 3 xattrs that we had before deleting the second one and fsyncing the
# file.
echo "xattr names and values after first fsync log replay:"
$GETFATTR_PROG --absolute-names --dump $SCRATCH_MNT/foobar | _filter_scratch
# Now write some data to our file, fsync it, remove the first xattr, add a new
# hard link to our file and commit the fsync log by fsyncing some other new
# file. This is to verify that after log replay our first xattr does not exist
# anymore.
echo "hello world!" >> $SCRATCH_MNT/foobar
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foobar
$SETFATTR_PROG -x user.attr1 $SCRATCH_MNT/foobar
ln $SCRATCH_MNT/foobar $SCRATCH_MNT/foobar_link
touch $SCRATCH_MNT/qwerty
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/qwerty
_crash_and_mount
# Now only the xattr with name user.attr3 should be set in our file.
echo "xattr names and values after second fsync log replay:"
$GETFATTR_PROG --absolute-names --dump $SCRATCH_MNT/foobar | _filter_scratch
status=0
exit
The expected golden output, which is produced with this patch applied or
when testing against xfs or ext3/4, is:
xattr names and values after first fsync log replay:
# file: SCRATCH_MNT/foobar
user.attr1="val1"
user.attr3="val3"
xattr names and values after second fsync log replay:
# file: SCRATCH_MNT/foobar
user.attr3="val3"
Without this patch applied, the output is:
xattr names and values after first fsync log replay:
# file: SCRATCH_MNT/foobar
user.attr1="val1"
user.attr2="val2"
user.attr3="val3"
xattr names and values after second fsync log replay:
# file: SCRATCH_MNT/foobar
user.attr1="val1"
user.attr2="val2"
user.attr3="val3"
A patch with a test case for xfstests follows soon.
Josef Bacik [Tue, 17 Mar 2015 14:52:28 +0000 (10:52 -0400)]
Btrfs: fix outstanding_extents accounting in DIO
We are keeping track of how many extents we need to reserve properly based on
the amount we want to write, but we were still incrementing outstanding_extents
if we wrote less than what we requested. This isn't quite right since we will
be limited to our max extent size. So instead let's do something horrible! Keep
track of how many outstanding_extents we reserved, and decrement each time we
allocate an extent. If we use our entire reserve, make sure to jack up
outstanding_extents on the inode so the accounting works out properly. Thanks,
Josef Bacik [Fri, 13 Mar 2015 19:01:24 +0000 (15:01 -0400)]
Btrfs: account merges/splits properly
My fix
Btrfs: fix merge delalloc logic
only fixed half of the problems, it didn't fix the case where we have two large
extents on either side and then join them together with a new small extent. We
need to instead keep track of how many extents we have accounted for with each
side of the new extent, and then see how many extents we need for the new large
extent. If they match then we know we need to keep our reservation, otherwise
we need to drop our reservation. This shows up with a case like this
Previously the logic would have said that the number of extents required for the
new size (3) is larger than the number of extents required for the largest side
(2), therefore we need to keep our reservation. But this isn't the case, since
both sides require a reservation of 2, which leads to 4 currently reserved for
the whole range, but we only need 3, so we need to drop one of the
reservations. The same problem existed for splits: we'd think we only need 3
extents when creating the hole but in reality we need 4. Thanks,
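The counting argument above can be reproduced with a few lines of userspace arithmetic (assuming the 128 MiB BTRFS_MAX_EXTENT_SIZE in effect at the time; this models only the bookkeeping, not the extent_io code):

  #include <stdio.h>
  #include <stdint.h>

  #define MAX_EXTENT_SIZE (128ULL * 1024 * 1024)   /* assumed 128 MiB */

  /* Number of outstanding extents a contiguous delalloc range needs. */
  static uint64_t extents_for(uint64_t len)
  {
          return (len + MAX_EXTENT_SIZE - 1) / MAX_EXTENT_SIZE;
  }

  int main(void)
  {
          uint64_t left  = 130ULL * 1024 * 1024;   /* needs 2 extents */
          uint64_t right = 129ULL * 1024 * 1024;   /* needs 2 extents */
          uint64_t gap   = 4096;                   /* small extent joining them */

          uint64_t reserved = extents_for(left) + extents_for(right);
          uint64_t needed   = extents_for(left + gap + right);

          /* 2 + 2 = 4 reserved for the two sides, but the merged range only
           * needs 3, so one reservation must be dropped - the case the old
           * merge logic missed. */
          printf("reserved=%llu needed=%llu drop=%llu\n",
                 (unsigned long long)reserved, (unsigned long long)needed,
                 (unsigned long long)(reserved - needed));
          return 0;
  }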
Josef Bacik [Mon, 16 Mar 2015 21:38:52 +0000 (17:38 -0400)]
Btrfs: add sanity test for outstanding_extents accounting
I introduced a regression wrt outstanding_extents accounting. These are tricky
areas that aren't easily covered by xfstests as we could change MAX_EXTENT_SIZE
at any time. So add sanity tests to cover the various conditions that are
tricky in order to make sure we don't introduce regressions in the future.
Thanks,
Josef Bacik [Mon, 16 Mar 2015 21:38:02 +0000 (17:38 -0400)]
Btrfs: just free dummy extent buffers
If we fail during our sanity tests we could get NULL deref's because we unload
the module before the dummy extent buffers are free'd via RCU. So check for
this case and just free the things directly. Thanks,
Josef Bacik [Fri, 13 Mar 2015 19:12:23 +0000 (15:12 -0400)]
Btrfs: account for the correct number of extents for delalloc reservations
Direct IO can easily pass in a buffer that is greater than
BTRFS_MAX_EXTENT_SIZE, so take this into account when reserving extents in the
delalloc reservation code. Thanks,
Josef Bacik [Fri, 13 Mar 2015 19:12:08 +0000 (15:12 -0400)]
Btrfs: fix merge delalloc logic
My patch to properly count outstanding extents wrt MAX_EXTENT_SIZE introduced a
regression when re-dirtying already dirty areas. We have logic in split to make
sure we are taking the largest space into account but didn't have it for merge,
so it was sometimes making us think we were turning a tiny extent into a huge
extent, when in reality we already had a huge extent and needed to use the other
side in our logic. This fixes the regression that was reported by a user on
list. Thanks,
Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de> Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 8461a3de770477a9a7b8eeaebcc4804dbc26ca38)
Liu Bo [Fri, 13 Mar 2015 06:24:38 +0000 (14:24 +0800)]
Btrfs: fix comp_oper to get right order
Case (oper1->seq > oper2->seq) should differ from case (oper1->seq < oper2->seq).
Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Reviewed-by: David Sterba <dsterba@suse.cz> Reviewed-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 48da5f0a4cb2b6b44579f5737e8be888c0d02526)
Liu Bo [Fri, 6 Mar 2015 12:23:44 +0000 (20:23 +0800)]
Btrfs: catch transaction abortion after waiting for it
This problem is uncovered by a test case: http://patchwork.ozlabs.org/patch/244297.
Fsync() can report success when it actually hasn't succeeded. When we
have several threads running fsync() at the same time and one fsync()
gets a transaction abort due to some problem (in the test case it's disk
failures), the other fsync()s may return successfully, which makes userspace
programs think that data is now safely flushed to disk.
It's because after fsync()s fail btrfs_sync_log() due to disk failures,
they get to try btrfs_commit_transaction(), where they find that there is
already a transaction being committed, so they just call wait_for_commit()
and return. Note that we actually check "trans->aborted" in btrfs_end_transaction,
but it's likely that the error has not been raised yet at that point, and only
after wait_for_commit() can we be sure whether the transaction was committed
successfully.
This adds the necessary check, and it now passes the test.
Josef Bacik [Mon, 2 Mar 2015 21:37:31 +0000 (16:37 -0500)]
Btrfs: prepare block group cache before writing
Writing the block group cache will modify the extent tree quite a bit because it
truncates the old space cache and pre-allocates new stuff. To try and cut down
on the churn let's do the setup dance first, then later on hopefully we can avoid
looping with newly dirtied roots. Thanks,
Dave could hit this assert consistently running btrfs/078. This is because
when we update the block groups we could truncate the free space, which would
try to delete the csums for that range and dirty the csum root. For this to
happen we have to have already written out the csum root so it's kind of hard to
hit this case. This patch fixes this by changing the logic to only write the
dirty block groups if the dirty_cowonly_roots list is empty. This will get us
the same effect as before since we add the extent root last, and will cover the
case that we dirty some other root again but not the extent root. Thanks,
David Sterba [Tue, 24 Feb 2015 18:40:41 +0000 (19:40 +0100)]
btrfs: switch helper macros to static inlines in sysfs.h
The conversion macros use nested container_of that leads to a warning
fs/btrfs/sysfs.c: In function 'btrfs_feature_visible':
fs/btrfs/sysfs.c:183:8: warning: declaration of '__mptr' shadows a previous local
fs/btrfs/sysfs.c:183:8: warning: shadowed declaration is here
David Sterba [Wed, 25 Feb 2015 14:47:32 +0000 (15:47 +0100)]
btrfs: use explicit initializer for seq_elem
Using {} as initializer for struct seq_elem does not properly initialize
the list_head member, but it currently works because it gets set through
btrfs_get_tree_mod_seq if 'seq' is 0.
David Sterba [Fri, 16 Jan 2015 16:21:12 +0000 (17:21 +0100)]
btrfs: cleanup 64bit/32bit divs, compile time constants
Switch to div_u64 if the divisor is a numeric constant or a sum of
sizeof()s. We can remove a few instances of do_div that have the hidden
semantics of changing the 1st argument.
Small power-of-two divisors are converted to bitshifts, large values are
kept intact for clarity.
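For readers unfamiliar with the two helpers, here is a userspace model of their differing contracts. The macro and function below only imitate the kernel behaviour for demonstration: do_div() clobbers its first argument and evaluates to the remainder, while div_u64() simply returns the quotient:

  #include <stdio.h>
  #include <stdint.h>

  /* Userspace imitation of the kernel do_div() contract: modifies the
   * dividend in place and yields the remainder. */
  #define do_div_demo(n, base) ({                        \
          uint32_t __rem = (uint32_t)((n) % (base));     \
          (n) /= (base);                                 \
          __rem;                                         \
  })

  /* Userspace imitation of div_u64(): arguments untouched, quotient returned. */
  static uint64_t div_u64_demo(uint64_t dividend, uint32_t divisor)
  {
          return dividend / divisor;
  }

  int main(void)
  {
          uint64_t bytes = 1234567;
          uint32_t rem = do_div_demo(bytes, 4096);

          /* 'bytes' now holds the quotient - the hidden side effect the
           * cleanup removes where only the quotient is wanted. */
          printf("do_div style:  quotient=%llu remainder=%u\n",
                 (unsigned long long)bytes, rem);

          uint64_t total = 1234567;
          printf("div_u64 style: quotient=%llu, total still %llu\n",
                 (unsigned long long)div_u64_demo(total, 4096),
                 (unsigned long long)total);
          return 0;
  }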
Filipe Manana [Sat, 28 Feb 2015 22:29:22 +0000 (22:29 +0000)]
Btrfs: incremental send, don't rename a directory too soon
There's one more case where we can't issue a rename operation for a
directory as soon as we process it. We used to delay directory renames
only if they have some ancestor directory with a higher inode number
that got renamed too, but there's another case where we need to delay
the rename too - when a directory A is renamed to the old name of a
directory B but that directory B has its rename delayed because it
has now (in the send root) an ancestor with a higher inode number that
was renamed. If we don't delay the directory rename in this case, the
receiving end of the send stream will attempt to rename A to the old
name of B before B got renamed to its new name, which results in a
"directory not empty" error. So fix this by delaying directory renames
for this case too.
Filipe Manana [Mon, 23 Feb 2015 19:48:52 +0000 (19:48 +0000)]
Btrfs: fix off-by-one logic error in btrfs_realloc_node
The end_slot variable actually matches the number of pointers in the
node and not the last slot (which is 'nritems - 1'). Therefore in order
to check that the current slot in the for loop doesn't match the last
one, the correct logic is to check if 'i' is less than 'end_slot - 1'
and not 'end_slot - 2'.
Fix this and set end_slot to be 'nritems - 1', as it's less confusing
since the variable name implies it's inclusive rather than exclusive.
Filipe Manana [Sun, 15 Feb 2015 22:38:54 +0000 (22:38 +0000)]
Btrfs: add missing inode update when punching hole
When punching a file hole, if we end up only zeroing parts of a page,
because the start offset isn't a multiple of the sector size or the
start offset and length fall within the same page, we were not updating
the inode item. This prevented an fsync from doing anything, if no other
file changes happened in the current transaction, because the fields
in btrfs_inode used to check if the inode needs to be fsync'ed weren't
updated.
This issue is easy to reproduce and the following excerpt from the
xfstest case I made shows how to trigger it:
# Fsync the file, this makes btrfs update some btrfs inode specific fields
# that are used to track if the inode needs to be written/updated to the fsync
# log or not. After this fsync, the new values for those fields indicate that
# a subsequent fsync does not need to touch the fsync log.
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo
# Force a commit of the current transaction. After this point, any operation
# that modifies the data or metadata of our file, should update those fields in
# the btrfs inode with values that make the next fsync operation write to the
# fsync log.
sync
# Punch a hole in our file. This small range affects only 1 page.
# This made the btrfs hole punching implementation write only some zeroes in
# one page, but it did not update the btrfs inode fields used to determine if
# the next fsync needs to write to the fsync log.
$XFS_IO_PROG -c "fpunch 8000 4K" $SCRATCH_MNT/foo
# Another variation of the previously mentioned case.
$XFS_IO_PROG -c "fpunch 15000 100" $SCRATCH_MNT/foo
# Now fsync the file. This was a no-operation because the previous hole punch
# operation didn't update the inode's fields mentioned before, so they remained
# with the values they had after the first fsync - that is, they indicate that
# it is not needed to write to fsync log.
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo
echo "File content before:"
od -t x1 $SCRATCH_MNT/foo
# Simulate a crash/power loss.
_load_flakey_table $FLAKEY_DROP_WRITES
_unmount_flakey
# Enable writes and mount the fs. This makes the fsync log replay code run.
_load_flakey_table $FLAKEY_ALLOW_WRITES
_mount_flakey
# Because the last fsync didn't do anything, here the file content matched what
# it was after the first fsync, before the holes were punched, and not what it
# was after the holes were punched.
echo "File content after:"
od -t x1 $SCRATCH_MNT/foo
This issue has been around since 2012, when the punch hole implementation
was added, commit 2aaa66558172 ("Btrfs: add hole punching").
A test case for xfstests follows soon.
Signed-off-by: Filipe Manana <fdmanana@suse.com> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit e8c1c76e804b18120e6977fc092769c043876212)
Josef Bacik [Thu, 12 Feb 2015 14:43:51 +0000 (09:43 -0500)]
Btrfs: abort the transaction if we fail to update the free space cache inode
Our gluster boxes were hitting a problem where they'd run out of space when
updating the block group cache and therefore wouldn't be able to update the free
space inode. This is a problem because this is how we invalidate the cache and
protect ourselves from errors further down the stack, so if this fails we have
to abort the transaction so we make sure we don't end up with stale free space
cache. Thanks,
Eric Sandeen [Fri, 1 Aug 2014 23:12:45 +0000 (18:12 -0500)]
btrfs: factor btrfs_read_roots() out of open_ctree()
Also, remove the two local variables create_uuid_tree
and check_uuid_tree; we can use the existence of
the uuid root and/or the RESCAN_UUID_TREE flag to
determine what action to take.
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit 4bbcaa648d23470f04285c52a27d87c8fe906203)
Eric Sandeen [Fri, 1 Aug 2014 23:12:46 +0000 (18:12 -0500)]
btrfs: factor btrfs_replay_log() out of open_ctree()
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit 63443bf54a746fada8ef2829148a29f28f07f7af)
Eric Sandeen [Mon, 16 Feb 2015 15:29:26 +0000 (16:29 +0100)]
btrfs: factor btrfs_init_workqueues() out of open_ctree()
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
[renamed to btrfs_init_workqueues] Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit 2a4581983f90d829d9aeaa4cb881031950f54937)
Eric Sandeen [Fri, 1 Aug 2014 23:12:42 +0000 (18:12 -0500)]
btrfs: factor btrfs_init_qgroup() out of open_ctree()
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
[renamed to btrfs_init_qgroup] Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit f9e92e40b517bfe94d9e961ff484ab885348f322)
Eric Sandeen [Fri, 1 Aug 2014 23:12:41 +0000 (18:12 -0500)]
btrfs: factor btrfs_init_dev_replace_locks() out of open_ctree()
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
[renamed to btrfs_init_dev_replace_locks] Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit ad6183680c0845ff8052a919b97c71b4fec8f33a)
Eric Sandeen [Fri, 1 Aug 2014 23:12:40 +0000 (18:12 -0500)]
btrfs: factor btrfs_init_btree_inode() out of open_ctree()
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
[renamed to btrfs_init_btree_inode] Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit f37938e0e22f90740cfbfe74ecd9aad5e8369b9d)
Eric Sandeen [Fri, 1 Aug 2014 23:12:39 +0000 (18:12 -0500)]
btrfs: factor btrfs_init_balance() out of open_ctree()
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
[renamed to btrfs_init_balance] Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit 779a65a495b76aee7fae500ac7ae360dc8fd81ff)
Eric Sandeen [Fri, 1 Aug 2014 23:12:38 +0000 (18:12 -0500)]
btrfs: factor btrfs_init_scrub() out of open_ctree()
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
[renamed to btrfs_init_scrub] Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit 638aa7ed4667f63ec45b264d1b6e259a79638733)
Eric Sandeen [Fri, 1 Aug 2014 23:12:36 +0000 (18:12 -0500)]
btrfs: consistently use fs_info in close_ctree()
close_ctree() has a local fs_info var for convenience;
use it consistently.
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit 04892340777484351473f1756e6f870500c00a48)
Eric Sandeen [Fri, 1 Aug 2014 23:12:35 +0000 (18:12 -0500)]
btrfs: remove unused fs_info arg from btrfs_close_extra_devices()
The commit: 8dabb74 Btrfs: change core code of btrfs to support the
device replace operations
added the fs_info argument, but never used it -
just remove it again.
Signed-off-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit 9eaed21ef94b5da30eaf1b2503df1d41c2025175)
Fabian Frederick [Sat, 14 Feb 2015 12:10:22 +0000 (13:10 +0100)]
btrfs: fix sizeof format specifier in btrfs_check_super_valid()
This patch fixes mips compilation warning:
fs/btrfs/disk-io.c: In function 'btrfs_check_super_valid':
fs/btrfs/disk-io.c:3927:21: warning: format '%lu' expects argument
of type 'long unsigned int', but argument 3 has type 'unsigned int' [-Wformat]
Signed-off-by: Fabian Frederick <fabf@skynet.be> Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit 41d6b13ea7b459cd51d4446c9b264128a52db993)
Zhao Lei [Thu, 12 Feb 2015 07:42:16 +0000 (15:42 +0800)]
btrfs: cleanup: use for() loop in btrfs_map_bio()
for() is obviously better in this code block, and removing the unused
init-value reduces the binary size by about 6 bytes.
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Reviewed-by: David Sterba <dsterba@suse.cz> Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit 08da757d3f29e9d33480ae1c44b557a2514d70c5)
Zhao Lei [Mon, 9 Feb 2015 12:31:44 +0000 (20:31 +0800)]
btrfs: remove unused chunk_tree argument in several functions
These functions have included an unused chunk_tree argument from the beginning;
it is time to remove it and clean up the related code that prepares the value
of this argument in the callers.
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.cz>
(cherry picked from commit a688a04aab72a775e07fe460f563b46f7791cfc1)