Mkfs and mdadm support


#1

I’ve got an ec2 recipe that uses the mount provider and my own custom code
to handle mkfs and mdadm. I’m going to add support directly to chef for
these. I’m thinking they should each be a provider on their own. Thoughts?
Chris


#2

Hi,

I’ve got an ec2 recipe that uses the mount provider and my own
custom code to handle mkfs and mdadm. I’m going to add support
directly to chef for these. I’m thinking they should each be a
provider on their own. Thoughts?

Are you talking about a ‘filesystem’ resource with several providers
(xfs, ext3, etc.) and a ‘raid’ resource with some providers (mdadm,
others)?

Best Regards

Miguel


#3

On 9/06/2009, at 11:30 AM, Miguel Cabeça wrote:

Hi,

I’ve got an ec2 recipe that uses the mount provider and my own
custom code to handle mkfs and mdadm. I’m going to add support
directly to chef for these. I’m thinking they should each be a
provider on their own. Thoughts?

Are you talking about a ‘filesystem’ resource with several providers
(xfs, ext3, etc.) and a ‘raid’ resource with some providers (mdadm,
others)?

I agree, this should be split up into an abstraction resource such as
raid or filesystem. I imagine an lvm2 provider would be awesome for
the filesystem resource.


AJ Christensen, Software Engineer
Opscode, Inc.
E: aj@opscode.com


#4

Ya, pardon my confusion about resources and providers.

So maybe there should be a top-level filesystem resource that contains all of
this. Even after you add resources for mkfs, lvm, raid, etc., you still
need higher-level logic to tie it all together, maybe a collection of
definitions?

Chris



#5

Hi,

I agree, this should be split up into an abstraction resource such
as raid or filesystem. I imagine an lvm2 provider would be awesome
for the filesystem resource.

In my head a ‘volume’ resource with lvm2 and evms providers makes
more sense…


#6

Hi,

So maybe there should be a top-level filesystem resource that
contains all of this. Even after you add resources for mkfs, lvm,
raid, etc., you still need higher-level logic to tie it all
together, maybe a collection of definitions?

IMHO it would be too complicated to try to fit everything into the
filesystem resource.

It would be simpler (famous last words) to have three resources like:
filesystem
raid
volume

and combine them with definitions to achieve the complete goal (for
example, an xfs filesystem on top of an lvm2 volume, on top of a raid1
array).

Best Regards

Miguel Cabeça
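Miguel's example stack (an xfs filesystem on an lvm2 volume on a raid1 array) boils down to an ordering of operations. Here is a plain-Ruby sketch that emits the underlying commands for such a stack — the device names, volume-group layout, and the helper itself are hypothetical, and a Chef definition would express the same ordering with resources rather than raw command strings:

```ruby
# Ordered commands for: raid1 array -> lvm2 volume -> xfs filesystem.
# Names (/dev/md0, vg0, data) are illustrative only.
def layered_volume_commands(devices, vg = 'vg0', lv = 'data')
  [
    # 1. Build the raid1 array from the member devices.
    "mdadm --create /dev/md0 --level=1 --raid-devices=#{devices.size} #{devices.join(' ')}",
    # 2. Put an lvm2 volume group and logical volume on top of it.
    'pvcreate /dev/md0',
    "vgcreate #{vg} /dev/md0",
    "lvcreate -n #{lv} -l 100%FREE #{vg}",
    # 3. Put the filesystem on top of the logical volume.
    "mkfs.xfs /dev/#{vg}/#{lv}"
  ]
end

layered_volume_commands(['/dev/sdc', '/dev/sdd'])
```

Each of the three layers here maps onto one of Miguel's proposed resources (raid, volume, filesystem), and the definition is what fixes the ordering between them.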


#7

That’s kind of the conclusion I came to last night after thinking it over
some more. I’ve forked chef and started last night on the filesystem and
raid resources. Should be easy enough to rearrange if needed.

Would be nice if there was an available tool for detecting filesystem types.
Best I’ve found so far is parsing the output of parted. Anyone know of a
better way to handle this?

Chris



#8

Well, if you are fortunate enough to have it already mounted, you can
just run mount, as about any user:

mount

/dev/root on / type ext4 (rw,relatime,barrier=1,data=ordered)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
rc-svcdir on /lib64/rc/init.d type tmpfs
(rw,nosuid,nodev,noexec,relatime,size=1024k,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
securityfs on /sys/kernel/security type securityfs
(rw,nosuid,nodev,noexec,relatime)
udev on /dev type tmpfs (rw,nosuid,relatime,size=10240k,mode=755)
devpts on /dev/pts type devpts
(rw,nosuid,noexec,relatime,gid=5,mode=620)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
/dev/sda6 on /home type ext4 (rw)
/dev/sda5 on /var/spool/imap type reiserfs (rw)
/dev/sda7 on /export type ext4 (rw)
usbfs on /proc/bus/usb type usbfs
(rw,noexec,nosuid,devmode=0664,devgid=85)

Obviously this won’t work on things that are not mounted, but in a pinch
it will get what you seek: if it’s mounted, you know what filesystem it
is and what options it’s mounted with.

Scott
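If a provider wanted that information programmatically rather than by eye, the mount output above is easy to parse. A minimal Ruby sketch (not from Scott's mail; the helper name is made up):

```ruby
# Parse `mount` output lines of the form
#   "<device> on <mountpoint> type <fstype> (<options>)"
# into a hash keyed by device, so a provider can look up a filesystem type.
def parse_mount(output)
  output.each_line.each_with_object({}) do |line, mounts|
    next unless line =~ /^(\S+) on (\S+) type (\S+) \(([^)]*)\)/
    mounts[$1] = { :mountpoint => $2, :fstype => $3, :options => $4.split(',') }
  end
end

mounts = parse_mount("/dev/sda6 on /home type ext4 (rw)\n")
# mounts['/dev/sda6'][:fstype] is "ext4"
```

Calling `parse_mount(\`mount\`)` would give the same lookup for the live system — subject, as Scott says, to the device actually being mounted.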



#9

Don’t know if there are corner cases that don’t work, but try “file -s
/dev/<block_device>”.

From: snacktime [mailto:snacktime@gmail.com]
Sent: Tuesday, June 09, 2009 10:18 AM
To: chef@lists.opscode.com
Subject: Re: mkfs and mdadm support



#10

Thanks, I totally missed the -s option. I’m looking at the best way to
safeguard against running mkfs on an existing filesystem. So one of the
options to the provider will be a force flag of some type, and if that flag
is not present it will refuse to reinitialize the filesystem. mkfs isn’t
uniform in requiring a force flag to reinitialize a device that already has
a filesystem, so we can’t rely on that.

Chris
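The guard Chris describes could be built on the `file -s` output mentioned above. A rough sketch — the helper name and the exact policy are illustration, not actual Chef provider code from this thread:

```ruby
# `file -s <device>` prints "<device>: data" for an uninitialized block
# device, and a longer description (e.g. "/dev/sdz1: Linux rev 1.0 ext3
# filesystem data") once a filesystem is present.
def safe_to_mkfs?(file_s_output, force = false)
  # "formatted" means file(1) reported anything other than bare "data".
  formatted = !(file_s_output =~ /\A[^:]+:\s*data\s*\z/)
  force || !formatted
end

safe_to_mkfs?('/dev/sdz1: data')                               # => true
safe_to_mkfs?('/dev/sdz1: Linux rev 1.0 ext3 filesystem data') # => false
```

A provider could then refuse the create action unless this returns true, independently of whether the particular mkfs.* variant enforces its own force flag.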



#11

Wouldn’t raid and lvm (and drbd, etc.) all be volume types? It seems that
they all provide a device to place a filesystem on.

For a raw disk, sda/hda.
For raid, md0, etc.
For lvm, a VG from which an LV is built.
For drbd, drbd0, etc.



#12

Just FYI, in case you’re not aware:

LVM can sit on top of raid, and vice versa.
Ditto with DRBD.

Basically it’s a bunch of block devices that can be layered on each
other in various ways.



Edward Muller
Sr. Systems Engineer
Engine Yard Inc. : Support, Scalability, Reliability
+1.866.518.9273 x209 - Mobile: +1.417.844.2435
IRC: edwardam - XMPP/GTalk: emuller@engineyard.com
Pacific/US


#13

For now I’m making a resource and provider for each one separately. It
won’t be difficult to add a higher-level abstraction later, once
everyone is in agreement as to what it should be. I hacked together about
50% of a raid resource/provider last night and will probably have time this
weekend to get something done on lvm2.



#14

One issue I’m running into is how to handle potentially dangerous operations
like mkfs. mkfs will fail without a force flag if you try to run it on a
device that is already initialized with the same filesystem type, but if you
run mkfs.ext3 on an xfs filesystem, it will happily create it without any
warning. I know there are other checks that can be done, but one bad regex
and bye-bye filesystem. Using lvm or raid helps some, because you could
probably have a rule that if a device exists, it has a filesystem. Your
recipes for creating filesystems always create the device and filesystem in
one operation, or not at all.

Another option would be to have a role just for initializing servers. You
put a server into that role to initialize it, and manually take it out once
you know it’s initialized correctly. Other roles would not have recipes
that run mkfs at all. ‘Normal’ roles would only do things like
assembling/stopping raid arrays, mounting/unmounting filesystems, etc. If
I need to add a volume to an existing raid array or lvm group, I do that
manually, add it to my JSON ball, and it just works. On the next reboot Chef
has the data it needs to bring everything up correctly.

This is one reason why I’m considering having a completely separate
resource/provider for mkfs and friends. I don’t want it anywhere near
critical data, and the thought of an automated script checking to see if my
filesystem needs to be initialized would keep me up at night.

Chris



#15


The way I’m working with EBS volumes on EC2: I want it to format
the EBS device as ext3 only if it is not already formatted, and then
ensure it is properly mounted. I also want to grow the filesystem to
fill the space in case I booted from a snapshot with a larger volume
size.

This works perfectly:

if `grep /dev/sdz1 /etc/fstab` == ""
  Chef::Log.info("EBS device being configured")

  # Loop until /dev/sdz1 is attached before we allow execution of
  # our recipes to continue.
  loop do
    if File.exists?("/dev/sdz1")
      directory "/data" do
        owner "root"
        group "root"
        mode 0755
      end

      bash "format-data-ebs" do
        code "mkfs.ext3 -j -F /dev/sdz1"
        not_if "e2label /dev/sdz1"
      end

      bash "mount-data-ebs" do
        code "mount -t ext3 /dev/sdz1 /data"
      end

      bash "grow-data-ebs" do
        code "resize2fs /dev/sdz1"
      end

      bash "add-data-to-fstab" do
        code "echo '/dev/sdz1 /data ext3 noatime 0 0' >> /etc/fstab"
        not_if "grep /dev/sdz1 /etc/fstab"
      end

      break
    end
    Chef::Log.info("EBS device /dev/sdz1 not available yet...")
    sleep 5
  end
end

You could easily wrap that up in a more abstract resource or definition.

Cheers-

Ezra Zygmuntowicz
ez@engineyard.com


#16

On Fri, Jun 12, 2009 at 11:41 PM, Ezra Zygmuntowicz ez@engineyard.com wrote:

The way I’m working with EBS volumes on EC2: I want it to
format the EBS device as ext3 only if it is not already formatted, and then
ensure it is properly mounted. I also want to grow the filesystem to fill
the space in case I booted from a snapshot with a larger volume size.

Thanks Ezra, that gave me another tool I didn’t know about: e2label.

So I’ve got most of a filesystem resource/provider working. It can create
linux/raid/lvm partitions with parted, and then add ext3/xfs filesystems to
linux partitions if specified.

Chris


#17

Hey,

On 14/06/2009, at 2:08 PM, snacktime snacktime@gmail.com wrote:


Thanks Ezra, that gave me another tool I didn’t know about: e2label.

So I’ve got most of a filesystem resource/provider working. It can
create linux/raid/lvm partitions with parted, and then add ext3/xfs
filesystems to linux partitions if specified.

Awesome! Have you opened up a ticket for this improvement? Providing
the specs are sufficient, I’m happy to include this in 0.7.2 =D

Regards,

AJ


#18


I’ll open a ticket for this and the raid resource; I should be able to get
them mostly finished this weekend. I have specs for the resources, but still
need to write some for the providers.

Chris


#19

Finally got around to opening a couple of tickets on this. I need to
finish up some of the specs; I’ve been concentrating more on real-world
testing. Should be ready to have it pulled in this weekend. Here are a
couple of quick examples of what it can do so far.
Filesystem provider:

file_system '/dev/sde' do
  device '/dev/sde'
  partitions [
    { 'number' => 2, 'type' => 'raid', 'start' => '5GB', 'end' => '10GB' },
    { 'number' => 1, 'type' => 'raid', 'start' => '0GB', 'end' => '5GB' }
  ]
  action :partition
end

file_system '/dev/sde1' do
  device '/dev/sde1'
  fs_type 'ext3'
  options '-j'
  action :create_fs
end

file_system '/dev/sde1' do
  device '/dev/sde1'
  fs_type 'ext3'
  action :resize_fs
end

Raid provider:

raid '/dev/md0' do
  device '/dev/md0'
  raid_devices ['/dev/sdc', '/dev/sdd']
  level 1
  action [:create, :assemble]
end

file_system '/dev/md0' do
  device '/dev/md0'
  fs_type 'ext3'
  options '-j'
  action :create_fs
end

raid '/dev/md0' do
  device '/dev/md0'
  action [:stop]
end



#20

On 19/06/2009, at 6:35 PM, snacktime wrote:

Finally got around to opening a couple of tickets on this. I need to
finish up some of the specs; I’ve been concentrating more on real-world
testing. Should be ready to have it pulled in this weekend. Here are a
couple of quick examples of what it can do so far.

Filesystem provider:

file_system '/dev/sde' do
  device '/dev/sde'
  partitions [
    { 'number' => 2, 'type' => 'raid', 'start' => '5GB', 'end' => '10GB' },
    { 'number' => 1, 'type' => 'raid', 'start' => '0GB', 'end' => '5GB' }
  ]

Since ‘partitions’ is an array - why not drop the number option of the
hash and assume order?

That’s my only comment at this stage - the rest looks /awesome/. What
platforms does this work on? How’s the test coverage? Any cucumber
integration tests? =D

  action :partition
end

file_system '/dev/sde1' do
  device '/dev/sde1'
  fs_type 'ext3'
  options '-j'
  action :create_fs
end

file_system '/dev/sde1' do
  device '/dev/sde1'
  fs_type 'ext3'
  action :resize_fs
end

Raid provider:

raid '/dev/md0' do
  device '/dev/md0'
  raid_devices ['/dev/sdc', '/dev/sdd']
  level 1
  action [:create, :assemble]
end

file_system '/dev/md0' do
  device '/dev/md0'
  fs_type 'ext3'
  options '-j'
  action :create_fs
end

raid '/dev/md0' do
  device '/dev/md0'
  action [:stop]
end
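For reference, AJ's suggestion above (drop 'number' and let array order imply it) would turn the partitions attribute of the first example into something like the following — hypothetical, assuming the provider numbers partitions in array order:

```ruby
partitions [
  { 'type' => 'raid', 'start' => '0GB', 'end' => '5GB' },   # becomes partition 1
  { 'type' => 'raid', 'start' => '5GB', 'end' => '10GB' }   # becomes partition 2
]
```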



AJ Christensen, Software Engineer
Opscode, Inc.
E: aj@opscode.com