Apache's mod_userdir allows users to view their sites by entering a tilde (~) and their username as the URI on a specific host. For example, http://test.cpanel.net/~fred/ will bring up the user fred's site. The disadvantage of this feature is that any bandwidth used by the site is counted against the domain it is accessed under (in this case test.cpanel.net). mod_userdir protection prevents this from happening. You may, however, want to disable the protection on specific virtual hosts (generally shared SSL hosts).
First you'll need to log in to WHM for your server at http://serversip/whm (serversip being the IP address of your dedicated server or VPS).
Once you are logged into WHM, navigate to the following path:
Main >> Security Center >> Apache mod_userdir Tweak
From there, you can select which accounts you want to enable mod_userdir for.
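You can verify the effect from a shell by requesting a ~user URL through the shared hostname. A quick check, using the placeholder hostname and username from the example above:
curl -I http://test.cpanel.net/~fred/
With mod_userdir protection enabled for that host, the request should typically fail (for example with a 404); with protection disabled, it should return 200 and serve the user's site.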
Saturday, April 6, 2013
Hard drive replacement in Software RAID
The following configuration is assumed:
# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0] sdb4[1]
      1822442815 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sda3[0] sdb3[1]
      1073740664 blocks super 1.2 [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      524276 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
      33553336 blocks super 1.2 [2/2] [UU]
unused devices: <none>
There are four partitions in total:
- /dev/md0 as swap
- /dev/md1 as /boot
- /dev/md2 as /
- /dev/md3 as /home
/dev/sdb is the defective drive in this case. A missing or defective drive is shown by [U_] and/or [_U]. If the RAID array is intact, it shows [UU].
# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0] sdb4[1](F)
      1822442815 blocks super 1.2 [2/1] [U_]
md2 : active raid1 sda3[0] sdb3[1](F)
      1073740664 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[1](F)
      524276 blocks super 1.2 [2/1] [U_]
md0 : active raid1 sda1[0] sdb1[1](F)
      33553336 blocks super 1.2 [2/1] [U_]
unused devices: <none>
The changes to the software RAID can be performed while the system is running. If /proc/mdstat shows that the drive is failing, as in the example here, an appointment can be made with the support technicians to replace the drive.
# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0]
      1822442815 blocks super 1.2 [2/1] [U_]
md2 : active raid1 sda3[0]
      1073740664 blocks super 1.2 [2/1] [U_]
md1 : active raid1 sda2[0]
      524276 blocks super 1.2 [2/1] [U_]
md0 : active raid1 sda1[0]
      33553336 blocks super 1.2 [2/1] [U_]
unused devices: <none>
Removal of the defective drive
Before a new drive can be added, the old defective drive needs to be removed from the RAID array. This needs to be done for each individual partition.
# mdadm /dev/md0 -r /dev/sdb1
# mdadm /dev/md1 -r /dev/sdb2
# mdadm /dev/md2 -r /dev/sdb3
# mdadm /dev/md3 -r /dev/sdb4
The following command shows the drives that are part of an array:
# mdadm --detail /dev/md0
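To get a one-line summary of every array on the system at once, mdadm's scan mode can be used:
# mdadm --detail --scan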
In some cases a drive may only be partly defective, so that, for example, only /dev/md0 is in the [U_] state, whereas all other devices are in the [UU] state. In this case the command
# mdadm /dev/md1 -r /dev/sdb2
fails, as the /dev/md1 array is OK.
In this event, the command
# mdadm --manage /dev/md1 --fail /dev/sdb2
needs to be executed first, to put the RAID into the [U_] status.
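On most mdadm versions, the fail and remove steps can also be combined into a single call; a sketch for the same example partition:
# mdadm --manage /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2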
Arranging an appointment with support to exchange the defective drive
In order to be able to exchange the defective drive, it is necessary to arrange an appointment with support in advance. The server will need to be taken offline for a short time.
Please use the support request section in Robot to contact the technicians.
Preparing the new drive
Both drives in the array need to have exactly the same partitioning. Depending on the partition table type used (MBR or GPT), appropriate utilities have to be used to copy the partition table. GPT is usually used on disks larger than 2 TiB (e.g. 3 TB HDDs in EX4 and EX6).
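If you are unsure which table type a drive uses, parted can report it; for example, checking the intact source drive:
# parted /dev/sda print | grep 'Partition Table'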
Drives with GPT
There are several redundant copies of the GUID partition table (GPT) stored on the drive, so tools that support GPT, for example parted or GPT fdisk, need to be used to edit the table. The sgdisk tool from GPT fdisk (pre-installed when using the Rescue System) can be used to easily copy the partition table to a new drive. Here's an example of copying the partition table from sda to sdb:
sgdisk -R /dev/sdb /dev/sda
The drive then needs to be assigned a new random UUID:
sgdisk -G /dev/sdb
After this the drive can be added to the array. As a final step the boot loader needs to be installed.
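As an optional precaution before copying, sgdisk can also save the source drive's partition table to a file, which can be restored later with --load-backup if anything goes wrong (the file name here is arbitrary):
sgdisk --backup=sda_table.bak /dev/sda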
Drives with MBR
The partition table can simply be copied to the new drive using sfdisk:
# sfdisk -d /dev/sda | sfdisk /dev/sdb
where /dev/sda is the source drive and /dev/sdb is the target drive.
(Optional) If the partitions are not detected by the system, the partition table has to be re-read by the kernel:
# sfdisk -R /dev/sdb
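On newer util-linux versions sfdisk no longer provides the -R option; in that case partprobe (from the parted package) can be used instead to make the kernel re-read the table:
# partprobe /dev/sdb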
Naturally, the partitions may also be created manually using fdisk, cfdisk, or other tools. The partitions should be of the Linux raid autodetect type (id fd).
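To confirm that the new drive ended up with the correct layout and partition types, the table can be listed again; each RAID member should show id fd:
# fdisk -l /dev/sdb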
Integration of the new drive
Once the defective drive has been removed and the new one installed, it needs to be integrated into the RAID array. This needs to be done for each partition.
# mdadm /dev/md0 -a /dev/sdb1
# mdadm /dev/md1 -a /dev/sdb2
# mdadm /dev/md2 -a /dev/sdb3
# mdadm /dev/md3 -a /dev/sdb4
The new drive is now part of the array and will be synchronized. Depending on the size of the partitions, this procedure can take some time. The status of the synchronization can be observed using cat /proc/mdstat.
# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdb4[1] sda4[0]
      1028096 blocks [2/2] [UU]
      [==========>..........]  resync = 50.0% (514048/1028096) finish=97.3min speed=65787K/sec
md2 : active raid1 sdb3[1] sda3[0]
      208768 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
      2104448 blocks [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
      208768 blocks [2/2] [UU]
unused devices: <none>
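To follow the rebuild continuously instead of re-running the command by hand, watch can be used, refreshing every few seconds:
# watch -n 5 cat /proc/mdstat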
Boot loader installation
If you are doing this repair in a booted system, then for GRUB2, running grub-install on the new drive is enough. For example:
grub-install /dev/sdb
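It is common practice to install the boot loader on both array members, so the server can still boot if the other drive fails later; a short sketch:
# for d in /dev/sda /dev/sdb; do grub-install "$d"; done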
In GRUB1 (legacy GRUB), depending on which drive is defective, more steps might be required.
- Start the GRUB console: grub
- Specify the partition where /boot is located: root (hd0,1) (/dev/sda2 = (hd0,1))
- Install the boot loader to the MBR: setup (hd0)
- Then, to install the boot loader on the second drive:
- Map the second drive as hd0: device (hd0) /dev/sdb
- Repeat steps 2 and 3 exactly (do not change the commands)
- Exit the GRUB console: quit
Probing devices to guess BIOS drives. This may take a long time.

    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported. For the first word, TAB
   lists possible command completions. Anywhere else TAB lists the possible
   completions of a device/filename. ]

grub> device (hd0) /dev/sdb
device (hd0) /dev/sdb
grub> root (hd0,1)
root (hd0,1)
 Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
setup (hd0)
 Checking if "/boot/grub/stage1" exists ... yes
 Checking if "/boot/grub/stage2" exists ... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists ... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)" ... 26 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+26 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf" ... succeeded
Done.
grub> quit
#
For a repair via the Rescue System, the installed system has to be mounted first, as described here. All GRUB installation steps then have to be performed after the chroot.
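A minimal sketch of that mount-and-chroot sequence, assuming the example partition layout above (/dev/md2 as /, /dev/md1 as /boot); adjust the device names to your system:
# mount /dev/md2 /mnt
# mount /dev/md1 /mnt/boot
# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt
# grub-install /dev/sdb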
Friday, April 5, 2013
Creation of cPanel accounts through the command line
root@V-6862 [~]# vi /scripts/createacct
root@V-6862 [~]# /usr/local/cpanel/bin/wwwacct
Please use the following syntax:
wwwacct <domain> <user> <pass> <quota> <cpmod[advanced/?]> <ip[y/n]> <cgi[y/n]> <frontpage[y/n]> <maxftp> <maxsql> <maxpop> <maxlst> <maxsub> <bwlimit> <hasshell[y]/[n]> <owner> <plan> <maxpark> <maxaddon> <featurelist> <contactemail> <use_registered_nameservers> <language>
yes | /scripts/createacct keralainindia.asia kerala india
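For reference, here is a hypothetical non-interactive wwwacct call that fills the positional fields in the order shown by the syntax line above; every value is a placeholder, not a tested configuration:
/usr/local/cpanel/bin/wwwacct example.com exampleuser S3curePass 500 x y n n 5 5 5 5 5 1000 n root default 0 0 default admin@example.com 0 en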