Category Archives: Hardware

Partitioning in Windows 7

Needed to remove junk from a USB thumb drive. A Google search yielded a nice find at http://www.bleepingcomputer.com/forums/t/219693/unable-to-format-usb-stick-to-full-capacity/

1) Type “DISKPART” from the Command Prompt (accessible by clicking on Start and then typing “cmd” into the open field); you will then see the following prompt: DISKPART>

2) Type “LIST DISK” to see what number your USB drive is listed as.

3) Type “SELECT DISK 2” (if your USB drive is disk 2; otherwise replace the 2 with your disk number); Diskpart will confirm that “Disk 2 is now the selected disk.”

4) Type “SELECT PARTITION 1” (this command selects what should be the only partition on your USB drive, the small one that you want to delete to get back the larger, full partition size). Diskpart will confirm with “Partition 1 is now the selected partition.”

5) Type “DELETE PARTITION”. This will delete the old partition. There are no warning prompts if you have existing data – make sure you have copied everything off before doing this!

6) Type “CREATE PARTITION PRIMARY” to create a new, full-size partition. Diskpart will confirm with the message “Diskpart succeeded in creating the specified partition.” You can type “LIST PARTITION” to confirm the new, full-size partition.

7) Type “EXIT” to leave Diskpart. You can now format your USB drive by using the standard Windows formatting process.
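
Putting the steps together, a full session looks roughly like this (disk 2 is only an example; use whatever number LIST DISK reports for your drive):

C:\> diskpart

DISKPART> LIST DISK
DISKPART> SELECT DISK 2
DISKPART> SELECT PARTITION 1
DISKPART> DELETE PARTITION
DISKPART> CREATE PARTITION PRIMARY
DISKPART> LIST PARTITION
DISKPART> EXIT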

Debian Jumbo Frames

Two NAS servers, both with 802.3ad-bonded gigabit NICs based on the Realtek 8169 chip.
The highest MTU I could set was 7000, even though the D-Link DGS-1210-24 Rev. A switch can support jumbo frames up to 10K.

Below is just a single sample, but all tests stayed within 57x Mbit/s for MTU=1500 and 77x Mbit/s for MTU=7000.

The important bits.
iperf was used for this testing.

MTU 1500:
[ 3] 0.0-10.0 sec 687 MBytes 576 Mbits/sec


MTU 7000:
[ 3] 0.0-10.0 sec 926 MBytes 777 Mbits/sec
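
For reference, the MTU change and the iperf runs were along these lines (the bond0 interface name and the server address are illustrative):

ip link set dev bond0 mtu 7000   # on both boxes
iperf -s                         # on the server
iperf -c 192.168.1.10            # on the client (10-second test by default)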

Barracuda Green ST2000DL003

http://www.seagate.com/files/staticfiles/support/docs/manual/desktop/Barracuda%20Green/100649225c.pdf

Reformatted a bit to fit on page.

I have six of these 2TB drives in a RAID 5 array in each NAS, plus motherboard, AMD CPU, RAM, etc.
This page is for researching a replacement PSU.

2.8.1 Power consumption
Power requirements for the drives are listed in Table 2 on page 15. Typical power measurements are based on an average of drives tested, under nominal conditions, using 5.0V and 12.0V input voltage at 25°C ambient temperature.

• Spinup power
Spinup power is measured from the time of power-on to the time that the drive spindle reaches operating speed.

• Read/write power and current
Read/write power is measured with the heads on track, based on a 16-sector write followed by a 32-ms delay, then a 16-sector read followed by a 32-ms delay.

• Operating power and current
Operating power is measured using 40 percent random seeks, 40 percent read/write mode (1 write for each 10
reads) and 20 percent drive idle mode.

• Idle mode power
Idle mode power is measured with the drive up to speed, with servo electronics active and with the heads in a random track location.

• Standby mode
During Standby mode, the drive accepts commands, but the drive is not spinning, and the servo and read/write electronics are in power-down mode.

Table 2. DC power requirements
Mode        Avg power (watts, 25°C)   Avg 5V current (typ, amps)   Avg 12V current (typ, amps)
Spinup      —                         —                            2.1
Idle*†      4.5                       0.151                        0.303
Operating   5.8                       0.440                        0.300
Standby     0.50                      0.085                        0.01
Sleep       0.50                      0.085                        0.01


*During periods of drive idle, some offline activity may occur according to the S.M.A.R.T. specification, which may increase acoustic and power levels to operational levels.
†5W IDLE, Standby, and Sleep, with DIPLM enabled.
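
As a rough sanity check for PSU sizing, drive power alone (ignoring motherboard, CPU, RAM, etc.) for the six drives in each NAS works out to:

Spinup (all six at once): 6 x 2.1 A = 12.6 A on the 12V rail, roughly 151 W
Operating:                6 x 5.8 W = 34.8 W
Idle:                     6 x 4.5 W = 27.0 W

Staggered spin-up would lower the worst case, but the 12V rail still needs the most headroom.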

Debian Squeeze 802.3ad

Debian Lenny and Squeeze
2x Realtek 8169
1x Realtek 8169, 1x nForce
D-Link DGS-1210-24 Rev. A

Testing:
iptraf
cat /proc/net/bonding/bond0
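
For reference, a minimal 802.3ad bond on Squeeze with the ifenslave-2.6 package looks something like this sketch; interface names and the address are illustrative, and the miimon/delay values follow the notes in [2] and [3]:

# /etc/network/interfaces
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-downdelay 200
    bond-updelay 200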

[1]
mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.


mode=1 (active-backup)
One slave interface is active at any time. If one interface fails, another interface takes over the MAC address and becomes the active interface. Provides fault tolerance only. Doesn’t require special switch support.


mode=2 (balance-xor)
Transmissions are balanced across the slave interfaces based on ((source MAC) XOR (dest MAC)) modulo slave count. The same slave is selected for each destination MAC. Provides load balancing and fault tolerance. A worked example of this hash follows the mode list.


mode=3 (broadcast)
Transmits everything on all slave interfaces. Provides fault tolerance.


mode=4 (802.3ad)
This is classic IEEE 802.3ad Dynamic link aggregation. This requires 802.3ad support in the switch and driver support for retrieving the speed and duplex of each slave.


mode=5 (balance-tlb)
Adaptive Transmit Load Balancing. Incoming traffic is received on the active slave only, outgoing traffic is distributed according to the current load on each slave. Doesn’t require special switch support.


mode=6 (balance-alb)
Adaptive Load Balancing – provides both transmit load balancing (TLB) and receive load balancing for IPv4 via ARP negotiation. Doesn’t require special switch support, but does require the ability to change the MAC address of a device while it is open.
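
To make the mode=2 hash concrete, here is a toy calculation (the kernel's layer-2 hash uses the low byte of each MAC address; the values are made up):

source MAC ends in 0x1A, destination MAC ends in 0x0F
0x1A XOR 0x0F = 0x15 (21 decimal)
21 modulo 2 slaves = 1, so every frame for that destination leaves on slave 1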


[2]
miimon
Specifies the MII link monitoring frequency in milliseconds. This determines how often the link state of each slave is inspected for link failures. A value of zero disables MII link monitoring. A value of 100 is a good starting point. The use_carrier option, below, affects how the link state is determined. See the High Availability section for additional information. The default value is 0.


[3]
bond-downdelay 200 : Sets the time, to 200 milliseconds, to wait before disabling a slave after a link failure has been detected. This option is only valid when bond-miimon is enabled.


bond-updelay 200 : Sets the time, to 200 milliseconds, to wait before enabling a slave after a link recovery has been detected. This option is only valid when bond-miimon is enabled.
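
The same knobs can also be set as bonding module parameters instead of interfaces stanzas; a sketch, assuming a conventional file name:

# /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bonding mode=4 miimon=100 downdelay=200 updelay=200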


[4]
J.A. Sullivan on the debian-user list writes:

There are a couple of issues in bonding which can bite the unsuspecting (as they did me!). Round robin will load balance across multiple
interfaces but can produce serious issues with managing out of order TCP
packets. Thus, the performance gain decreases dramatically with the
number of interfaces. In other words, 2 NICs in RR mode will not give 2x the performance nor 3 NICs 3x performance. I do not recall the exact
numbers off the top of my head but averages are something like:
2 NICs – 1.6x performance
3 NICs – 1.9x performance

The other modes (other than failover) eliminate the out of order TCP
problem but do so at a cost. All traffic for a single traffic flow goes
across a single path. The most common way to identify a single traffic
flow is matching source and destination MAC addresses. Some bonding algorithms allow matches on layer 3 or even layer 4 data but, if the switch through which they flow only supports MAC to MAC flow assignments, it will all devolve to matching MAC addresses anyway.

So what is the practical outcome using non-RR bonding? You have only one
combination of source and destination MAC address for each socket, e.g.,
if you are measuring a single FTP connection, there is only one
combination of source and destination MAC address. Thus, no matter how
many NICs you have, all the traffic will flow across one combination of
NICs. You will see no performance improvement.

In fact, depending on how the MAC addresses are advertised from the
systems with multiple NICs, all traffic between two systems may flow
across the same pair of NICs even if there are multiple, different
traffic streams.

On the other hand, if you are using bonding to provide a trunk carrying
traffic from many different source and destination MAC address
combinations, each separate stream will be limited to the maximum of the
individual NICs but the aggregate throughput should increase almost
linearly with the number of NICs. Hope that helps – John



Resources:
[1] http://www.howtoforge.com/nic-bonding-on-debian-lenny
[2] http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding
[3] http://www.cyberciti.biz/tips/debian-ubuntu-teaming-aggregating-multiple-network-connections.html
[4] http://comments.gmane.org/gmane.linux.debian.user/405553

Seagate ST32000542AS 2TB Setup

A lot of ST32000542AS drives come with the CC34 firmware. Apparently it has various known problems, one of which is an annoying click (click of death). The first thing you’ll want to do is upgrade the firmware to CC35. A link to the instructions is in the references section below.

Once that is done, the next step is removing the HPA (host protected area) from the drive, if one is present.
You’ll know HPA is enabled by running hdparm. HPA reduces the usable capacity, so it’s not a good thing in an array.

We’ll be using Debian 6.0 (squeeze).

hdparm -N /dev/sdb

You should see two different numbers here (the current maximum sectors versus the native maximum). I chose to set it to the highest number, which completely disables HPA.

hdparm -N p3907029168 /dev/sdb

Finally, we should end up with full usability of the drive.

fdisk -l /dev/sdb

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn’t contain a valid partition table

Power-cycle the machine (not just a reboot) to confirm the settings survive.
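
After the power cycle, the same two checks can be rerun to confirm nothing reverted (device name as above):

hdparm -N /dev/sdb
fdisk -l /dev/sdb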

References:
Updating the firmware on the drives:
Seagate 2TB ST32000542AS CC35 Firmware upgrade

Disabling HPA using hdparm:
unRAID Server Community parity