Differences in the Order Directive Between Apache and ProFTPD

Order Allow,Deny           Apache                  ProFTPD
Only Allow matches         Allow                   Allow
Only Deny matches          Deny                    Deny
Nothing matches            Deny (default)          Allow (default)
Both Allow and Deny match  Deny (last match wins)  Allow (first match wins)

Order Deny,Allow           Apache                   ProFTPD
Only Allow matches         Allow                    Allow
Only Deny matches          Deny                     Deny
Nothing matches            Allow (default)          Deny (default)
Both Allow and Deny match  Allow (last match wins)  Deny (first match wins)
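
As a concrete illustration (a hypothetical ruleset, not taken from either manual): to admit only clients from 192.168.1.x, Apache would use

<Directory /var/www>
    Order Allow,Deny
    Allow from 192.168.1
</Directory>

while the equivalent ProFTPD block would be

<Limit LOGIN>
    Order allow,deny
    Allow from 192.168.1.
    DenyAll
</Limit>

Note that ProFTPD needs the explicit DenyAll: with its Order allow,deny an unmatched client is allowed by default, whereas Apache's Order Allow,Deny denies by default.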

References:
http://httpd.apache.org/docs/1.3/mod/mod_access.html
http://www.proftpd.org/docs/directives/linked/config_ref_Order.html
http://www.proftpd.org/docs/howto/Limit.html

Notes After Installing Tomcat on FreeBSD

Edit /usr/local/etc/rc.d/tomcat6
Add the following to java_command= to disable IPv6:
-Djava.net.preferIPv4Stack="true" \
-Djava.net.preferIPv4Address="true" \
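
For context, the edited stanza ends up looking roughly like the sketch below; the variable names around the two added flags are illustrative and may differ from the rc script actually shipped by the port:

# illustrative sketch only -- check the real java_command= line in your script
java_command="${JAVA_HOME}/bin/java \
        -Djava.net.preferIPv4Stack=true \
        -Djava.net.preferIPv4Address=true \
        ${java_opts}"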

Relevant version information:
Apache/2.2.14 (FreeBSD) mod_ssl/2.2.14 OpenSSL/0.9.8k DAV/2 PHP/5.2.12 with Suhosin-Patch mod_jk/1.2.30
javavmwrapper-2.3.4
diablo-jdk-1.6.0.07.02_8
jdk-1.6.0.3p4_14
tomcat-6.0.24
tomcat-native-1.1.20
mod_jk-ap2-1.2.30_1

Replacing Disks to Grow a ZFS RaidZ Pool on FreeBSD

Create a raidz1 ZFS pool:
test# zpool create zfspool raidz da1 da2 da3
test# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zfspool  23.9G   192K  23.9G     0%  ONLINE  -
test# zpool status
  pool: zfspool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool     ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0

errors: No known data errors

Replace the original, smaller da1, da2 and da3 with the larger da4, da5 and da6.
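
A minimal sketch of the replacement sequence (assuming the new disks appear as da4, da5 and da6): replace one member at a time, letting each resilver finish before starting the next.

test# zpool replace zfspool da1 da4
test# zpool status          (wait until the resilver completes)
test# zpool replace zfspool da2 da5
test# zpool replace zfspool da3 da6

Depending on the ZFS version, an export/import cycle may be needed before zpool list shows the larger size.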

ZFS Mirror Upgrade/Downgrade, Disk Replacement and Online/Offline Expansion on FreeBSD

Create a non-redundant ZFS pool:
test# zpool create zfspool da1
test# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zfspool  7.94G   110K  7.94G     0%  ONLINE  -
test# zpool status
  pool: zfspool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool     ONLINE       0     0     0
          da1       ONLINE       0     0     0

errors: No known data errors

Attach another disk to upgrade to a two-way mirror:
test# zpool attach zfspool da1 da2
test# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zfspool  7.94G   112K  7.94G     0%  ONLINE  -
test# zpool status
  pool: zfspool
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Jul 21 21:24:27 2009
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0

errors: No known data errors

To create a two-way mirror ZFS pool directly, use:
test# zpool create zfspool mirror da1 da2

Attach one more disk to upgrade to a three-way mirror.
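
The step itself is just another attach against an existing mirror member; a sketch, assuming the third disk is da3:

test# zpool attach zfspool da1 da3

Detaching a leg downgrades the mirror again, e.g. zpool detach zfspool da3.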

Preparing openSUSE Before Installing VMware Tools

openSUSE 11.2:
Install make, gcc, kernel-source
/usr/bin/vmware-config-tools.pl --clobber-kernel-modules=vmci --clobber-kernel-modules=vsock --clobber-kernel-modules=vmxnet3 --clobber-kernel-modules=pvscsi --clobber-kernel-modules=vmmemctl --clobber-kernel-modules=vmhgfs --clobber-kernel-modules=vmxnet --clobber-kernel-modules=vmblock

openSUSE 11.1:
Delete the files vmxnet.ko, vmblock.ko, vmmemctl.ko, vmhgfs.ko, vmci.ko and vmsync.ko from /lib/modules/2.6.27.7-9-default/updates
Install make, gcc, kernel-source

openSUSE 10.3:
Install less, psmisc, make, gcc, kernel-source
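
On all of these releases the packages can be pulled in with zypper (add less and psmisc on 10.3), for example:

zypper install make gcc kernel-source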

Choosing a PT Client on FreeBSD

The clients allowed on HDChina and HDBits are Azureus, BitTornado, KTorrent, rtorrent, Transmission and uTorrent. uTorrent requires Wine, and Azureus (Vuze) and KTorrent require X, so all three are out. BitTornado 0.3.18 is not accepted by HDChina, and Transmission 1.61 is not accepted by HDBits. With rtorrent, every task that has not finished downloading must be re-hashed after a restart; Transmission does not have that problem and is also faster than rtorrent. Perhaps uTorrent + Samba is the answer?

FreeBSD with VMware Tools Cannot Power Off

After VMware Tools is installed on FreeBSD, the guest can be shut down from VI, but the system stops at "The operating system has halted. Please press any key to reboot." and never actually powers off. The fix:
ee /usr/local/etc/rc.d/vmware-tools.sh
Search for vmware_start_guestd(); you will find
vmware_start_guestd() {
cd "$vmdb_answer_SBINDIR" && "$vmdb_answer_SBINDIR"/vmware-guestd \
--background "$GUESTD_PID_FILE"
}

Append the parameter --halt-command "/sbin/shutdown -p now" to the vmware-guestd command, so that it reads
vmware_start_guestd() {
cd "$vmdb_answer_SBINDIR" && "$vmdb_answer_SBINDIR"/vmware-guestd \
--background "$GUESTD_PID_FILE" --halt-command "/sbin/shutdown -p now"
}

Save and exit, then run /usr/local/etc/rc.d/vmware-tools.sh restart to restart VMware Tools.

Online ZFS Disk Replacement and Expansion on FreeBSD

Before the replacement:
test# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zfspool  9.94G  1.96G  7.98G    19%  ONLINE  -

Replace da1 with da2 (both are devices under /dev/):
test# zpool replace zfspool da1 da2

The replacement begins:
test# zpool status
  pool: zfspool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 15.24% done, 0h4m to go
config:

        NAME           STATE     READ WRITE CKSUM
        zfspool        ONLINE       0     0     0
          replacing    ONLINE       0     0     0
            da1        ONLINE       0     0     0
            da2        ONLINE       0     0     0

errors: No known data errors

The replacement is complete:
test# zpool status
  pool: zfspool
 state: ONLINE
 scrub: resilver completed with 0 errors on Sat May 9 16:49:35 2009
config:

        NAME        STATE     READ WRITE CKSUM
        zfspool     ONLINE       0     0     0
          da2       ONLINE       0     0     0

errors: No known data errors

The capacity has grown:
test# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zfspool  17.9G  1.96G  16.0G    10%  ONLINE  -

Applications were not interrupted at any point during the process.
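
One caveat worth adding (an observation not from the steps above): on some ZFS versions the extra capacity only becomes visible after the pool is re-imported. If zpool list still reports the old size once the resilver finishes, an export/import cycle usually surfaces it:

test# zpool export zfspool
test# zpool import zfspool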

Source-Based Policy Routing with PF

On a FreeBSD server, the goal is that a connection arriving on a given NIC is answered back out through that same NIC. This lets each client choose its own line, with no need to collect routing tables on the server.

ee /etc/rc.conf

# enable PF
pf_enable="YES"
pf_rules="/etc/pf.conf"

# default route for connections initiated by the host itself
defaultrouter="192.168.1.1"

ee /etc/pf.conf

if_cernet="em0"
if_ct="em1"
gw_cernet="192.168.1.1"
gw_ct="192.168.0.1"
block all
pass quick on lo0 all
pass in quick on $if_cernet reply-to ( $if_cernet $gw_cernet ) proto {tcp,udp,icmp} to any keep state
pass in quick on $if_ct reply-to ( $if_ct $gw_ct ) proto {tcp,udp,icmp} to any keep state
pass out keep state
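
After saving, the ruleset can be syntax-checked and loaded with pfctl:

pfctl -nf /etc/pf.conf    (parse the ruleset without loading it)
pfctl -f /etc/pf.conf     (load the ruleset)
pfctl -s rules            (show the rules currently in effect)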

Performance Test of iSCSI and NFS Mounts on VMware ESXi

The iSCSI target and the NFS server are provided by a FreeBSD server VM running under VMware ESXi 3.5 on a RAID10 array (4x 2.5" 10Krpm 146GB drives). A second VMware ESXi 3.5 host on RAID1 (2x 3.5" 15Krpm 146GB drives) mounts the iSCSI and NFS exports, each of which is then added to a FreeBSD test VM as a virtual disk. The benchmark is /usr/local/bin/iozone -i 0 -i 1 -i 2 -r 1024 -s 1G -t 2 -C. The results:
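
For reference, the iozone flags used here mean:

-i 0     run the write/rewrite test
-i 1     run the read/re-read test
-i 2     run the random read/random write test
-r 1024  use a 1024 KB record size
-s 1G    use a 1 GB file per thread
-t 2     throughput mode with 2 parallel threads
-C       show the throughput of each child thread
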
iSCSI results:
Initial write = 5443.42 KB/sec
Rewrite = 4840.85 KB/sec
Read = 19823.13 KB/sec
Re-read = 19298.97 KB/sec
Random read = 44114.65 KB/sec
Random write = 4024.72 KB/sec

NFS results:
Initial write = 952.76 KB/sec
Rewrite = 975.36 KB/sec
Read = 14782.20 KB/sec
Re-read = 16085.16 KB/sec
Random read = 41878.42 KB/sec
Random write = 794.31 KB/sec

In CPU usage, NFS needed only about half of what iSCSI did, and the server and the test machine were roughly alike. With iSCSI the CPU sat around 15%, with one stretch above 30%; with NFS it stayed around 8%. Both machines have 2x Intel E5405 CPUs, with two cores allocated to each VM.

Test machine mounting the NFS export directly:
Initial write = 2361.99 KB/sec
Rewrite = 2130.92 KB/sec
Read = 17595.85 KB/sec
Re-read = 18904.29 KB/sec
Random read = 13139.79 KB/sec
Random write = 2001.82 KB/sec

Test machine, local disk:
Initial write = 8233.32 KB/sec
Rewrite = 12511.68 KB/sec
Read = 34969.73 KB/sec
Re-read = 34179.26 KB/sec
Random read = 82272.52 KB/sec
Random write = 4620.50 KB/sec

Server, local disk:
Initial write = 6236.64 KB/sec
Rewrite = 9016.30 KB/sec
Read = 47051.42 KB/sec
Re-read = 47444.12 KB/sec
Random read = 27243.86 KB/sec
Random write = 3251.88 KB/sec