-bash: /bin/rm: Argument list too long


When deleting a huge number of files on linux and rm -rf * fails with -bash: /bin/rm: Argument list too long, find combined with xargs gets the job done.

[grid@xifenfei audit]$ rm -rf +ASM2_ora_1*_2017*.aud
-bash: /bin/rm: Argument list too long
[grid@xifenfei audit]$ ls|wc -l
111650450
[grid@xifenfei audit]$ find ./ -name "*.aud" |xargs rm -r
[grid@xifenfei audit]$ ls
[grid@xifenfei audit]$
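
Two caveats on the find/xargs approach: piping bare filenames into xargs breaks on names containing spaces or newlines, and find ./ also descends into subdirectories. A sketch of safer variants (the -maxdepth 1 restriction assumes only the top-level audit files should go):

# NUL-separated pipeline, safe for odd filenames
find . -maxdepth 1 -name "*.aud" -print0 | xargs -0 rm -f
# or let find delete directly, no xargs needed
find . -maxdepth 1 -name "*.aud" -delete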

Using mlock ulimits for SHM_HUGETLB is deprecated


With Oracle Database running on linux 6 and HugePages enabled, /var/log/messages frequently shows entries like: Mar 11 12:12:33 i-q2ghx82t kernel: oracle (3677): Using mlock ulimits for SHM_HUGETLB is deprecated. The environment here reproduces the problem.
Environment

--system configuration
[root@i-q2ghx82t ~]# more /etc/issue
CentOS release 6.8 (Final)
Kernel \r on an \m
[root@i-q2ghx82t ~]# ulimit  -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 128331
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 128331
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
[oracle@i-q2ghx82t ~]$ cat /proc/meminfo|grep Hu
AnonHugePages:         0 kB
HugePages_Total:   10752
HugePages_Free:    10752
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
[root@i-q2ghx82t ~]# more  /proc/sys/vm/hugetlb_shm_group
[root@i-q2ghx82t ~]# id oracle
uid=1000(oracle) gid=1000(oinstall) groups=1000(oinstall),1001(dba),1002(oper),1005(asmdba)
--database parameter
use_large_pages=only

In this environment the database starts normally and HugePages are in use, yet the system log records warnings like Mar 11 12:12:33 i-q2ghx82t kernel: oracle (3677): Using mlock ulimits for SHM_HUGETLB is deprecated. Analysis traced this to the missing hugetlb_shm_group setting: vm.hugetlb_shm_group names the OS group permitted to use HugePages; the default of 0 allows every group, and it can be set to the OS group that the Oracle database processes belong to, such as oinstall. On this system, adding vm.hugetlb_shm_group=1000 to sysctl.conf and rebooting removed the warning from the system log (in testing, the warning is triggered only by the first database startup after a system reboot; subsequent database restarts alone do not reproduce it).
When configuring HugePages on Linux 6, it is advisable to set the matching hugetlb_shm_group parameter as well, as sketched below.
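
A minimal sketch of the fix (gid 1000 here matches the oinstall group shown by id oracle above; adjust to your environment):

# /etc/sysctl.conf
vm.hugetlb_shm_group = 1000

# load the new value; in this test the warning itself only cleared after a full system reboot
sysctl -p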

Kernel version mapping between redhat and oracle linux


Each entry below has three parts: the Red Hat Enterprise Linux version/update; the RHEL kernel version and redhat-release string; and the corresponding Oracle Linux kernel versions and release strings. The ^, * and # markers next to kernel versions are carried over from the source MOS note cited at the end.

 Red Hat Enterprise Linux 7

 Red Hat Enterprise Linux 7 Update 6

 3.10.0-957.el7.x86_64
redhat-release: Red Hat Enterprise Linux Server release 7.6 (Maipo)

 4.14.35-1818.3.3.el7uek.x86_64 ^ * (x86_64 only)
3.10.0-957.el7.x86_64 ^ (x86_64 only)
redhat-release: Red Hat Enterprise Linux Server release 7.6 (Maipo)
system-release: Oracle Linux Server release 7.6
oracle-release: Oracle Linux Server release 7.6

 Red Hat Enterprise Linux 7 Update 5

 3.10.0-862.el7.x86_64
redhat-release: Red Hat Enterprise Linux Server release 7.5 (Maipo)

 4.1.12-112.16.4.el7uek ^ * (x86_64 only)
3.10.0-862.el7.x86_64 ^ (x86_64 only)
redhat-release: Red Hat Enterprise Linux Server release 7.5 (Maipo)
system-release: Oracle Linux Server release 7.5
oracle-release: Oracle Linux Server release 7.5

 Red Hat Enterprise Linux 7 Update 4

 3.10.0-693.el7.x86_64
redhat-release: Red Hat Enterprise Linux Server release 7.4 (Maipo)

 4.1.12-94.3.9.el7uek ^ * (x86_64 only)
3.10.0-693.el7.x86_64 ^ (x86_64 only)
redhat-release: Red Hat Enterprise Linux Server release 7.4 (Maipo)
system-release: Oracle Linux Server release 7.4
oracle-release: Oracle Linux Server release 7.4

 Red Hat Enterprise Linux 7 Update 3

 3.10.0-514.el7.x86_64
redhat-release: Red Hat Enterprise Linux Server release 7.3 (Maipo)

 4.1.12-61.1.18.el7uek ^ * (x86_64 only)
3.10.0-514.el7.x86_64 ^ (x86_64 only)
redhat-release: Red Hat Enterprise Linux Server release 7.3 (Maipo)
system-release: Oracle Linux Server release 7.3
oracle-release: Oracle Linux Server release 7.3

 Red Hat Enterprise Linux 7 Update 2

 3.10.0-327.el7.x86_64
redhat-release: Red Hat Enterprise Linux Server release 7.2 (Maipo)

 3.8.13-98.6.1.el7uek ^ * (x86_64 only)
3.10.0-327.el7.x86_64 ^ (x86_64 only)
redhat-release: Red Hat Enterprise Linux Server release 7.2 (Maipo)
system-release: Oracle Linux Server release 7.2
oracle-release: Oracle Linux Server release 7.2

 Red Hat Enterprise Linux 7 Update 1

 3.10.0-229.el7.x86_64
redhat-release: Red Hat Enterprise Linux Server release 7.1 (Maipo)

 3.8.13-55.1.6.el7uek ^ * (x86_64 only)
3.10.0-229.el7.x86_64 ^ (x86_64 only)
redhat-release: Red Hat Enterprise Linux Server release 7.1 (Maipo)
system-release: Oracle Linux Server release 7.1
oracle-release: Oracle Linux Server release 7.1

 Red Hat Enterprise Linux 7 GA

 3.10.0-123.el7.x86_64
redhat-release: Red Hat Enterprise Linux Server release 7.0 (Maipo)

 3.8.13-35.3.1.el7uek ^ * (x86_64 only)
3.10.0-123.el7.x86_64 ^ (x86_64 only)
redhat-release: Red Hat Enterprise Linux Server release 7.0 (Maipo)
system-release: Oracle Linux Server release 7.0
oracle-release: Oracle Linux Server release 7.0

 Red Hat Enterprise Linux 6

 Red Hat Enterprise Linux Server 6 Update 10

 2.6.32-754.el6
 redhat-release: Red Hat Enterprise Linux Server release 6.10 (Santiago)

 4.1.12-124.16.4.el6uek ^ * (x86_64 only)
2.6.39-400.299.3.el6uek ^ * (x86 only)
2.6.32-754.el6 ^ (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 6.10 (Santiago)
system-release: Oracle Linux Server release 6.10
oracle-release: Oracle Linux Server release 6.10

 Red Hat Enterprise Linux Server 6 Update 9

 2.6.32-696.el6
redhat-release: Red Hat Enterprise Linux Server release 6.9 (Santiago)

 4.1.12-61.1.28.el6uek ^ * (x86_64 only)
2.6.39-400.294.3.el6uek ^ * (x86 only)
2.6.32-696.el6 ^ (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 6.9 (Santiago)
system-release: Oracle Linux Server release 6.9
oracle-release: Oracle Linux Server release 6.9

 Red Hat Enterprise Linux Server 6 Update 8

 2.6.32-642.el6
redhat-release: Red Hat Enterprise Linux Server release 6.8 (Santiago)

 4.1.12-37.3.1.el6uek ^ * (x86_64 only)
2.6.39-400.278.2.el6uek ^ * (x86 only)
2.6.32-642.el6 ^ (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 6.8 (Santiago)
system-release: Oracle Linux Server release 6.8
oracle-release: Oracle Linux Server release 6.8

 Red Hat Enterprise Linux Server 6 Update 7

 2.6.32-573.el6
redhat-release: Red Hat Enterprise Linux Server release 6.7 (Santiago)

 3.8.13-68.3.4.el6uek ^ * (x86_64 only)
2.6.39-400.250.7.el6uek ^ * (x86 only)
2.6.32-573.el6 ^ (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 6.7 (Santiago)
system-release: Oracle Linux Server release 6.7
oracle-release: Oracle Linux Server release 6.7

 Red Hat Enterprise Linux Server 6 Update 6

 2.6.32-504.el6
redhat-release: Red Hat Enterprise Linux Server release 6.6 (Santiago)

 3.8.13-44.1.1.el6uek ^ * (x86_64 only)
2.6.39-400.215.10.el6uek ^ * (x86 only)
2.6.32-504.el6 ^ (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 6.6 (Santiago)
system-release: Oracle Linux Server release 6.6
oracle-release: Oracle Linux Server release 6.6

 Red Hat Enterprise Linux Server 6 Update 5

 2.6.32-431.el6
redhat-release: Red Hat Enterprise Linux Server release 6.5 (Santiago)

 3.8.13-16.121.el6uek ^ * (x86_64 only)
2.6.39-400.211.1.el6uek ^ * (x86 only)
2.6.32-431.el6 ^ (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 6.5 (Santiago)
system-release: Oracle Linux Server release 6.5
oracle-release: Oracle Linux Server release 6.5

 Red Hat Enterprise Linux Server 6 Update 4

 2.6.32-358.el6
redhat-release: Red Hat Enterprise Linux Server release 6.4 (Santiago)

 2.6.39-400.17.1.el6uek ^ *
2.6.32-358.el6 ^
redhat-release: Red Hat Enterprise Linux Server release 6.4 (Santiago)
system-release: Oracle Linux Server release 6.4
oracle-release: Oracle Linux Server release 6.4

 Red Hat Enterprise Linux Server 6 Update 3

 2.6.32-279.el6
redhat-release: Red Hat Enterprise Linux Server release 6.3 (Santiago)

 2.6.39-200.24.1.el6uek ^ *
2.6.32-279.el6 ^
redhat-release: Red Hat Enterprise Linux Server release 6.3 (Santiago)
system-release: Oracle Linux Server release 6.3
oracle-release: Oracle Linux Server release 6.3

 Red Hat Enterprise Linux Server 6 Update 2

 2.6.32-220.el6
redhat-release: Red Hat Enterprise Linux Server release 6.2 (Santiago)

 2.6.32-100.34.1.el6uek ^ *
2.6.32-220.el6 ^
redhat-release: Red Hat Enterprise Linux Server release 6.2 (Santiago)
system-release: Oracle Linux Server release 6.2
oracle-release: Oracle Linux Server release 6.2

 Red Hat Enterprise Linux Server 6 Update 1

 2.6.32-131.el6
redhat-release: Red Hat Enterprise Linux Server release 6.1 (Santiago)

 2.6.32-100.34.1.el6uek ^ *
2.6.32-131.0.15.el6 ^
redhat-release: Red Hat Enterprise Linux Server release 6.1 (Santiago)
system-release: Oracle Linux Server release 6.1
oracle-release: Oracle Linux Server release 6.1

 Red Hat Enterprise Linux Server 6 GA

 2.6.32-71.el6
redhat-release: Red Hat Enterprise Linux Server release 6.0 (Santiago)

 2.6.32-100.28.5.el6uek ^ *
2.6.32-71.el6 ^
redhat-release: Red Hat Enterprise Linux Server release 6.0 (Santiago)
system-release: Oracle Linux Server release 6.0
oracle-release: Oracle Linux Server release 6.0

 Red Hat Enterprise Linux 5

 Red Hat Enterprise Linux Server 5 Update 11

 2.6.18-398.el5
redhat-release: Red Hat Enterprise Linux Server release 5.11 (Tikanga)

 2.6.39-400.215.10.el5uek ^ * (x86, x86_64)
2.6.18-398.el5 ^ (x86, x86_64)
2.6.18-398.0.0.0.1.el5 (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 5.11 (Tikanga)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.11 (Carthage)
oracle-release: Oracle Linux Server release 5.11

 Red Hat Enterprise Linux Server 5 Update 10

 2.6.18-371.el5
redhat-release: Red Hat Enterprise Linux Server release 5.10 (Tikanga)

 2.6.39-400.209.1.el5uek ^ * (x86, x86_64)
2.6.18-371.el5 ^ (x86, x86_64)
2.6.18-371.0.0.0.1.el5 (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 5.10 (Tikanga)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.10 (Carthage)
oracle-release: Oracle Linux Server release 5.10

 Red Hat Enterprise Linux Server 5 Update 9

 2.6.18-348.el5
redhat-release: Red Hat Enterprise Linux Server release 5.9 (Tikanga)

 2.6.39-300.26.1.el5uek ^ * (x86, x86_64)
2.6.18-348.el5 ^ (x86, x86_64)
2.6.18-348.0.0.0.1.el5 (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 5.9 (Tikanga)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.9 (Carthage)
oracle-release: Oracle Linux Server release 5.9

 Red Hat Enterprise Linux Server 5 Update 8

 2.6.18-308.el5
redhat-release: Red Hat Enterprise Linux Server release 5.8 (Tikanga)

 2.6.32-300.10.1.el5uek ^ * (x86, x86_64)
2.6.18-308.el5 ^ (x86, x86_64)
2.6.18-308.0.0.0.1.el5 (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 5.8 (Tikanga)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.8 (Carthage)
oracle-release: Oracle Linux Server release 5.8

 Red Hat Enterprise Linux Server 5 Update 7

 2.6.18-274.el5
redhat-release: Red Hat Enterprise Linux Server release 5.7 (Tikanga)

 2.6.32-200.13.1.el5uek ^ * (x86, x86_64)
2.6.18-274.el5 ^ (x86, x86_64)
2.6.18-274.0.0.0.1.el5 (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 5.7 (Tikanga)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.7 (Carthage)
oracle-release: Oracle Linux Server release 5.7

 Red Hat Enterprise Linux Server 5 Update 6

 2.6.18-238.el5
redhat-release: Red Hat Enterprise Linux Server release 5.6 (Tikanga)

 2.6.32-100.26.2.el5uek ^ * (x86_64 only)
2.6.18-238.el5 ^ (x86, x86_64)
2.6.18-238.0.0.0.1.el5 (x86, x86_64)
redhat-release: Red Hat Enterprise Linux Server release 5.6 (Tikanga)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.6 (Carthage)
oracle-release: Oracle Linux Server release 5.6

 Red Hat Enterprise Linux Server 5 Update 5

 2.6.18-194.el5
redhat-release: Red Hat Enterprise Linux Server release 5.5 (Tikanga)

 2.6.18-194.el5 ^ * (x86, x86_64, ia64)
2.6.32-100.24.1.el5 # (x86_64 only)
2.6.18-194.0.0.0.3.el5 (x86, x86_64, ia64)
redhat-release: Red Hat Enterprise Linux Server release 5.5 (Tikanga)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.5 (Carthage)
oracle-release: Oracle Linux Server release 5.5

 Red Hat Enterprise Linux Server 5 Update 4

 2.6.18-164.el5
redhat-release: Red Hat Enterprise Linux Server release 5.4 (Tikanga)

 2.6.18-164.el5 ^ * (x86, x86_64, ia64)
2.6.18-164.0.0.0.1.el5 (x86, x86_64, ia64)
UEK unavailable
redhat-release: Red Hat Enterprise Linux Server release 5.4 (Tikanga)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.4 (Carthage)

 Red Hat Enterprise Linux Server 5 Update 3

 2.6.18-128.el5
redhat-release: Red Hat Enterprise Linux Server release 5.3 (Tikanga)

 2.6.18-128.el5 ^ * (x86, x86_64, ia64)
2.6.18-128.0.0.0.2.el5 (x86, x86_64, ia64)
UEK unavailable
redhat-release: Red Hat Enterprise Linux Server release 5.3 (Tikanga)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.3 (Carthage)

 Red Hat Enterprise Linux Server 5 Update 2

 2.6.18-92.el5
redhat-release: Red Hat Enterprise Linux Server release 5.2 (Tikanga)

 2.6.18-92.el5 ^ * (x86, x86_64, ia64)
2.6.18-92.0.0.0.1.el5 (x86, x86_64, ia64)
UEK unavailable
redhat-release: Enterprise Linux Enterprise Linux Server release 5.2 (Carthage)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.2 (Carthage)

 Red Hat Enterprise Linux Server 5 Update 1

 2.6.18-53.el5
redhat-release: Red Hat Enterprise Linux Server release 5.1 (Tikanga)

 2.6.18-53.el5 ^ * (x86, x86_64, ia64)
2.6.18-53.0.0.0.1.el5 (x86, x86_64, ia64)
UEK unavailable
redhat-release: Enterprise Linux Enterprise Linux Server release 5.1 (Carthage)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5.1 (Carthage)

 Red Hat Enterprise Linux Server 5 GA

 2.6.18-8.el5
redhat-release: Red Hat Enterprise Linux Server release 5 (Tikanga)

 2.6.18-8.el5 ^ * (x86, x86_64, ia64)
2.6.18-8.0.0.4.1.el5 (x86, x86_64, ia64)
UEK unavailable
redhat-release: Enterprise Linux Enterprise Linux Server release 5 (Carthage)
enterprise-release: Enterprise Linux Enterprise Linux Server release 5 (Carthage)

 Red Hat Enterprise Linux 4

 Red Hat Enterprise Linux 4 Update 9

 2.6.9-100.EL
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 9)

 2.6.9-100.0.0.0.1.EL ^ * (x86, x86_64, ia64)
 2.6.9-100.EL (x86, x86_64, ia64)
 UEK unavailable
 redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 9)
 enterprise-release: Enterprise Linux Enterprise Linux AS release 4 (October Update 9)

 Red Hat Enterprise Linux 4 Update 8

 2.6.9-89.EL
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 8)

 2.6.9-89.0.0.0.1.EL ^ * (x86, x86_64, ia64)
2.6.9-89.EL (x86, x86_64, ia64)
UEK unavailable
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 8)
enterprise-release: Enterprise Linux Enterprise Linux AS release 4 (October Update 8)

 Red Hat Enterprise Linux 4 Update 7

 2.6.9-78.EL
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 7)

 2.6.9-78.0.0.0.1.EL ^ * (x86, x86_64, ia64)
2.6.9-78.EL (x86, x86_64, ia64)
UEK unavailable
redhat-release: Enterprise Linux Enterprise Linux AS release 4 (October Update 7)
enterprise-release: Enterprise Linux Enterprise Linux AS release 4 (October Update 7)

 Red Hat Enterprise Linux 4 Update 6

 2.6.9-67.EL
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 6)

 2.6.9-67.0.0.0.1.EL ^ * (x86, x86_64, ia64)
2.6.9-67.EL ^ * (x86, x86_64, ia64)
UEK unavailable
redhat-release: Enterprise Linux Enterprise Linux AS release 4 (October Update 6)
enterprise-release: Enterprise Linux Enterprise Linux AS release 4 (October Update 6)

 Red Hat Enterprise Linux 4 Update 5

 2.6.9-55.EL
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 5)

 2.6.9-55.0.0.0.2.EL ^ * (x86, x86_64)
2.6.9-55.EL (x86, x86_64)
UEK unavailable
redhat-release: Enterprise Linux Enterprise Linux AS release 4 (October Update 5)
enterprise-release: Enterprise Linux Enterprise Linux AS release 4 (October Update 5)

 Red Hat Enterprise Linux 4 Update 4

 2.6.9-42.EL
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 4)

 2.6.9-42.0.0.0.1.EL ^ * (x86, x86_64)
2.6.9-42.EL (x86, x86_64)
UEK unavailable
redhat-release: Enterprise Linux Enterprise Linux AS release 4 (October Update 4)
enterprise-release: Enterprise Linux Enterprise Linux AS release 4 (October Update 4)

 Red Hat Enterprise Linux 4 Update 3

 2.6.9-34.EL
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 3)

No corresponding version

 Red Hat Enterprise Linux 4 Update 2

 2.6.9-22.EL
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 2)

No corresponding version

 Red Hat Enterprise Linux 4 Update 1

 2.6.9-11.EL
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant Update 1)

No corresponding version

 Red Hat Enterprise Linux 4 GA

 2.6.9-5.EL
redhat-release: Red Hat Enterprise Linux AS release 4 (Nahant)

No corresponding version

Reference: Comparison of Red Hat and Oracle Linux kernel versions and release strings (Doc ID 560992.1)
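
To check which entry a given machine matches, look at the running kernel and whichever release files exist (these are the same strings the table above lists):

uname -r
cat /etc/redhat-release /etc/system-release /etc/oracle-release /etc/enterprise-release 2>/dev/null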

mount: wrong fs type, bad option, bad superblock (recovering Oracle)


A client contacted us: after an lv shrink operation the filesystem could no longer be mounted, reporting a damaged superblock.
The mount attempt fails

[root@GZGSDB data]# mount /dev/vg_gzgsdb/lv_home /home
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg_gzgsdb-lv_home,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

Errors in the system log

Aug 16 21:23:24 GZGSDB kernel: EXT4-fs (dm-5): ext4_check_descriptors: Block bitmap for group 1 not in group (block 0)!
Aug 16 21:23:24 GZGSDB kernel: EXT4-fs (dm-5): group descriptors corrupted!

Key operations
[Screenshot: shell history showing lvreduce run directly, without shrinking the filesystem first]


The history makes the mistake plain: the lv was shrunk directly, without first shrinking the filesystem.
[Screenshot: shell history of the operations attempted after the failure]


It also shows that after the failure many further operations were tried, including fsck and testdisk; when none of them recovered anything, we were asked to step in.

Finding the backup superblocks

[root@GZGSDB dul]# dumpe2fs /dev/vg_gzgsdb/lv_home |grep superblock
dumpe2fs 1.41.12 (17-May-2010)
ext2fs_read_bb_inode: A block group is missing an inode table
  Primary superblock at 0, Group descriptors at 1-52
  Backup superblock at 32768, Group descriptors at 32769-32820
  Backup superblock at 98304, Group descriptors at 98305-98356
  Backup superblock at 163840, Group descriptors at 163841-163892
  Backup superblock at 229376, Group descriptors at 229377-229428
  Backup superblock at 294912, Group descriptors at 294913-294964
  Backup superblock at 819200, Group descriptors at 819201-819252
  Backup superblock at 884736, Group descriptors at 884737-884788
  Backup superblock at 1605632, Group descriptors at 1605633-1605684
  Backup superblock at 2654208, Group descriptors at 2654209-2654260
  Backup superblock at 4096000, Group descriptors at 4096001-4096052
  Backup superblock at 7962624, Group descriptors at 7962625-7962676
  Backup superblock at 11239424, Group descriptors at 11239425-11239476
  Backup superblock at 20480000, Group descriptors at 20480001-20480052
  Backup superblock at 23887872, Group descriptors at 23887873-23887924
  Backup superblock at 71663616, Group descriptors at 71663617-71663668
  Backup superblock at 78675968, Group descriptors at 78675969-78676020
  Backup superblock at 102400000, Group descriptors at 102400001-102400052
  Backup superblock at 214990848, Group descriptors at 214990849-214990900

Repair attempt with fsck

[root@GZGSDB data]# fsck -y /dev/vg_gzgsdb/lv_home
fsck from util-linux-ng 2.17.2
e2fsck 1.41.12 (17-May-2010)
fsck.ext4: Group descriptors look bad... trying backup blocks...
fsck.ext4: The ext2 superblock is corrupt when using the backup blocks
fsck.ext4: going back to original superblock
fsck.ext4: Device or resource busy while trying to open /dev/mapper/vg_gzgsdb-lv_home
Filesystem mounted or opened exclusively by another program?
--recover using a specified backup superblock
[root@GZGSDB data]# fsck -y -b 102400000 /dev/vg_gzgsdb/lv_home
…………
Illegal block #0 (1296647770) in inode 354315.  CLEARED.
Illegal block #1 (1398362886) in inode 354315.  CLEARED.
Illegal block #3 (453538936) in inode 354315.  CLEARED.
Illegal block #5 (808333361) in inode 354315.  CLEARED.
Illegal block #6 (775434798) in inode 354315.  CLEARED.
Illegal block #8 (1180306180) in inode 354315.  CLEARED.
Illegal block #9 (1413893971) in inode 354315.  CLEARED.
Illegal block #10 (1229347423) in inode 354315.  CLEARED.
Illegal block #11 (1498613325) in inode 354315.  CLEARED.
Illegal indirect block (1296389203) in inode 354315.  CLEARED.
Inode 354315 is too big.  Truncate? yes
Block #1074301965 (69632) causes directory to be too big.  CLEARED.
Warning... fsck.ext4 for device /dev/mapper/vg_gzgsdb-lv_home exited with signal 11.
[root@GZGSDB data]# mount /dev/vg_gzgsdb/lv_home /home
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg_gzgsdb-lv_home,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

At this point it is safe to conclude that the odds of repairing this filesystem enough to mount it normally are very low, so recovery proceeds directly at the lun level.

Parsing the lun with a recovery tool
[Screenshot: tool output showing an lv of 1.03T but a filesystem of only 821G]


The mismatch is clear: after the client's series of operations the lv is 1.03T while the filesystem is only 821G, which contradicts what the client reported (the lv should be 821G). This proves extensive further changes were made and the filesystem is corrupted.
[Screenshot: files extracted at the inode level]


Because the filesystem was severely damaged, the files the tool extracted had no names; their data was read straight from the inodes. With the data in hand, Oracle file characteristics were used to work out the corresponding file numbers (there were many duplicate files). A few files lost at the inode level were rebuilt by reassembling low-level fragments (the same approach as recovering a completely destroyed asm disk header), and the data inside them was then recovered. This client was lucky that the system datafile sat on a different partition; otherwise the workload would have been far greater.
A reminder once more: treat lv operations with great care, lvreduce above all, and after a mis-operation protect the scene immediately instead of trying random fixes found online, which can make the failure far worse.

Disable Transparent HugePages


Transparent HugePages were introduced starting with redhat 6, but oracle has always recommended disabling them and using standard HugePages instead. The procedures for 6 and 7 differ slightly.
linux 6
Edit /etc/grub.conf; the change takes effect after a system reboot.

vi /etc/grub.conf
title Oracle Linux Server (2.6.32-300.25.1.el6uek.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-300.25.1.el6uek.x86_64 ro root=LABEL=/  transparent_hugepage=never
        initrd /initramfs-2.6.32-300.25.1.el6uek.x86_64.img

linux 7
Edit /etc/default/grub, then run grub2-mkconfig and reboot for the change to take effect.

[root@xifenfei u01]# vi /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="nomodeset vconsole.font=latarcyrheb-sun16 vconsole.keymap=us crashkernel=auto
                    biosdevname=0 transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
~
[root@xifenfei u01]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-514.26.2.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-514.26.2.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-123.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-123.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-cb0b6b4de89a4fe4acfc8774c2f01486
Found initrd image: /boot/initramfs-0-rescue-cb0b6b4de89a4fe4acfc8774c2f01486.img
done

Temporary disabling
This method works on both linux 6 and 7 and requires no reboot.

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
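
To verify, read the setting back; the active value appears in brackets (expected output sketched here):

cat /sys/kernel/mm/transparent_hugepage/enabled
# always madvise [never]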

ext3/ext4 superblock repair


Create an ext4 filesystem

[root@localhost ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242624 blocks
262131 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# mkdir /sdb
[root@localhost ~]# mount /dev/sdb1 /sdb
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G  4.0G   32G  12% /
devtmpfs             1.8G     0  1.8G   0% /dev
tmpfs                1.8G     0  1.8G   0% /dev/shm
tmpfs                1.8G  8.9M  1.8G   1% /run
tmpfs                1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/sda1            497M  195M  303M  40% /boot
tmpfs                369M     0  369M   0% /run/user/0
/dev/sdb1             20G   45M   19G   1% /sdb

Prepare test data

[root@localhost sdb]# cd /etc/sysctl.d/
[root@localhost sysctl.d]# ls
99-sysctl.conf
[root@localhost sysctl.d]# cp 99-sysctl.conf /sdb
[root@localhost sysctl.d]# more 99-sysctl.conf
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

Corrupt the ext4 filesystem

[root@localhost ~]#  dd if=/dev/zero of=/dev/sdb1 bs=1024 count=5
5+0 records in
5+0 records out
5120 bytes (5.1 kB) copied, 0.00270838 s, 1.9 MB/s
[root@localhost ~]# mount /dev/sdb1 /sdb
mount: unknown filesystem type '(null)'

Errors in the log

[ 8868.362628] sd 32:0:1:0: [sdb] Cache data unavailable
[ 8868.362632] sd 32:0:1:0: [sdb] Assuming drive cache: write through
[ 8868.363714]  sdb: sdb1
[ 8868.390297] sd 32:0:1:0: [sdb] Cache data unavailable
[ 8868.390301] sd 32:0:1:0: [sdb] Assuming drive cache: write through
[ 8868.391462]  sdb: sdb1
[ 8900.130143] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: (null)
[ 8900.130163] SELinux: initialized (dev sdb1, type ext4), uses xattr
[ 8902.803966] sdb1: WRITE SAME failed. Manually zeroing.

Repair with fsck

[root@localhost ~]# fsck -t ext4 /dev/sdb1
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
/dev/sdb1 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong for group #1 (31740, counted=31739).
Fix<y>? yes
Free blocks count wrong (5116302, counted=5116301).
Fix<y>? yes
Free inodes count wrong for group #0 (8181, counted=8180).
Fix<y>? yes
Free inodes count wrong (1310709, counted=1310708).
Fix<y>? yes
/dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdb1: 12/1310720 files (0.0% non-contiguous), 126323/5242624 blocks

Verify the repair

[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# mount /dev/sdb1 /sdb
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G  4.0G   32G  12% /
devtmpfs             1.8G     0  1.8G   0% /dev
tmpfs                1.8G     0  1.8G   0% /dev/shm
tmpfs                1.8G  8.9M  1.8G   1% /run
tmpfs                1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/sda1            497M  195M  303M  40% /boot
tmpfs                369M     0  369M   0% /run/user/0
/dev/sdb1             20G   45M   19G   1% /sdb
[root@localhost ~]# cd /sdb
[root@localhost sdb]# ls
99-sysctl.conf  lost+found
[root@localhost sdb]# more 99-sysctl.conf
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

Repeat the test on ext3

[root@localhost ~]# mkfs.ext3 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242624 blocks
262131 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# mount /dev/sdb1 /sdb
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G  4.0G   32G  12% /
devtmpfs             1.8G     0  1.8G   0% /dev
tmpfs                1.8G     0  1.8G   0% /dev/shm
tmpfs                1.8G  8.9M  1.8G   1% /run
tmpfs                1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/sda1            497M  195M  303M  40% /boot
tmpfs                369M     0  369M   0% /run/user/0
/dev/sdb1             20G   45M   19G   1% /sdb
[root@localhost ~]# dd if=/dev/zero of=/dev/sdb1 bs=1024 count=5
5+0 records in
5+0 records out
5120 bytes (5.1 kB) copied, 0.0138915 s, 369 kB/s
[root@localhost ~]# fsck -t ext3 /dev/sdb1
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
ext2fs_open2: Bad magic number in super-block
fsck.ext3: Superblock invalid, trying backup blocks...
/dev/sdb1 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdb1: 11/1310720 files (0.0% non-contiguous), 126322/5242624 blocks
[root@localhost ~]# mount /dev/sdb1 /sdb
[root@localhost ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G  4.0G   32G  12% /
devtmpfs             1.8G     0  1.8G   0% /dev
tmpfs                1.8G     0  1.8G   0% /dev/shm
tmpfs                1.8G  8.9M  1.8G   1% /run
tmpfs                1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/sda1            497M  195M  303M  40% /boot
tmpfs                369M     0  369M   0% /run/user/0
/dev/sdb1             20G   45M   19G   1% /sdb

fsck repair is fairly dangerous and can lose part or all of the data on the partition, so back the partition up first (with dd, as sketched below) before proceeding. If the superblock is damaged beyond repair, contact us for professional ORACLE database recovery support (E-Mail: dba@xifenfei.com).
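
A minimal backup sketch before attempting fsck (the target path is illustrative and must live on a different disk):

dd if=/dev/sdb1 of=/backup/sdb1.img bs=4M conv=noerror,sync
# restore later, if needed:
# dd if=/backup/sdb1.img of=/dev/sdb1 bs=4M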

Using udev on linux 7 (redhat, oracle linux, centos)


linux 7 is gradually picking up users, but compared with versions 5 and 6 it really changes a lot. This post describes how to set up udev on linux 7 to get persistent device names and to adjust device permissions and ownership.
linux version

Oracle Linux Server release 7.1
[root@www.xifenfei.com ~]# uname -a
Linux www.xifenfei.com 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 04:05:24 PST 2015 x86_64 x86_64 x86_64 GNU/Linux

To expose disk uuids inside VMware Workstation, add the following to the vmx file

disk.enableUUID = "TRUE"

View the disk partitions

[root@www.xifenfei.com ~]# fdisk -l
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf60fe217
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2099199     1048576   83  Linux
Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000bce7c
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048     4204543     2101248   8e  Linux LVM
/dev/sda2   *     4204544    79702015    37748736   83  Linux
Disk /dev/sdc: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/ol-swap: 2147 MB, 2147483648 bytes, 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

View the disk uuids

[root@www.xifenfei.com ~]# /usr/lib/udev/scsi_id -g -u /dev/sdb1
36000c29e91831cedbe69afe6cc08daf7
[root@www.xifenfei.com ~]# /usr/lib/udev/scsi_id -g -u /dev/sdc
36000c292495e9d9de6f21640cc7b53b9

udev binding rules

[root@www.xifenfei.com ~]# more /etc/udev/rules.d/99-my-asmdevices.rules
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",
 RESULT=="36000c292495e9d9de6f21640cc7b53b9", RUN+="/bin/sh -c 'mknod /dev/xifenfei-sdc b $major $minor;
chown oracle:dba /dev/xifenfei-sdc; chmod 0660 /dev/xifenfei-sdc'"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/$parent",
RESULT=="36000c29e91831cedbe69afe6cc08daf7", SYMLINK+="xifenfei-sdb1", OWNER="oracle", GROUP="dba", MODE="0660"

Binding results

[root@www.xifenfei.com ~]# ls -l /dev/xifenfei-*
lrwxrwxrwx. 1 root   root     4 Aug  7 22:49 /dev/xifenfei-sdb1 -> sdb1
brw-rw----. 1 oracle dba  8, 32 Aug  7 22:36 /dev/xifenfei-sdc
[root@www.xifenfei.com ~]# ls -l /dev/sdb1
brw-rw----. 1 oracle dba 8, 17 Aug  7 22:49 /dev/sdb1

udev rules that only change disk permissions

[root@www.xifenfei.com ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): p
Partition number (2-4, default 2):
First sector (2099200-41943039, default 2099200):
Using default value 2099200
Last sector, +sectors or +size{K,M,G} (2099200-41943039, default 41943039): +1G
Partition 2 of type Linux and of size 1 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@www.xifenfei.com ~]# more /etc/udev/rules.d/99-my-asmdevices.rules
KERNEL=="sd?2", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/$parent",
 RESULT=="36000c29e91831cedbe69afe6cc08daf7",  OWNER="oracle", GROUP="dba", MODE="0660"
[root@www.xifenfei.com ~]# /sbin/udevadm trigger --type=devices --action=change
[root@www.xifenfei.com ~]# ls -l /dev/sdb2
brw-rw----. 1 oracle dba 8, 18 Aug  7 23:14 /dev/sdb2

Two ways of binding with udev show up here on linux 7: one creates a real device node (via mknod), the other works through a symlink. Thanks to lunar (Lunar's oracle lab) for the help while I was learning linux 7.

Using losetup to turn ordinary linux files into asm disks


The previous post, 《使用_asm_allow_only_raw_disks实现普通文件做asm disk》, showed how the _asm_allow_only_raw_disks parameter lets oracle asm use ordinary files as asm disks. This post shows that on linux the same effect can be achieved with losetup, which presents a filesystem file as a block device.
Build the files with dd

[oracle@xifenfei ~]$ mkdir /u01/oracle/oradata/asmdisk
[oracle@xifenfei ~]$ dd if=/dev/zero of=/u01/oracle/oradata/asmdisk/xifenfei01.dd bs=10240k count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 21.9158 seconds, 47.8 MB/s
[oracle@xifenfei ~]$ dd if=/dev/zero of=/u01/oracle/oradata/asmdisk/xifenfei02.dd bs=10240k count=100
100+0 records in
100+0 records out
1048576000 bytes (1.0 GB) copied, 22.392 seconds, 46.8 MB/s
[oracle@xifenfei ~]$ ls -lh /u01/oracle/oradata/asmdisk/
total 3.0G
-rw-r--r-- 1 oracle oinstall 1000M Feb 27 22:58 xifenfei01.dd
-rw-r--r-- 1 oracle oinstall 1000M Feb 27 23:00 xifenfei02.dd

Simulate disks with losetup

[root@xifenfei asmdisk]# ls /dev/lo*
log    loop0  loop1  loop2  loop3  loop4  loop5  loop6  loop7
[root@xifenfei asmdisk]# losetup /dev/loop1 xifenfei01.dd
[root@xifenfei asmdisk]# losetup /dev/loop2 xifenfei02.dd
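
Note that loop bindings do not survive a reboot and must be re-created at startup (for example from rc.local). The current bindings can be listed with:

losetup -a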

Bind the loop devices to raw devices

[root@xifenfei asmdisk]# raw  /dev/raw/raw10 /dev/loop1
/dev/raw/raw10: bound to major 7, minor 1
[root@xifenfei asmdisk]# raw  /dev/raw/raw11 /dev/loop2
/dev/raw/raw11: bound to major 7, minor 2
[root@xifenfei asmdisk]# ls -l /dev/raw/raw1[0-1]
crw------- 1 root root 162, 10 Feb 27 23:16 /dev/raw/raw10
crw------- 1 root root 162, 11 Feb 27 23:16 /dev/raw/raw11
[root@xifenfei asmdisk]# chown oracle.dba /dev/raw/raw1[0-1]
[root@xifenfei asmdisk]# ls -l /dev/raw/raw1[0-1]
crw------- 1 oracle dba 162, 10 Feb 27 23:16 /dev/raw/raw10
crw------- 1 oracle dba 162, 11 Feb 27 23:16 /dev/raw/raw11

Create the diskgroup

[oracle@xifenfei ~]$ export ORACLE_SID=+ASM
[oracle@xifenfei ~]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.4.0 - Production on Thu Feb 27 23:19:28 2014
Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>  create diskgroup xff external redundancy disk '/dev/raw/raw10','/dev/raw/raw11';
Diskgroup created.
SQL> select group_number,name from v$asm_diskgroup;
GROUP_NUMBER NAME
------------ ------------------------------------------------------------
           1 DATA
           2 XFF
SQL> select path,TOTAL_MB from v$asm_disk where group_number=2;
PATH                   TOTAL_MB
-------------------- ----------
/dev/raw/raw11             1000
/dev/raw/raw10             1000

Verify with kfed that the asm disks are the data files

[oracle@xifenfei tmp]$ kfed read /dev/raw/raw10|grep XFF
kfdhdb.dskname:                XFF_0000 ; 0x028: length=8
kfdhdb.grpname:                     XFF ; 0x048: length=3
kfdhdb.fgname:                 XFF_0000 ; 0x068: length=8
[oracle@xifenfei tmp]$ kfed read /dev/raw/raw11|grep XFF
kfdhdb.dskname:                XFF_0001 ; 0x028: length=8
kfdhdb.grpname:                     XFF ; 0x048: length=3
kfdhdb.fgname:                 XFF_0001 ; 0x068: length=8
[oracle@xifenfei tmp]$ kfed read /u01/oracle/oradata/asmdisk/xifenfei01.dd |grep XFF
kfdhdb.dskname:                XFF_0000 ; 0x028: length=8
kfdhdb.grpname:                     XFF ; 0x048: length=3
kfdhdb.fgname:                 XFF_0000 ; 0x068: length=8
[oracle@xifenfei tmp]$ kfed read /u01/oracle/oradata/asmdisk/xifenfei02.dd |grep XFF
kfdhdb.dskname:                XFF_0001 ; 0x028: length=8
kfdhdb.grpname:                     XFF ; 0x048: length=3
kfdhdb.fgname:                 XFF_0001 ; 0x068: length=8

The kfed output confirms that asm is, underneath, using the dd-created data files as its asm disks.

New commands in Linux 7: lscpu and systemctl


The redhat 7 series has been out for a while now. I recently found time to go through the official documentation and compare commands, recording the ones I care about most. This post covers the reworked service management, built around the newly introduced systemctl, together with lscpu, a very convenient command for viewing cpu information.
System version

[root@em12cdb ~]# uname -a
Linux em12cdb 3.8.13-55.1.6.el7uek.x86_64 #2 SMP Wed Feb 11 14:18:22 PST 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@em12cdb ~]# more /etc/oracle-release
Oracle Linux Server release 7.1

View cpu information

[root@em12cdb ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 60
Model name:            Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
Stepping:              3
CPU MHz:               3997.739
BogoMIPS:              7995.47
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0,1

Managing services and boot-time startup with systemctl
Services used to be managed mainly through /etc/init.d/ or the service command, and boot-time startup through chkconfig.

--check the status of a service
[root@em12cdb ~]# systemctl status crond.service
crond.service - Command Scheduler
   Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled)
   Active: active (running) since Mon 2015-07-27 11:59:16 CST; 1 months 7 days ago
 Main PID: 1245 (crond)
   CGroup: /system.slice/crond.service
           └─1245 /usr/sbin/crond -n
Jul 27 11:59:16 em12cdb systemd[1]: Started Command Scheduler.
Jul 27 11:59:16 em12cdb crond[1245]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 33% if used.)
Jul 27 11:59:17 em12cdb crond[1245]: (CRON) INFO (running with inotify support)
--stop a service
[root@em12cdb ~]# systemctl stop crond.service
[root@em12cdb ~]# systemctl status crond.service
crond.service - Command Scheduler
   Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled)
   Active: inactive (dead) since Thu 2015-09-03 00:27:55 CST; 2s ago
 Main PID: 1245 (code=exited, status=0/SUCCESS)
Jul 27 11:59:16 em12cdb systemd[1]: Started Command Scheduler.
Jul 27 11:59:16 em12cdb crond[1245]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 33% if used.)
Jul 27 11:59:17 em12cdb crond[1245]: (CRON) INFO (running with inotify support)
Sep 03 00:27:55 em12cdb systemd[1]: Stopping Command Scheduler...
Sep 03 00:27:55 em12cdb systemd[1]: Stopped Command Scheduler.
--start a service
[root@em12cdb ~]# systemctl start crond.service
[root@em12cdb ~]# systemctl status crond.service
crond.service - Command Scheduler
   Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled)
   Active: active (running) since Thu 2015-09-03 00:28:08 CST; 1s ago
 Main PID: 7294 (crond)
   CGroup: /system.slice/crond.service
           └─7294 /usr/sbin/crond -n
Sep 03 00:28:08 em12cdb systemd[1]: Started Command Scheduler.
Sep 03 00:28:08 em12cdb crond[7294]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 56% if used.)
Sep 03 00:28:09 em12cdb crond[7294]: (CRON) INFO (running with inotify support)
Sep 03 00:28:09 em12cdb crond[7294]: (CRON) INFO (@reboot jobs will be run at computer's startup.)
--restart a service
[root@em12cdb ~]# systemctl restart crond.service
[root@em12cdb ~]# systemctl status crond.service
crond.service - Command Scheduler
   Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled)
   Active: active (running) since Thu 2015-09-03 00:28:24 CST; 2s ago
 Main PID: 7323 (crond)
   CGroup: /system.slice/crond.service
           └─7323 /usr/sbin/crond -n
Sep 03 00:28:24 em12cdb systemd[1]: Starting Command Scheduler...
Sep 03 00:28:24 em12cdb systemd[1]: Started Command Scheduler.
Sep 03 00:28:24 em12cdb crond[7323]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 61% if used.)
Sep 03 00:28:24 em12cdb crond[7323]: (CRON) INFO (running with inotify support)
Sep 03 00:28:24 em12cdb crond[7323]: (CRON) INFO (@reboot jobs will be run at computer's startup.)
--check whether a service starts at boot
[root@em12cdb ~]# systemctl is-enabled crond.service
enabled
--disable a service at boot
[root@em12cdb ~]# systemctl disable crond.service
rm '/etc/systemd/system/multi-user.target.wants/crond.service'
[root@em12cdb ~]# systemctl is-enabled crond.service
disabled
--list the boot-startup state of all services (grep used here to keep the output short)
[root@em12cdb ~]# systemctl list-unit-files --type service|grep cron
crond.service                               disabled
[root@em12cdb ~]# systemctl enable crond.service
ln -s '/usr/lib/systemd/system/crond.service' '/etc/systemd/system/multi-user.target.wants/crond.service'
[root@em12cdb ~]# systemctl list-unit-files --type service|grep cron
crond.service                               enabled

systemctl vs. chkconfig
[Table image: systemctl and chkconfig command equivalents]
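
For reference, the usual equivalents between the old and new commands are:

service crond start      ->  systemctl start crond.service
service crond stop       ->  systemctl stop crond.service
service crond restart    ->  systemctl restart crond.service
service crond status     ->  systemctl status crond.service
chkconfig crond on       ->  systemctl enable crond.service
chkconfig crond off      ->  systemctl disable crond.service
chkconfig --list         ->  systemctl list-unit-files --type service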


Changing the default boot mode with systemctl
In earlier versions this was done by editing /etc/inittab directly with vi.

--multi-user with graphical interface
[root@em12cdb ~]# systemctl get-default
graphical.target
--multi-user text console
[root@em12cdb ~]# systemctl set-default multi-user.target
rm '/etc/systemd/system/default.target'
ln -s '/usr/lib/systemd/system/multi-user.target' '/etc/systemd/system/default.target'
[root@em12cdb ~]# systemctl get-default
multi-user.target

systemd runlevels
[Table image: SysV runlevels and the corresponding systemd targets]
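
For reference, the runlevel-to-target mapping is:

runlevel 0      ->  poweroff.target
runlevel 1      ->  rescue.target
runlevel 2/3/4  ->  multi-user.target
runlevel 5      ->  graphical.target
runlevel 6      ->  reboot.target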


Beware of a system bug: bug 765720 on linux with E5, E5 v2, and E7 v2 cpus


Tonight a member of our group raised a linux 6 bug: once a system has accumulated more than 200 days of hardware uptime and is then warm-rebooted, the bug can fire within minutes to hours and leave the system in an abnormal state.

The main affected configurations are:
Red Hat Enterprise Linux 6.1 (kernel-2.6.32-131.26.1.el6 and newer)
Red Hat Enterprise Linux 6.2 (kernel-2.6.32-220.4.2.el6 and newer)
Red Hat Enterprise Linux 6.3 (kernel-2.6.32-279 series)
Red Hat Enterprise Linux 6.4 (kernel-2.6.32-358 series)
Any Intel® Xeon® E5, Intel® Xeon® E5 v2, or Intel® Xeon® E7 v2 series processor
As this shows, the problem mainly affects redhat 6.1-6.4 on E5, E5 v2, and E7 v2 cpus; it is fixed in 6.5. For details see bug 765720.
As for ORACLE Linux: with the EL kernel the exposure is identical to redhat; with the Unbreakable Enterprise Kernel the issue was fixed in 6.2.
A similar MOS article: Oracle Linux 6 RHCK system hang: processes blocked in ext4_file_open(), pick_next_task_fair()

Additional notes:
1. The problem does not exist on Red Hat/OEL 5.x.
2. It can occur on both 32-bit and 64-bit systems.
3. Since the bug cannot be patched on short notice, if it really fires, consider a cold restart of the host as a temporary workaround. A quick exposure check is sketched after this list.
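
A quick way to check whether a machine falls in the affected range (compare the output against the kernel list above):

# running kernel and time since boot
uname -r
uptime
# cpu model: look for E5, E5 v2, or E7 v2
grep "model name" /proc/cpuinfo | uniq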

One more reminder: the choice of system version also matters a great deal. When picking a Linux release, try to stay clear of this bug (el kernel 6.5 and later, uek kernel 6.2 and later). My personal leaning: for deploying an ORACLE db on a redhat-family Linux, I prefer OEL (less hassle, and trust Oracle).