Various bug fixes for Black Synology

Reprinted from https://blog.csdn.net/sachin_woo/article/details/100066529

Whitewashing

First, decide whether whitewashing is necessary at all. It has two uses: enabling QuickConnect, and enabling hardware decoding in Video Station. Whitewashing requires finding a valid S/N and MAC pair. These can be obtained in various ways: one is a serial-number generator; another is exploiting loopholes in the return-and-exchange policy. Both carry risk: if Synology notices the same MAC/S/N pair online twice at the same time, it may ban the account. If you do not need either of these two features, there is no need to whitewash.

To change the MAC/S/N you need to edit the boot configuration file grub.cfg. There are two ways to do it:

  • Boot a PE environment directly and edit the file on the first partition of the boot SSD
  • Edit it online over SSH

Personally I find the SSH method more convenient; the steps are as follows:

1. Enable SSH

In Control Panel → Terminal & SNMP, enable the SSH service.

2. Mount the synoboot1 partition over SSH

Use an SSH client such as PuTTY to connect to the NAS's IP address and log in with the administrator account created during DSM setup (for example, user name admin, password 123456), then enter the following commands:

sudo -i                                           # gain root privileges
mkdir -p /tmp/boot                                # create a temporary mount point under /tmp; any name will do, e.g. boot
cd /dev                                           # change to the /dev directory
mount -t vfat synoboot1 /tmp/boot/                # mount the synoboot1 partition onto /tmp/boot
cd /tmp/boot/grub                                 # change to the grub directory
vim grub.cfg                                      # edit grub.cfg

Press the i key (lower case) to enter insert mode, then enter the new SN and the new MAC1 value and delete the old ones.
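The relevant lines in grub.cfg usually look something like the following; the values shown are placeholders, not working serials, and the exact layout depends on the loader version:

set sn=1780XXXXXXXXX                              # new S/N goes here (placeholder)
set mac1=0011323456AB                             # new MAC1, written without colons (placeholder)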

When the edit is complete, press Esc to return to command mode, type :wq and press Enter to save and exit. If you have made a mess and want to quit without saving, type :q! and press Enter instead.

At this point you can run vi grub.cfg again to confirm that the change took effect.
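A quick check without opening the editor (assuming the loader keeps the values in set sn= / set mac1= lines, as in the example above):

grep -E 'set (sn|mac1)=' grub.cfg                 # print the current S/N and MAC1 values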

Finally restart the host:

reboot
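If you prefer to unmount the boot partition explicitly before rebooting (optional; the reboot releases it anyway):

cd /                                              # step out of the mount point first
umount /tmp/boot                                  # release the synoboot1 partition, then reboot as above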

Hard disk display

Problem 2 is that the drive numbering comes out scrambled. This Model B "Snail Star" (蜗牛星际) box has two SATA controllers and six SATA ports in total (one of them mSATA). The CPU's own controller drives the two bootable ports (the SATA port next to the RAM and the mSATA port); the add-on onboard controller drives the four ports of the drive cage but cannot boot from them.

1. Hard disk order

After DSM is installed, the disk order puts the two CPU-controlled ports first (call them disks 1 and 2) and the four drive-cage ports after them (disks 3 to 6). Any disk sitting in the drive cage is therefore labelled somewhere between disk 3 and disk 6 in DSM.

If you want the drive-cage bays to be numbered 1, 2, 3 and 4 instead, you can achieve this by editing the grub.cfg configuration file on the boot drive: add the two values SataPortMap=24 and DiskIdxMap=0400 to the extra_args_918 variable.

That is:

# /grub/grub.cfg
# starting around line 31
......
set extra_args_918='SataPortMap=24 DiskIdxMap=0400' # add the two values here

set common_args_918='syno_hdd_powerup_seq=0 HddHotplug=0 syno_hw_version=DS918+ vender_format_version=2 console=ttyS0,115200n8 withefi elevator=elevator quiet syno_port_thaw=1'
# for testing on VM
set sata_args='SataPortMap=1'
......

After making the change, save and reboot. My disks sit in the two leftmost bays (counting from the left), so they show up as disks 3 and 4. If the bay order is still not right, physically swap the SATA cables on the motherboard and the positions will come out correctly.

A brief explanation of these two values (for the precise meaning, refer to lines 229 and 249 here):

SataPortMap=24

Tell the system there are two SATA controllers: the first controller has 2 ports and the second has 4.

DiskIdxMap=0400

Start the disk numbering of the first SATA controller at 5 and that of the second controller at 1 (04 and 00 are hexadecimal offsets).
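Put together for this board, my reading of the two values is roughly the following:

# SataPortMap=24   -> controller 1 exposes 2 ports, controller 2 exposes 4 ports
# DiskIdxMap=0400  -> one hex byte per controller, giving the index of its first disk:
#   04 -> controller 1 (CPU ports) starts at index 0x04, i.e. Disk 5 and Disk 6 in DSM
#   00 -> controller 2 (drive cage) starts at index 0x00, i.e. Disk 1 to Disk 4 in DSM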

2. Hide the boot disk when booting from the mSATA SSD

If you write the boot image directly to the mSATA disk, Storage Manager will show a 14 GB disk in the "not initialized" state; this is the space left on the mSATA disk outside the boot partitions:


It can be initialized and used, but the 14 GB is of little use, the bundled SSD is weak, and storing data on it carries some risk of failure. To keep it from being an eyesore, you can hide this disk using the same approach: again edit the grub.cfg configuration file on the boot drive, add DiskIdxMap=1000 to the sata_args variable, and choose the third boot entry (VMware/ESXi) when starting.

That is:

# /grub/grub.cfg
# starting around line 31
......
set extra_args_918=''

set common_args_918='syno_hdd_powerup_seq=0 HddHotplug=0 syno_hw_version=DS918+ vender_format_version=2 console=ttyS0,115200n8 withefi elevator=elevator quiet syno_port_thaw=1'
# for testing on VM
set sata_args='SataPortMap=24 DiskIdxMap=1000' # add the two values here (10 and 00 are hex)
......

3. Processor model shown in the Info Center

After DSM is installed, the Info Center shows the processor of the corresponding genuine (white) Synology model; a DS3617 install, for example, reports a Xeon D processor. The value is obviously hard-coded.

  • Download ch_cpuinfo_en.tar to your computer, [download here]
  • Upload the downloaded file to DSM via File Station
  • Connect to DSM with PuTTY or another SSH client
  • Run the following in the SSH session:
# switch to the root account
sudo -i

# change to the directory holding ch_cpuinfo.tar
cd /volume1/tmp

# extract ch_cpuinfo.tar
tar xvf ch_cpuinfo.tar

# run ch_cpuinfo
./ch_cpuinfo

# when prompted, press "1" for "First Run", then press "y"

# close the SSH session; after logging back in to DSM, the Info Center shows the J1900 information
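As a sanity check, the kernel's own view of the CPU (which is what the Info Center should now match) can be read with:

grep -m1 "model name" /proc/cpuinfo               # prints the physical CPU, e.g. the Celeron J1900 on this board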

Hibernation

1. Turn on the hibernation debug log

This option is buried deep: main menu in the top-left corner → Support Center → Support Services on the left → enable system hibernation debugging mode.


2. Wait for the hibernation problem to trigger

Leave the NAS idle past the configured hibernation time. Remember to close the DSM web page and any client apps, otherwise the logs may become too long to analyze. I turn the logging on before going to bed; while I sleep nothing is powered on except the NAS and the router, so the logs come out very clean.

3. Analyze logs

Two logs are generated: /var/log/hibernation.log and /var/log/hibernationFull.log. The latter is the raw data; the former is a simplified version with some worthless "chained" operations removed, but it is sometimes over-trimmed, so I will use the full log as the example here.

First, manually exclude the entries for dirty blocks being written back to disk. The kernel rarely performs large amounts of disk I/O on its own; most WRITE block entries are the result of user-space processes dirtying pages, so the lines containing WRITE block and sync can be deleted, which shortens the log considerably.

Second, exclude writes that never touch the hard disks: delete the lines containing on tmpfs or on proc, and skip over any remaining non-disk file systems by eye.
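A rough shell filter along these lines (just a sketch against /var/log/hibernationFull.log; adjust the patterns to your own log, and lines from the sync flusher can be dropped the same way):

grep -v 'WRITE block' /var/log/hibernationFull.log \
    | grep -vE 'on (tmpfs|proc)' > /tmp/hib-trimmed.log   # keep only writes that actually hit the disks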

What remains is worth analyzing. For example, from a nap of mine, one block of records looks like this:

***********Clear*********
[140146.388709] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140146.388721] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140146.388723] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140151.820668] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140151.820682] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140151.820684] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140152.332689] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140152.332696] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140152.332698] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140153.783855] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140153.783870] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140153.783872] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140170.602870] synologrotated(4963): dirtied inode 28083 (.SYNOSYSDB-wal) on md0
[140170.602888] synologrotated(4963): dirtied inode 29789 (.SYNOSYSDB-shm) on md0
[140170.603221] synologrotated(4963): dirtied inode 21538 (.SYNOCONNDB-wal) on md0
[140170.603235] synologrotated(4963): dirtied inode 22044 (.SYNOCONNDB-shm) on md0
[140173.443684] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140173.443696] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140173.443698] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140173.955999] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140173.956006] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140173.956009] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140272.465248] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140272.465265] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140272.465267] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140278.386378] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140278.386390] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140278.386393] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140278.898561] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140278.898569] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140278.898571] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140631.564198] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140631.564209] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140631.564211] btsync(15253): dirtied inode 11404 (sync.log) on md2
[140637.298101] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140637.298113] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140637.298115] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[140637.811061] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140637.811068] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[140637.811071] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[141346.340822] btsync(15253): dirtied inode 11404 (sync.log) on md2
[141346.340833] btsync(15253): dirtied inode 11404 (sync.log) on md2
[141346.340836] btsync(15253): dirtied inode 11404 (sync.log) on md2
[141351.508216] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[141351.508226] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[141351.508228] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[141352.021228] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[141352.021235] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[141352.021238] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[141352.494749] btsync(15253): dirtied inode 11404 (sync.log) on md2
[141352.494758] btsync(15253): dirtied inode 11404 (sync.log) on md2
[141352.494760] btsync(15253): dirtied inode 11404 (sync.log) on md2
[141371.039633] synologrotated(4963): dirtied inode 28083 (.SYNOSYSDB-wal) on md0
[141371.039654] synologrotated(4963): dirtied inode 29789 (.SYNOSYSDB-shm) on md0
[141371.039992] synologrotated(4963): dirtied inode 21538 (.SYNOCONNDB-wal) on md0
[141371.040007] synologrotated(4963): dirtied inode 22044 (.SYNOCONNDB-shm) on md0
[141377.244527] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[141377.244539] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[141377.244541] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[141377.757046] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[141377.757054] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[141377.757056] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[141535.911703] dhclient(16778): dirtied inode 19635 (sh) on md0
[141535.911717] dhclient(16778): dirtied inode 19626 (bash) on md0
[141535.911909] dhclient-script(16778): dirtied inode 14958 (libncursesw.so.5) on md0
[141535.911917] dhclient-script(16778): dirtied inode 13705 (libncursesw.so.5.9) on md0
[141535.914460] awk(16782): dirtied inode 13819 (libm.so.6) on md0
[141535.914470] awk(16782): dirtied inode 11177 (libm-2.20-2014.11.so) on md0
[141542.431766] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[141542.431778] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[141542.431781] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[141542.944314] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[141542.944322] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[141542.944324] syno_hibernatio(25655): dirtied inode 5348 (hibernation.log) on md0
[142073.169495] btsync(15253): dirtied inode 11404 (sync.log) on md2
[142073.169512] btsync(15253): dirtied inode 11404 (sync.log) on md2
[142073.169515] btsync(15253): dirtied inode 11404 (sync.log) on md2
[142078.947137] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[142078.947150] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
[142078.947152] syno_hibernatio(25655): dirtied inode 5885 (hibernationFull.log) on md0
uptime : [142078.753468]
======Idle 536 seconds======
Sat Oct 27 14:34:19 CST 2018

There are not many processes involved; go through them one by one:

btsync: the BTSync package; sync.log is exactly what the name suggests. Its frequent log writes are an obvious obstacle to hibernation. I only installed it for show and never configured it anyway, so it can simply be removed.

syno_hibernatio: a quick ps | grep shows the full name is syno_hibernation_debug; together with the file names it touches, this is clearly the hibernation-logging tool itself, and it will disappear once the debug log is turned off.

synologrotated: presumably a system log rotation tool. While the system is asleep there should be no logs to rotate, so it too is only a passive source of writes.

dhclient and dhclient-script: routine DHCP client activity; it cannot be blocked.

So the only actionable conclusion from this round is that BTSync has to go. Do that first and see what happens; there is no hurry to turn off the hibernation debug log.
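If you would rather stop the package from the shell than through Package Center, something along these lines should work (the exact package name varies between installs, so list the packages first and use the name shown):

synopkg list | grep -i sync                       # find the exact package name (assumed to contain "sync")
synopkg stop <package-name>                       # stop it; synopkg uninstall <package-name> removes it entirely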
Leave the debug log running for another day, then check the system log.


Judging from the log, the change worked: the disks can finally enter hibernation, and there are plenty of "Internal disks woke up from hibernation" entries. But they come about every half hour, which means the disks are being woken up only seconds after falling asleep.

So continue to analyze the hibernation log:

***********Clear*********
[236666.547745] syslog-ng(4331): dirtied inode 18 (scemd.log) on md0
[236687.650564] syslog-ng(13085): dirtied inode 18 (scemd.log) on md0
[236687.650585] syslog-ng(13085): dirtied inode 18 (scemd.log) on md0
[236687.650592] syslog-ng(13085): dirtied inode 18 (scemd.log) on md0
[236687.658884] syslog-ng(5016): dirtied inode 28581 (.SYNOSYSDB-shm) on md0
[236687.658893] syslog-ng(5016): dirtied inode 28581 (.SYNOSYSDB-shm) on md0
[236687.658946] syslog-ng(5016): dirtied inode 24584 (.SYNOSYSDB-wal) on md0
[236687.658952] syslog-ng(5016): dirtied inode 24584 (.SYNOSYSDB-wal) on md0
[236687.658954] syslog-ng(5016): dirtied inode 24584 (.SYNOSYSDB-wal) on md0
[236687.664164] logrotate(13090): dirtied inode 41594 (synolog) on md0
[236687.666146] logrotate(13090): dirtied inode 6900 (logrotate.status) on md0
[236687.671082] logrotate(13090): dirtied inode 7905 (logrotate.status.tmp) on md0
[236689.662143] synologaccd(4840): dirtied inode 22952 (.SYNOACCOUNTDB) on md0
[236689.662355] synologaccd(4840): dirtied inode 6900 (.SYNOACCOUNTDB-wal) on md0
[236689.662383] synologaccd(4840): dirtied inode 21526 (.SYNOACCOUNTDB-shm) on md0
[236689.763593] synologaccd(4840): dirtied inode 22952 (.SYNOACCOUNTDB) on md0
[236689.763629] synologaccd(4840): dirtied inode 22952 (.SYNOACCOUNTDB) on md0
[236691.547334] synologrotated(5000): dirtied inode 28581 (.SYNOSYSDB-shm) on md0
[236691.547681] synologrotated(5000): dirtied inode 23485 (.SYNOCONNDB-wal) on md0
[236691.547695] synologrotated(5000): dirtied inode 24677 (.SYNOCONNDB-shm) on md0
[238511.431135] syslog-ng(4331): dirtied inode 18 (scemd.log) on md0
uptime : [238516.475108]
======Idle 1807 seconds======
Wed Oct 24 03:52:06 CST 2018
#####################################################
Only idle 44 seconds, pass
Wed Oct 24 03:52:51 CST 2018
#####################################################
***********Clear*********
[238522.209123] synologrotated(5000): dirtied inode 24584 (.SYNOSYSDB-wal) on md0
[238522.209173] synologrotated(5000): dirtied inode 28581 (.SYNOSYSDB-shm) on md0
[238522.210082] synologrotated(5000): dirtied inode 23485 (.SYNOCONNDB-wal) on md0
[238522.210122] synologrotated(5000): dirtied inode 24677 (.SYNOCONNDB-shm) on md0
[238522.224252] logrotate(19321): dirtied inode 41594 (synolog) on md0
[238522.229880] logrotate(19321): dirtied inode 7905 (logrotate.status) on md0
[238522.244528] logrotate(19321): dirtied inode 6900 (logrotate.status.tmp) on md0
[238531.967854] syslog-ng(19324): dirtied inode 18 (scemd.log) on md0
[238531.967874] syslog-ng(19324): dirtied inode 18 (scemd.log) on md0
[238531.967882] syslog-ng(19324): dirtied inode 18 (scemd.log) on md0
[238531.990488] logrotate(19329): dirtied inode 6900 (logrotate.status.tmp) on md0
[238533.979174] synologaccd(4840): dirtied inode 22952 (.SYNOACCOUNTDB) on md0
[238533.979348] synologaccd(4840): dirtied inode 7905 (.SYNOACCOUNTDB-wal) on md0
[238533.979378] synologaccd(4840): dirtied inode 21526 (.SYNOACCOUNTDB-shm) on md0
[238534.076345] synologaccd(4840): dirtied inode 22952 (.SYNOACCOUNTDB) on md0
[238534.076385] synologaccd(4840): dirtied inode 22952 (.SYNOACCOUNTDB) on md0
[240368.320927] syslog-ng(4331): dirtied inode 18 (scemd.log) on md0
uptime : [240374.147000]
======Idle 1811 seconds======
Wed Oct 24 04:23:02 CST 2018

synocrond: sounds like a task scheduler; among other things it runs the DSM automatic-update check. It does not fire often, so it should not matter much.

builtin-synodat: no idea what this is.

logrotate: presumably another logging-related program.

synologaccd: yet another logging daemon.

syslog-ng: I have no idea why Synology needs so many log-management programs.

A single block does not reveal much, but several blocks fit the pattern of being woken up as soon as the disks go to sleep (the idle time is the configured 30 minutes plus a dozen or so seconds), and the last write in each block goes to (/var/log/)scemd.log. That is a bit interesting, so open it and see what is inside:
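One way to do that from the shell (path as given above):

tail -n 20 /var/log/scemd.log                     # show the most recent scemd entries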

2018-10-24T07:00:13+08:00 Hamster-DS scemd: led/led_brightness.c:244 Fail to read /usr/sbin/i2cget
2018-10-24T07:00:13+08:00 Hamster-DS scemd: led.c:35 SYNOGetLedBrightness fail()
2018-10-24T07:00:34+08:00 Hamster-DS scemd: event_disk_hibernation_handler.c:42 The internal disks wake up, hibernate from [Oct 24 07:00:11]
2018-10-24T07:31:09+08:00 Hamster-DS scemd: led/led_brightness.c:244 Fail to read /usr/sbin/i2cget
2018-10-24T07:31:09+08:00 Hamster-DS scemd: led.c:35 SYNOGetLedBrightness fail()
2018-10-24T07:31:30+08:00 Hamster-DS scemd: event_disk_hibernation_handler.c:42 The internal disks wake up, hibernate from [Oct 24 07:31:07]
2018-10-24T08:01:53+08:00 Hamster-DS scemd: led/led_brightness.c:244 Fail to read /usr/sbin/i2cget
2018-10-24T08:01:53+08:00 Hamster-DS scemd: led.c:35 SYNOGetLedBrightness fail()
2018-10-24T08:02:14+08:00 Hamster-DS scemd: event_disk_hibernation_handler.c:42 The internal disks wake up, hibernate from [Oct 24 08:01:53]
2018-10-24T08:32:37+08:00 Hamster-DS scemd: led/led_brightness.c:244 Fail to read /usr/sbin/i2cget
2018-10-24T08:32:37+08:00 Hamster-DS scemd: led.c:35 SYNOGetLedBrightness fail()
2018-10-24T08:32:59+08:00 Hamster-DS scemd: event_disk_hibernation_handler.c:42 The internal disks wake up, hibernate from [Oct 24 08:32:37]

This clearly shows why the disks wake up immediately after hibernating: the black (DIY) build has no I2C device, so when DSM tries to change the LED brightness (or colour, or blink pattern?) after the disks spin down, reading the I2C device node fails. scemd writes this error to its own log, the write hits the disk, and the disks are woken up again after only ten-odd seconds of hibernation.

4. The fix

To fix it properly you would add an I2C adapter to the hardware, and while at it give the black build as many LEDs as a white Synology has. That is unrealistic, so we fall back on the mainstream approach: deal with the log that raises the problem. The idea is to point this log file at memory, so that scemd writes to RAM and never wakes the hard disks. Find the file:

vim /etc.defaults/syslog-ng/patterndb.d/scemd.conf

and change

destination d_scemd {
     file("/var/log/scemd.log"); };

to

destination d_scemd {
     file("/tmp/scemd.log"); };

Restart the system, and hibernation finally works perfectly.
