Thank you, Albert.
Sure. Since 08-25-2023 there have been many events for Task 13807:
****
1705 Warning 2023-08-28 11:55:01 Task 13807 (/opt/mercury-main/13.9.6815.02/bin/mfb.elf) holds 513 open file descriptors (warning threshold 512 - 50% of task's open file limit).
1706 Information 2023-08-28 11:55:06 Task 13807 (/opt/mercury-main/13.9.6815.02/bin/mfb.elf) is no longer over warning threshold (512 - 50% of task's open file limit) for open file descriptors.
Cause: The specified task has either exited or closed file descriptors.
****
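If I read the event text correctly, the threshold arithmetic behind the warning looks like the following. This is only a sketch of my reading, assuming the task's open file limit is 1024 (so that 50% of it is the 512 mentioned in the message); the function name is mine, not anything from HNAS.

```python
# Hypothetical reconstruction of the event's threshold check.
# Assumption: the task's open file limit is 1024, so the "50% of
# task's open file limit" warning threshold is 512, matching the event.

OPEN_FILE_LIMIT = 1024                 # assumed per-task limit
WARN_THRESHOLD = OPEN_FILE_LIMIT // 2  # 512

def over_warning(fd_count: int) -> bool:
    """Return True when a task's descriptor count exceeds the threshold."""
    return fd_count > WARN_THRESHOLD

print(over_warning(513))  # the count reported for Task 13807 -> True
print(over_warning(512))  # exactly at the threshold -> False
```

That would explain why the event fires at 513 descriptors and clears again as soon as the task closes even one of them.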
Also, looking through the "trouble" output below, there are many "quota exceeded" problems. Do you think these could be the key point behind the suspected "memory leak" problem?
============================================================================
Trouble
============================================================================
<trouble --execute --in-diags for admin vnode>
storage:scsi (on MMB; base priority 60)
Storage rack VSP G800 s/n 412450 :
Priority 60: Pnode 1 MMB:
This logical storage array is presenting GAD volumes. SCSI ports 0,2,5,7 are reported as being on the local storage array (ports with WWN
50:06:0E:80:12:30:E2:xx) but SCSI ports 1,3,4,6 are reported as also being on the local storage array. (Ports with WWN 50:06:0E:80:12:30:A2:xx).
The HMO78 bit must be set on this storage array to indicate which ports are connections to the remote array for this node.
To see: pn 1 scsi-racks 0
Priority 60: Pnode 2 MMB:
This logical storage array is presenting GAD volumes. SCSI ports 0,3,5,7 are reported as being on the local storage array (ports with WWN
50:06:0E:80:12:30:E2:xx) but SCSI ports 1,2,4,6 are reported as also being on the local storage array. (Ports with WWN 50:06:0E:80:12:30:A2:xx).
The HMO78 bit must be set on this storage array to indicate which ports are connections to the remote array for this node.
To see: pn 2 scsi-racks 0
storage:loadbal (on MMB; base priority 70)
File server:
Priority 79: Pnode 1 MMB:
Host ports are unbalanced.
To see: pn 1 fc-host-port-load -v
To fix: pn 1 sdpath --rebalance
Priority 79: Pnode 2 MMB:
Host ports are unbalanced.
To see: pn 2 fc-host-port-load -v
To fix: pn 2 sdpath --rebalance
storage:span (on MMB; base priority 70)
Span B9-SP-FMD (ID 98FDEB746CA86C96):
Priority 77: Pnode 1 MMB:
Span is full. Its filesystems can't be expanded.
To see: span-list -fT B9-SP-FMD
To fix: man span-expand
If you wish to suppress this warning in future:
To fix: span-set-cap-warn-thresh B9-SP-FMD 0
fs:fsstatus (on MMB; base priority 100)
Filesystem B9FAFILE:
Priority 102: Pnode 1 MMB:
Quota for virtual volume CF-GYJSB-QGC has exceeded a critical threshold.
To see: vn 3 quota list --filter vt "B9FAFILE" "CF-GYJSB-QGC"
Free up space in the quota by deleting files or increasing its limits
Priority 102: Pnode 1 MMB:
Quota for virtual volume Cell-CellZZB-PZYYK has exceeded a critical threshold.
To see: vn 3 quota list --filter vt "B9FAFILE" "Cell-CellZZB-PZYYK"
Free up space in the quota by deleting files or increasing its limits
Priority 102: Pnode 1 MMB:
Quota for virtual volume Cell-CellZZB-SCYYK has exceeded a critical threshold.
To see: vn 3 quota list --filter vt "B9FAFILE" "Cell-CellZZB-SCYYK"
Free up space in the quota by deleting files or increasing its limits
Priority 102: Pnode 1 MMB:
Quota for virtual volume PZBZ-OQA has exceeded a critical threshold.
To see: vn 3 quota list --filter vt "B9FAFILE" "PZBZ-OQA"
Free up space in the quota by deleting files or increasing its limits
network:network-interfaces (on MMB; base priority 200)
Interface eth0:
Priority 201: Pnode 1 MMB:
Link eth0 is down.
Check the connection for eth0.
Priority 201: Pnode 2 MMB:
Link eth0 is down.
Check the connection for eth0.
//
//
// pn 1 fc-host-port-load -v
//
//
1: 2 devices: 2 11
2: 1 devices: 3
3: 5 devices: 0 5 6 8 10
4: 4 devices: 1 4 7 9
//
//
// pn 1 scsi-racks 0
//
//
|---------------------------------------|
| HITACHI OPEN-V 8301 |
| VSP G800 s/n 412450 |
| |
|======= Controller 1 504030A2-1 =======|
| |
| NODE 50:06:0E:80:12:30:A2:00 |
| PORT 50:06:0E:80:12:30:A2:00 |
| | |p 2 LUN 2 [602] : OK HDP 2 "B9-SP-SAS"
HPort 1 -->| 1A : SPort 1 addrs 10000 : online |->|
Up 8Gb N | | |p 11 LUN 11 [703] : OK HDP 0 "B9-SP-FMD"
|---------------------------------------|
| |
| NODE 50:06:0E:80:12:30:A2:20 |
| PORT 50:06:0E:80:12:30:A2:20 |
| |
HPort 2 -->| 3A : SPort 3 addrs 10100 : online |->|p 3 LUN 3 [603] : OK HDP 2 "B9-SP-SAS"
Up 8Gb N | |
|---------------------------------------|
| |
| NODE 50:06:0E:80:12:30:E2:40 |
| PORT 50:06:0E:80:12:30:E2:40 |
| |
HPort 1 -->| 5A : SPort 0 addrs 11200 : online |
Up 8Gb N | |
|---------------------------------------|
| |
| NODE 50:06:0E:80:12:30:E2:60 |
| PORT 50:06:0E:80:12:30:E2:60 |
| |
HPort 2 -->| 7A : SPort 2 addrs 11300 : online |
Up 8Gb N | |
| |
|======= Controller 2 504030A2-2 =======|
| | |p 0 LUN 0 [600] : OK HDP 2 "B9-SP-SAS"
| NODE 50:06:0E:80:12:30:A2:10 | |
| PORT 50:06:0E:80:12:30:A2:10 | |p 5 LUN 5 [605] : OK HDP 2 "B9-SP-SAS"
| | |
HPort 3 -->| 2A : SPort 4 addrs 20000 : online |->|p 6 LUN 6 [606] : OK HDP 2 "B9-SP-SAS"
Up 8Gb N | | |
|---------------------------------------| |p 8 LUN 8 [700] : OK HDP 0 "B9-SP-FMD"
| | |
| NODE 50:06:0E:80:12:30:A2:30 | |p 10 LUN 10 [702] : OK HDP 0 "B9-SP-FMD"
| PORT 50:06:0E:80:12:30:A2:30 |
| | |p 1 LUN 1 [601] : OK HDP 2 "B9-SP-SAS"
| | |
| | |p 4 LUN 4 [604] : OK HDP 2 "B9-SP-SAS"
HPort 4 -->| 4A : SPort 6 addrs 20100 : online |->|
Up 8Gb N | | |p 7 LUN 7 [607] : OK HDP 2 "B9-SP-SAS"
|---------------------------------------| |
| | |p 9 LUN 9 [701] : OK HDP 0 "B9-SP-FMD"
| NODE 50:06:0E:80:12:30:E2:50 |
| PORT 50:06:0E:80:12:30:E2:50 |
| |
HPort 3 -->| 6A : SPort 5 addrs 21200 : online |
Up 8Gb N | |
|---------------------------------------|
| |
| NODE 50:06:0E:80:12:30:E2:70 |
| PORT 50:06:0E:80:12:30:E2:70 |
| |
HPort 4 -->| 8A : SPort 7 addrs 21300 : online |
Up 8Gb N | |
|---------------------------------------|
//
//
// pn 2 fc-host-port-load -v
//
//
1: 4 devices: 0 3 4 9
2: 5 devices: 1 2 5 6 10
3: 1 devices: 7
4: 2 devices: 8 11
//
//
// pn 2 scsi-racks 0
//
//
|---------------------------------------|
| HITACHI OPEN-V 8301 |
| VSP G800 s/n 412450 |
| |
|======= Controller 1 504030A2-1 =======|
| |
| NODE 50:06:0E:80:12:30:A2:00 | |p 0 LUN 0 [600] : OK HDP 2 "B9-SP-SAS"
| PORT 50:06:0E:80:12:30:A2:00 | |
| | |p 3 LUN 3 [603] : OK HDP 2 "B9-SP-SAS"
HPort 1 -->| 1A : SPort 1 addrs 10000 : online |->|
Up 8Gb N | | |p 4 LUN 4 [604] : OK HDP 2 "B9-SP-SAS"
|---------------------------------------| |
| | |p 9 LUN 9 [701] : OK HDP 0 "B9-SP-FMD"
| NODE 50:06:0E:80:12:30:A2:20 |
| PORT 50:06:0E:80:12:30:A2:20 | |p 1 LUN 1 [601] : OK HDP 2 "B9-SP-SAS"
| | |
| | |p 2 LUN 2 [602] : OK HDP 2 "B9-SP-SAS"
| | |
HPort 2 -->| 3A : SPort 2 addrs 10100 : online |->|p 5 LUN 5 [605] : OK HDP 2 "B9-SP-SAS"
Up 8Gb N | | |
|---------------------------------------| |p 6 LUN 6 [606] : OK HDP 2 "B9-SP-SAS"
| | |
| NODE 50:06:0E:80:12:30:E2:40 | |p 10 LUN 10 [702] : OK HDP 0 "B9-SP-FMD"
| PORT 50:06:0E:80:12:30:E2:40 |
| |
HPort 1 -->| 5A : SPort 0 addrs 11200 : online |
Up 8Gb N | |
|---------------------------------------|
| |
| NODE 50:06:0E:80:12:30:E2:60 |
| PORT 50:06:0E:80:12:30:E2:60 |
| |
HPort 2 -->| 7A : SPort 3 addrs 11300 : online |
Up 8Gb N | |
|======= Controller 2 504030A2-2 =======|
| |
| NODE 50:06:0E:80:12:30:A2:10 |
| PORT 50:06:0E:80:12:30:A2:10 |
| |->|p 7 LUN 7 [607] : OK HDP 2 "B9-SP-SAS"
HPort 3 -->| 2A : SPort 4 addrs 20000 : online |
Up 8Gb N | |
|---------------------------------------|
| |
| NODE 50:06:0E:80:12:30:A2:30 |
| PORT 50:06:0E:80:12:30:A2:30 | |p 8 LUN 8 [700] : OK HDP 0 "B9-SP-FMD"
| |->|
HPort 4 -->| 4A : SPort 6 addrs 20100 : online | |p 11 LUN 11 [703] : OK HDP 0 "B9-SP-FMD"
Up 8Gb N | |
|---------------------------------------|
| |
| NODE 50:06:0E:80:12:30:E2:50 |
| PORT 50:06:0E:80:12:30:E2:50 |
| |
HPort 3 -->| 6A : SPort 5 addrs 21200 : online |
Up 8Gb N | |
|---------------------------------------|
| |
| NODE 50:06:0E:80:12:30:E2:70 |
| PORT 50:06:0E:80:12:30:E2:70 |
| |
HPort 4 -->| 8A : SPort 7 addrs 21300 : online |
Up 8Gb N | |
|---------------------------------------|
//
//
// span-list -fT B9-SP-FMD
//
//
Span instance name OK? Free Cap/GiB System drives Con
--------------------- --- ---- ------- ------------------------------- ---
B9-SP-FMD Yes 0% 6144 8,9,10,11 90%
fs FA-UPM Mount, EVS 1, cap 1027, con 1024
fs FA-vDisk Mount, EVS 1, cap 2054, con 2048
fs OA1-UPM Mount, EVS 2, cap 1027, con 1024
fs OA1-vDisk Mount, EVS 2, cap 2036, con 2048
//
//
// vn 3 quota list --filter vt "B9FAFILE" "CF-GYJSB-QGC"
//
//
Type : Explicit
Target : ViVol: CF-GYJSB-QGC
Usage : 78.36 GB
Limit : 80 GB (Hard)
Warning : 85% (68 GB)
Critical : 90% (72 GB)
Reset : 5% (4 GB)
File Count : 15621
Limit : Unset
Warning : 75% (0)
Critical : 85% (0)
Reset : 5% (0)
Generate Events : Disabled
//
//
// vn 3 quota list --filter vt "B9FAFILE" "Cell-CellZZB-PZYYK"
//
//
Type : Explicit
Target : ViVol: Cell-CellZZB-PZYYK
Usage : 9.433 GB
Limit : 10 GB (Hard)
Warning : 85% (8.5 GB)
Critical : 90% (9 GB)
Reset : 5% (512 MB)
File Count : 23538
Limit : Unset
Warning : 75% (0)
Critical : 85% (0)
Reset : 5% (0)
Generate Events : Disabled
//
//
// vn 3 quota list --filter vt "B9FAFILE" "Cell-CellZZB-SCYYK"
//
//
Type : Explicit
Target : ViVol: Cell-CellZZB-SCYYK
Usage : 9.417 GB
Limit : 10 GB (Hard)
Warning : 85% (8.5 GB)
Critical : 90% (9 GB)
Reset : 5% (512 MB)
File Count : 16493
Limit : Unset
Warning : 75% (0)
Critical : 85% (0)
Reset : 5% (0)
Generate Events : Disabled
//
//
// vn 3 quota list --filter vt "B9FAFILE" "PZBZ-OQA"
//
//
Type : Explicit
Target : ViVol: PZBZ-OQA
Usage : 9.57 GB
Limit : 10 GB (Hard)
Warning : 85% (8.5 GB)
Critical : 90% (9 GB)
Reset : 5% (512 MB)
File Count : 22591
Limit : Unset
Warning : 75% (0)
Critical : 85% (0)
Reset : 5% (0)
Generate Events : Disabled
============================================================================
End of output
============================================================================
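For what it's worth, the quota events above all follow the same pattern: usage is well past the 90% critical mark of the hard limit. A minimal sketch of that evaluation, using the figures from the quota listings (the helper name and structure are mine, not an HNAS API):

```python
# Sketch of the quota evaluation implied by the "trouble" output.
# Percentages (85% warning, 90% critical) are taken from the quota
# listings above; quota_state() is an illustrative helper, not HNAS code.

def quota_state(usage_gb: float, limit_gb: float,
                warn_pct: float = 85.0, crit_pct: float = 90.0) -> str:
    """Classify usage against warning/critical percentages of the hard limit."""
    pct = 100.0 * usage_gb / limit_gb
    if pct >= crit_pct:
        return "critical"
    if pct >= warn_pct:
        return "warning"
    return "ok"

# CF-GYJSB-QGC: 78.36 GB used of an 80 GB hard limit (~98%)
print(quota_state(78.36, 80))  # -> critical
# PZBZ-OQA: 9.57 GB of 10 GB (~96%)
print(quota_state(9.57, 10))   # -> critical
```

So each of the four virtual volumes listed is genuinely over its critical threshold; freeing space or raising the limits should clear those events, independent of the file-descriptor warning.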
------------------------------
Andre Chen
Customer Care Manager
H3C
------------------------------
Original Message:
Sent: 08-29-2023 08:22
From: Albert Hagopian
Subject: Warning: Task 13807 (/opt/mercury-main/13.9.6815.02/bin/mfb.elf) holds 513 open file descriptors (warning threshold 512 - 50% of task's open file limit).
These types of events are not uncommon and give support/engineering insight into the system. Unfortunately, yes, the logs can get chatty and cause confusion, as most (if not all) field engineers or customers won't know what to do with such a message.
Feel free to log into the CLI and run the command "trouble"; I'm sure you will see many items that the server firmware considers troublesome, and one will be related to this message. Not everything in the output of "trouble" is actionable, though.
The first thing to ask yourself is why you are running code that is well below MGA (14.6.7520.04 is MGA; 14.7.7623.07 is GA), so you should upgrade to at least the MGA version. There could be a memory leak, but this forum isn't the place to troubleshoot such esoteric items.
I suggest the upgrade to MGA code, then monitor for recurrence and open a case with GSC for a more detailed analysis of the condition.
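For monitoring recurrence on a generic Linux host, one common approach is to watch a process's descriptor count via /proc. To be clear, an HNAS node may not expose such a shell at all, so treat this only as an illustration of the idea, not an HNAS procedure; the task ID is the one from the event message.

```shell
# Hypothetical fd-count check for a generic Linux host; not an HNAS
# command. Counts a process's open descriptors via /proc/<pid>/fd.

# Count the open file descriptors of a given PID (0 if unreadable).
fd_count() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

PID=13807   # task ID taken from the event message (assumed for illustration)
echo "task $PID holds $(fd_count "$PID") open file descriptors"
```

Logging this periodically (e.g. from cron) would show whether the count climbs steadily, which is the usual signature of a descriptor leak, or merely spikes under load.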
------------------------------
Albert Hagopian
Software Development Engineer - Specialist
Hitachi Vantara
Original Message:
Sent: 08-29-2023 03:21
From: Andre Chen
Subject: Warning: Task 13807 (/opt/mercury-main/13.9.6815.02/bin/mfb.elf) holds 513 open file descriptors (warning threshold 512 - 50% of task's open file limit).
Recently, the HNAS 4060 has been showing the warning message below in the event log of NAS Manager.
Warning: Task 13807 (/opt/mercury-main/13.9.6815.02/bin/mfb.elf) holds 513 open file descriptors (warning threshold 512 - 50% of task's open file limit).
My questions:
- What is Task 13807?
- What is the function of /opt/mercury-main/13.9.6815.02/bin/mfb.elf?
- Why does /opt/mercury-main/13.9.6815.02/bin/mfb.elf hold more than 512 open file descriptors?
- How can the open file limit be raised above 512?
- Is it a bug or defect related to the 13.9.6815.02 version?
------------------------------
Andre Chen
Customer Care Manager
H3C
------------------------------