Marvell Driver Release Notes
Marvell, Inc. All rights reserved

Table of Contents
1. Change History
2. Known Issues
3. Notices
4. Contacting Support

1. Change History

Current version: 4.1.34.0 - Jan 14, 2021

This section contains:
* 1.1 Hardware Support
* 1.2 Software Components
* 1.3 Bug Fixes

1.1 Hardware Support

Initial Drop 4.1.1.0:
* Support for 2400/2500/2600/2700/2800 Series Fibre Channel adapters
* Support for 8100/8200/8300 Series Converged Network Adapters

Between versions 4.1.1.0 and 4.1.4.0:
* None

Between versions 4.1.4.0 and 4.1.5.0:
* Removed support for 4G adapters

Between versions 4.1.5.0 and 4.1.34.0:
* None

1.2 Software Components

Initial Drop 4.1.1.0:
* Simplified Fabric Discovery support

Between versions 4.1.1.0 and 4.1.2.0:
* vmkMgmt improvement to support NVMe vmhbas

Between versions 4.1.2.0 and 4.1.3.0:
* Updated ISP25XX firmware to version 8.08.206

Between versions 4.1.3.0 and 4.1.7.0:
* None

Between versions 4.1.7.0 and 4.1.8.0:
* SAN Congestion Management support
* Enhanced ZIO support

Between versions 4.1.8.0 and 4.1.9.0:
* Updated ISP25XX firmware to version 8.08.207

Between versions 4.1.9.0 and 4.1.14.0:
* None

Between versions 4.1.14.0 and 4.1.15.0:
* NVMe support enabled by default
* FC-NVMe-2 Sequence Level Error Recovery support

Between versions 4.1.15.0 and 4.1.16.0:
* Statistics collection for error detection
* FW command timer disabled by default
Between versions 4.1.16.0 and 4.1.17.0:

ERXXXXXX: SAN Congestion Management support, phase 2
Change: Enhancements to the SAN Congestion Management feature
Relevance: 27XX and 28XX adapters

ERXXXXXX: Enhanced Abort support
Change: Enhancements to abort handling for NVMe I/O
Relevance: 28XX adapters

Between versions 4.1.17.0 and 4.1.18.0:

ERXXXXXX: Lockdown support
Change: Added support for the management security lockdown feature
Relevance: 27XX and 28XX adapters

Between versions 4.1.18.0 and 4.1.34.0:
* None

1.3 Bug Fixes

Between versions 4.1.1.0 and 4.1.2.0:

Problem Description: DDV tests during SCSI IOVP were failing.
Solution: Various fixes and improvements to the driver detach routine as well as memory allocation failure paths.

Problem Description: FW dump on PSOD was failing when ql2xattemptdumponpanic was set.
Solution: Do not perform an ISP abort after the FW dump.

Between versions 4.1.2.0 and 4.1.3.0:

Problem Description: On systems with ql2xmqcpuaffinity disabled, or if the number of NUMA nodes is 1, a PSOD could occur during command processing when targets go offline.
Solution: Properly initialize the qpair pointer in the command submission path when ql2xmqcpuaffinity is not enabled.

Problem Description: Target remained OFFLINE during port toggle testing after a FW panic (8002).
Solution: During target session deletion, set the FCF_ASYNC_SENT flag to prevent other session management commands from being executed. Also made other improvements to the fcport deletion state handling.

Problem Description: Data corruption was seen when the driver encountered a FW panic.
Solution: Properly clean up outstanding I/O and populate the correct host status when aborted during 8002 Async Event processing.

Between versions 4.1.3.0 and 4.1.4.0:

Problem Description: FCP failover was not working properly during testing.
Solution: Fix to return status when the target is no longer ONLINE; this was a rebasing/merging error with the old code.
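Several of the fixes above address a qpair pointer that was left uninitialized when multi-queue CPU affinity was disabled or only one NUMA node was present. The corrected selection logic can be sketched as follows; this is a minimal illustrative model in Python with hypothetical names (`select_qpair`, `base_qpair`, `qpair_map`), not the actual driver code:

```python
class Adapter:
    """Hypothetical stand-in for the per-HBA structure."""
    def __init__(self, num_numa_nodes, base_qpair, qpair_map=None):
        self.num_numa_nodes = num_numa_nodes
        self.base_qpair = base_qpair          # always initialized, even non-MQ
        self.qpair_map = qpair_map or {}      # per-CPU map, only valid in MQ mode

def select_qpair(ha, cpu, mq_affinity_enabled):
    """Pick the queue pair for a command submission.

    The original defect dereferenced an unset per-CPU pointer when
    multi-queue affinity was off; the fix is to fall back to the base
    qpair in every non-MQ case.
    """
    if not mq_affinity_enabled or ha.num_numa_nodes == 1:
        return ha.base_qpair
    # Even in MQ mode, an unmapped CPU should fall back rather than crash.
    return ha.qpair_map.get(cpu, ha.base_qpair)
```

The key design point from the fix notes is that the non-MQ path must never rely on the per-CPU map being populated.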
Between versions 4.1.4.0 and 4.1.5.0:

Problem Description: Data corruption seen following a FW panic (8002).
Solution: Populate NVMe commands that are aborted with the proper status during 8002 handling.

Problem Description: PSOD during driver load on a single NUMA node server. (ER146523)
Solution: Do not attempt to set the fw_started bit on qpairs using the queue pair map if it has not been allocated.

Problem Description: Driver unload testing was seeing failures.
Solution: Short-circuit and prevent some NVMe code paths while driver unload is active.

Between versions 4.1.5.0 and 4.1.6.0:

Problem Description: PSOD was seen during testing on blade servers with 8G and Hilda adapters. (ER146525)
Solution: Removed reads from the request queue in pointers during normal operation. Also, correctly initialize the request queue in pointer address during non-multi-queue operation.

Between versions 4.1.6.0 and 4.1.7.0:

Problem Description: PSOD during driver load on a single NUMA node server.
Solution: Do not attempt to set the fw_started bit on qpairs using the queue pair map if it has not been allocated. [ER146639]

Problem Description: PSOD seen when disabling ql2xmqcpuaffinity and ql2xmqqos when running NVMe I/O.
Solution: Correctly initialize the qpair pointer in the non-MQ path. [ER146600]

Problem Description: PSOD observed while running concurrent FILESYSTEM I/O against NVMe and non-NVMe targets.
Solution: In the NVMe abort path, only reference the NVMe command while holding the lock, as it could be freed once the lock is released.

Problem Description: Targets failed to get discovered when connected in N2N mode.
Solution: Added N2N support for NVMe. [ER146606]

Problem Description: Component validation failing in 7.0 IOVP testing.
Solution: Renamed the component to qlnativefc_component to make it unique from the driver name. [ER146649]

Between versions 4.1.7.0 and 4.1.8.0:

Problem Description: The MPI FW dump and reset was done together with the ISP FW dump and reset, which was unnecessary.
Solution: Improve MPI and ISP FW dump and reset handling to the agreed-upon specification. Allow either dump to be performed separately. ER146608

Problem Description: Application Services, when enabled (ql2xvmidsupport), was not getting registered on the Brocade switch.
Solution: Enable the Application Services FC4 type when sending the RFT_ID command. ER146749

Problem Description: DDV testing failure.
Solution: Clean up vmk_nvme_adapter during attach failure. dcpn60266

Problem Description: Code inspection.
Solution: Do not reference sp after decrementing its reference count in the I/O completion path.

Problem Description: NVMe discovery with an invalid WWNN would succeed.
Solution: Added a check for a valid WWNN during the NVMe connection attempt. dcpn60008

Problem Description: PSOD occurred when 2 FCP and 2 NVMe ports were connected in N2N mode.
Solution: Correctly decrement the SRB reference when the N2N PLOGI command completes. Otherwise the SRB never gets freed, and deallocating the SRB memory pool during driver unload fails with a PSOD. ER146701

Problem Description: The vmkmgmt interface showed both NVMe and FCP targets regardless of the vmhba type.
Solution: Only print target types based on the type of vmhba requested. ER146697

Problem Description: Direct connect FCP and NVMe: PSOD while running driver load and unload.
Solution: Properly clean up the ELS buffers used for PLOGI commands in N2N mode. ER146716

Between versions 4.1.8.0 and 4.1.9.0:

Problem Description: A PSOD would occur when encountering an MPI pause during MPI FW dump handling.
Solution: Defer MPI dump collection to the DPC thread. ER146769

Between versions 4.1.9.0 and 4.1.10.0:

Problem Description: FCP targets do not come back online after the first cable pull with NVMe enabled.
Solution: Do not re-use the IFWCB buffer for the N2N PLOGI template. Also made some improvements to N2N handling with NVMe enabled. ER146763

Problem Description: Prevent panic in the vmkmgmt I/O timeout path with verbose logging turned on.
Solution: Check the cmd pointer in the qlnativefcPrintScsiCmd function before printing to the log.

Problem Description: Namespace discovery and path claiming for NVMe storage was not completing, so paths would always show up dead.
Solution: Data underruns from FW with "good" NVMe status should be returned with "good" status to the NVMe layer. ER146708

Problem Description: I/O timeouts and aborts observed when 8G HBA FW loaded from flash did not support multiqueue.
Solution: When CPU affinity setup fails, queue pointers need to be re-initialized to the correct non-MQ location on 8G adapters.

Problem Description: FCP_RSP frame status qualifier field not supported.
Solution: FCP-4 (FCP-4 rev 2b was referenced) identifies the previously known "retry delay timer" field as "status qualifier", which is described in SAM-5 and later specifications. This fix makes the appropriate driver-side modifications to honor the new definition. The SAM document referenced was SAM-6 rev 5.

Problem Description: MPI dump execution taking a long time.
Solution: Check the MPI dump flag in the timer function when attempting to wake up the DPC thread. Also made improvements to the functions involved in collecting the MPI dump. ER146608

Problem Description: vmkmgmt key-value info was not showing pDIF support on LUNs.
Solution: Added pDIF info to the vmkmgmt output for debugging purposes.

Problem Description: It may take more than twenty minutes to unload the driver - ESX 7.0 inbox testing.
Solution: Check WaitForHbaReady every 10 ms as opposed to every second. DCPN58478

Between versions 4.1.10.0 and 4.1.11.0:

Problem Description: Support for NVMe target reporting to apps in ESX 7.0.
Solution: Create functions out of the code that assigns target IDs, and correctly return NVMe targets in API calls.

Problem Description: DDV test failing with a PSOD when forcing slab alloc errors on the RFT_ID command.
Solution: Retries done on CT passthru commands need the timer to be restarted. Also, attempt the retry even if the timer is not active.
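The DCPN58478 unload fix above is a polling-interval change: waiting a full second between readiness checks can stretch unload to many minutes when many checks are needed. A minimal sketch of the corrected loop, in Python with hypothetical names (`wait_for_hba_ready`, `check_ready`), not the actual driver routine:

```python
import time

def wait_for_hba_ready(check_ready, timeout=60.0, interval=0.01):
    """Poll the ready condition every 10 ms (instead of once per second)
    so the caller proceeds as soon as the HBA reports ready.

    check_ready: zero-argument callable returning True when the HBA
    has quiesced. Returns True on success, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_ready():
            return True
        time.sleep(interval)   # 10 ms granularity is the fix
    return False
```

The total timeout is unchanged; only the responsiveness of the check improves, which is why the fix note frames it as "every 10 ms as opposed to every second".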
Problem Description: "Mid-layer underflow" error messages seen in logs as a result of pDIF I/O sent to a non-pDIF LUN.
Solution: Properly protect the data buffer(s) used to parse the Inquiry response for each LUN before determining whether pDIF is supported. This buffer was not protected, which could cause incorrect Inquiry data to be read, leading to pDIF being enabled incorrectly. ER146982

Problem Description: Code review - improvement needed for debugging issues when handling errors in response queue processing.
Solution: Force reset and FW dump collection on various response queue error conditions.

Between versions 4.1.11.0 and 4.1.12.0:

Problem Description: DDV failures in VMware inbox testing.
Solution: Various fixes in the DMA alloc failure path and driver detach routine.

Problem Description: SCM support was not getting enabled in FW on Qlipper/Baker adapters.
Solution: The FW team had defined bit 10 in FW extended attributes lower to indicate SCM support, which conflicted with secure flash support in Mach adapters. New bit 12 is now used to indicate SCM support. ER147063

Problem Description: Fabric priority QoS was not working with the ql2xfabricpriority module parameter.
Solution: Fix initialization of the GFO work thread when ql2xfabricpriority is turned on. ER147121

Problem Description: PSOD observed when running NVMe I/O on a PowerMax array.
Solution: NVMe status IOCBs returned with a FW underrun were incorrectly returning a host status of "good" to the NVMe core - that should only be done for a Get Log Page command; return "transport error" for any other commands.

Problem Description: Incorrect vmhba number in the NVMe target list in vmkmgmt key-value output.
Solution: Provide the NVMe adapter name when printing the NVMe target list.

Problem Description: When NVMe is enabled, vmkmgmt output shows invalid or stale target data.
Solution: Clear out the contents of the DMA buffer before sending the NVMe GPN_FT.
ER147124

Between versions 4.1.12.0 and 4.1.13.0:

Problem Description: The DDV EILoading test was failing with a non-zero heap.
Solution: During driver unload, short-circuit RFT_ID/RFF_ID/RNN_ID/RSNN_ID retries in the "done" handler and just clean up the DMA buffers.

Problem Description: A debug print statement in the LUN reset path was printing incorrect target and LUN IDs.
Solution: Print correct values. No impact on LUN reset handling - cosmetic.

Problem Description: Peg core dump would fail with an invalid address error.
Solution: Populate the RAM ID field in MB Register 10.

Between versions 4.1.13.0 and 4.1.14.0:

ER147165: N2N logins not occurring with NVMe support enabled.
Resolution: Correctly copy over the PLOGI template retrieved from FW. Also, use a flag to preserve the FC4 type across PRLI retries.
Relevance: Qlipper and Baker adapters, ESX 7.0

Between versions 4.1.14.0 and 4.1.15.0:

Code improvement: Various code improvements after code review.
Resolution: Code improvement when setting up the qpair in the I/O path. Also, improved log messages in the NVMe discovery path.
Relevance: All adapters

ER147305: PSOD seen with NVMe enabled and NPIV ports created that are zoned with NVMe targets.
Resolution: Prevent attempts to register NVMe targets discovered with NPIV ports.
Relevance: Qlipper/Baker/Mach adapters, ESX 7.0

Feature review: Added complete NVMe First Burst support.
Resolution: The initial implementation had left out enabling it in the I/O path.
Relevance: Qlipper/Baker/Mach adapters, ESX 7.0

Between versions 4.1.15.0 and 4.1.16.0:

DCPN22882: Use the function number for logical device creation.
Resolution: This will ensure persistent logical device names across driver reload and system reboot.
Relevance: All adapters

Code improvement: Improvements to NVMe SLER code after code review.
Resolution: A print statement was not formatted correctly in the NVMe SLER code. The fcNve2SLER flag was incorrectly initialized.
Relevance: NVMe-supported adapters only
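The DCPN22882 change above works because a PCI function number is a property of the hardware slot, not of discovery order, so a name derived from it survives reload and reboot. A toy sketch of the idea in Python; the prefix and naming scheme here are hypothetical, chosen only to illustrate the property being relied on:

```python
def logical_device_name(prefix, pci_function):
    """Derive a device name from the PCI function number.

    Because the function number is fixed by hardware topology, the
    resulting name is identical on every driver load, unlike names
    assigned from a discovery-order counter.
    """
    return f"{prefix}{pci_function}"

def names_for_reload(prefix, functions):
    """Names produced for the same set of functions are order-independent."""
    return sorted(logical_device_name(prefix, f) for f in functions)
```

Discovery-order naming would yield different names if probe order changed between boots; function-number naming does not.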
Between versions 4.1.16.0 and 4.1.17.0:

ER147415: Driver not relogging in during storage bounce test.
Resolution: When the driver receives a login status of "Port ID used", log out the nport handle before attempting a relogin.
Relevance: All adapters

ERXXXXXX: MPI and FW dump processing improvements.
Resolution: Improvements to dump processing to conform with additions to the specification.
Relevance: 27XX adapters and later

ERXXXXXX: Code review - vmkmgmt calls were missing a cleanup function.
Resolution: Ensure all exit paths call the qlnativefcExitApi routine.
Relevance: All adapters

ERXXXXXX: Improvements to the enhanced error detection stats collection feature.
Resolution: Additions required by the customer; also moved the API into separate source and header files.
Relevance: 27XX adapters and later

ERXXXXXX: Code review - various improvements for NVMe.
Resolution: Return BUSY when FW returns an NVMe transport error. Use a per-fcport association lock.
Relevance: 27XX adapters and later

Between versions 4.1.17.0 and 4.1.18.0:

ER147525: vmkmgmt interface for NVMe hosts shows FCP targets in EED and SCM stats.
Resolution: Display the NVMe or FCP target lists based on the host vmhba type.
Relevance: 27XX adapters and later

ER147526: ql2xenhancedabort support shows as available on the ESX 6.7 driver.
Resolution: Enhanced abort is only supported with NVMe-capable drivers - moved ql2xenhancedabortsupport under the NVME compile switch.
Relevance: 27XX adapters and later

ER147524: PSOD seen when handling an SCM peer congestion event.
Resolution: The system did not have multi-queue enabled, so the check for using the slow queue to throttle I/O needs a check for multi-queue as well.
Relevance: 27XX adapters and later

ER147606: CPU affinity multi-queue disabled with SCM phase 2 support.
Resolution: The slow queue pair implementation was improved to use a unique MSI-X vector when initializing.
Relevance: 27XX adapters and later

ER147415: Driver not relogging in during storage bounce test, part 2.
Resolution: When the driver receives a port logout 8014 Async Event, log out the nport handle before attempting a relogin.
Relevance: All adapters

ERXXXXXX: Use the MPI hang trigger to do a PEGTUNE halt.
Resolution: Utilize the vmkmgmt interface for MPI pause to do a PEGTUNE halt on 83XX adapters.
Relevance: 83XX adapters

DCPN65467: PSOD occurs during DDV testing when forcing queue creation failures in CPU affinity setup.
Resolution: Do not disable the mqenable flag when CPU affinity qpair creation fails.
Relevance: All adapters

ER147603: Enhanced Error Detection API would return 0 for the entry count when retrieving host stats.
Resolution: Populate the entry count correctly before returning stat values.
Relevance: 27XX adapters and later

ERXXXXXX: Code review - enhanced abort MB Cmd timeout value.
Resolution: Increase the MB Cmd 0x54 timeout value to be greater than the FW timeout value.
Relevance: 27XX adapters and later

ER147561: Host side stats were not being updated for FPIN warnings.
Resolution: The FPIN descriptor structure had been modified incorrectly for phase 2 of the SCM feature. The fix was to revert to the correct endianness of the original structure.
Relevance: 27XX adapters and later

ERXXXXXX: Code review - vmkmgmt MPI flags need updating after an MPI reset.
Resolution: Capture MPI flags for display after the MPI reset. Also, include MPI Reporting support in the output.
Relevance: 27XX adapters and later

Between versions 4.1.18.0 and 4.1.19.0:

ER147621: PSOD when NVMe I/O is returned with UNDERRUN.
Resolution: Ensure good NVMe completion status from the target before handling the UNDERRUN. Also, only return SUCCESS status if data was transferred.
Relevance: 27XX adapters and later

ERXXXXXX: Enhanced abort logging improvements for NVMe.
Resolution: When enhanced abort is used, print the Abort IOCB completion and NVMe completion status by default.
Relevance: 27XX adapters and later

ER147533: ESX CLI does not show UCSCM stats.
Resolution: SCM Phase 2 broke backward compatibility with the Phase 1 interface; bring back the old interface and create new API calls for Phase 2.
Relevance: 27XX adapters and later

ER146879: Support to clear SCM/SCMR stats.
Resolution: Added an interface in the driver to clear stats via qaucli.
Relevance: 27XX adapters and later

ER147630: SCM throttling increase did not happen as expected after a congestion event.
Resolution: Ensure the throttle change value is at least 1 after calculating the new value.
Relevance: 27XX adapters and later

ERXXXXXX: Various SCM improvements.
Resolution: API structure additions to bring it in line with the latest specification; SCMR algorithm improvements; clear peer congestion after removal from the fabric; SCMR support for NVMe; log message improvements.
Relevance: 27XX adapters and later

Between versions 4.1.19.0 and 4.1.20.0:

ER147694: NVMe I/Os with SCM caused a VM to hang.
Resolution: Do not throttle NVMe Keep Alive and fused commands.
Relevance: 27XX adapters and later

ER147576: PSOD and data inconsistencies preceded by numerous invalid parameter response IOCBs - "Process error entry. type/count/sys/status/comp = 18:2:0:8:1000".
Resolution: Prevent data issues by passing back the correct error status when an invalid entry error is seen on the response queue.
Relevance: All adapters

ER147689: MPI reset was timing out and status not showing up with the link down.
Resolution: The driver was only grabbing MPI Status info if the FW state was ready, i.e. link up. The change was to get this info regardless of FW state. Also added improvements to the MPI reset messages.
Relevance: 27XX adapters and later

ERXXXXXX: SCM: Remove the call to notify FW of a slow device.
Resolution: Prevent the driver from sending MB Cmd 0x1A (Set Port Params) to notify FW of a slow device until this feature is fully supported.
Relevance: 27XX adapters and later

ER147733: Enabling ql2xvmidsupport and ql2xenablesmartsan at the same time on a boot-from-SAN setup caused driver load failures.
Resolution: Prevent both module parameters from being set at the same time - disable ql2xvmidsupport in that event.
Relevance: 27XX adapters and later

Between versions 4.1.20.0 and 4.1.21.0:

ER147620/ER147772/ER147780: Various issues with NVMe connections after a chip reset.
Resolution: Re-enabled the FW command timer. Some commands were otherwise not getting cleaned up, causing issues with re-establishing the NVMe connection.
Relevance: 27XX adapters and later

ER147768: SCMR: When ql2x_scmr_flow_ctl_tgt and ql2x_scmr_flow_ctl_host are set to 0, the congestion state for both target and host would not get cleared.
Resolution: Allow congestion clearing to be called even when not actively throttling.
Relevance: 27XX adapters and later

ER147781: SCMR: Running NVMe I/O with host side throttling led to keep alive I/O errors.
Resolution: Do not throttle any NVMe admin commands.
Relevance: 27XX adapters and later

ERXXXXXX: SCMR: Remove the remaining code to notify FW of a slow device.
Resolution: Some code had been left in from a previous attempt. Also made various log message improvements.
Relevance: 27XX adapters and later

ER147747: DDV EILoading test hits a PSOD.
Resolution: The PSOD was due to an invalid NVMe adapter pointer passed into the vmk_NvmeGetAdapterName call. Need to check the return status of the vmk_NvmeAllocateAdapter call.
Relevance: 27XX adapters and later

Between versions 4.1.21.0 and 4.1.22.0:

ER146879: Alarm and warning counters were not getting cleared in the QL_SET_PORT_SCM vmkmgmt callback. Also, NVMe target counters were not getting cleared.
Resolution: Clear counters correctly in the callback.
Relevance: 27XX adapters and later

ER147795: Host side congestion was not honoring the event period sent in the FPIN for when to clear the congestion state.
Resolution: Copy over the correct FPIN field when determining the event period for a host congestion scenario. Also print the correct value of the event period.
Relevance: 27XX adapters and later

ERXXXXXX: Slow queue could be incorrectly used in the NVMe I/O path.
Resolution: Move the slow queue check after the normal multi-queue setup.
Relevance: 27XX adapters and later

ER147772: NVMe traffic along with VM hangs after 30 minutes.
Resolution: In enhanced abort handling, ensure the flag is set with the qpair lock held, and check the Abort IOCB return status to determine whether the command needs to be returned to the stack in the abort context.
Relevance: 28XX adapters

Between versions 4.1.22.0 and 4.1.31.0:

ERXXXXXX: Various improvements to Enhanced Error Detection support.
Resolution: Short link down counter fix, logging improvements, and API callback wait time improvements.
Relevance: 27XX adapters and later

ER148040: Performance regression with ESX 7.0 inbox testing. (DPCN64008)
Resolution: GPSC failures were leading to IIDMA for the target being set to 1 Gb/s. The fix was to initialize the target port speed to "unknown" so the GPSC failure path would not lead to IIDMA being incorrectly set.
Relevance: All adapters

ER147839: Enhanced Abort for NVMe is not working; I/O returned without waiting for the ABTS response.
Resolution: Arm the FW timer with the I/O stack timer + 10 seconds so that the stack will time out those commands before FW.
Relevance: 27XX adapters and later

ER147303: Fix response queue handler reading stale packets.
Resolution: Two module parameters are introduced (ql2xrspq_follow_inptr and ql2xrspq_follow_inptr_legacy) to control response queue processing logic to follow the in pointer rather than the signature (on by default).
Relevance: All adapters

ERXXXXXX: Improvement to the SAN Congestion Management algorithm.
Resolution: Added queue-depth-based throttling and turned it on by default (ql2x_scmr_throttle_mode = 2).
Relevance: 27XX adapters and later

ER148010: Output of "esxcli qlfc qcc port scmchk get" for "Seconds Since Last Event" always shows 0.
Resolution: Keep track of and return the elapsed time for each congestion event.
Relevance: 27XX adapters and later

ER147961: Target VP failover/failback shows "Process error entry" messages.
Resolution: Mark the target port as "OFFLINE" immediately before modifying the port ID in the driver database.
Relevance: All adapters

ER147093/ER147095: No interface available for user space applications to discover NVMe target info.
Resolution: Implemented a new vmkmgmt API to report NVMe target info and send NVMe pass-through commands.
Relevance: 27XX adapters and later

ER148013: NVMe SGL Descriptor fields all 0's in the SQE.
Resolution: Populate the SGL Descriptor Type with 5h (Transport SGL Data Block descriptor) and the SGL Descriptor Subtype with Ah (NVMe Transport Specific).
Relevance: 27XX adapters and later

Between versions 4.1.31.0 and 4.1.32.0:

ERXXXXXX: Various improvements to Enhanced Error Detection support.
Resolution: Prevent API callbacks when the port is isolated, separate the NVMe and SCSI FW timer values, logging improvements.
Relevance: 27XX adapters and later

ER148066: Host side throttling - qdepth value shows negative.
Resolution: Call the qdepth increment function regardless of whether a throttle attempt was made.
Relevance: 27XX adapters and later

ER148045: CPU lockup occurs during FPIN processing.
Resolution: The problem was due to the FPIN processing loop potentially spinning indefinitely.
The fix was to wait only 2 seconds for extra FPIN packets before failing.
Relevance: 27XX adapters and later

ER148082: Host side throttling: seeing continuous FPIN error messages in the logs.
Resolution: A new snapshot of the response queue in pointer needed to be read after processing FPIN entries to avoid getting out of sync with firmware and reading stale entries.
Relevance: 27XX adapters and later

ER148105: Stale NVMe association IDs being used after an RSCN.
Resolution: Synchronize the connection state with the NVMe layer by setting the connection "offline" until the new Create Association is submitted, rather than relying only on the FC target state.
Relevance: 27XX adapters and later

ER148097: Host stays in a congested state with SCM signals.
Resolution: Set the correct event and throttle period values when receiving signals. Also, check for cleared congestion only after handling the throttling state.
Relevance: 28XX adapters

ER147839(p2): NVMe I/O commands were timing out in the FW before the enhanced abort could trigger.
Resolution: Only attach the FW timer to admin NVMe commands.
Relevance: 28XX adapters

ER148013(p2): NVMe SGL Descriptor fields all 0's in the SQE.
Resolution: Add the data length value to the SGL descriptor.
Relevance: 27XX adapters and later

Between versions 4.1.32.0 and 4.1.33.0:

ER148066: Queue depth throttling messages show negative values.
Resolution: Manage qdepth using an SRB flag to account for commands that are not counted towards the qdepth calculation.
Relevance: 27XX adapters and later

ER148138: Congestion Severity shows None for Signals in esxcli.
Resolution: Populate congestion severity when receiving signals, based on warnings or alarms.
Relevance: 27XX adapters and later

ER148140: Congestion Severity does not reset after a clear event.
Resolution: Clear congestion severity either when congestion clears naturally or through an FPIN event.
Relevance: 27XX adapters and later

ERXXXXXX: PSOD observed when running FW panic and esxcli testing.
Resolution: Add the missing unlock of a spinlock in the API NVMe cmd passthru failure path.
Relevance: 27XX adapters and later

ERXXXXXX: Certain Enhanced Error Detection API calls need to be allowed to execute when the port is isolated.
Resolution: Changed the API interface to allow or disallow EED calls based on requirements. Also, increment the target short link down timeout when the target logs out from the initiator.
Relevance: 27XX adapters and later

ERXXXXXX: ISP Abort was taking a long time to trigger when MB Cmds were active.
Resolution: Reduce the MB Cmd timeout to 5 seconds when an ISP abort is needed.
Relevance: All adapters

Between versions 4.1.33.0 and 4.1.34.0:

ERXXXXXX: MB Cmd timeouts seen during Enhanced Error Detection port disable/enable testing.
Resolution: Do not return from the enable API call until FW is initialized. Also, clear the initDone flag with the port disabled to allow MB Cmd polling instead of waiting for an interrupt.
Relevance: 27XX adapters and later

ER148213: Disabling Host and Peer throttling using the module parameter system does not show the congested state.
Resolution: Make the calls to set/clear congestion separate from the throttling functions, and apply the module parameter check only to throttling.
Relevance: 27XX adapters and later

ER148079: DDV EILoading runs for an indefinite amount of time if the datastores are mapped.
Resolution: Do not destroy the DPC world until after the priority workqueues have been destroyed.
Relevance: All adapters

ER148145: Cable unplugged messages are not seen in vmkernel when an SFP is present but no cable.
Resolution: In the timer task, print "cable unplugged" after the loop down time if FW never became ready.
Relevance: All adapters

ERXXXXXX: Adapters older than 27XX do not have shadow registers, so the driver needs to directly read chip registers to determine the response in pointer.
Resolution: Turn off ql2xrspq_follow_inptr_legacy by default for now to avoid reading the response queue in register during interrupt processing on older chips.
Relevance: 25XX and Hilda adapters

2. Known Issues

None

3. Notices

Information furnished in this document is believed to be accurate and reliable. However, Marvell, Inc. assumes no responsibility for its use, nor for any infringements of patents or other rights of third parties which may result from its use. Marvell, Inc. reserves the right to change product specifications at any time without notice. Applications described in this document for any of these products are only for illustrative purposes. Marvell, Inc. makes no representation nor warranty that such applications are suitable for the specified use without further testing or modification. Marvell, Inc. assumes no responsibility for any errors that may appear in this document.

4. Contacting Support

For further assistance, contact Marvell Technical Support at:
http://support.marvell.com

(c) Copyright 2021. All rights reserved worldwide. Marvell, Inc, the Marvell logo, and the Powered by Marvell logo are registered trademarks of Marvell, Inc. All other brand and product names are trademarks or registered trademarks of their respective owners.